Volume 26, Issue 2 · ISSN 1572-0373 · E-ISSN 1572-0381

Abstract

The increasing integration of Artificial Intelligence (AI) into human teams necessitates a deeper understanding of how to foster effective collaboration. This study investigates how incorporating emojis, as a representation of emotional intelligence, into AI communication influences human-AI teaming (HAT) dynamics. Specifically, the study examined how emojis affect human trust in AI teammates, whether different types of emojis yield different outcomes, and how emoji use shapes the perceived performance of both AI and human teammates. A controlled experiment was conducted in which participants collaborated with a simulated AI teammate on a geographic location identification task; the AI teammate's reliability and its use of emojis were manipulated across experimental conditions. Results showed that neither the AI teammate's reliability nor its use of emojis significantly influenced participants' explicit trust ratings of the AI teammate. These findings highlight the complex interplay of trust, perception, and emotional cues in HAT collaboration.

/content/journals/10.1075/is.24045.bai
2026-02-27
2026-03-17