Volume 26, Issue 2
  • ISSN: 1572-0373
  • E-ISSN: 1572-0381

Abstract

Team communication content can provide insight into teammates’ coordination processes and their perceptions of one another. Using a simulated aircraft reconnaissance team task testbed, we investigate how personifying and objectifying communication content relates to people’s trust in and anthropomorphism of machine teammates, and to overall team performance. A total of 44 participants were paired and assigned to one of two unique team roles alongside a synthetic pilot agent. Instances of verbal personification and objectification that occurred during the task were captured and compared with team performance and with questionnaire responses measuring participants’ trust in, and anthropomorphizing of, the synthetic pilot. Verbal personifications were not correlated with trust or anthropomorphism, but they converged for the two human roles over time, along with a convergence in trust toward the synthetic agent. Verbal objectifications, on the other hand, were negatively correlated with a teammate’s perceived trustworthiness and anthropomorphism. Neither verbal personifications nor objectifications were related to team performance. Our findings suggest that people verbally personify machines to ease communication, and that the processes underlying tendencies to verbally personify and objectify machines are related to those that shape trust and anthropomorphism.
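
The analysis described in the abstract pairs per-participant counts of personifying and objectifying utterances with questionnaire scores and tests for correlations. The sketch below illustrates that kind of check; the data layout, column names, example values, and the choice of Spearman’s rank correlation are all assumptions made for illustration, not the paper’s actual pipeline.

```python
# Hedged sketch of the correlational check described in the abstract.
# The data layout, column names, example values, and the use of
# Spearman's rank correlation are illustrative assumptions only.
import pandas as pd
from scipy.stats import spearmanr

# Hypothetical per-participant measures: counts of personifying and
# objectifying utterances during the task, plus questionnaire scores
# for trust in and anthropomorphism of the synthetic pilot.
data = pd.DataFrame({
    "personifications": [3, 0, 5, 2, 1, 4],
    "objectifications": [1, 4, 0, 2, 3, 0],
    "trust_score": [5.2, 3.1, 6.0, 4.4, 3.8, 5.6],
    "anthropomorphism": [3.9, 2.2, 4.5, 3.1, 2.8, 4.2],
})

# Correlate each utterance type with each questionnaire measure.
for predictor in ("personifications", "objectifications"):
    for outcome in ("trust_score", "anthropomorphism"):
        rho, p = spearmanr(data[predictor], data[outcome])
        print(f"{predictor} vs. {outcome}: rho = {rho:+.2f}, p = {p:.3f}")
```

Under this reading, a negative rho for objectifications against the trust and anthropomorphism scores would mirror the reported pattern, while personification counts would show no reliable association.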
