Volume 26, Issue 2
  • ISSN: 1572-0373
  • E-ISSN: 1572-0381
DOI: 10.1075/is.00025.edi
2026-02-27
2026-03-07
References

  1. Afroogh, S., Akbari, A., Malone, E., & Langarizadeh, M.
    (2024) Trust in AI: Progress, challenges, and future directions. Humanities and Social Sciences Communications, 11 (1), 1568. 10.1057/s41599‑024‑04044‑8
    https://doi.org/10.1057/s41599-024-04044-8 [Google Scholar]
  2. Bailey, M. E., Gancz, B., & Pollick, F. E.
    (2026) The effect of emojis and AI reliability on team performance and trust in human-AI teams. Interaction Studies, (Special Issue on Multidisciplinary Perspectives on Human-AI Team Trust), 26 (2). 10.1075/is.24045.bai
    https://doi.org/10.1075/is.24045.bai [Google Scholar]
  3. Carragher, D. J., Sturman, D., & Hancock, P. J.
    (2024) Trust in automation and the accuracy of human-algorithm teams performing one-to-one face matching tasks. Cognitive Research: Principles and Implications, 9 (41).
    [Google Scholar]
  4. Coester, U., Anderle, L., & Pohlmann, N.
    (2026) Trustworthiness needs for the use of AI solutions in business: Ethical and empirical considerations. Interaction Studies, (Special Issue on Multidisciplinary Perspectives on Human-AI Team Trust), 26 (2). 10.1075/is.24051.coe
    https://doi.org/10.1075/is.24051.coe [Google Scholar]
  5. Cohen, M. C., Chiou, E. K., & Cooke, N. J.
    (2026) Trusting machine teammates: The role of personifying and objectifying language in team communication. Interaction Studies, (Special Issue on Multidisciplinary Perspectives on Human-AI Team Trust), 26 (2). 10.1075/is.24050.coh
    https://doi.org/10.1075/is.24050.coh [Google Scholar]
  6. Cohen, M. C., Kim, N., Ba, Y., Pan, A., Bhatti, S., Salehi, P., Sung, J., Blasch, E., Mancenido, M. V., & Chiou, E. K.
    (2025) PADTHAI-MM: Principles-based approach for designing trustworthy, human-centered AI using the MAST methodology. AI Magazine, 46(1), e70000. 10.1002/aaai.70000
    https://doi.org/10.1002/aaai.70000 [Google Scholar]
  7. de Visser, E. J., Peeters, M. M. M., Jung, M. F., Kohn, S., Shaw, T. H., Pak, R., & Neerincx, M. A.
    (2020) Towards a Theory of Longitudinal Trust Calibration in Human-Robot Teams. International Journal of Social Robotics, 12 (2), 459–478. 10.1007/s12369‑019‑00596‑x
    https://doi.org/10.1007/s12369-019-00596-x [Google Scholar]
  8. de Visser, E. J., Monfort, S. S., McKendrick, R., Smith, M. A., McKnight, P. E., Krueger, F., & Parasuraman, R.
    (2016) Almost human: Anthropomorphism increases trust resilience in cognitive agents. Journal of Experimental Psychology: Applied, 22 (3), 331–349.
    [Google Scholar]
  9. Duan, W., Zhou, S., Scalia, M. J., Freeman, G., Gorman, J., Tolston, M., McNeese, N. J., & Funke, G.
    (2025) Understanding the processes of trust and distrust contagion in human-AI teams: A qualitative approach. Computers in Human Behavior, 165, 108560. 10.1016/j.chb.2025.108560
    https://doi.org/10.1016/j.chb.2025.108560 [Google Scholar]
  10. European Commission
    (2021) Coordinated plan on artificial intelligence 2021 review (tech. rep.) (Accessed: 2025-09-04). European Commission. https://digital-strategy.ec.europa.eu/en/library/coordinated-plan-artificial-intelligence-2021-review
    [Google Scholar]
  11. Georganta, E., & Ulfert, A.-S.
    (2024) Would you trust an AI team member? Team trust in human-AI teams. Journal of Occupational and Organizational Psychology, 97 (3), 1212–1241. 10.1111/joop.12504
    https://doi.org/10.1111/joop.12504 [Google Scholar]
  12. Glikson, E., & Woolley, A. W.
    (2020) Human trust in artificial intelligence: Review of empirical research. Academy of Management Annals, 14 (2), 627–660. 10.5465/annals.2018.0057
    https://doi.org/10.5465/annals.2018.0057 [Google Scholar]
  13. High-Level Expert Group on Artificial Intelligence
    (2019) Ethics guidelines for trustworthy AI (tech. rep.) (Accessed: 2025-09-04). European Commission. https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai
    [Google Scholar]
  14. Hoff, K. A., & Bashir, M.
    (2015) Trust in automation: Integrating empirical evidence on factors that influence trust. Human Factors, 57 (3), 407–434. 10.1177/0018720814547570
    https://doi.org/10.1177/0018720814547570 [Google Scholar]
  15. Jiang, L., Hwang, J. D., Bhagavatula, C., Bras, R. L., Forbes, M., Borchardt, J., Liang, J., Etzioni, O., Sap, M., & Choi, Y.
    (2021) Delphi: Towards machine ethics and norms. arXiv preprint arXiv:2110.07574.
    [Google Scholar]
  16. Jobin, A., Ienca, M., & Vayena, E.
    (2019) The global landscape of AI ethics guidelines. Nature Machine Intelligence, 1 (9), 389–399. 10.1038/s42256‑019‑0088‑2
    https://doi.org/10.1038/s42256-019-0088-2 [Google Scholar]
  17. Kucukosmanoglu, M., Johnson, C. J., Pollard, K., Chhan, D., Lakhmani, S. G., Forster, D., Conklin, S., Brooks, J., Crowell, H. P., & Krausman, A.
    (2026) Exploring trust in AI-supported military teams using sentiment analysis. Interaction Studies, (Special Issue on Multidisciplinary Perspectives on Human-AI Team Trust), 26 (2). 10.1075/is.24046.kuc
    https://doi.org/10.1075/is.24046.kuc [Google Scholar]
  18. Lee, J. D., & See, K. A.
    (2004) Trust in automation: Designing for appropriate reliance. Human Factors, 46 (1), 50–80. 10.1518/hfes.46.1.50.30392
    https://doi.org/10.1518/hfes.46.1.50.30392 [Google Scholar]
  19. Lu, G., Lu, J., Yao, S., & Yip, Y. J.
    (2009) A review on computational trust models for multi-agent systems. The Open Information Science Journal, 2, 18–25. 10.2174/1874947X00902020018
    https://doi.org/10.2174/1874947X00902020018 [Google Scholar]
  20. Madhavan, P., & Wiegmann, D. A.
    (2007) Similarities and differences between human-human and human-automation trust: An integrative review. Theoretical Issues in Ergonomics Science, 8 (4), 277–301. 10.1080/14639220500337708
    https://doi.org/10.1080/14639220500337708 [Google Scholar]
  21. Malle, B. F., & Ullman, D.
    (2021, January). Chapter 1 — A multidimensional conception and measure of human-robot trust. In C. S. Nam & J. B. Lyons (Eds.), Trust in Human-Robot Interaction (pp. 3–25). Academic Press. 10.1016/B978‑0‑12‑819472‑0.00001‑0
    https://doi.org/10.1016/B978-0-12-819472-0.00001-0 [Google Scholar]
  22. Mayer, R. C., Davis, J. H., & Schoorman, F. D.
    (1995) An integrative model of organizational trust. Academy of Management Review, 20 (3), 709–734. 10.2307/258792
    https://doi.org/10.2307/258792 [Google Scholar]
  23. McNeese, N. J., Demir, M., Chiou, E. K., & Cooke, N. J.
    (2021) Trust and team performance in human-autonomy teaming. International Journal of Electronic Commerce, 25 (1), 51–72. 10.1080/10864415.2021.1846854
    https://doi.org/10.1080/10864415.2021.1846854 [Google Scholar]
  24. Momen, A., Tossell, C. C., Walliser, J. C., Niemyer, R., Tolston, M., Funke, G. J., & de Visser, E. J.
    (2026) Perceived trustworthiness and moral competence of a GenAI-enabled ethical robot advisor. Interaction Studies, (Special Issue on Multidisciplinary Perspectives on Human-AI Team Trust), 26 (2). 10.1075/is.25072.mom
    https://doi.org/10.1075/is.25072.mom [Google Scholar]
  25. Musick, G., O’Neill, T. A., Schelble, B. G., McNeese, N. J., & Henke, J. B.
    (2021) What happens when humans believe their teammate is an AI? An investigation into humans teaming with autonomy. Computers in Human Behavior, 122, 106852. 10.1016/j.chb.2021.106852
    https://doi.org/10.1016/j.chb.2021.106852 [Google Scholar]
  26. Naikar, N., Hoffman, R., Roth, E. M., Klein, G., Militello, L. G., & Dominguez, C.
    (2025) Should we make AI more tool-like or teammate-like? Journal of Cognitive Engineering and Decision Making, 15553434251346904. 10.1177/15553434251346904
    https://doi.org/10.1177/15553434251346904 [Google Scholar]
  27. Nguyen, D., Cohen, M. C., Kao, H.-T., Engberon, G., Penafiel, L., Lynch, S., & Volkova, S.
    (2026) Exploratory models of human-AI teams: Leveraging human digital twins to investigate trust development. Interaction Studies, (Special Issue on Multidisciplinary Perspectives on Human-AI Team Trust), 26 (2). 10.1075/is.24052.ngu
    https://doi.org/10.1075/is.24052.ngu [Google Scholar]
  28. Parasuraman, R., & Manzey, D. H.
    (2010) Complacency and bias in human use of automation: An attentional integration. Human Factors, 52 (3), 381–410. 10.1177/0018720810376055
    https://doi.org/10.1177/0018720810376055 [Google Scholar]
  29. Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., Crandall, J. W., Christakis, N. A., Couzin, I. D., Jackson, M. O., et al.
    (2019) Machine behaviour. Nature, 568 (7753), 477–486. 10.1038/s41586‑019‑1138‑y
    https://doi.org/10.1038/s41586-019-1138-y [Google Scholar]
  30. Rezaei Khavas, Z., Kotturu, M. R., Ahmadzadeh, S. R., & Robinette, P.
    (2024) Do humans trust robots that violate moral trust? J. Hum.-Robot Interact., 13 (2), 25:1–25:30. 10.1145/3651992
    https://doi.org/10.1145/3651992 [Google Scholar]
  31. Ryan, M.
    (2020) In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Science and Engineering Ethics, 26 (5), 2749–2767. 10.1007/s11948‑020‑00228‑y
    https://doi.org/10.1007/s11948-020-00228-y [Google Scholar]
  32. Schmutz, J. B., Outland, N., Kerstan, S., Georganta, E., & Ulfert, A.-S.
    (2024) AI-teaming: Redefining collaboration in the digital era. Current Opinion in Psychology, 58, 101837. 10.1016/j.copsyc.2024.101837
    https://doi.org/10.1016/j.copsyc.2024.101837 [Google Scholar]
  33. Seeber, I., Bittner, E., Briggs, R. O., De Vreede, T., De Vreede, G.-J., Elkins, A., Maier, R., Merz, A. B., Oeste-Reiß, S., Randrup, N., et al.
    (2020) Machines as teammates: A research agenda on AI in team collaboration. Information & Management, 57 (2), 103174. 10.1016/j.im.2019.103174
    https://doi.org/10.1016/j.im.2019.103174 [Google Scholar]
  34. Shneiderman, B.
    (1989) A nonanthropomorphic style guide: Overcoming the humpty-dumpty syndrome. The Computing Teacher, 16 (7), 5.
    [Google Scholar]
  35. Snow, T.
    (2021) From satisficing to artificing: The evolution of administrative decision-making in the age of the algorithm. Data & Policy, 3, e3. 10.1017/dap.2020.25
    https://doi.org/10.1017/dap.2020.25 [Google Scholar]
  36. Tabassi, E.
    (2023, January 26). Artificial intelligence risk management framework (AI RMF 1.0). National Institute of Standards and Technology. 10.6028/NIST.AI.100‑1
    https://doi.org/10.6028/NIST.AI.100-1 [Google Scholar]
  37. The Guardian
    (2023, May). UK schools bewildered by AI and do not trust tech firms, headteachers say [Accessed: 2025-09-04].
    [Google Scholar]
  38. The Guardian
    (2025a, July). Medical charlatans have existed through history. But AI has turbocharged them [Accessed: 2025-09-04].
    [Google Scholar]
  39. The Guardian
    (2025b, August). NHS to trial AI tool that speeds up hospital discharges [Accessed: 2025-09-04].
    [Google Scholar]
  40. The Guardian
    (2025c, August). Therapists warn AI chatbots risk harming mental health support [Accessed: 2025-09-04].
    [Google Scholar]
  41. The Guardian
    (2025d, May). Yes, AI will eventually replace some workers. But that day is still a long way off [Accessed: 2025-09-04].
    [Google Scholar]
  42. Tielman, M., Bailey, M. E., Frattolillo, F., Centeio Jorge, C., Ulfert, A.-S., & Meyer-Vitali, A.
    (2026) Multidisciplinary Perspectives on Human-AI Team Trust. Interaction Studies. 10.1075/is.24048.tie
    https://doi.org/10.1075/is.24048.tie [Google Scholar]
  43. UNESCO
    (2021) Recommendation on the ethics of artificial intelligence (tech. rep.) (Accessed: 2025-09-04). United Nations Educational, Scientific and Cultural Organization. https://unesdoc.unesco.org/ark:/48223/pf0000381137
    [Google Scholar]
  44. Wang, D., Churchill, E., Maes, P., Fan, X., Shneiderman, B., Shi, Y., & Wang, Q.
    (2020) From human-human collaboration to human-AI collaboration: Designing AI systems that can work together with people. Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing Systems, 1–6. 10.1145/3334480.3381069
    https://doi.org/10.1145/3334480.3381069 [Google Scholar]
  45. Wen, Y., Wang, J., & Chen, X.
    (2025) Trust and AI weight: Human-AI collaboration in organizational management decision-making. Frontiers in Organizational Psychology, 3, 1419403. 10.3389/forgp.2025.1419403
    https://doi.org/10.3389/forgp.2025.1419403 [Google Scholar]