Volume 26, Issue 2
  • ISSN: 1572-0373
  • E-ISSN: 1572-0381

Abstract

Human-AI teamwork is no longer a topic of the future. Given the importance of trust in human teams, the question arises of how trust functions in human-AI teams. Although trust has long been studied from a human-centred perspective (e.g., in psychology and philosophy), from a computational perspective, and from the perspective of human trust in AI (e.g., in human-computer interaction), the study of trust in human-AI interaction in a team setting is still a novel field. For this reason, the MULTITTRUST (Multidisciplinary Perspectives on Human-AI Team Trust) workshop series was founded. In this paper, we present the main outcomes after three editions. Our contributions are: an overview of the shared language of concepts and definitions; an outline of the main open research challenges; and methodological guidelines for further studies of meaningful human-AI team trust. Together, these three contributions form a foundational roadmap towards a better understanding of trust in human-AI team interactions.

DOI: 10.1075/is.24048.tie

References

  1. Adadi, A., & Berrada, M.
    (2018) Peeking inside the black-box: a survey on explainable artificial intelligence (XAI), IEEE access6152138–52160. Publisher: IEEE.
    [Google Scholar]
  2. Adams, B. D., Waldherr, S., & Sartori, J.
    (2008) Trust in Teams Scale, Trust in Leaders Scale: Manual for Administration and Analyses.
    [Google Scholar]
  3. Ali, A., Azevedo-Sa, H., Tilbury, D. M., & Robert Jr, L. P.
    (2022) Heterogeneous human-robot task allocation based on artificial trust, Scientific Reports12115304.
    [Google Scholar]
  4. Anjomshoae, S., Najjar, A., Calvaresi, D., & Främling, K.
    (2019) Explainable Agents and Robots: Results from a Systematic Literature Review, in: Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AAMAS ’19, International Foundation for Autonomous Agents and Multiagent Systems, pp. 1078–1088. Place: Richland, SC.
    [Google Scholar]
  5. Aroyo, A. M., Bruyne, J. D., Dheu, O., Fosch-Villaronga, E., Gudkov, A., Hoch, H., Jones, S., Lutz, C., Sætra, H., Solberg, M., & Tamò-Larrieux, A.
    (2021) Overtrusting robots: Setting a research agenda to mitigate overtrust in automation, Paladyn, Journal of Behavioral Robotics121423–436. https://www.degruyter.com/document/doi/10.1515/pjbr-2021-0029/htmlDe Gruyter Open Access Section: Paladyn.10.1515/pjbr‑2021‑0029
    https://doi.org/10.1515/pjbr-2021-0029 [Google Scholar]
  6. Azevedo-Sa, H., Yang, X. J., Robert, L. P., & Tilbury, D. M.
    (2021) A Unified Bi-Directional Model for Natural and Artificial Trust in Human-Robot Collaboration, IEEE Robotics and Automation Letters615913–5920. conference Name: IEEE Robotics and Automation Letters.10.1109/LRA.2021.3088082
    https://doi.org/10.1109/LRA.2021.3088082 [Google Scholar]
  7. Bach, T. A., Khan, A., Hallock, H., Beltrão, G., & Sousa, S.
    (2024) A systematic literature review of user trust in ai-enabled systems: An hci perspective, International Journal of Human-Computer Interaction401. 10.1080/10447318.2022.2138826
    https://doi.org/10.1080/10447318.2022.2138826 [Google Scholar]
  8. Baier, A.
    (1986) Trust and antitrust, Ethics961231–260. https://www.jstor.org/stable/2381376
    [Google Scholar]
  9. Barredo Arrieta, A., Díaz-Rodríguez, N., J. Del Ser, Bennetot, A., Tabik, S., Barbado, A., Garcia, S., Gil-Lopez, S., Molina, D., Benjamins, R., Chatila, R., & Herrera, F.
    (2020) Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI, Information Fusion58182–115. 10.1016/j.inffus.2019.12.012
    https://doi.org/10.1016/j.inffus.2019.12.012 [Google Scholar]
  10. Berretta, S., Tausch, A., Ontrup, G., Gilles, B., Peifer, C., Kluge, A.
    (2023) Defining human-AI teaming the human-centered way: a scoping review and network analysis, Frontiers in Artificial Intelligence61. Frontiers. 10.3389/frai.2023.1250725
    https://doi.org/10.3389/frai.2023.1250725 [Google Scholar]
  11. Bobko, P., Hirshfield, L., Eloy, L., Spencer, C., Doherty, E., Driscoll, J., & Obolsky, H.
    (2022) Human-agent teaming and trust calibration: a theoretical framework, configurable testbed, empirical illustration, and implications for the development of adaptive systems, Theoretical Issues in Ergonomics Science. 1–25. Taylor & Francis.10.1080/1463922X.2022.2086644
    https://doi.org/10.1080/1463922X.2022.2086644 [Google Scholar]
  12. Braga, D. D. S., Niemann, M., Hellingrath, B., & Neto, F. B. D. L.
    (2018) Survey on computational trust and reputation models, ACM Computing Surveys511. 10.1145/3236008
    https://doi.org/10.1145/3236008 [Google Scholar]
  13. Brandizzi, N., C. Centeio Jorge, Cipollone, R., Frattolillo, F., Iocchi, L., & A.-S. Ulfert-Blank
    (2023) Multittrust: 2nd workshop on multidisciplinary perspectives on human-ai team trust, in: Proceedings of the 11th International Conference on Human-Agent Interaction, pp. 496–497.
    [Google Scholar]
  14. Breakey, H., Cadman, T., & Sampford, C.
    (2015) Conceptualizing Personal and Institutional Integrity: The Comprehensive Integrity Framework, volume 14 ofResearch in Ethical Issues in Organizations, Emerald Group Publishing Limited, pp. 1–40. 10.1108/S1529‑209620150000014001
    https://doi.org/10.1108/S1529-209620150000014001 [Google Scholar]
  15. Briggs, G., Williams, T., Jackson, R. B., & Scheutz, M.
    (2022) Why and How Robots Should Say ‘No’, International Journal of Social Robotics141. 323–339. 10.1007/s12369‑021‑00780‑y
    https://doi.org/10.1007/s12369-021-00780-y [Google Scholar]
  16. Briggs, G., Law, T., Mirsky, R., Rogers, K., & Rosero, A.
    (2024) Rebellion and Disobedience in Human-Robot Interaction (RaD-HRI), in: Companion of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, HRI ’24, Association for Computing Machinery, New York, NY, USA, pp. 1308–1310. 10.1145/3610978.3638170
    https://doi.org/10.1145/3610978.3638170 [Google Scholar]
  17. Brynjolfsson, E., & Mitchell, T.
    (2017) What can machine learning do? workforce implications, Science3581. 10.1126/science.aap8062
    https://doi.org/10.1126/science.aap8062 [Google Scholar]
  18. Burnett, C., Norman, T. J., & Sycara, K.
    (2011) Trust decision-making in multi-agent systems, in: IJCAI International Joint Conference on Artificial Intelligence. 10.5591/978‑1‑57735‑516‑8/IJCAI11‑031
    https://doi.org/10.5591/978-1-57735-516-8/IJCAI11-031 [Google Scholar]
  19. Cabiddu, F., Moi, L., Patriotta, G., & Allen, D. G.
    (2022) Why do users trust algorithms? A review and conceptualization of initial trust and trust over time, European management journal401. 685–706. Elsevier.10.1016/j.emj.2022.06.001
    https://doi.org/10.1016/j.emj.2022.06.001 [Google Scholar]
  20. Cameron, D., Collins, E. C., S. de Saille, Eimontaite, I., Greenwood, A., & Law, J.
    (2024) The Social Triad Model: Considering the Deployer in a Novel Approach to Trust in Human-Robot Interaction, International Journal of Social Robotics161. 1405–1418. 10.1007/s12369‑023‑01048‑3
    https://doi.org/10.1007/s12369-023-01048-3 [Google Scholar]
  21. Campagna, G., & Rehm, M.
    (2025) A Systematic Review of Trust Assessments in Human-Robot Interaction, Journal of Human-Robot Interaction141. 301:1–30:35. 10.1145/3706123
    https://doi.org/10.1145/3706123 [Google Scholar]
  22. Castaldo, S., Premazzi, K., & Zerbini, F.
    (2010) The meaning (s) of trust. a content analysis on the diverse conceptualizations of trust in scholarly research on business relationships, Journal of business ethics961. 657–668.
    [Google Scholar]
  23. Castelfranchi, C., & Falcone, R.
    (2010) Definitions of Trust: From Conceptual Components to the General Core, in: Trust Theory: A Socio-Cognitive and Computational Model, Wiley, pp. 7–33. https://ieeexplore.ieee.org/document/8041696. conference Name: Trust Theory: A Socio-Cognitive and Computational Model.10.1002/9780470519851.ch1
    https://doi.org/10.1002/9780470519851.ch1 [Google Scholar]
  24. (2010) Trust theory: A socio-cognitive and computational model, John Wiley & Sons.
    [Google Scholar]
  25. Chi, V. B., & Malle, B. F.
    (2023) Calibrated Human-Robot Teaching: What People Do When Teaching Norms to Robots*, in: 2023 32nd IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 1308–1314. https://ieeexplore.ieee.org/abstract/document/10309635. iSSN: 1944-9437.10.1109/RO‑MAN57019.2023.10309635
    https://doi.org/10.1109/RO-MAN57019.2023.10309635 [Google Scholar]
  26. Colquitt, J. A., & Salam, S. C.
    (2009) Foster trust through ability, benevolence, and integrity, Handbook of principles of organizational behavior: Indispensable knowledge for evidence-based management. 389–404. A John Wiley and Sons, Ltd, Publication.10.1002/9781119206422.ch21
    https://doi.org/10.1002/9781119206422.ch21 [Google Scholar]
  27. Costa, A. C., Fulmer, C. A., & Anderson, N. R.
    (2018) Trust in work teams: An integrative review, multilevel model, and future directions, Journal of Organizational Behavior391. 10.1002/job.2213
    https://doi.org/10.1002/job.2213 [Google Scholar]
  28. Degli-Esposti, S., & Arroyo, D.
    (2021) Trustworthy humans and machines, in: Trust and Transparency in an Age of Surveillance, 1 ed., Routledge, London, pp. 201–220. https://www.taylorfrancis.com/books/9781003120827/chapters/10.4324/9781003120827-15. 10.4324/9781003120827‑15
    https://doi.org/10.4324/9781003120827-15 [Google Scholar]
  29. Directorate-General for Communications Networks, Content and Technology (European Commission), Grupa ekspertów wysokiego szczebla ds. sztucznej inteligencji, Ethics guidelines for trustworthy AI, Publications Office of the European Union
    Directorate-General for Communications Networks, Content and Technology (European Commission), Grupa ekspertów wysokiego szczebla ds. sztucznej inteligencji, Ethics guidelines for trustworthy AI, Publications Office of the European Union (2019) https://data.europa.eu/doi/10.2759/346720
  30. Duan, W., Flathmann, C., McNeese, N., Scalia, M. J., Zhang, R., Gorman, J., Freeman, G., Zhou, S., Hauptman, A. I., & Yin, X.
    (2025) Trusting Autonomous Teammates in Human-AI Teams — A Literature Review, in: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, CHI ’25, Association for Computing Machinery, New York, NY, USA, pp. 1–23. 10.1145/3706598.3713527
    https://doi.org/10.1145/3706598.3713527 [Google Scholar]
  31. Duarte, R. d. B., Correia, F., Arriaga, P., & Paiva, A.
    (2023) AI Trust: Can Explainable AI Enhance Warranted Trust? — de Brito Duarte — 2023 — Human Behavior and Emerging Technologies — Wiley Online Library, Human behavior and Emerging technologies (2023) https://onlinelibrary.wiley.com/doi/10.1155/2023/4637678
    [Google Scholar]
  32. Esterwood, C., & Robert Jr., L. P.
    (2023) Three Strikes and you are out!: The impacts of multiple human-robot trust violations and repairs on robot trustworthiness, Computers in Human Behavior1421. 107658. https://www.sciencedirect.com/science/article/pii/S0747563223000092. 10.1016/j.chb.2023.107658
    https://doi.org/10.1016/j.chb.2023.107658 [Google Scholar]
  33. Falcone, R., & Castelfranchi, C.
    (2004) Trust dynamics: How trust is influenced by direct experiences and by trust itself, in: 3rd International Joint Conference on Autonomous Agents and Multiagent Systems (AAMAS 2004), 19–23August, New York, NY, USA, IEEE Computer Society 2004, pp. 740–747. https://doi.ieeecomputersociety.org/10.1109/AAMAS.2004.10084. 10.1109/AAMAS.2004.10084
    https://doi.org/10.1109/AAMAS.2004.10084 [Google Scholar]
  34. Falcone, R., Pezzulo, G., & Castelfranchi, C.
    (2002) A fuzzy approach to a belief-based trust computation, in: R. Falcone, K. S. Barber, L. Korba, M. P. Singh (Eds.), Trust, Reputation, and Security: Theories and Practice, AAMAS 2002 International Workshop, Bologna, Italy, July 15, 2002, Selected and Invited Papers, volume 2631 of Lecture Notes in Computer Science, Springer, pp. 73–86. 10.1007/3‑540‑36609‑1_7
    https://doi.org/10.1007/3-540-36609-1_7 [Google Scholar]
  35. Feitosa, J., Grossman, R., Kramer, W. S., & Salas, E.
    (2020) Measuring team trust: A critical and meta-analytical review, Journal of Organizational Behavior411. 479–501.
    [Google Scholar]
  36. Felzmann, H., Fosch-Villaronga, E., Lutz, C., & Tamo-Larrieux, A.
    (2020) Towards Transparency by Design for Artificial Intelligence, Science and Engineering Ethics261. 3333–3361. 10.1007/s11948‑020‑00276‑4
    https://doi.org/10.1007/s11948-020-00276-4 [Google Scholar]
  37. Fullam, K. K., Klos, T. B., Muller, G., Sabater, J., Schlosser, A., Topol, Z., Barber, K. S., Rosenschein, J. S., Vercouter, L., & Voss, M.
    (2005) A specification of the agent reputation and trust (art) testbed: experimentation and competition for trust in agent societies, in: Proceedings of the Fourth International Joint Conference on Autonomous Agents and Multiagent Systems, AAMAS ’05, Association for Computing Machinery, New York, NY, USA, p.512–518. 10.1145/1082473.1082551
    https://doi.org/10.1145/1082473.1082551 [Google Scholar]
  38. Fulmer, C. A., & Gelfand, M. J.
    (2012) At what level (and in whom) we trust: Trust across multiple organizational levels, Journal of management381. 1167–1230.
    [Google Scholar]
  39. Fulmer, C. A., & Ostroff, C.
    (2021) Trust conceptualizations across levels of analysis, in: Understanding trust in organizations, Routledge, pp. 14–42.
    [Google Scholar]
  40. Georganta, E., & Ulfert, A.-S.
    (2024) Would you trust an ai team member? team trust in human-ai teams, Journal of Occupational and Organizational Psychology.
    [Google Scholar]
  41. Glikson, E., & Woolley, A. W.
    (2020) Human trust in artificial intelligence: Review of empirical research, Academy of Management Annals141. 10.5465/annals.2018.0057
    https://doi.org/10.5465/annals.2018.0057 [Google Scholar]
  42. (2020) Human Trust in Artificial Intelligence: Review of Empirical Research, Academy of Management Annals. 10.5465/annals.2018.0057
    https://doi.org/10.5465/annals.2018.0057 [Google Scholar]
  43. Gulati, S., Sousa, S., & Lamas, D.
    (2019) Design, Development and Evaluation of a Human-Computer Trust Scale, Behaviour & Information Technology381. 1004–1015Taylor &; Francis.10.1080/0144929X.2019.1656779
    https://doi.org/10.1080/0144929X.2019.1656779 [Google Scholar]
  44. Guo, Y., & Yang, X. J.
    (2020) Modeling and Predicting Trust Dynamics in Human-Robot Teaming: A Bayesian Inference Approach, International Journal of Social Robotics. 10.1007/s12369‑020‑00703‑3Springer Science and Business Media B.V.
    https://doi.org/10.1007/s12369-020-00703-3 [Google Scholar]
  45. Hannibal, G., Dobrosovestnova, A., & Weiss, A.
    (2022) Tolerating Untrustworthy Robots: Studying Human Vulnerability Experience within a Privacy Scenario for Trust in Robots, in: 2022 31st IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), pp. 821–828. https://ieeexplore.ieee.org/abstract/document/9900830. 10.1109/RO‑MAN53752.2022.9900830, iSSN: 1944-9437.
    https://doi.org/10.1109/RO-MAN53752.2022.9900830 [Google Scholar]
  46. Herzig, A., Lorini, E., Hubner, J. F., & Vercouter, L.
    (2009) A logic of trust and reputation, Logic Journal of the IGPL181. 214–244. 10.1093/jigpal/jzp077
    https://doi.org/10.1093/jigpal/jzp077 [Google Scholar]
  47. Hoff, K. A., & Bashir, M.
    (2015) Trust in automation: Integrating empirical evidence on factors that influence trust, Human Factors571. 10.1177/0018720814547570
    https://doi.org/10.1177/0018720814547570 [Google Scholar]
  48. Huber, S., Weppert, L., Baumeister, L., Happel, O., & Grundgeiger, T.
    (2025) Team Roles of Artificial Intelligence in Anesthesiology — A Scoping Review, in: Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems, CHI EA ’25, Association for Computing Machinery, New York, NY, USA, pp. 1–13. 10.1145/3706599.3720186
    https://doi.org/10.1145/3706599.3720186 [Google Scholar]
  49. Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems version 2, Technical Report, IEEE
    Ethically aligned design: A vision for prioritizing human well-being with autonomous and intelligent systems version 2, Technical Report, IEEE 2018.
  50. Jacovi, A., Marasović’, A., Miller, T., & Goldberg, Y.
    (2021) Formalizing trust in artificial intelligence: Prerequisites, causes and goals of human trust in AI, in: Proceedings of the 2021 ACM conference on fairness, accountability, and transparency, pp. 624–635.
    [Google Scholar]
  51. Jensen, T., & Khan, M. M. H.
    (2022) I’m Only Human: The Effects of Trust Dampening by Anthropomorphic Agents, in: J. Y. C. Chen, G. Fragomeni, H. Degen, S. Ntoa (Eds.). HCI International 2022 — Late Breaking Papers: Interacting with eXtended Reality and Artificial Intelligence, Springer Nature Switzerland, Cham, pp. 285–306. 10.1007/978‑3‑031‑21707‑4_21
    https://doi.org/10.1007/978-3-031-21707-4_21 [Google Scholar]
  52. Johnson, M., & Bradshaw, J. M.
    (2021) The role of interdependence in trust, in: Trust in Human-Robot Interaction, Elsevier, pp. 379–403.
    [Google Scholar]
  53. Johnson, M., Bradshaw, J. M., Feltovich, P. J., Jonker, C. M., M. B. van Riemsdijk, & Sierhuis, M.
    (2014) Coactive design: designing support for interdependence in joint activity, J. Hum.-Robot Interact. 31. 43–69. 10.5898/JHRI.3.1.Johnson
    https://doi.org/10.5898/JHRI.3.1.Johnson [Google Scholar]
  54. Johnson, M., Bradshaw, J. M., & Feltovich, P. J.
    (2018) Tomorrow’s human-machine design tools: From levels of automation to interdependencies, Journal of Cognitive Engineering and Decision Making121. 77–82. 10.1177/1555343417736462
    https://doi.org/10.1177/1555343417736462 [Google Scholar]
  55. Jong, B. A. De, & Elfring, T.
    (2010) How does trust affect the performance of ongoing teams? the mediating role of reflexivity, monitoring, and effort, Academy of Management journal531, 535–549.
    [Google Scholar]
  56. Jorge, C. Centeio, & A. S. Ulfert-Blank
    (2023) Multittrust-multidisciplinary perspectives on human-ai team trust, in: CEUR Workshop Proceedings, volume34561, CEUR-WS, pp. 132–136.
    [Google Scholar]
  57. Jorge, C. Centeio, Mehrotra, S., Tielman, M. L., & Jonker, C. M.
    (2021) Trust should correspond to trustworthiness: A formalization of appropriate mutual trust in human-agent teams, in: 22nd International Trust Workshop.
    [Google Scholar]
  58. Jorge, C. Centeio, Tielman, M. L., & Jonker, C. M.
    (2022) Artificial trust as a tool in human-ai teams, in: D. Sakamoto, A. Weiss, L. M. Hiatt, M. Shiomi (Eds.), ACM/IEEE International Conference on Human-Robot Interaction, HRI 2022, Sapporo, Hokkaido, Japan, March7 — 10 2022, IEEE / ACM, pp. 1155–1157. 10.1109/HRI53351.2022.9889652
    https://doi.org/10.1109/HRI53351.2022.9889652 [Google Scholar]
  59. C. Centeio Jorge, Jonker, C. M., & Tielman, M. L.
    (2024) How should an AI trust its human teammates? exploring possible cues of artificial trust, ACM Transactions of Interactive Intelligent Systems141, 51:1–5:26. 10.1145/3635475
    https://doi.org/10.1145/3635475 [Google Scholar]
  60. Kahr, P., Rooks, G., Snijders, C., Willemsen, M. C.
    (2025) Good Performance Isn’t Enough to Trust AI: Lessons from Logistics Experts on their Long-Term Collaboration with an AI Planning System, in: Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, CHI ’25, Association for Computing Machinery, New York, NY, USA, pp. 1–16. 10.1145/3706598.3713099
    https://doi.org/10.1145/3706598.3713099 [Google Scholar]
  61. Kaur, D., Uslu, S., Rittichier, K. J., & Durresi, A.
    (2023) Trustworthy artificial intelligence: A review, ACM Computing Surveys551. 10.1145/3491209
    https://doi.org/10.1145/3491209 [Google Scholar]
  62. Kok, B. C., & Soh, H.
    (2020) Trust in robots: Challenges and opportunities, Current Robotics Reports11, 297–309.
    [Google Scholar]
  63. Kolomaznik, M., Petrik, V., Slama, M., & Jurik, V.
    (2024) The role of socio-emotional attributes in enhancing human-AI collaboration, Frontiers in Psychology151. https://www.frontiersin.org/journals/psychology/articles/10.3389/fpsyg.2024.1369957/fullFrontiers.10.3389/fpsyg.2024.1369957
    https://doi.org/10.3389/fpsyg.2024.1369957 [Google Scholar]
  64. Kox, E. S., Kerstholt, J. H., Hueting, T. F., & P. W. de Vries
    (2021) Trust repair in human-agent teams: the effectiveness of explanations and expressing regret, Autonomous Agents and Multi-Agent Systems351, 30. Publisher: Springer.
    [Google Scholar]
  65. Kox, E., Kerstholt, J., Hueting, T., & P. de Vries
    (2021) Trust repair in human-agent teams: the effectiveness of explanations and expressing regret, Autonomous agents and multi-agent systems351. 10.1007/s10458‑021‑09515‑9
    https://doi.org/10.1007/s10458-021-09515-9 [Google Scholar]
  66. Kumar, S., Savur, C., & Sahin, F.
    (2021) Survey of Human-Robot Collaboration in Industrial Settings: Awareness, Intelligence, and Compliance, IEEE Transactions on Systems, Man, and Cybernetics: Systems511, 280–297. https://ieeexplore.ieee.org/document/9302892. 10.1109/TSMC.2020.3041231
    https://doi.org/10.1109/TSMC.2020.3041231 [Google Scholar]
  67. Küper, A., & Krämer, N.
    (2023) Psychological Traits and Appropriate Reliance: Factors Shaping Trust in AI, International Journal of Human-Computer Interaction01, 1–17. Taylor & Francis _eprint: 10.1080/10447318.2024.2348216
    https://doi.org/10.1080/10447318.2024.2348216 [Google Scholar]
  68. Lascaux, A.
    (2008) Trust and uncertainty: a critical re-assessment, International Review of Sociology181, 1–18. URL:, publisher: Routledge _eprint: 10.1080/03906700701823613
    https://doi.org/10.1080/03906700701823613 [Google Scholar]
  69. Lee, M. H., & Chew, C. J.
    (2023) Understanding the Effect of Counterfactual Explanations on Trust and Reliance on AI for Human-AI Collaborative Clinical Decision Making, Proceedings of ACM Human-Computer Interaction71. New York, NY: Association for Computing Machinery. 10.1145/3610218
    https://doi.org/10.1145/3610218 [Google Scholar]
  70. Lee, J. D., & See, K. A.
    (2004) Trust in automation: Designing for appropriate reliance, Human Factors461, 50–80. pMID: 15151155.10.1518/hfes.46.1.50_30392
    https://doi.org/10.1518/hfes.46.1.50_30392 [Google Scholar]
  71. Lewicki, R. J., & Brinsfield, C.
    (2015) Trust research: measuring trust beliefs and behaviours, in: Handbook of research methods on trust, Edward Elgar Publishing.
    [Google Scholar]
  72. Lewis, J. D., & Weigert, A.
    (1985) Trust as a Social Reality, Social Forces631, 967–985. 10.1093/sf/63.4.967
    https://doi.org/10.1093/sf/63.4.967 [Google Scholar]
  73. Li, B., Qi, P., Liu, B., Di, S., Liu, J., Pei, J., Yi, J., & Zhou, B.
    (2023) Trustworthy ai: From principles to practices, ACM Computing Surveys551. 10.1145/3555803
    https://doi.org/10.1145/3555803 [Google Scholar]
  74. Luhmann, N.
    (2018) Trust and power, John Wiley & Sons.
    [Google Scholar]
  75. Malle, B. F., & Ullman, D.
    (2023) Measuring Human-Robot Trust with the MDMT (Multi-Dimensional Measure of Trust). arXiv:2311.14887 [cs].10.48550/arXiv.2311.14887
    https://doi.org/10.48550/arXiv.2311.14887 [Google Scholar]
  76. Mattioli, J., Sohier, H., Delaborde, A., Pedroza, G., Amokrane, K., Awadid, A., Chihani, Z., & Khalfaoui, S.
    (2023) Towards a holistic approach for AI trustworthiness assessment based upon aids for multi-criteria aggregation, in: G. Pedroza, X. Huang, X. C. Chen, A. Theodorou (Eds.), SafeAI 2023 — The AAAI’s Workshop on Artificial Intelligence Safety, volume 3381, Washington, D.C.: AAAI. https://hal.science/hal-04086455
    [Google Scholar]
  77. Mayer, R. C., Davis, J. H., & Schoorman, F. D.
    (1995) An integrative model of organizational trust, Academy of management review201, 709–734. Publisher: Academy of Management Briarcliff Manor, NY10510.
    [Google Scholar]
  78. McAllister, D. J.
    (1995) Affect-and cognition-based trust as foundations for interpersonal cooperation in organizations, Academy of management journal381, 24–59.
    [Google Scholar]
  79. McKnight, D., & Chervany, N.
    (1996) The Meanings of Trust.
    [Google Scholar]
  80. Mehrotra, S., Degachi, C., Vereschak, O., Jonker, C. M., & Tielman, M. L.
    (2024) A Systematic Review on Fostering Appropriate Trust in Human-AI Interaction: Trends, Opportunities and Challenges, ACM Journal of Responsible Computing. just Accepted.10.1145/3696449
    https://doi.org/10.1145/3696449 [Google Scholar]
  81. Miller, T.
    (2019) Explanation in artificial intelligence: Insights from the social sciences, Artificial Intelligence2671, 1–38. 10.1016/j.artint.2018.07.007
    https://doi.org/10.1016/j.artint.2018.07.007 [Google Scholar]
  82. Mui, L., Halberstadt, A., & Mohtashemi, M.
    (2003) Evaluating Reputation in Multi-agents Systems, in: R. Falcone, S. Barber, L. Korba, M. Singh (Eds.), Trust, Reputation, and Security: Theories and Practice, Springer, Berlin, Heidelberg, pp. 123–137. 10.1007/3‑540‑36609‑1_10
    https://doi.org/10.1007/3-540-36609-1_10 [Google Scholar]
  83. Nam, C. S., & Lyons, J. B.
    (Eds.) Trust in Human-Robot Interaction, Elsevier 202010.1016/C2018‑0‑04443‑6
    https://doi.org/10.1016/C2018-0-04443-6 [Google Scholar]
  84. Okamura, K., & Yamada, S.
    (2020) Adaptive trust calibration for human-AI collaboration, Plos one151) e0229132. Publisher: Public Library of Science San Francisco, CA USA.
    [Google Scholar]
  85. Parasuraman, R., & Riley, V.
    (1997) Humans and automation: Use, misuse, disuse, abuse, Human factors391, 230–253. Publisher: SAGE Publications Sage CA: Los Angeles, CA.
    [Google Scholar]
  86. Pinyol, I., & Sabater-Mir, J.
    (2013) Computational trust and reputation models for open multi-agent systems: a review, Artificial Intelligence Review401, 1–25. 10.1007/s10462‑011‑9277‑z
    https://doi.org/10.1007/s10462-011-9277-z [Google Scholar]
  87. Pouryousefi, S., & Tallant, J.
    (2023) Empirical and philosophical reflections on trust, Journal of the American Philosophical Association91. 10.1017/apa.2022.14
    https://doi.org/10.1017/apa.2022.14 [Google Scholar]
  88. Ramchurn, S. D., Huynh, D., & Jennings, N. R.
    (2004) Trust in multi-agent systems, Knowledge Engineering Review191. 10.1017/S0269888904000116
    https://doi.org/10.1017/S0269888904000116 [Google Scholar]
  89. Ramchurn, S. D., Stein, S., & Jennings, N. R.
    (2021) Trustworthy human-AI partnerships, iScience241, 102891. https://www.sciencedirect.com/science/article/pii/S2589004221008592. 10.1016/j.isci.2021.102891
    https://doi.org/10.1016/j.isci.2021.102891 [Google Scholar]
  90. Riedl, R.
    (2022) Is trust in artificial intelligence systems related to user personality?Review of empirical evidence and future research directions, Electronic Markets321, 2021–2051. 10.1007/s12525‑022‑00594‑4
    https://doi.org/10.1007/s12525-022-00594-4 [Google Scholar]
  91. Riegelsberger, J., Sasse, M. A., & McCarthy, J. D.
    (2005) The mechanics of trust: A framework for research and design, International Journal of Human Computer Studies621. 10.1016/j.ijhcs.2005.01.001
    https://doi.org/10.1016/j.ijhcs.2005.01.001 [Google Scholar]
  92. Rix, J.
    (2022) From tools to teammates: Conceptualizing humans’ perception of machines as teammates with a systematic literature review, in: Proceedings of the 55th Hawaii International Conference on System Sciences. 10.24251/hicss.2022.048
    https://doi.org/10.24251/hicss.2022.048 [Google Scholar]
  93. Robinette, P., Li, W., Allen, R., Howard, A. M., & Wagner, A. R.
    (2016) Overtrust of robots in emergency evacuation scenarios, in: 2016 11th ACM/IEEE International Conference on Human-Robot Interaction (HRI), IEEE, pp. 101–108. 10.1109/HRI.2016.7451740
    https://doi.org/10.1109/HRI.2016.7451740 [Google Scholar]
  94. Rousseau, D. M., Sitkin, S. B., Burt, R. S., & Camerer, C.
    (1998) Not so different after all: A crossdiscipline view of trust, Academy of management review231, 393–404. https://www.jstor.org/stable/259285, publisher: Academy of Management Briarcliff Manor, NY10510.
    [Google Scholar]
  95. Sabater, J., & Sierra, C.
    (2005) Review on computational trust and reputation models, Artificial Intelligence Review241, 33–60. 10.1007/s10462‑004‑0041‑5
    https://doi.org/10.1007/s10462-004-0041-5 [Google Scholar]
  96. Sabater-Mir, J., & Vercouter, L.
    (2013) Trust and reputation in multiagent systems, Multiagent systems) 381. Publisher: MIT Press.
    [Google Scholar]
  97. (2013) Trust and reputation in multiagent systems, Multiagent systems) 381.
    [Google Scholar]
  98. Salas, E., Sims, D. E., & Burke, C. S.
    (2005) Is there a “big five” in teamwork?, Small group research361, 555–599.
    [Google Scholar]
  99. Sapp, J. E., Torre, D. M., Larsen, K. L., Holmboe, E. S., & Durning, S. J.
    (2019) Trust in group decisions: A scoping review, BMC Medical Education191. 10.1186/s12909‑019‑1726‑4
    https://doi.org/10.1186/s12909-019-1726-4 [Google Scholar]
  100. Schemmer, M., Kuehl, N., Benz, C., Bartos, A., & Satzger, G.
    (2023) Appropriate Reliance on AI Advice: Conceptualization and the Effect of Explanations, in: Proceedings of the 28th International Conference on Intelligent User Interfaces, Iui ’23, Association for Computing Machinery, New York, NY, USA, pp. 410–422. event-place: Sydney, NSW, Australia.10.1145/3581641.3584066
    https://doi.org/10.1145/3581641.3584066 [Google Scholar]
  101. Schmutz, J. B., Outland, N., Kerstan, S., Georganta, E., & Ulfert, A.-S.
    (2024) AI-teaming: Redefining collaboration in the digital era, Current Opinion in Psychology581, 101837. https://www.sciencedirect.com/science/article/pii/S2352250X24000502. 10.1016/j.copsyc.2024.101837
    https://doi.org/10.1016/j.copsyc.2024.101837 [Google Scholar]
  102. Seraj, E., & Gombolay, M.
    (2020) Coordinated Control of UAVs for Human-Centered Active Sensing of Wildfires, in: 2020 American Control Conference (ACC), pp. 1845–1852. https://ieeexplore.ieee.org/document/9147613. iSSN: 2378-5861.10.23919/ACC45564.2020.9147613
    https://doi.org/10.23919/ACC45564.2020.9147613 [Google Scholar]
  103. F. Santoni de Sio, & J. Van den Hoven
    (2018) Meaningful human control over autonomous systems: A philosophical account, Frontiers in Robotics and AI) 15. Publisher: Frontiers.
    [Google Scholar]
  104. Spain, R. D., Bustamante, E. A., & Bliss, J. P.
    (2008) Towards an empirically developed scale for system trust: Take two, in: Proceedings of the human factors and ergonomics society annual meeting, volume 52, SAGE Publications Sage CA: Los Angeles, CA, pp. 1335–1339. Issue: 19.
    [Google Scholar]
  105. Stuck, R. E., Holthausen, B. E., & Walker, B. N.
    (2021) The role of risk in human-robot trust, in: Trust in human-robot interaction, Elsevier, pp. 179–194.
    [Google Scholar]
  106. Surendran, V., & Wagner, A. R.
    (2019) Your robot is watching: Using surface cues to evaluate the trustworthiness of human actions, in: 28th IEEE International Conference on Robot and Human Interactive Communication, RO-MAN 2019, New Delhi, India, October14–18, IEEE, pp. 1–8. 10.1109/RO‑MAN46459.2019.8956343
    https://doi.org/10.1109/RO-MAN46459.2019.8956343 [Google Scholar]
  107. Tielman, M. L., Meyer-Vitali, A., Bailey, M., & Frattolillo, F.
    (2024) MULTITTRUST: 3rd workshop on multidisciplinary perspectives on human-AI team trust, in: Proceedings of HHAI 2024 Workshops, CEUR-WS. https://ceur-ws.org/Vol-3825/prefaceW5.pdf
  108. Tolmeijer, S., Weiss, A., Hanheide, M., Lindner, F., Powers, T. M., Dixon, C., & Tielman, M. L.
    (2020) Taxonomy of Trust-Relevant Failures and Mitigation Strategies, in: Proceedings of the 2020 ACM/IEEE International Conference on Human-Robot Interaction, pp. 3–12.
  109. Tomlinson, E. C., & Mayer, R. C.
    (2009) The Role of Causal Attribution Dimensions in Trust Repair, The Academy of Management Review, 34(1), 85–104. https://www.jstor.org/stable/27759987
  110. Tucci, V., Saary, J., & Doyle, T. E.
    (2022) Factors influencing trust in medical artificial intelligence for healthcare professionals: a narrative review, Journal of Medical Artificial Intelligence, 5.
    https://doi.org/10.21037/jmai-21-25
  111. Ulfert, A.-S., Georganta, E., Centeio Jorge, C., Mehrotra, S., & Tielman, M. L.
    (2024) Shaping a multidisciplinary understanding of team trust in human-AI teams: a theoretical framework, European Journal of Work and Organizational Psychology, 33(1), 158–171.
  112. Ullman, D., & Malle, B. F.
    (2018) What Does it Mean to Trust a Robot? Steps Toward a Multidimensional Measure of Trust, in: Companion of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, HRI ’18, Association for Computing Machinery, New York, NY, USA, pp. 263–264.
    https://doi.org/10.1145/3173386.3176991
  113. Urbano, J., Rocha, A. P., & Oliveira, E.
    (2011) Computational trust: A review, ACM Computing Surveys, 43, 1–36.
    https://doi.org/10.1145/1824795.1824799
  114. Vasconcelos, H., Jörke, M., Grunde-McLaughlin, M., Gerstenberg, T., Bernstein, M. S., & Krishna, R.
    (2023) Explanations Can Reduce Overreliance on AI Systems During Decision-Making, Proceedings of the ACM on Human-Computer Interaction, 7.
    https://doi.org/10.1145/3579605
  115. Verhagen, R. S., Neerincx, M. A., & Tielman, M. L.
    (2022) The influence of interdependence and a transparent or explainable communication style on human-robot teamwork, Frontiers in Robotics and AI, 9.
  116. Verhagen, R. S., Neerincx, M. A., & Tielman, M. L.
    (2024) Meaningful human control and variable autonomy in human-robot teams for firefighting, Frontiers in Robotics and AI, 11.
    https://doi.org/10.3389/frobt.2024.1323980
  117. Verhagen, R. S., Marcu, A., Neerincx, M. A., & Tielman, M. L.
    (2024) The Influence of Interdependence on Trust Calibration in Human-Machine Teams, in: HHAI 2024: Hybrid Human AI Systems for the Social Good, IOS Press, pp. 300–314.
    https://doi.org/10.3233/FAIA240203
  118. Vinanzi, S., Patacchiola, M., Chella, A., & Cangelosi, A.
    (2018) Would a robot trust you? Developmental robotics model of trust and theory of mind, in: A. Chella, I. Infantino, A. Lieto (Eds.), Proceedings of the 6th International Workshop on Artificial Intelligence and Cognition, Palermo, Italy, July 2–4, 2018, volume 2418 of CEUR Workshop Proceedings, CEUR-WS.org, p. 74. https://ceur-ws.org/Vol-2418/paper7.pdf
  119. Visser, E. J. de, Pak, R., & Shaw, T. H.
    (2018) From automation to autonomy: the importance of trust repair in human-machine interaction, Ergonomics, 61, 1409–1427.
    https://doi.org/10.1080/00140139.2018.1457725
  120. Visser, E. J. de, Peeters, M. M. M., Jung, M. F., Kohn, S., Shaw, T. H., Pak, R., & Neerincx, M. A.
    (2020) Towards a Theory of Longitudinal Trust Calibration in Human-Robot Teams, International Journal of Social Robotics, 12, 459–478.
    https://doi.org/10.1007/s12369-019-00596-x
  121. Visser, E. J. de, Momen, A., Walliser, J. C., Kohn, S. C., Shaw, T. H., & Tossell, C. C.
    (2023) Mutually Adaptive Trust Calibration in Human-AI Teams. https://ceur-ws.org/Vol-3456/short4-8.pdf
  122. Waa, J. van der, Diggelen, J. van, Cavalcante Siebert, L., Neerincx, M., & Jonker, C.
    (2020) Allocation of Moral Decision-Making in Human-Agent Teams: A Pattern Approach, in: D. Harris, W.-C. Li (Eds.), Engineering Psychology and Cognitive Ergonomics. Cognition and Design, Springer International Publishing, Cham, pp. 203–220.
    https://doi.org/10.1007/978-3-030-49183-3_16
  123. Wagner, A. R., Borenstein, J., & Howard, A.
    (2018) Overtrust in the robotic age, Communications of the ACM, 61, 22–24.
    https://doi.org/10.1145/3241365
  124. Winikoff, M.
    (2017) Towards Trusting Autonomous Systems, in: International Workshop on Engineering Multi-Agent Systems, Springer, pp. 3–20.
    https://doi.org/10.1007/978-3-319-91899-0_1
  125. Youssef, M., Abdeslam, E.-N., & Mohamed, D.
    (2015) A JADE based testbed for evaluating computational trust models, in: 2015 10th International Conference on Intelligent Systems: Theories and Applications (SITA), pp. 1–7.
    https://doi.org/10.1109/SITA.2015.7358407
  126. Zerilli, J., Bhatt, U., & Weller, A.
    (2022) How transparency modulates trust in artificial intelligence, Patterns, 3, 100455.
    https://doi.org/10.1016/j.patter.2022.100455
  127. Zhang, Y., Liao, Q. V., & Bellamy, R. K. E.
    (2020) Effect of Confidence and Explanation on Accuracy and Trust Calibration in AI-Assisted Decision Making, in: Proceedings of the 2020 Conference on Fairness, Accountability, and Transparency, FAT* ’20, Association for Computing Machinery, New York, NY, USA, pp. 295–305.
    https://doi.org/10.1145/3351095.3372852
  128. Zhang, Q., Lee, M. L., & Carter, S.
    (2022) You Complete Me: Human-AI Teams and Complementary Expertise, in: Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems, CHI ’22, Association for Computing Machinery, New York, NY, USA, pp. 1–28.
    https://doi.org/10.1145/3491102.3517791