Afroogh, S., Akbari, A., Malone, E., & Langarizadeh, M.
(2024) Trust
in AI: Progress, challenges, and future directions. Humanities and Social Sciences
Communications, 11 (1), 1568. 10.1057/s41599-024-04044-8
(2026) The
effect of emojis and AI reliability on team performance and trust in human-AI
teams. Interaction Studies, (Special Issue on Multidisciplinary
Perspectives on Human-AI Team Trust), 26 (2). 10.1075/is.24045.bai
(2024) Trust
in automation and the accuracy of human-algorithm teams performing one-to-one face matching
tasks. Cognitive Research: Principles and
Implications, 9 (41).
(2026) Trustworthiness
needs for the use of AI solutions in business: Ethical and empirical
considerations. Interaction Studies, (Special Issue on
Multidisciplinary Perspectives on Human-AI Team
Trust), 26 (2). 10.1075/is.24051.coe
(2026) Trusting
machine teammates: The role of personifying and objectifying language in team
communication. Interaction Studies, (Special Issue on
Multidisciplinary Perspectives on Human-AI Team
Trust), 26 (2). 10.1075/is.24050.coh
Cohen, M. C., Kim, N., Ba, Y., Pan, A., Bhatti, S., Salehi, P., Sung, J., Blasch, E., Mancenido, M. V., & Chiou, E. K.
(2025) PADTHAI-MM:
Principles-based approach for designing trustworthy, human-centered AI using the MAST
methodology. AI
Magazine, 46 (1), e70000. 10.1002/aaai.70000
de
Visser, E. J., Peeters, M. M. M., Jung, M. F., Kohn, S., Shaw, T. H., Pak, R., & Neerincx, M. A.
(2020) Towards
a theory of longitudinal trust calibration in human-robot teams. International Journal of
Social Robotics, 12 (2), 459–478. 10.1007/s12369-019-00596-x
Duan, W., Zhou, S., Scalia, M. J., Freeman, G., Gorman, J., Tolston, M., McNeese, N. J., & Funke, G.
(2025) Understanding
the processes of trust and distrust contagion in human-AI teams: A qualitative
approach. Computers in Human
Behavior, 165 (1), 108560. 10.1016/j.chb.2025.108560
(2024) Would
you trust an AI team member? Team trust in human-AI teams. Journal of Occupational and
Organizational
Psychology, 97 (3), 1212–1241. 10.1111/joop.12504
Kucukosmanoglu, M., Johnson, C. J., Pollard, K., Chhan, D., Lakhmani, S. G., Forster, D., Conklin, S., Brooks, J., Crowell, H. P., & Krausman, A.
(2026) Exploring
trust in AI-supported military teams using sentiment analysis. Interaction
Studies, (Special Issue on Multidisciplinary Perspectives on Human-AI Team
Trust), 26 (2). 10.1075/is.24046.kuc
(2007) Similarities
and differences between human-human and human-automation trust: An integrative
review. Theoretical Issues in Ergonomics
Science, 8 (4), 277–301. 10.1080/14639220500337708
(2021) A
multidimensional conception and measure of human-robot
trust. In C. S. Nam & J. B. Lyons (Eds.), Trust
in Human-Robot
Interaction (pp. 3–25). Academic
Press. 10.1016/B978-0-12-819472-0.00001-0
McNeese, N. J., Demir, M., Chiou, E. K., & Cooke, N. J.
(2021) Trust
and team performance in human-autonomy
teaming. International Journal of Electronic
Commerce, 25 (1), 51–72. 10.1080/10864415.2021.1846854
Momen, A., Tossell, C. C., Walliser, J. C., Niemyer, R., Tolston, M., Funke, G. J., & de
Visser, E. J.
(2026) Perceived
trustworthiness and moral competence of a GenAI-enabled ethical robot advisor. Interaction
Studies, (Special Issue on Multidisciplinary Perspectives on Human-AI Team
Trust), 26 (2). 10.1075/is.25072.mom
Musick, G., O’Neill, T. A., Schelble, B. G., McNeese, N. J., & Henke, J. B.
(2021) What
happens when humans believe their teammate is an AI? An investigation into humans teaming with
autonomy. Computers in Human
Behavior, 122 (1), 106852. 10.1016/j.chb.2021.106852
Naikar, N., Hoffman, R., Roth, E. M., Klein, G., Militello, L. G., & Dominguez, C.
(2025) Should
we make AI more tool-like or teammate-like? Journal of
Cognitive Engineering and Decision Making, 15553434251346904. 10.1177/15553434251346904
Nguyen, D., Cohen, M. C., Kao, H.-T., Engberon, G., Penafiel, L., Lynch, S., & Volkova, S.
(2026) Exploratory
models of human-AI teams: Leveraging human digital twins to investigate trust
development. Interaction Studies, (Special Issue on
Multidisciplinary Perspectives on Human-AI Team
Trust), 26 (2). 10.1075/is.24052.ngu
(2010) Complacency
and bias in human use of automation: An attentional integration. Human
Factors, 52 (3), 381–410. 10.1177/0018720810376055
Rahwan, I., Cebrian, M., Obradovich, N., Bongard, J., Bonnefon, J.-F., Breazeal, C., Crandall, J. W., Christakis, N. A., Couzin, I. D., Jackson, M. O., et al.
(2019) Machine behaviour. Nature, 568, 477–486.
(2021) From
satisficing to artificing: The evolution of administrative decision-making in the age of the algorithm. Data &
Policy, 3 (1), e3. 10.1017/dap.2020.25
UNESCO (2021) Recommendation on the
ethics of artificial intelligence (tech.
rep.) (Accessed: 2025-09-04). United Nations Educational, Scientific and Cultural
Organization. https://unesdoc.unesco.org/ark:/48223/pf0000381137
Wang, D., Churchill, E., Maes, P., Fan, X., Shneiderman, B., Shi, Y., & Wang, Q.
(2020) From
human-human collaboration to human-AI collaboration: Designing AI systems that can work together with
people. Extended Abstracts of the 2020 CHI Conference on Human Factors in Computing
Systems, 1–6. 10.1145/3334480.3381069
(2025) Trust
and AI weight: Human-AI collaboration in organizational management decision-making. Frontiers
in Organizational
Psychology, 3 (1), 1419403. 10.3389/forgp.2025.1419403