Interaction Studies - Volume 26, Issue 2, 2025
- Multidisciplinary perspectives on human-AI team trust
pp. 164–199
Abstract: Human-AI teamwork is no longer a topic of the future. Given the importance of trust in human teams, the question arises of how trust functions in human-AI teams. Although trust has long been studied from a human-centred perspective (e.g. in psychology and philosophy), from a computational perspective, and from the perspective of human trust in AI (e.g. in human-computer interaction), the study of trust in human-AI interaction in a team setting is still a novel field. For this reason, the MULTITTRUST (Multidisciplinary Perspectives on Human-AI Team Trust) workshop series was founded. In this paper, we present the main outcomes of its first three editions. Our contributions are: an overview of a shared language of concepts and definitions; an outline of the main open research challenges; and methodological guidelines for further studies of meaningful human-AI team trust. Together, these three contributions form a foundational roadmap towards a better understanding of trust in human-AI team interactions.
- Trusting machine teammates
Author(s): Myke C. Cohen, Erin K. Chiou and Nancy J. Cooke
pp. 200–228
Abstract: Team communication content can provide insights into teammates' coordination processes and perceptions of one another. Using a simulated aircraft reconnaissance team task testbed, we investigate how personifying and objectifying communication content relates to people's trust in and anthropomorphism of machine teammates, and to overall team performance. A total of 44 participants were paired and assigned to one of two unique team roles alongside a synthetic pilot agent. Instances of verbal personification and objectification that occurred during the task were captured and compared to team performance, as well as to questionnaire responses related to participants' trust in, and anthropomorphizing of, the synthetic pilot. Verbal personifications were not correlated with trust or anthropomorphism, but converged for the two human roles over time, along with a convergence in trust towards the synthetic agent. Verbal objectifications, on the other hand, were negatively correlated with the perceived trustworthiness and anthropomorphism of a teammate. Neither verbal personifications nor objectifications were related to team performance. Our findings suggest that people verbally personify machines to ease communication, and that the processes underlying tendencies to verbally personify and objectify machines are related to those that influence trust and anthropomorphism.
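To illustrate the style of analysis this abstract reports, here is a minimal sketch of correlating per-participant objectification counts with trust ratings. All variable names and data below are hypothetical illustrations, not the authors' materials or results.

```python
# Minimal sketch: relate counts of verbal objectifications to trust ratings.
# Data are invented placeholders for illustration only.
from scipy.stats import spearmanr

# Hypothetical per-participant objectification counts and trust scores (1-7 scale)
objectification_counts = [0, 3, 1, 5, 2, 4, 0, 6]
trust_ratings = [6.1, 4.2, 5.8, 3.5, 5.0, 3.9, 6.4, 3.1]

# Spearman's rank correlation is a common choice for ordinal survey data
rho, p_value = spearmanr(objectification_counts, trust_ratings)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.3f}")
```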
- Exploring trust in AI-supported military teams using sentiment analysis
pp. 229–266
Abstract: Examining sentiment in team communications can provide information about trust among teammates. Natural language processing (NLP) models offer an efficient means of sentiment analysis. However, military teams and other professional teams use language that differs from what NLP models are trained on, which can lead to inaccurate sentiment analysis. This study investigates the novel application of two advanced NLP models, DistilBERT and GPT-2, for sentiment analysis of expert military teams conducting AI-supported combat missions in a high-fidelity simulation environment. Our fine-tuning process resulted in improved sentiment classification accuracy. The sentiment measures also correlated with measures of team trust and trust in the AI systems, providing valuable insight into the relationship between sentiment and trust in human-AI teaming scenarios. The generalized approach we describe may be useful for adapting sentiment analysis and NLP techniques to military teams, and may help measure trust dynamics and team states in integrated human-machine teams.
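As a concrete sketch of the kind of fine-tuning the abstract alludes to, the snippet below adapts DistilBERT for sentiment classification using the Hugging Face transformers library. The placeholder utterances, label scheme, and hyperparameters are assumptions for illustration; the authors' actual transcripts and training setup are not given here.

```python
# Minimal sketch: fine-tune DistilBERT for 3-way sentiment classification.
# Dataset contents are hypothetical stand-ins for domain-specific utterances.
from transformers import (AutoTokenizer, AutoModelForSequenceClassification,
                          Trainer, TrainingArguments)
from datasets import Dataset

tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "distilbert-base-uncased", num_labels=3)  # negative / neutral / positive

# Hypothetical in-domain utterances with sentiment labels (0=neg, 1=neu, 2=pos)
data = Dataset.from_dict({
    "text": ["Good copy on that target.", "We lost the feed again."],
    "label": [2, 0],
})
data = data.map(
    lambda x: tokenizer(x["text"], truncation=True,
                        padding="max_length", max_length=64),
    batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=data,
)
trainer.train()
```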
- Exploratory models of human-AI teams
pp. 267–297
Abstract: As human-agent teaming (HAT) research continues to grow, computational methods for modeling HAT behaviors and measuring HAT effectiveness also continue to develop. One rising method involves the use of human digital twins (HDTs) to approximate human behaviors and socio-emotional-cognitive reactions to AI-driven agent team members. To help HDT research effectively model human trust in HATs, we offer two lines of insight. First, through a review of the HAT trust literature, we identify key characteristics and attributes of trust that must be considered in order to properly conceptualize, model, and measure trust. Through this review, we outline the theoretical foundations of trust needed for effective HDTs capable of emulating human trust, and offer guidance on where and how extant HAT research should translate into HDT modeling and future research. Second, through causal analyses of archival team communication data from a HAT experiment, we supplement these theoretical foundations with data-driven insights to guide the trust-related language HDTs may need to effectively emulate human trust. Finally, we discuss the implications of these combined theoretical and empirical insights for future HDT research, highlighting the necessity of ongoing validation against human behaviors and the refinement of computational methods. This paper ultimately aims to advance both the fidelity and applicability of HDTs in modeling nuanced human-agent trust dynamics, fostering more effective and realistic human-agent collaborations.
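The abstract does not specify which causal method was applied to the communication data. As one common option for team communication time series, the sketch below runs a Granger-causality test on hypothetical data; treat it purely as an illustration of the analysis style, not the authors' pipeline.

```python
# Minimal sketch: test whether communication volume Granger-causes trust.
# Both series are synthetic placeholders, not the archival HAT data.
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(0)
comm = rng.normal(size=100)  # e.g., messages per time interval
# Trust series with a built-in one-step lagged dependence on communication
trust = 0.6 * np.roll(comm, 1) + rng.normal(scale=0.5, size=100)

# Column order: [effect, candidate cause]; tests whether comm predicts trust
results = grangercausalitytests(np.column_stack([trust, comm]), maxlag=2)
```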
- Trustworthiness needs for the use of AI solutions in business
Author(s): Ulla Coester, Laura Anderle and Norbert Pohlmann
pp. 298–325
Abstract: Using AI adequately is necessary for user companies to remain competitive. Nevertheless, studies show that many companies remain hesitant in this regard. Building on the assumption that people's ability to act is influenced by a lack of trust, particularly in the context of AI, we conducted a study as part of the TrustKI research project to analyze which factors are relevant to demonstrating trustworthiness in the context of AI. Our evaluation revealed that users demand holistic transparency: providing relevant information about the AI solution and proof of technical expertise is not sufficient to build trust; users also demand specific information about the respective company. Based on the generally recognized components, we identified further dimensions that allow the required information to be provided even more precisely. The study thus allows us to propose a preliminary set of information requirements for AI providers.
- Perceived trustworthiness and moral competence of a GenAI-enabled ethical robot advisor
Author(s): Ali Momen, Chad C. Tossell, Richard E. Niemeyer, James Walliser, Michael Tolston, Gregory Funke and Ewart J. de Visser
pp. 326–356
Abstract: Generative AI agents (GenAIs) powered by large language models (LLMs) have emerged as prominent technological advancements. As these sophisticated systems permeate diverse sectors ranging from business to entertainment, their capability to handle moral queries becomes a focal point of exploration. This study investigates how users perceive Delphi, a GenAI trained to respond to moral queries (Jiang et al., 2025). Participants were instructed to interact with the agent, implemented either as a humanlike robot or as a web client, to assess its moral competence and trustworthiness. Both agents received high scores for moral competence and perceived morality, yet fell short by not offering justifications for their moral decisions. Despite deeming the systems trustworthy, participants were hesitant about relying on them in the future. This study offers an initial evaluation of an algorithm with moral competence in an embodied human-like interface, paving the way for the evolution of ethical robot advisors.
- The effect of emojis and AI reliability on team performance and trust in human-AI teams
Author(s): Morgan Bailey, Benjamin Gancz and Frank Pollick
pp. 357–385
Abstract: The increasing integration of artificial intelligence (AI) into human teams necessitates a deeper understanding of how to foster effective collaboration. This study investigates how incorporating emojis, as a representation of emotional intelligence, into AI communication influences human-AI team dynamics. Specifically, the study examined how emojis impact human trust in AI teammates, whether different types of emojis yield varied outcomes, and how emoji use affects the perceived performance of both AI and human teammates. A controlled experiment was conducted in which participants collaborated with a simulated AI teammate on a geographic location identification task. The AI teammate's reliability and its use of emojis were manipulated across experimental conditions. Results showed that neither the AI teammate's reliability nor its use of emojis significantly influenced participants' explicit trust ratings of the AI teammate. These findings highlight the complex interplay of trust, perception, and emotional cues in human-AI teaming (HAT) collaboration.