, Myke C. Cohen¹,², Hsien-Te Kao¹, Grant Engberson¹, Louis Penafiel¹, Spencer Lynch¹, Robert McCormack¹, Laura Cassani¹ and Svitlana Volkova¹
Abstract
As human-agent teaming (HAT) research continues to grow, computational methods for modeling HAT behaviors and measuring HAT effectiveness continue to develop alongside it. One emerging method uses human digital twins (HDTs) to approximate human behaviors and socio-emotional-cognitive reactions to AI-driven agent team members. To help HDT research effectively model human trust in HATs, we offer two lines of insight. First, through a review of the HAT trust literature, we identify key characteristics and attributes of trust that must be considered to properly conceptualize, model, and measure trust. From this review, we outline the theoretical foundations needed for HDTs capable of emulating human trust and offer guidance on where and how extant HAT research should translate into HDT modeling and future research. Second, through causal analyses of archival team communication data from a HAT experiment, we supplement these theoretical foundations with data-driven insights into the trust-related language HDTs may need to effectively emulate human trust. Finally, we discuss the implications of these combined theoretical and empirical insights for future HDT research, highlighting the necessity of ongoing validation against human behaviors and continued refinement of computational methods. This paper ultimately aims to advance both the fidelity and applicability of HDTs in modeling nuanced human-agent trust dynamics, fostering more effective and realistic human-agent collaborations.