, Jing Niu, Yilin Ren and Inam Ul Haq
Abstract
As Generative Artificial Intelligence (GAI) becomes increasingly integrated into daily life, understanding how users develop trust in these systems while navigating privacy concerns is critical. This study examines how perceived anthropomorphism, privacy concerns, and dependency influence trust in GAI, drawing on Privacy Calculus Theory (PCT) and Media Dependency Theory (MDT). The findings reveal that users trust GAI more when they perceive it as human-like, but privacy concerns reduce trust, creating a trust-privacy paradox. However, GAI dependency moderates these relationships, strengthening the positive effect of anthropomorphism on trust while weakening the negative impact of privacy concerns. Additionally, privacy concerns partially mediate the relationship between anthropomorphism and trust, suggesting that users who perceive AI as human-like worry less about privacy risks. By integrating PCT and MDT, this study offers a comprehensive framework to understand how trust in AI evolves, not just through rational cost-benefit evaluations (PCT) but also through behavioral adaptation based on dependency (MDT). These insights have practical implications for AI developers and policymakers, emphasizing the need for human-centered AI design, privacy safeguards, and ethical guidelines to foster sustained trust in AI-driven interactions while addressing user concerns.