Anthropomorphism, dependency, and trust in Generative Artificial Intelligence

Abstract

As Generative Artificial Intelligence (GAI) becomes increasingly integrated into daily life, understanding how users develop trust in these systems while navigating privacy concerns is critical. Drawing on Privacy Calculus Theory (PCT) and Media Dependency Theory (MDT), this study examines how perceived anthropomorphism, privacy concerns, and dependency shape trust in GAI. The findings reveal that users trust GAI more when they perceive it as human-like, whereas privacy concerns reduce trust, creating a trust-privacy paradox. GAI dependency moderates both relationships: it strengthens the positive effect of anthropomorphism on trust and weakens the negative effect of privacy concerns. In addition, privacy concerns partially mediate the relationship between anthropomorphism and trust, suggesting that users who perceive AI as human-like worry less about privacy risks. By integrating PCT and MDT, the study offers a framework in which trust in AI evolves not only through rational cost-benefit evaluations (PCT) but also through behavioral adaptation rooted in dependency (MDT). These insights carry practical implications for AI developers and policymakers, underscoring the need for human-centered AI design, privacy safeguards, and ethical guidelines that foster sustained trust in AI-driven interactions while addressing user concerns.
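
Read as a path model, the abstract implies a moderated mediation structure. The sketch below is illustrative only, assuming standard linear specifications: the variable labels (ANT = perceived anthropomorphism, PC = privacy concerns, DEP = GAI dependency, TRUST = trust in GAI) and coefficient names are not taken from the paper, and the paper's actual estimation approach is not reproduced here.

  PC    = a_0 + a_1·ANT + ε_1
  TRUST = b_0 + b_1·ANT + b_2·PC + b_3·DEP + b_4·(ANT × DEP) + b_5·(PC × DEP) + ε_2

Under this reading, the reported results would correspond to b_1 > 0 (anthropomorphism builds trust), b_2 < 0 (privacy concerns erode it), a_1 < 0 so that the indirect path a_1·b_2 is positive and operates alongside the direct path b_1 (partial mediation), b_4 > 0 (dependency strengthens the anthropomorphism-trust link), and b_5 > 0 (dependency attenuates the negative effect of privacy concerns).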

