Volume 26, Issue 2
  • ISSN: 1572-0373
  • E-ISSN: 1572-0381

Abstract


Generative AI agents (GenAIs) powered by large language models (LLMs) have emerged as prominent technological advancements. As these sophisticated systems permeate diverse sectors ranging from business to entertainment, their capability to handle moral queries becomes a focal point of exploration. This study investigates how users perceive Delphi, a GenAI trained to respond to moral queries (Jiang et al., 2025). Participants were instructed to interact with the agent, implemented either as a humanlike robot or as a web client, to assess its moral competence and trustworthiness. Both agents received high scores for moral competence and perceived morality, yet fell short by not offering justifications for their moral decisions. Although participants deemed the agents trustworthy, they were hesitant about relying on such systems in the future. This study offers an initial evaluation of an algorithm with moral competence in an embodied humanlike interface, paving the way for the evolution of ethical robot advisors.

References

  1. Amazon Polly
    (2022) https://aws.amazon.com/polly/.
  2. Baldwin, M. W., Carrell, S. E., & Lopez, D. F.
    (1990) Priming relationship schemas: My advisor and the pope are watching me from the back of my mind, Journal of Experimental Social Psychology, 26, 435–454. 10.1016/0022-1031(90)90068-W
    https://doi.org/10.1016/0022-1031(90)90068-W [Google Scholar]
  3. Banks, J.
    (2019) A perceived moral agency scale: Development and validation of a metric for humans and social machines, Computers in Human Behavior, 90, 363–371. 10.1016/j.chb.2018.08.028
    https://doi.org/10.1016/j.chb.2018.08.028 [Google Scholar]
  4. Bello, P., & Bringsjord, S.
    (2013) On how to build a moral machine, Topoi, 32. 10.1007/s11245-012-9129-8
    https://doi.org/10.1007/s11245-012-9129-8 [Google Scholar]
  5. Bhat, S., Lyons, J. B., Shi, C., & Yang, X. J.
    (2024) Evaluating the impact of personalized value alignment in human-robot interaction: Insights into trust and team performance outcomes, in: Proceedings of the 2024 ACM/IEEE International Conference on Human-Robot Interaction, pp.32–41.
    [Google Scholar]
  6. Bigman, Y. E., & Gray, K.
    (2018) People are averse to machines making moral decisions, Cognition, 181, 21–34. 10.1016/j.cognition.2018.08.003
    https://doi.org/10.1016/j.cognition.2018.08.003 [Google Scholar]
  7. Borders, J., Leung, A., & Condon, M.
    (2025) A framework for identifying key decision-maker attributes in uncertain and complex environments, in: 2025 IEEE Conference on Artificial Intelligence (CAI), IEEE, pp.1–5. 10.1109/CAI64502.2025.00205
    https://doi.org/10.1109/CAI64502.2025.00205 [Google Scholar]
  8. Botzer, N., Gu, S., & Weninger, T.
    (2022) Analysis of moral judgment on Reddit, IEEE Transactions on Computational Social Systems, 1–11. 10.1109/TCSS.2022.3160677
    https://doi.org/10.1109/TCSS.2022.3160677 [Google Scholar]
  9. Breazeal, C., Kidd, C. D., Thomaz, A. L., Hoffman, G., & Berlin, M.
    (2005) Effects of nonverbal communication on efficiency and robustness in human-robot teamwork, in: 2005 IEEE/RSJ International Conference on Intelligent Robots and Systems, pp.708–713. 10.1109/IROS.2005.1545011
    https://doi.org/10.1109/IROS.2005.1545011 [Google Scholar]
  10. Card, D., & Smith, N. A.
    (2020) On consequentialism and fairness, Frontiers in Artificial Intelligence, 3. 10.3389/frai.2020.00034
    https://doi.org/10.3389/frai.2020.00034 [Google Scholar]
  11. Caselli, T., Basile, V., Mitrovic, J., & Granitzer, M.
    (2021) HateBERT: Retraining BERT for abusive language detection in English, arXiv preprint arXiv:2010.12472. 10.18653/v1/2021.woah-1.3
    https://doi.org/10.18653/v1/2021.woah-1.3 [Google Scholar]
  12. Coleman, C., Neuman, W. R., Dasdan, A., Ali, S., & Shah, M.
    (2025) The convergent ethics of AI? Analyzing moral foundation priorities in large language models with a multi-framework approach, arXiv preprint arXiv:2504.19255.
    [Google Scholar]
  13. Defense Science Board
    (2019) Task force report: The role of autonomy in DoD systems, Office of the Under Secretary of Defense for Acquisition, Technology and Logistics.
  14. Dietvorst, B. J., Simmons, J. P., & Massey, C.
    (2018) Overcoming algorithm aversion: People will use imperfect algorithms if they can (even slightly) modify them, Management Science, 64, 1155–1170. 10.1287/mnsc.2016.2643
    https://doi.org/10.1287/mnsc.2016.2643 [Google Scholar]
  15. DiSalvo, C. F.
    (2002) All Robots Are Not Created Equal: The Design and Perception of Humanoid Robot Heads, Technical Report, Ph.D. thesis, MIT.
    [Google Scholar]
  16. Epley, N., Waytz, A., & Cacioppo, J. T.
    (2007) On seeing human: A three-factor theory of anthropomorphism, Psychological Review, 114, 864. 10.1037/0033-295X.114.4.864
    https://doi.org/10.1037/0033-295X.114.4.864 [Google Scholar]
  17. Fabre, E. F., Mouratille, D., Bonnemains, V., Palmiotti, G. P., & Causse, M.
    (2024) Making moral decisions with artificial agents as advisors. A fNIRS study, Computers in Human Behavior: Artificial Humans, 2, 100096. 10.1016/j.chbah.2024.100096
    https://doi.org/10.1016/j.chbah.2024.100096 [Google Scholar]
  18. Floridi, L., & Chiriatti, M.
    (2020) GPT-3: Its nature, scope, limits, and consequences, Minds and Machines, 30, 681–694. 10.1007/s11023-020-09548-1
    https://doi.org/10.1007/s11023-020-09548-1 [Google Scholar]
  19. Gabriel, I.
    (2020) Artificial intelligence, values, and alignment, Minds and Machines, 30, 411–437.
    [Google Scholar]
  20. Gratch, J., & Fast, N. J.
    (2022) The power to harm: AI assistants pave the way to unethical behavior, Current Opinion in Psychology, 47. 10.1016/j.copsyc.2022.101382
    https://doi.org/10.1016/j.copsyc.2022.101382 [Google Scholar]
  21. Gray, K., DiMaggio, N., Schein, C., & Kachanoff, F.
    (2023) The problem of purity in moral psychology, Personality and Social Psychology Review, 27, 272–308. 10.1177/10888683221124741
    https://doi.org/10.1177/10888683221124741 [Google Scholar]
  22. Greene, J. D., Sommerville, R. B., Nystrom, L. E., Darley, J. M., & Cohen, J. D.
    (2001) An fMRI investigation of emotional engagement in moral judgment, Science, 293, 2105–2108. 10.1126/science.1062872
    https://doi.org/10.1126/science.1062872 [Google Scholar]
  23. Gutzwiller, R. S., Yousefi, R., Larson-Calcano, T., Lee, J. R., Verma, A., Tenhundfeld, N. L., & Maknojia, I.
    (2025) Systematic review of the use and modification of the “trust in automated systems scale”, in: Proceedings of the Human Factors and Ergonomics Society Annual Meeting, SAGE Publications, Los Angeles, CA, Article 10711813251357911. 10.1177/10711813251357911
    https://doi.org/10.1177/10711813251357911 [Google Scholar]
  24. Haidt, J., & Joseph, C.
    (2007) The moral mind: How five sets of innate intuitions guide the development of many culture-specific virtues, and perhaps even modules, in: The Innate Mind, volume 3, pp.367–391.
    [Google Scholar]
  25. Hauptman, A. I., Schelble, B. G., & McNeese, N. J.
    (2022) Adaptive Autonomy as a Means for Implementing Shared Ethics in Human-AI Teams, Technical Report, Unpublished manuscript.
    [Google Scholar]
  26. Hoff, K. A., & Bashir, M.
    (2015) Trust in automation: Integrating empirical evidence on factors that influence trust, Human Factors, 57, 407–434. 10.1177/0018720814547570
    https://doi.org/10.1177/0018720814547570 [Google Scholar]
  27. Janoff-Bulman, R., Sheikh, S., & Hepp, S.
    (2009) Proscriptive versus prescriptive morality: Two faces of moral regulation, Journal of Personality and Social Psychology, 96, 521–537. 10.1037/a0013779
    https://doi.org/10.1037/a0013779 [Google Scholar]
  28. Jiang, L., Hwang, J. D., Bhagavatula, C., Bras, R. L., Forbes, M., Borchardt, J., Liang, J., Etzioni, O., Sap, M., & Choi, Y.
    (2021) Delphi: Towards machine ethics and norms, arXiv preprint arXiv:2102.06724.
    [Google Scholar]
  29. Jiang, L., Hwang, J. D., Bhagavatula, C., Bras, R. L., Liang, J. T., Levine, S., Dodge, J., Sakaguchi, K., Forbes, M., & Hessel, J.,
    (2025) Investigating machine moral judgement through the Delphi experiment, Nature Machine Intelligence, 7, 145–160. 10.1038/s42256-024-00969-6
    https://doi.org/10.1038/s42256-024-00969-6 [Google Scholar]
  30. Khavas, Z. Rezaei, Kotturu, M. R., Ahmadzadeh, S. R., & Robinette, P.
    (2024) Do humans trust robots that violate moral trust?, ACM Transactions on Human-Robot Interaction, 13, 1–30. 10.1145/3651992
    https://doi.org/10.1145/3651992 [Google Scholar]
  31. Kim, B., Wen, R., Zhu, Q., Williams, T., & Phillips, E.
    (2021) Robots as moral advisors: The effects of deontological, virtue, and Confucian role ethics on encouraging honest behavior, in: Companion of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, pp.10–18. 10.1145/3434074.3446908
    https://doi.org/10.1145/3434074.3446908 [Google Scholar]
  32. Kim, B., Visser, E. de, & Phillips, E.
    (2022) Two uncanny valleys: Re-evaluating the uncanny valley across the full spectrum of real-world human-like robots, Computers in Human Behavior, 135. 10.1016/j.chb.2022.107340
    https://doi.org/10.1016/j.chb.2022.107340 [Google Scholar]
  33. Kim, B., Wen, R., Visser, E. J. de, Tossell, C. C., Zhu, Q., Williams, T., & Phillips, E.
    (2024) Can robot advisers encourage honesty?: Considering the impact of rule, identity, and role-based moral advice, International Journal of Human-Computer Studies, 184, 103217. 10.1016/j.ijhcs.2024.103217
    https://doi.org/10.1016/j.ijhcs.2024.103217 [Google Scholar]
  34. Kim, M. J., Shaw, T., Kim, B., & Phillips, E.
    (2025) Does presence and embodiment matter? Investigating telepresence robots as a superior educational modality to videoconferencing, International Journal of Social Robotics, 1–12. 10.1007/s12369-025-01262-1
    https://doi.org/10.1007/s12369-025-01262-1 [Google Scholar]
  35. Klinger, M., Burton, P., & Pitts, G.
    (2000) Mechanisms of unconscious priming: I. Response competition, not spreading activation, Journal of Experimental Psychology: Learning, Memory, and Cognition, 26, 441–455. 10.1037/0278-7393.26.2.441
    https://doi.org/10.1037/0278-7393.26.2.441 [Google Scholar]
  36. Knijnenburg, B. P., & Willemsen, M. C.
    (2016) Inferring capabilities of intelligent agents from their external traits, ACM Transactions on Interactive Intelligent Systems, 6, 1–25. 10.1145/2963106
    https://doi.org/10.1145/2963106 [Google Scholar]
  37. Kohlberg, L., Levine, C., & Hewer, A.
    (1983) Moral stages: A current formulation and a response to critics, in: Contributions to Human Development, volume 10, pp.174–174.
    [Google Scholar]
  38. Kohn, S. C., Visser, E. J. de, Wiese, E., Lee, Y.-C., & Shaw, T. H.
    (2021) Measurement of trust in automation: A narrative review and reference guide, Frontiers in Psychology, 12. 10.3389/fpsyg.2021.604977
    https://doi.org/10.3389/fpsyg.2021.604977 [Google Scholar]
  39. Lee, J. D., & See, K. A.
    (2004) Trust in automation: Designing for appropriate reliance, Human Factors, 46, 50–80. 10.1518/hfes.46.1.50.30392
    https://doi.org/10.1518/hfes.46.1.50.30392 [Google Scholar]
  40. Liu, Y., Zhang, X. F., Wegsman, D., Beauchamp, N., & Wang, L.
    (2022) POLITICS: Pretraining with same-story article comparison for ideology prediction and stance detection, arXiv preprint arXiv:2205.00619. 10.18653/v1/2022.findings-naacl.101
    https://doi.org/10.18653/v1/2022.findings-naacl.101 [Google Scholar]
  41. Lucas, G. M., Gratch, J., King, A., & Morency, L.-P.
    (2014) It’s only a computer: Virtual humans increase willingness to disclose, Computers in Human Behavior, 37, 94–100. 10.1016/j.chb.2014.04.043
    https://doi.org/10.1016/j.chb.2014.04.043 [Google Scholar]
  42. Lyons, J. B., & Guznov, S. Y.
    (2019) Individual differences in human-machine trust: A multi-study look at the perfect automation schema, Theoretical Issues in Ergonomics Science, 20, 440–458. 10.1080/1463922X.2018.1491071
    https://doi.org/10.1080/1463922X.2018.1491071 [Google Scholar]
  43. Malle, B. F., & Scheutz, M.
    (2017) Moral competence in social robots, in: Machine Ethics and Robot Ethics, Routledge.
    [Google Scholar]
  44. Malle, B. F., Rosen, E., Chi, V. B., Berg, M., & Haas, P.
    (2020) A general methodology for teaching norms to social robots, in: 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), IEEE, pp.1395–1402. 10.1109/RO‑MAN47096.2020.9223610
    https://doi.org/10.1109/RO-MAN47096.2020.9223610 [Google Scholar]
  45. Maninger, T., & Shank, D. B.
    (2022) Perceptions of violations by artificial and human actors across moral foundations, Computers in Human Behavior Reports, 5, 100154. 10.1016/j.chbr.2021.100154
    https://doi.org/10.1016/j.chbr.2021.100154 [Google Scholar]
  46. Mayer, R. C., & Davis, J. H.
    (1999) The effect of the performance appraisal system on trust for management: A field quasi-experiment, Journal of Applied Psychology, 84, 123–136. 10.1037/0021-9010.84.1.123
    https://doi.org/10.1037/0021-9010.84.1.123 [Google Scholar]
  47. Mayer, R. C., Davis, J. H., & Schoorman, F. D.
    (1995) An integrative model of organizational trust, Academy of Management Review, 20, 709–734. 10.2307/258792
    https://doi.org/10.2307/258792 [Google Scholar]
  48. McVay, J., Visser, E. J. de, Pippin, B., Mani, A., Hyde, J. N., & Kman, N.
    (2025) Trust in aligned AI decision makers, in: 2025 IEEE Conference on Artificial Intelligence (CAI), IEEE, pp.1–4. 10.1109/CAI64502.2025.00202
    https://doi.org/10.1109/CAI64502.2025.00202 [Google Scholar]
  49. Merritt, S. M., Unnerstall, J. L., Lee, D., & Huber, K.
    (2015) Measuring individual differences in the perfect automation schema, Human Factors, 57, 740–753. 10.1177/0018720815581247
    https://doi.org/10.1177/0018720815581247 [Google Scholar]
  50. Momen, A., Visser, E. de, Wolsten, K., Cooley, K., Walliser, J., & Tossell, C. C.
    (2023) Trusting the moral judgments of a robot: Perceived moral competence and humanlikeness of a GPT-3 enabled AI, in: Proceedings of the 56th Hawaii International Conference on System Sciences. 10.24251/HICSS.2023.063
    https://doi.org/10.24251/HICSS.2023.063 [Google Scholar]
  51. Momen, A., Visser, E. J. de, Fraune, M. R., Madison, A., Rueben, M., Cooley, K., & Tossell, C. C.
    (2023) Group trust dynamics during a risky driving experience in a Tesla Model X, Frontiers in Psychology, 14, 1129369. 10.3389/fpsyg.2023.1129369
    https://doi.org/10.3389/fpsyg.2023.1129369 [Google Scholar]
  52. Monfort, S. S., Graybeal, J. J., Harwood, A. E., McKnight, P. E., & Shaw, T. H.
    (2018) A single-item assessment for remaining mental resources: Development and validation of the Gas Tank Questionnaire (GTQ), Theoretical Issues in Ergonomics Science, 19, 530–552. 10.1080/1463922X.2017.1397228
    https://doi.org/10.1080/1463922X.2017.1397228 [Google Scholar]
  53. Mori, M., MacDorman, K. F., & Kageki, N.
    (2012) The uncanny valley [from the field], IEEE Robotics & Automation Magazine, 19, 98–100. 10.1109/MRA.2012.2192811
    https://doi.org/10.1109/MRA.2012.2192811 [Google Scholar]
  54. National Security Commission on Artificial Intelligence
    (2021) Final report. https://www.nscai.gov/.
  55. O’Neill, T., McNeese, N., Barron, A., & Schelble, B. G.
    (2022) Human-autonomy teaming: A review and analysis of the empirical literature, Human Factors, 64, 904–938. 10.1177/0018720820960865
    https://doi.org/10.1177/0018720820960865 [Google Scholar]
  56. Parasuraman, R., & Riley, V.
    (1997) Humans and automation: Use, misuse, disuse, abuse, Human Factors, 39, 230–253. 10.1518/001872097778543886
    https://doi.org/10.1518/001872097778543886 [Google Scholar]
  57. Phillips, E., Zhao, X., Ullman, D., & Malle, B. F.
    (2018) What is human-like?: Decomposing robots’ human-like appearance using the anthropomorphic robot (ABOT) database, in: Proceedings of the 2018 ACM/IEEE International Conference on Human-Robot Interaction, pp.105–113. 10.1145/3171221.3171268
    https://doi.org/10.1145/3171221.3171268 [Google Scholar]
  58. Raffard, S., Salesse, R. N., Marin, L., Del-Monte, J., Schmidt, R. C., Varlet, M., Bardy, B. G., Boulenger, J.-P., & Capdevielle, D.
    (2015) Social priming enhances interpersonal synchronization and feeling of connectedness towards schizophrenia patients, Scientific Reports, 5, 1–10. 10.1038/srep08156
    https://doi.org/10.1038/srep08156 [Google Scholar]
  59. Roesler, E., Manzey, D., & Onnasch, L.
    (2021) A meta-analysis on the effectiveness of anthropomorphism in human-robot interaction, Science Robotics, 6(58). 10.1126/scirobotics.abj5425
    https://doi.org/10.1126/scirobotics.abj5425 [Google Scholar]
  60. Schelble, B. G., Lopez, J., Textor, C., Zhang, R., McNeese, N. J., Pak, R., & Freeman, G.
    (2022) Towards ethical AI: Empirically investigating dimensions of AI ethics, trust repair, and performance in human-AI teaming, Human Factors. 10.1177/00187208221116952
    https://doi.org/10.1177/00187208221116952 [Google Scholar]
  61. Scheutz, M., Thielstrom, R., & Abrams, M.
    (2022) Transparency through explanations and justifications in human-robot task-based communications, International Journal of Human-Computer Interaction, 38, 1739–1752. 10.1080/10447318.2022.2091086
    https://doi.org/10.1080/10447318.2022.2091086 [Google Scholar]
  62. Shariff, A. F., & Norenzayan, A.
    (2007) God is watching you: Priming god concepts increases prosocial behavior in an anonymous economic game, Psychological Science, 18, 803–809. 10.1111/j.1467-9280.2007.01983.x
    https://doi.org/10.1111/j.1467-9280.2007.01983.x [Google Scholar]
  63. Simmons, G.
    (2022) Moral mimicry: Large language models produce moral rationalizations tailored to political identity, arXiv preprint arXiv:2209.12106.
    [Google Scholar]
  64. Sucholutsky, I., Muttenthaler, L., Weller, A., Peng, A., Bobu, A., Kim, B., Love, B. C., Grant, E., Groen, I., & Achterberg, J.,
    (2023) Getting aligned on representational alignment, arXiv preprint arXiv:2310.13018.
    [Google Scholar]
  65. Tenhundfeld, N., Demir, M., & Visser, E. de
    (2022) Assessment of trust in automation in the “real world”: Requirements for new trust in automation measurement techniques for use by practitioners, Journal of Cognitive Engineering and Decision Making, 16, 101–118. 10.1177/15553434221096261
    https://doi.org/10.1177/15553434221096261 [Google Scholar]
  66. Textor, C., Zhang, R., Lopez, J., Schelble, B. G., McNeese, N. J., Freeman, G., Pak, R., Tossell, C., & Visser, E. J. de
    (2022) Exploring the relationship between ethics and trust in human-artificial intelligence teaming: A mixed methods approach, Journal of Cognitive Engineering and Decision Making, 15553434221113964. 10.1177/15553434221113964
    https://doi.org/10.1177/15553434221113964 [Google Scholar]
  67. Time
    (n.d.) Why the WGA is striking for limits on the use of AI, Time.com.
  68. Tossell, C. C., Tenhundfeld, N. L., Momen, A., Cooley, K., & Visser, E. J. De
    (2024) Student perceptions of ChatGPT use in a college essay assignment: Implications for learning, grading, and trust in artificial intelligence, IEEE Transactions on Learning Technologies, 17, 1069–1081. 10.1109/TLT.2024.3355015
    https://doi.org/10.1109/TLT.2024.3355015 [Google Scholar]
  69. Ullman, D., & Malle, B. F.
    (2019) Measuring gains and losses in human-robot trust: Evidence for differentiable components of trust, in: 2019 14th ACM/IEEE International Conference on Human-Robot Interaction (HRI), pp.618–619. 10.1109/HRI.2019.8673154
    https://doi.org/10.1109/HRI.2019.8673154 [Google Scholar]
  70. Ullman, D., Aladia, S., & Malle, B. F.
    (2021) Challenges and opportunities for replication science in HRI: A case study in human-robot trust, in: Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, pp.110–118. 10.1145/3434073.3444652
    https://doi.org/10.1145/3434073.3444652 [Google Scholar]
  71. Varanasi, L.
    (2023) AI models like ChatGPT and GPT-4 are acing everything from the bar exam to AP Biology: Here’s a list of difficult exams both AI versions have passed. https://www.businessinsider.com/list-here-are-the-exams-chatgpt-has-passed-so-far-2023-1, accessed 2025-08-04.
  72. Visser, E. de, Cohen, M., Freedy, A., & Parasuraman, R.
    (2014) A design methodology for trust cue calibration in cognitive agents, in: Lecture Notes in Computer Science, volume 8525, pp.262–271. 10.1007/978‑3‑319‑07458‑0_24
    https://doi.org/10.1007/978-3-319-07458-0_24 [Google Scholar]
  73. de Visser, E. J., Monfort, S., McKendrick, R., Smith, M., McKnight, P., Krueger, F., & Parasuraman, R.
    (2016) Almost human: Anthropomorphism increases trust resilience in cognitive agents, Journal of Experimental Psychology: Applied, 22. 10.1037/xap0000092
    https://doi.org/10.1037/xap0000092 [Google Scholar]
  74. de Visser, E. J., Peeters, M. M. M., Jung, M., Kohn, S., Shaw, T., Pak, R., & Neerincx, M.
    (2020) Towards a theory of longitudinal trust calibration in human-robot teams, International Journal of Social Robotics, 12. 10.1007/s12369-019-00596-x
    https://doi.org/10.1007/s12369-019-00596-x [Google Scholar]
  75. Voiklis, J., Cusimano, C., & Malle, B.
    (2013) A social-conceptual map of moral criticism, in: Proceedings of the 7th International Conference on Social Robotics.
    [Google Scholar]
  76. Wagner, A. R.
    (2020) Principles of evacuation robots, in: R. Pak, E. de Visser, E. Rovira (Eds.), Living with Robots, Academic Press, pp.153–164. 10.1016/B978‑0‑12‑815367‑3.00008‑6
    https://doi.org/10.1016/B978-0-12-815367-3.00008-6 [Google Scholar]
  77. Weisman, K., Dweck, C. S., & Markman, E. M.
    (2017) Rethinking people’s conceptions of mental life, Proceedings of the National Academy of Sciences, 114, 11374–11379. 10.1073/pnas.1704347114
    https://doi.org/10.1073/pnas.1704347114 [Google Scholar]
  78. Williams, T., Ayers, D., Kaufman, C., Serrano, J., & Roy, S.
    (2021) Deconstructed trustee theory: Disentangling trust in body and identity in multi-robot distributed systems, in: Proceedings of the 2021 ACM/IEEE International Conference on Human-Robot Interaction, pp.262–271. 10.1145/3434073.3444644
    https://doi.org/10.1145/3434073.3444644 [Google Scholar]
  79. Winkle, K., Jackson, R. B., Brščić, D., Leite, I., Melsion, G. I., & Williams, T.
    (2022) Norm-breaking responses to sexist abuse: A cross-cultural human-robot interaction study, International Journal of Social Robotics.
    [Google Scholar]
  80. Yurkevich, V.
    (2023) Experts warn about possible misuse of new AI tool ChatGPT, Fox19.com. Accessed 2023-01-24.
    [Google Scholar]
  81. Zhang, Y., Wu, J., Yu, F., & Xu, L.
    (2023) Moral judgments of human vs. AI agents in moral dilemmas, Behavioral Sciences, 13, 181. 10.3390/bs13020181
    https://doi.org/10.3390/bs13020181 [Google Scholar]

  • Article Type: Research Article
  • Keyword(s): human-robot interaction; moral competence; trust; trustworthiness