Volume 20, Issue 3
  • ISSN: 1572-0373
  • E-ISSN: 1572-0381

Abstract

Trust is a key dimension of human-robot interaction (HRI) and has often been studied in the HRI community. A common challenge arises from the difficulty of assessing trust levels in ecologically invalid environments. In this paper we present two independent laboratory studies, totalling 160 participants, in which we investigate the impact of different types of errors on resulting trust, using both behavioural and subjective measures of trust. While we found a (weak) general effect of errors on reported and observed levels of trust, no significant differences between the types of errors were found in either of our studies. We discuss this negative result in light of our experimental protocols, and argue that the community should move towards alternative methodologies for assessing trust.
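
For readers unfamiliar with this kind of analysis, the sketch below illustrates how subjective trust ratings might be compared across error-type conditions with a between-subjects one-way ANOVA. It is a hypothetical illustration only: the data, condition names, and choice of test are assumptions for the sketch, not the paper's actual protocol or statistics.

    # Hypothetical sketch (not the paper's analysis): comparing subjective
    # trust ratings across three invented error-type conditions using a
    # between-subjects one-way ANOVA.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(seed=42)

    # Simulated 7-point Likert trust scores for three hypothetical groups
    # (no error, social error, task-performance error), 40 participants each.
    no_error   = rng.integers(4, 8, size=40)
    social_err = rng.integers(3, 8, size=40)
    task_err   = rng.integers(3, 8, size=40)

    # A non-significant result (p >= .05) would mirror the null effect of
    # error type reported in the abstract above.
    f_stat, p_value = stats.f_oneway(no_error, social_err, task_err)
    print(f"F = {f_stat:.2f}, p = {p_value:.3f}")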

  • Article Type: Research Article
  • Keyword(s): human-robot interaction; trust