Volume 22, Issue 2
  • ISSN: 1572-0373
  • E-ISSN: 1572-0381

Abstract

How do we perceive robots practising a task that we have taught them? While learning, human trainees usually provide nonverbal cues that reveal their level of understanding of, and interest in, the task. Similarly, nonverbal social cues of trainee robots that can be interpreted naturally by humans can enhance robot learning. In this article, we investigated a scenario in which a robot is practising a physical task in front of human teachers (i.e., participants), who were asked to assume that they had previously taught the robot to perform that task. Through an online experiment with 167 participants, we examined how different gaze patterns and arm movements, performed at multiple speeds and with various kinds of pauses, affected the human teachers’ perception of different attributes of the robot. We found that the perception of a trainee robot’s attributes (e.g., confidence and eagerness to learn) can be systematically affected by its behaviours. The findings of this study can inform the design of more successful nonverbal social interactions for intelligent robots.

Available under the CC BY-NC 4.0 license.
DOI: https://doi.org/10.1075/is.20036.ali

  • Article Type: Research Article
  • Keywords: gaze; kinesics; nonverbal behaviour; perceived robot attributes; social learning