Volume 20, Issue 1
  • ISSN: 1572-0373
  • E-ISSN: 1572-0381

Abstract

Supporting social interaction, especially for older people living alone, is a challenge currently facing human-robot interaction (HRI), yet there has been little research on users’ preferences among HRI interfaces. In this paper, we took both objective observations and participants’ opinions into account when studying older users with a robot partner. The dual-modal robot interface we developed offered older users the choice of speech or a touch screen to perform tasks. Fifteen people aged 70 to 89 participated. We analyzed the participants’ spontaneous actions, including their attentional and conversational activities and the temporal characteristics of these social behaviours, as well as their questionnaire responses. The analysis revealed that the social engagement older people demonstrated towards the robot was no different from what might be expected towards a human partner. This study is an early attempt to reveal the social connection between human beings and a personal robot in real life.
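
To make the dual-modal idea concrete, here is a minimal sketch (in Python) of how speech input and touch-screen selections might be routed onto the same small set of robot tasks. The modality labels, task names, and handler functions below are illustrative assumptions for this sketch, not the interface actually built for the study.

```python
# Minimal sketch of a dual-modal (speech / touch-screen) interface.
# Task names and handlers are hypothetical stand-ins, not the study's system.
from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class UserInput:
    modality: str   # "speech" or "touch"
    payload: str    # recognised utterance text, or the id of the touched button


# Hypothetical task handlers for whatever services the robot might offer.
def remind_medication() -> str:
    return "Reminder set for your medication."


def call_family() -> str:
    return "Calling your family contact."


TASKS: Dict[str, Callable[[], str]] = {
    "medication": remind_medication,
    "call": call_family,
}


def interpret(user_input: UserInput) -> str:
    """Map either input modality onto the same task vocabulary."""
    if user_input.modality == "speech":
        # Naive keyword spotting over the recognised utterance.
        for keyword, handler in TASKS.items():
            if keyword in user_input.payload.lower():
                return handler()
        return "Sorry, I did not understand. You can also use the touch screen."
    if user_input.modality == "touch":
        # Touch buttons are assumed to carry the task id directly.
        handler = TASKS.get(user_input.payload)
        return handler() if handler else "Unknown button."
    return "Unsupported modality."


# Example: the same task reached through either channel.
print(interpret(UserInput("speech", "Please remind me about my medication")))
print(interpret(UserInput("touch", "medication")))
```

The point of the sketch is only that both channels converge on one shared task vocabulary, which is what makes such an interface dual-modal rather than two separate interfaces; in a real system a speech recogniser and a touch GUI would feed a dispatcher of this kind.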

DOI: 10.1075/is.18042.wan (published 2019-07-15)