Volume 22, Issue 1
  • ISSN: 1572-0373
  • E-ISSN: 1572-0381

Abstract

This paper investigates the effects of group interaction in a storytelling situation for children using two robots: a reader robot and a listener robot that acts as a side-participant. We developed a storytelling system that consists of a reader robot, a listener robot, a display, a gaze model, a depth sensor, and a human operator who responds to the children’s questions with easily understandable answers. We experimentally investigated how the presence of the listener robot and the number of children (one or two) in a storytelling situation affected the children’s preferences and their speech activities. Our experimental results showed that the children preferred storytelling with the listener robot. Although, as expected, two children produced more speech than one child, the listener robot discouraged the children’s speech regardless of whether one or two children were listening.
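
As a rough illustration of the experimental setup described above, the following Python sketch encodes the two manipulated conditions (listener robot present or absent; one or two children) and the operator-mediated question answering. All names here (StorytellingSession, relay_operator_answer, the example strings) are hypothetical and are not taken from the authors' implementation.

```python
from dataclasses import dataclass


@dataclass
class StorytellingSession:
    """One experimental session (hypothetical field names)."""
    use_listener_robot: bool  # condition: listener robot present or absent
    num_children: int         # condition: one or two child participants
    story_title: str          # story shown on the display during reading


def relay_operator_answer(question: str, operator_answer: str) -> str:
    """Wizard-of-Oz step: a human operator supplies an easily understandable
    answer to a child's question, and the reader robot speaks it aloud."""
    return f"[reader robot] In reply to {question!r}: {operator_answer}"


if __name__ == "__main__":
    session = StorytellingSession(use_listener_robot=True, num_children=2,
                                  story_title="example story")
    print(session)
    print(relay_operator_answer("Why is the fox sad?",
                                "Because he lost his favourite hat."))
```

This only captures the session bookkeeping; the gaze model, depth sensing, and robot control described in the paper are outside the scope of the sketch.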

  • Article Type: Research Article
  • Keywords: child-robot interaction; group interaction; multiple robots; storytelling