Volume 23, Issue 1
  • ISSN: 1572-0373
  • E-ISSN: 1572-0381

Abstract

Despite their potential for facilitating interaction between a user and a computer, the matching effects of an embodied agent and voice command have not been sufficiently examined. The current study proposes that an embodied agent and voice command generate positive evaluative outcomes, particularly when they accompany each other. To test this prediction, we conducted a 2 (visual output: embodied agent vs. geometric figure) × 2 (input modality: voice command vs. remote controller) between-subjects experiment (N = 52) and examined whether visual output and input modality jointly influence participants’ social attribution (i.e., anthropomorphism, animacy, likability, and perceived intelligence), social presence, and satisfaction. Results show that voice command facilitates users’ social attribution and social presence, but only when an embodied agent is presented. Moreover, the effects of voice command on social presence and satisfaction were mediated by anthropomorphism and perceived intelligence, respectively, but only when the interface displayed an embodied agent. The present study evidences the holistic nature of human-computer interaction, revealing the importance of matching the input and output interface.
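The 2 × 2 design and the mediation claims described above can be illustrated with a brief analysis sketch. The following is a minimal, hypothetical Python example (pandas and statsmodels) of how an interaction between visual output and input modality, and a simple anthropomorphism mediation path, could be tested on simulated data; the variable names, data, and model specification are illustrative assumptions, not the authors' materials, measures, or analysis pipeline.

```python
# Hypothetical sketch of a 2 (visual output) x 2 (input modality) between-subjects
# analysis; simulated placeholder data, not the study's actual dataset or method.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 52  # sample size reported in the abstract

# Two binary factors and illustrative 5-point-style outcome scales.
df = pd.DataFrame({
    "visual_output": rng.choice(["agent", "figure"], size=n),
    "input_modality": rng.choice(["voice", "remote"], size=n),
})
df["anthropomorphism"] = rng.normal(3.5, 1.0, size=n)
df["social_presence"] = rng.normal(3.0, 1.0, size=n)

# Test the visual output x input modality interaction on social presence.
interaction_model = smf.ols(
    "social_presence ~ C(visual_output) * C(input_modality)", data=df
).fit()
print(interaction_model.summary())

# A simple (unmoderated) mediation check in the spirit of the abstract:
# does anthropomorphism carry the effect of input modality on social presence?
a_path = smf.ols("anthropomorphism ~ C(input_modality)", data=df).fit()
b_path = smf.ols(
    "social_presence ~ anthropomorphism + C(input_modality)", data=df
).fit()
indirect = (
    a_path.params["C(input_modality)[T.voice]"] * b_path.params["anthropomorphism"]
)
print("illustrative indirect effect:", indirect)
```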

  • Article Type: Research Article
  • Keywords: embodied agent; satisfaction; smart TV; social attribution; social presence; voice control