Interaction Studies, Volume 22, Issue 3
  • ISSN: 1572-0373
  • E-ISSN: 1572-0381

Abstract

In this paper, we analyze the effects that indicators of a shared situation have on a speaker’s persuasiveness by investigating how a robot’s advice is received when the robot signals that it shares the situational context with its user. In our experiment, 80 participants interacted with a robot that referred to aspects of the shared context: face tracking indicated that the robot saw the participant; incremental feedback suggested that the robot was following their actions; and comments about, and gestures towards, the shared physical situation, as well as linguistic references to the dialog history, indicated that the robot perceived its surroundings and had learned from the interaction. The results show that the linguistic and gestural references to the shared context, in particular, significantly influence participants’ compliance with the robot’s suggestions. Thus, indicating that it is ‘in the same boat’ with the user, i.e., that it shares the situational context, increases a robot’s persuasiveness during advice giving.
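
To illustrate the first of these cues, the following is a minimal sketch, not the authors’ implementation, of a face-tracking loop of the kind that can signal “I see you” by orienting a robot’s head towards the detected face. It assumes OpenCV’s bundled frontal-face Haar cascade and a webcam at index 0; orient_head_towards is a hypothetical placeholder for whatever motor API the robot exposes.

    import cv2

    # Load OpenCV's stock frontal-face detector (shipped with the library).
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    cap = cv2.VideoCapture(0)  # default webcam

    def orient_head_towards(cx, cy, frame_w, frame_h):
        # Hypothetical stand-in for a robot motor command: map the face
        # centre to normalized pan/tilt offsets in [-0.5, 0.5].
        pan = cx / frame_w - 0.5
        tilt = cy / frame_h - 0.5
        print(f"pan={pan:+.2f} tilt={tilt:+.2f}")

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces):
            # Track the largest (presumably nearest) face in view.
            x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
            orient_head_towards(x + w / 2, y + h / 2,
                                frame.shape[1], frame.shape[0])
        if cv2.waitKey(1) & 0xFF == ord("q"):  # press 'q' to stop
            break

    cap.release()

In a study setup, the printed pan/tilt values would instead drive the robot’s head joints, so that the participant experiences continuous gaze contact while the other cues (incremental feedback, contextual comments and gestures) are layered on top.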

