Volume 22, Issue 1
  • ISSN 1572-0373
  • E-ISSN 1572-0381

Abstract

In human-chatbot interaction, users casually and regularly offend and abuse the chatbots they interact with. The current paper explores the relationship between chatbot humanlikeness on the one hand and sexual advances and verbal aggression by the user on the other. A total of 283 conversations between the Cleverbot chatbot and its users were harvested and analysed. Our results showed higher counts of user verbal aggression and sexual comments towards Cleverbot when Cleverbot appeared more humanlike in its behaviour. Caution is warranted in interpreting the results, however, as no experimental manipulation was conducted and causality can therefore not be inferred. Nonetheless, the findings are relevant both for research on the abuse of conversational agents and for the development of effective approaches to discourage or prevent verbal aggression by chatbot users.
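The analysis described above is correlational: per-conversation counts of verbal aggression and sexual comments are related to how humanlike the chatbot appeared, without any experimental manipulation. As a minimal illustrative sketch only (the data, coding scheme, variable names, and model specification below are hypothetical and not taken from the study), a count outcome of this kind is commonly modelled with a Poisson regression, using conversation length as an exposure offset so that longer conversations are not mistaken for more abusive ones:

```python
# Illustrative sketch with hypothetical data; not the study's corpus, coding, or model.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per conversation: count of abusive user utterances, a rated
# humanlikeness score for the chatbot's behaviour, and the number of user turns.
df = pd.DataFrame({
    "abuse_count":   [0, 2, 5, 1, 7, 3],
    "humanlikeness": [1.2, 2.8, 4.1, 1.9, 4.6, 3.3],
    "n_turns":       [20, 35, 40, 25, 50, 30],
})

# Poisson regression of abuse counts on humanlikeness; the log of the number
# of turns enters as an offset, so the model describes abuse per turn.
model = smf.glm(
    "abuse_count ~ humanlikeness",
    data=df,
    family=sm.families.Poisson(),
    offset=np.log(df["n_turns"]),
).fit()

# A positive, reliable humanlikeness coefficient would correspond to the kind of
# association reported in the abstract; it still says nothing about causality.
print(model.summary())
```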
