Volume 20, Issue 3
  • ISSN: 1572-0373
  • E-ISSN: 1572-0381



This paper outlines the methodology and experiments associated with reshaping human intentions through robot movements in Human-Robot Interaction (HRI). Although the estimation of human intentions is well studied in the literature, reshaping intentions through robot-initiated interactions is a significant new branch of HRI. In this paper, we analyze how estimated human intentions can be deliberately changed through cooperation with mobile robots in real human-robot environments. We propose an intention-reshaping system that uses either Observable Operator Models (OOMs) or Hidden Markov Models (HMMs) to estimate human intention and to decide which moves a robot should perform to reshape the previously estimated human intention into a desired one. At the low level, the system tracks the locations of all mobile agents using cameras. We test our system on videos taken in a real HRI environment developed as our experimental setup. The results show that OOMs are faster than HMMs, and that both models yield correct decisions on the test sequences.
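The HMM-based estimation step described above can be sketched as follows: one HMM is trained per candidate intention, an observed sequence of camera-tracked locations is scored against each model, and the model with the highest likelihood gives the estimated intention. The sketch below is a minimal illustration of this idea, not the authors' implementation; the two intention labels, the state counts, and all probabilities are assumed purely for demonstration.

```python
# Illustrative sketch: HMM-based intention estimation by model selection.
# One discrete-observation HMM per candidate intention; the observed
# location sequence is scored with the forward algorithm and the
# highest-likelihood model names the estimated intention.
# All labels and probabilities below are assumptions, not the paper's values.

def forward_likelihood(pi, A, B, obs):
    """Forward algorithm: P(obs | HMM) for a discrete-observation HMM.

    pi  -- initial state distribution, length n
    A   -- n x n state transition matrix, A[i][j] = P(j | i)
    B   -- n x m emission matrix, B[i][o] = P(observation o | state i)
    obs -- sequence of observation symbols (discretized locations)
    """
    n = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(n)) * B[j][o]
                 for j in range(n)]
    return sum(alpha)

# Two toy intention models over 2 hidden states and 3 discretized
# camera-tracked locations (0, 1, 2). Parameters are illustrative.
MODELS = {
    "approach_robot": ([0.8, 0.2],
                       [[0.7, 0.3], [0.4, 0.6]],
                       [[0.6, 0.3, 0.1], [0.1, 0.3, 0.6]]),
    "leave_room":     ([0.5, 0.5],
                       [[0.6, 0.4], [0.2, 0.8]],
                       [[0.1, 0.3, 0.6], [0.6, 0.3, 0.1]]),
}

def estimate_intention(obs):
    """Return the intention label whose HMM best explains the observations."""
    return max(MODELS, key=lambda name: forward_likelihood(*MODELS[name], obs))
```

In a full system, one such comparison runs after every observation window; an intention-reshaping layer would then compare the winning label against the desired intention and choose a robot move accordingly. For long sequences, the forward recursion should be computed in log space (or with per-step scaling) to avoid numerical underflow.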




