Interaction Studies - Volume 14, Issue 3, 2013
-
Design of a gaze behavior at a small mistake moment for a robot
Author(s): Masahiro Shiomi, Kayako Nakagawa and Norihiro Hagita
pp.: 317–328 (12)
A change of gaze behavior at the moment of a small mistake is a natural response that reveals our own mistakes and suggests an apology to the people with whom we are working or interacting. In this paper we investigate how a robot's gaze behaviors at small mistake moments change the impressions it makes on others. To prepare gaze behaviors for a robot, we first identified by questionnaire how human gaze behaviors change in such situations and extracted three kinds: looking at the other, looking down, and looking away. We prepared each gaze behavior, added a no-gaze behavior, and investigated how a robot's gaze behavior at a small mistake moment changes the impressions of the interacting people in a simple cooperative task. Experimental results show that the 'looking at the other' gaze behavior outperforms the other gaze behaviors in the perceived degrees of apologeticness and friendliness.
Keywords: Communication robots; gaze; mistake; mitigation
-
Robots can be perceived as goal-oriented agents
Author(s): Alessandra Sciutti, Ambra Bisio, Francesco Nori, Giorgio Metta, Luciano Fadiga and Giulio Sandini
pp.: 329–350 (22)
Understanding the goals of others is fundamental for any kind of interpersonal interaction and collaboration. From a neurocognitive perspective, intention understanding has been proposed to depend on the involvement of the observer's motor system in the prediction of the observed actions (Nyström et al. 2011; Rizzolatti & Sinigaglia 2010; Southgate et al. 2009). An open question is whether a similar understanding of goals mediated by motor resonance can occur not only between humans, but also with humanoid robots. In this study we investigated whether goal-oriented robotic actions can induce motor resonance by measuring the appearance of anticipatory gaze shifts to the goal during action observation. Our results indicate similar implicit processing of human and robot actions and suggest anticipatory gaze behaviour as a tool for the evaluation of human-robot interactions.
Keywords: Humanoid robot; motor resonance; anticipation; proactive gaze; action understanding
-
Can infants use robot gaze for object learning?: The effect of verbalization
Author(s): Yuko Okumura, Yasuhiro Kanakogi, Takayuki Kanda, Hiroshi Ishiguro and Shoji Itakura
pp.: 351–365 (15)
Previous research has shown that although infants follow the gaze direction of robots, robot gaze does not facilitate infants' learning about objects. The present study examined whether robot gaze affects infants' object learning when the gaze behavior is accompanied by verbalizations. Twelve-month-old infants were shown videos in which a robot gazed at an object while producing verbalizations. The results showed that infants not only followed the robot's gaze direction but also preferentially attended to the cued object when an ostensive verbal signal was present. Moreover, infants showed enhanced processing of the cued object as ostensive and referential verbal signals were increasingly present. These effects were not observed when mere nonverbal sound stimuli were added instead of verbalizations. Taken together, our findings indicate that robot gaze accompanied by verbalizations facilitates infants' object learning, suggesting that verbalizations are important in the design of robot agents from which infants can learn.
Keywords: gaze following; humanoid robot; infant learning; verbalization; cognitive development
-
Interactions between a quiz robot and multiple participants: Focusing on speech, gaze and bodily conduct in Japanese and English speakers
pp.: 366–389 (24)
This paper reports on a quiz robot experiment in which we explore similarities and differences in participants' speech, gaze, and bodily conduct when responding to a robot's speech, gaze, and bodily conduct across two languages. Our experiment involved three-person groups of Japanese and English-speaking participants who stood facing the robot and a projection screen that displayed pictures related to the robot's questions. The robot was programmed so that its speech was coordinated with its gaze, body position, and gestures in relation to transition relevance places (TRPs), key words, and deictic words and expressions (e.g. this, this picture) in both languages. Contrary to findings on human interaction, we found that English speakers nodded their heads more frequently than Japanese speakers in human-robot interaction (HRI). Our findings suggest that the coordination of the robot's verbal and non-verbal actions around TRPs, key words, and deictic words and expressions is important for facilitating HRI irrespective of participants' native language.
Keywords: coordination of verbal and non-verbal actions; robot gaze comparison between English and Japanese; human-robot interaction (HRI); transition relevance place (TRP); conversation analysis
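The coordination described in this abstract can be pictured as a timed script that aligns gaze and gesture events with key words and TRPs in an utterance. Below is a minimal illustrative sketch of that idea in Python; the timings, event names, and handlers are hypothetical and are not taken from the authors' system.

import time

# Hypothetical utterance script: (seconds from onset, event, argument).
# Gaze and pointing events are aligned with the deictic word "this"
# and with the transition relevance place (TRP) ending the turn.
SCRIPT = [
    (0.0, "speak", "Please look at this picture."),
    (0.9, "gaze", "screen"),     # deictic word: gaze at the referent
    (1.1, "point", "screen"),
    (2.5, "gaze", "addressee"),  # TRP: return gaze to invite a response
]

def run_script(script, handlers):
    """Dispatch timed events to behavior handlers in order."""
    start = time.monotonic()
    for t, event, arg in script:
        # Simple polling wait; a real controller would use the robot
        # middleware's scheduler or an event loop.
        while time.monotonic() - start < t:
            time.sleep(0.01)
        handlers[event](arg)

handlers = {
    "speak": lambda text: print("[speech]", text),
    "gaze": lambda target: print("[gaze] ->", target),
    "point": lambda target: print("[point] ->", target),
}
run_script(SCRIPT, handlers)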
-
Cooperative gazing behaviors in human multi-robot interaction
pp.: 390–418 (29)
When humans address multiple robots with informative speech acts (Clark & Carlson 1982), their cognitive resources are shared among all the participating robot agents. At each moment, the user's behavior is determined not only by the actions of the robot they are directly gazing at, but also by the behaviors of all the other robots in the shared environment. We define cooperative behavior as the action performed by the robots that are not capturing the user's direct attention. In this paper, we are interested in how human participants adjust and coordinate their own behavioral cues when the robot agents perform different cooperative gaze behaviors. A novel gaze-contingent platform was designed and implemented, in which the robots' behaviors were triggered by the participant's attentional shifts in real time. Results showed that the human participants were highly sensitive to the different cooperative gaze behaviors performed by the robot agents.
Keywords: human-robot interaction; multi-robot interaction; multiparty interaction; eye gaze cue; embodied conversational agent
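A gaze-contingent loop of the kind this abstract describes can be summarized in a few lines: poll the participant's gaze target, let the attended robot respond directly, and let the unattended robots perform the cooperative behavior. The Python sketch below is only an illustration of that loop, with a random stand-in for the eye tracker; none of these names come from the platform itself.

import random
import time

def read_gaze_target(robots):
    # Stand-in for a real eye tracker that reports which robot
    # the participant is currently fixating.
    return random.choice(robots)

def direct_behavior(robot):
    print(robot, "(attended): makes eye contact with the participant")

def cooperative_behavior(robot):
    # Robots outside the participant's direct attention perform
    # one of the cooperative gaze behaviors under study.
    print(robot, "(unattended): performs cooperative gaze behavior")

def gaze_contingent_loop(robots, steps=5, interval=0.5):
    """Trigger robot behaviors from the participant's attention shifts."""
    for _ in range(steps):
        attended = read_gaze_target(robots)
        for robot in robots:
            if robot == attended:
                direct_behavior(robot)
            else:
                cooperative_behavior(robot)
        time.sleep(interval)

gaze_contingent_loop(["robot_A", "robot_B", "robot_C"])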
-
Learning where to look: Autonomous development of gaze behavior for natural human-robot interaction
Author(s): Yasser F.O. Mohammad and Toyoaki Nishida
pp.: 419–450 (32)
-
Designing robot eyes for communicating gaze
Author(s): Tomomi Onuki, Takafumi Ishinoda, Emi Tsuburaya, Yuki Miyata, Yoshinori Kobayashi and Yoshinori Kuno
pp.: 451–479 (29)
Human eyes not only serve the function of enabling us "to see" something, but also perform the vital role of allowing us "to show" our gaze in non-verbal communication, such as establishing eye contact and joint attention. The eyes of service robots should therefore perform both of these functions as well. Moreover, they should be friendly in appearance so that humans feel comfortable with the robots. We therefore maintain that both gaze communication capability and friendliness are important in designing the appearance of robot eyes. In this paper, we propose a new robot face with rear-projected eyes that can change their appearance while also conveying gaze through incorporated stereo cameras. Additionally, through experiments in which we altered the shape and iris size of the robot's eyes, we examine which eye shape is most suitable for gaze reading and gives the friendliest impression.
Keywords: Gaze reading; facial design; projector camera system
-
Course of maternal prosodic incitation (motherese) during early development in autism: An exploratory home movie study
pp.: 480–496 (17)
We examined the course of caregiver (CG) motherese and of the infant's response based on home movies of two single cases: a boy with typical development (TD) and a boy with autistic development (AD). We first blindly assessed infant-CG interaction using the Observer computer-based coding procedure, then analyzed CG speech production using a computerized algorithm. Finally, we fused the two procedures and filtered for co-occurrences. In this exploratory study we found that the course of CG parentese differed by gender (father vs. mother) and child status (TD vs. AD). The course of the infant's response to CG vocalization differed according to the type of speech (motherese vs. other speech) and child status (TD vs. AD). Mothers spent more time interacting with their infants, and the father appeared to interact with his child preferentially between 12 and 18 months in the TD case, but not in the AD case. The TD boy responded equally well to motherese and to other speech after 1 year of age. For the AD boy, responses to both types of speech were lower than those of the TD boy and decreased from the second to the third semester.
Keywords: Autism; motherese; early interaction; computational methods
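The co-occurrence filtering step mentioned in this abstract can be illustrated with a simple interval-overlap test: an infant response counts as co-occurring with a CG vocalization when their time intervals overlap. The Python sketch below is only an assumed reading of that step, with made-up data; it is not the authors' code.

def overlaps(a, b):
    """Return True if (start, end) intervals a and b overlap in time."""
    return a[0] < b[1] and b[0] < a[1]

# Made-up annotation intervals in seconds: CG speech segments tagged
# as motherese, and infant responses coded with the Observer procedure.
motherese = [(2.0, 3.5), (10.0, 12.0)]
infant_responses = [(3.0, 4.0), (20.0, 21.0)]

cooccurrences = [(s, r) for s in motherese for r in infant_responses
                 if overlaps(s, r)]
print(cooccurrences)  # [((2.0, 3.5), (3.0, 4.0))]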