Interaction Studies - Volume 9, Issue 2, 2008
The carrot and the stick: The role of praise and punishment in human–robot interaction
Author(s): Christoph Bartneck, Juliane Reichenbach and Julie Carpenter, pp. 179–203
This paper presents two studies that investigate how people praise and punish robots in a collaborative game scenario. In the first study, subjects played a game together with humans, computers, and anthropomorphic and zoomorphic robots. The different partners and the game itself were presented on a computer screen. Results showed that praise and punishment were used the same way for computer and human partners. Yet robots, which are essentially computers with a different embodiment, were treated differently. Very machine-like robots were treated just like the computer and the human; robots very high on anthropomorphism/zoomorphism were praised more and punished less. However, barely any of the participants believed that they actually played together with a robot. After this first study, we refined the method and also tested whether the presence of a real robot, in comparison to a screen representation, would influence the measurements. The robot, in the form of an AIBO, was either present in the room or only represented on the participants’ computer screen (presence). Furthermore, the robot made either 20% or 40% errors (error rate) in the collaborative game. We automatically measured the praising and punishing behavior of the participants towards the robot and also asked the participants to estimate their own behavior. Results show that even the presence of the robot in the room did not convince all participants that they played together with the robot. To gain full insight into this human–robot relationship it might be necessary to directly interact with the robot. The participants unconsciously praised AIBO more than the human partner, but punished it just as much. Robots that adapt to the users’ behavior should therefore pay extra attention to the users’ praises, compared to their punishments.
The influence of robot personality on perceived and preferred level of user control
Author(s): Bernt Meerbeek, Jettie Hoonhout, Peter Bingley and Jacques M.B. Terken, pp. 204–229
This paper describes the design and evaluation of a personality for the robotic user interface “iCat”. An application was developed that helps users find a TV programme that fits their interests. Two experiments were conducted to investigate which personality users prefer for the robotic TV assistant, what level of control they prefer (i.e. how autonomously the robot should behave), and how personality and level of control relate to each other. The first experiment demonstrated that it is possible to create convincing personalities for the TV assistant by applying various social cues. The results of the second experiment showed that an extraverted and agreeable TV assistant was preferred over a more introverted and formal one. Overall, the most preferred combination was an extraverted and friendly personality with low user control. Additionally, it was found that the perceived level of control was influenced by the robot’s personality. This suggests that the robot’s personality can be used as a means to increase the amount of control that users perceive.
Interaction between human and robot: An affect-inspired approach
Author(s): Pramila Agrawal, Changchun Liu and Nilanjan Sarkar, pp. 230–257
This paper presents a human–robot interaction framework in which a robot can infer implicit affective cues of a human and respond to them appropriately. The affective cues are inferred by the robot in real time from physiological signals. A robot-based basketball game is designed in which a robotic “coach” monitors the human participant’s anxiety and dynamically reconfigures game parameters to allow skill improvement while maintaining desired anxiety levels. The results of these anxiety-based sessions are compared with performance-based sessions, in which the game is adapted only according to the player’s performance. It was observed that 79% of the participants showed lower anxiety during the anxiety-based sessions than in the performance-based sessions, 64% showed a greater improvement in performance after the anxiety-based sessions, and 71% reported greater overall satisfaction during the anxiety-based sessions. This is the first time, to our knowledge, that the impact of real-time affective communication between a robot and a human has been demonstrated experimentally.
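For illustration, the adaptation mechanism described in this abstract can be pictured as a closed loop that reads an anxiety estimate derived from physiological signals and nudges the game's difficulty towards a desired anxiety level. The sketch below is only a minimal rendering of that idea: `read_anxiety_estimate()`, the `GameParams` fields and the proportional update rule are hypothetical placeholders, not the authors' implementation.

```python
# Illustrative sketch of an anxiety-driven difficulty-adaptation loop.
# NOT the authors' algorithm: the anxiety estimator, parameter names and
# the proportional update rule are placeholder assumptions.

from dataclasses import dataclass
import random


@dataclass
class GameParams:
    ball_speed: float = 1.0        # hypothetical difficulty knobs
    target_distance: float = 2.0


def read_anxiety_estimate() -> float:
    """Placeholder for an anxiety index in [0, 1] that would be inferred
    in real time from physiological signals."""
    return random.random()


def adapt(params: GameParams, anxiety: float,
          target_anxiety: float = 0.4, gain: float = 0.5) -> GameParams:
    """Lower the difficulty when anxiety exceeds the desired level,
    and raise it when the player is under-challenged."""
    error = anxiety - target_anxiety
    params.ball_speed = max(0.5, params.ball_speed * (1.0 - gain * error))
    params.target_distance = max(1.0, params.target_distance * (1.0 - gain * error))
    return params


if __name__ == "__main__":
    params = GameParams()
    for round_no in range(5):          # one adaptation step per game round
        anxiety = read_anxiety_estimate()
        params = adapt(params, anxiety)
        print(f"round {round_no}: anxiety={anxiety:.2f} -> {params}")
```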
Social and physiological influences of robot therapy in a care house
Author(s): Kazuyoshi Wada and Takanori Shibata, pp. 258–276
This article presents research on robot therapy for elderly residents in a care house. Experiments were conducted from June 2005, lasting more than 2 months. Two therapeutic baby seal robots were introduced to the residents, and activated for over 9 hours daily. To investigate the psychological and social effects of the robots, the residents’ activities in public areas were recorded using video cameras, during the daytime (8:30–18:00) for over 2 months. In addition, urinalysis of the residents was performed for 17-ketosteroid sulfate and 17-hydroxycorticosteroid. Results of the video analysis indicated that social interaction increased through interaction with the seal robots. Results of the urine tests showed that the reactions of the subjects’ vital organs to stress improved after the introduction of the robots.
Analyzing social situations for human–robot interaction
Author(s): Alan R. Wagner and Ronald C. Arkin, pp. 277–300
This paper presents an algorithm that allows a robot to analyze the social situations it encounters. We contribute a method that allows the robot to use information about the situation to select interactive behaviors. This work is based on interdependence theory, a social-psychological theory of interaction and interpersonal situation analysis. Experiments demonstrate the utility of the information provided by the situation analysis algorithm and the value of this method for guiding robot interaction. We conclude that the situation analysis algorithm offers a viable, principled, and general approach to exploring interactive robotics problems.
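Interdependence theory commonly models an interpersonal situation as an outcome matrix over both partners' behavior options, so a minimal sketch of situation-driven behavior selection can be written along those lines. The payoff matrices, the interdependence feature and the selection rule below are illustrative assumptions and are not the algorithm contributed by the paper.

```python
# Hedged sketch: outcome-matrix-based situation analysis, loosely in the
# spirit of interdependence theory. All values and rules are illustrative.

import numpy as np

# robot_outcomes[i, j]: robot's payoff when the robot picks option i and the
# human picks option j; human_outcomes[i, j]: the human's payoff for the same
# joint choice.
robot_outcomes = np.array([[8.0, 2.0],    # robot option 0: "assist"
                           [5.0, 5.0]])   # robot option 1: "wait"
human_outcomes = np.array([[7.0, 1.0],
                           [4.0, 4.0]])


def interdependence(robot_outcomes: np.ndarray) -> float:
    """Crude situation feature: how much the robot's payoff varies with the
    human's choice, averaged over the robot's own options."""
    return float(np.mean(np.ptp(robot_outcomes, axis=1)))


def select_behavior(robot_outcomes, human_outcomes, prosocial_weight=0.5):
    """Pick the robot behavior maximizing a blend of its own expected outcome
    and the human's, assuming the human chooses uniformly at random."""
    expected = (robot_outcomes.mean(axis=1)
                + prosocial_weight * human_outcomes.mean(axis=1))
    return int(np.argmax(expected))


print("interdependence:", interdependence(robot_outcomes))
print("selected robot behavior index:",
      select_behavior(robot_outcomes, human_outcomes))
```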
A cognitive approach to goal-level imitation
Author(s): Antonio Chella, Haris Dindo and Ignazio Infantino, pp. 301–318
Imitation in robotics is seen as a powerful means to reduce the complexity of robot programming. It allows users to instruct robots by simply showing them how to execute a given task. Through imitation, robots can learn from their environment and adapt to it just as human newborns do. Despite the different facets of imitative behaviour observed in humans and higher primates, imitation in robotics has usually been implemented as a process of copying demonstrated actions onto the movement apparatus of the robot. While the results achieved so far are impressive, we believe that a shift towards a higher expression of imitation is needed, namely the comprehension of human actions and the inference of the underlying intentions. In order to be useful as human companions, robots must act for a purpose, achieving goals and fulfilling human expectations. In this paper we present ConSCIS (Conceptual Space based Cognitive Imitation System), an architecture for goal-level imitation in robotics in which the focus is on the final effects of actions on objects. The architecture tightly links low-level data with high-level knowledge and integrates, in a unified framework, several aspects of imitation, such as perception, learning, knowledge representation, action generation and robot control. Some preliminary experimental results with an anthropomorphic arm/hand robotic system are shown.
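A minimal sketch of the goal-level idea, assuming a toy predicate representation of object states: the robot extracts the effect the demonstration had on the objects and searches its own action repertoire for an action that reproduces that effect, ignoring how the human produced it. The predicates and the action table are invented for illustration and are not taken from ConSCIS.

```python
# Illustrative sketch of goal-level imitation: reproduce the *effect* of a
# demonstration on objects, not the demonstrated movement itself.
# The predicates and the action repertoire are invented for illustration.

# Observed world state before and after the human demonstration,
# expressed as simple object predicates.
before = {("cup", "on_table"), ("lid", "off")}
after = {("cup", "on_table"), ("lid", "on")}

demonstrated_effect = after - before          # what changed: {("lid", "on")}

# The robot's own primitive actions, described by the effects they achieve.
robot_actions = {
    "close_lid": {("lid", "on")},
    "pick_up_cup": {("cup", "in_gripper")},
    "push_cup": {("cup", "moved")},
}


def imitate_goal(effect, actions):
    """Return a robot action whose known effects cover the demonstrated
    effect, regardless of how the human produced it."""
    for name, effects in actions.items():
        if effect <= effects:
            return name
    return None


print("demonstrated effect:", demonstrated_effect)
print("action chosen to reproduce it:",
      imitate_goal(demonstrated_effect, robot_actions))
```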
Learning behavior fusion from demonstration
Author(s): Monica Nicolescu, Odest Chadwicke Jenkins, Adam Olenderski and Eric Fritzinger, pp. 319–352
A critical challenge in robot learning from demonstration is the ability to map the behavior of the trainer onto a robot’s existing repertoire of basic/primitive capabilities. In part, this problem is due to the fact that the observed behavior of the teacher may consist of a combination (or superposition) of the robot’s individual primitives. The problem becomes more complex when the task involves temporal sequences of goals. We introduce an autonomous control architecture that allows for learning of hierarchical task representations, in which: (1) every goal is achieved through a linear superposition (or fusion) of robot primitives and (2) sequencing across goals is achieved through arbitration. We treat learning of the appropriate superposition as a state estimation problem over the space of possible linear fusion weights, inferred through a particle filter. We validate our approach in both simulated and real-world environments with a Pioneer 3DX mobile robot.
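Since the abstract frames fusion-weight learning as state estimation over linear fusion weights with a particle filter, a minimal self-contained sketch of that idea is given below. The two toy primitives, the synthetic demonstration, the Gaussian likelihood and the resampling details are assumptions made for the sketch, not the setup used with the Pioneer 3DX.

```python
# Minimal particle-filter sketch for estimating linear behavior-fusion weights
# from demonstrated commands. Primitives, noise model and ground truth are
# synthetic assumptions, not the configuration used in the paper.

import numpy as np

rng = np.random.default_rng(0)


def primitives(state):
    """Two toy primitives producing 2-D velocity commands for a given state."""
    goal_seek = np.array([1.0, 0.0]) - state          # head toward a fixed goal
    avoid = np.array([0.0, 1.0]) * np.sin(state[0])   # arbitrary second behavior
    return np.stack([goal_seek, avoid])               # shape: (2 primitives, 2 dims)


true_w = np.array([0.7, 0.3])                         # ground-truth fusion weights

# Particles live on the space of possible weight vectors (kept on the simplex).
n_particles = 500
particles = rng.dirichlet(np.ones(2), size=n_particles)
weights = np.full(n_particles, 1.0 / n_particles)
obs_noise = 0.05

for step in range(50):
    state = rng.uniform(-1, 1, size=2)
    P = primitives(state)                                       # (2, 2)
    observed = true_w @ P + rng.normal(0, obs_noise, size=2)    # demonstrated command

    predicted = particles @ P                                   # (n_particles, 2)
    err = np.linalg.norm(predicted - observed, axis=1)
    weights *= np.exp(-0.5 * (err / obs_noise) ** 2)            # Gaussian likelihood
    weights /= weights.sum()

    # Resample, jitter, and renormalize back onto the simplex.
    idx = rng.choice(n_particles, size=n_particles, p=weights)
    particles = particles[idx] + rng.normal(0, 0.01, size=(n_particles, 2))
    particles = np.clip(particles, 1e-6, None)
    particles /= particles.sum(axis=1, keepdims=True)
    weights = np.full(n_particles, 1.0 / n_particles)

print("estimated fusion weights:", particles.mean(axis=0), "(true:", true_w, ")")
```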
Content-based control of goal-directed attention during human action perception
Author(s): Yiannis Demiris and Bassam Khadhouri, pp. 353–376
When perceiving human actions, a robotic assistant needs to direct its computational and sensor resources to the relevant parts of the human action. In previous work we introduced HAMMER (Hierarchical Attentive Multiple Models for Execution and Recognition) (Demiris and Khadhouri, 2006), a computational architecture that forms multiple hypotheses with respect to what the demonstrated task is, and multiple predictions with respect to the forthcoming states of the human action. To confirm their predictions, the hypotheses request information from an attentional mechanism, which allocates the robot’s resources as a function of the saliency of the hypotheses. In this paper we augment the attention mechanism with a component that considers the content of the hypotheses’ requests, with respect to the content’s reliability, utility and cost. This content-based attention component further optimises the utilisation of the resources while remaining robust to noise. Such computational mechanisms are important for the development of robotic devices that will respond rapidly to human actions, either for imitation or collaboration purposes.
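A hedged sketch of content-based attention: each hypothesis's information request is scored from the hypothesis's saliency together with the request's reliability, utility and cost, and a limited sensing budget is granted greedily to the best-scoring requests. The scoring formula, the budget mechanism and the numbers below are illustrative assumptions, not the HAMMER implementation.

```python
# Hedged sketch of content-based attention: rank information requests from
# competing hypotheses by saliency, reliability and utility per unit cost,
# then grant them under a fixed sensing/computation budget. The scoring
# formula and the numbers are illustrative, not taken from HAMMER.

from dataclasses import dataclass


@dataclass
class Request:
    hypothesis: str      # which action hypothesis issued the request
    saliency: float      # current saliency/confidence of that hypothesis
    reliability: float   # how trustworthy the requested information is
    utility: float       # how much the information would disambiguate hypotheses
    cost: float          # sensing/processing cost of obtaining it

    def score(self) -> float:
        return self.saliency * self.reliability * self.utility / self.cost


def allocate(requests, budget: float):
    """Greedily grant the highest-scoring requests that fit in the budget."""
    granted = []
    for req in sorted(requests, key=Request.score, reverse=True):
        if req.cost <= budget:
            granted.append(req.hypothesis)
            budget -= req.cost
    return granted


requests = [
    Request("reach-for-cup", saliency=0.8, reliability=0.9, utility=0.7, cost=2.0),
    Request("wave",          saliency=0.4, reliability=0.6, utility=0.9, cost=1.0),
    Request("point",         saliency=0.5, reliability=0.5, utility=0.4, cost=0.5),
]
print("granted requests:", allocate(requests, budget=2.5))
```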