Interaction Studies - Volume 8, Issue 3, 2007
What is a Human?: Toward psychological benchmarks in the field of human–robot interaction
pp. 363–390
In this paper, we move toward offering psychological benchmarks to measure success in building increasingly humanlike robots. By psychological benchmarks we mean categories of interaction that capture conceptually fundamental aspects of human life, specified abstractly enough to resist reduction to a mere psychological instrument, but capable of being translated into testable empirical propositions. Nine possible benchmarks are considered: autonomy, imitation, intrinsic moral value, moral accountability, privacy, reciprocity, conventionality, creativity, and authenticity of relation. Finally, we discuss how getting the right group of benchmarks in human–robot interaction will, in future years, help inform the foundational question of what constitutes the essential features of being human.
Intersubjectivity in human–agent interaction
Author(s): Justine Cassell and Andrea Tartaro
pp. 391–410
What is the hallmark of success in human–agent interaction? In animation and robotics, many have concentrated on the looks of the agent — whether the appearance is realistic or lifelike. We present an alternative benchmark that lies in the dyad and not the agent alone: Does the agent’s behavior evoke intersubjectivity from the user? That is, in both conscious and unconscious communication, do users react to behaviorally realistic agents in the same way they react to other humans? Do users appear to attribute similar thoughts and actions? We discuss why we distinguish between appearance and behavior, why we use the benchmark of intersubjectivity, our methodology for applying this benchmark to embodied conversational agents (ECAs), and why we believe this benchmark should be applied to human–robot interaction.
Nonverbal intimacy as a benchmark for human–robot interaction
Author(s): Billy Lee
pp. 411–422
Studies of human–human interactions indicate that relational dimensions, which are largely nonverbal, include intimacy/involvement, status/control, and emotional valence. This paper devises codes from a study of couples and strangers which may be behavior-mapped onto next-generation android bodies. The codes provide act specifications for a possible benchmark of nonverbal intimacy in human–robot interaction. The appropriateness of emotionally intimate behaviors for androids is considered, and the design and utility of an android counselor/psychotherapist, whose body is equipped with semi-autonomous visceral and behavioral capacities for ‘doing intimacy,’ are explored.
Benchmarks for evaluating socially assistive robotics
Author(s): David Feil-Seifer, Kristine Skinner and Maja J. Matarić
pp. 423–439
Socially assistive robotics (SAR) is a growing area of research, and evaluating SAR systems presents novel challenges. Using a robot for a socially assistive task can have various benefits and ethical implications, and many questions bear on whether a robot is effective for a given application domain. This paper describes several benchmarks for evaluating SAR systems. While numerous methods exist for evaluating the many factors involved in a robot’s design, benchmarks from psychology, anthropology, medicine, and human–robot interaction are proposed here as measures of success in evaluating a given SAR system and its impact on the user and the broader population.
What is the teacher’s role in robot programming by demonstration?: Toward benchmarks for improved learning
Author(s): Sylvain Calinon and Aude G. Billard
pp. 441–464
Robot programming by demonstration (RPD) covers methods by which a robot learns new skills through human guidance. We present an interactive, multimodal RPD framework using active teaching methods that places the human teacher in the robot’s learning loop. Two experiments are presented in which observational learning is first used to demonstrate a manipulation skill to a HOAP-3 humanoid robot by using motion sensors attached to the teacher’s body. Then, putting the robot through the motion, the teacher incrementally refines the robot’s skill by moving its arms manually, providing the appropriate scaffolds to reproduce the action. An incremental teaching scenario is proposed based on insights from various fields addressing developmental, psychological, and social issues related to teaching mechanisms in humans. Based on this analysis, different benchmarks are suggested to evaluate the setup further.
Working with a robot: Exploring relationship potential in human–robot systems
Author(s): Debra Bernstein, Kevin Crowley and Illah Nourbakhsh
pp. 465–482
Research on human–robot interaction has often ignored the human cognitive changes that might occur when humans and robots work together to solve problems. Facilitating human–robot collaboration will require understanding how the collaboration functions system-wide. We present detailed examples drawn from a study of children and an autonomous rover, and examine how children’s beliefs can guide the way they interact with and learn about the robot. Our data suggest that better collaboration might require that robots be designed to maximize their relationship potential with specific users.
Can robots be teammates?: Benchmarks in human–robot teams
Author(s): Victoria Groom and Clifford Nass
pp. 483–500
The team has become a popular model for organizing joint human–robot behavior. Robot teammates are designed with high levels of autonomy and well-developed coordination skills to aid humans in unpredictable environments. In this paper, we challenge the assumption that robots will succeed as teammates alongside humans. Drawing from the literature on human teams, we evaluate robots’ potential to meet the requirements of successful teammates. We argue that, lacking humanlike mental models and a sense of self, robots may prove untrustworthy and will be rejected from human teams. Benchmarks for evaluating human–robot teams are included, as are guidelines for defining alternative structures for human–robot groups.
Authenticity in the age of digital companions
Author(s): Sherry Turkle
pp. 501–517
The first generation of children to grow up with electronic toys and games saw computers as our “nearest neighbors.” They spoke of computers as rational machines and of people as emotional machines, a fragile formulation destined to be challenged. By the mid-1990s, computational creatures, including robots, were presenting themselves as “relational artifacts,” beings with feelings and needs. One consequence of this development is a crisis of authenticity in many quarters: in an increasing number of situations, people behave as though they no longer place value on living things and authentic emotion. This paper examines watershed moments in the history of human–machine interaction, focusing on the pertinence of relational artifacts to our collective perception of aliveness and of life’s purposes, and on the implications of relational artifacts for relationships. For now, the exploration of human–robot encounters leads us to questions about the morality of creating believable digital companions that are evocative but not authentic.