, Nancie Gunson, Angus Addlesee, Neeraj Cherakara, Christian Dondrup, Weronika Sieińska, Marta Romeo and Oliver Lemon
Abstract
Managing conversational interactions with groups of people is still an open challenge in human-robot interaction, requiring a multi-modal combination of sensory inputs/outputs and dialogue systems. In this paper, we present the development of an integrated multi-modal system that connects a Large Language Model (LLM) with a social robot’s perception and action modules to manage situated multi-party interactions. We describe and discuss the exploratory results of a system-wide performance evaluation via a within-subjects user study in which 27 unique pairs of participants interacted with a social robot under two conditions: a multi-party-capable system and a baseline system with only single-party capabilities. Participants interacted with both systems in a combination of task-based and open-ended scenarios, for a total of 108 interactions per system. Our evaluation showed a slight preference for the Multi-Party system and more balanced interactions overall, and highlights both the potential and the open challenges of integrating LLM capabilities into robotic conversational systems.
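To illustrate the kind of coupling between perception, an LLM, and turn management that the abstract describes, the sketch below is a hypothetical minimal example, not the authors’ implementation: it assumes a perception module that tags each utterance with a speaker and an addressee, and a generic `llm` callable; all names (`PerceptionEvent`, `handle_turn`, etc.) are illustrative.

```python
# Hypothetical sketch of multi-party turn handling with an LLM-in-the-loop.
# Assumptions: perception emits (speaker, utterance, addressee) events;
# `llm` is any text-in/text-out callable. None of these names come from the paper.

from dataclasses import dataclass, field

@dataclass
class PerceptionEvent:
    speaker_id: str   # who the audio/vision modules identify as speaking
    utterance: str    # ASR transcript of the utterance
    addressee: str    # "robot" or another participant's id

@dataclass
class DialogueState:
    history: list = field(default_factory=list)  # (speaker, utterance) turns

def build_prompt(state: DialogueState, event: PerceptionEvent) -> str:
    """Serialise the multi-party history so the LLM sees who said what to whom."""
    lines = [f"{spk}: {utt}" for spk, utt in state.history]
    lines.append(f"{event.speaker_id} (to {event.addressee}): {event.utterance}")
    lines.append("robot:")
    return "\n".join(lines)

def handle_turn(state: DialogueState, event: PerceptionEvent, llm) -> str | None:
    """Respond only when the robot is addressed; otherwise just track the turn."""
    state.history.append((event.speaker_id, event.utterance))
    if event.addressee != "robot":
        return None  # listen silently while participants talk to each other
    reply = llm(build_prompt(state, event))
    state.history.append(("robot", reply))
    return reply
```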