Interaction Studies - Volume 26, Issue 3, 2025
- Moderating multi-party conversations with social robots
Author(s): Lucrezia Grassi, Carmine Tommaso Recchiuto and Antonio Sgorbissa
pp.: 392–421
Abstract: Social robotics is a multidisciplinary field focused on designing and implementing robots capable of interacting with humans in social environments. Group conversations, however, challenge robots to interpret social signals well enough to participate effectively. This study evaluates control policies for moderating multi-party conversation dynamics using a humanoid robot. The system employs a cloud-based framework to calculate speaker dominance as a weighted combination of speaking time and word count, while the Louvain algorithm identifies subgroups among participants. The control policies aim to minimize dominance disparities and subgroup formation, fostering balanced participation and group cohesion. A study with 300 middle school students compared these policies to a baseline in which the robot did not address individuals directly. The results showed that the proposed policies reduced dominance gaps and subgroup formation, promoting more balanced interactions. These findings highlight the potential applicability of the approach across education, healthcare, and entertainment.
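The abstract describes speaker dominance as a weighted combination of speaking time and word count, with policies that steer attention toward less dominant participants. A minimal Python sketch of that idea, assuming equal weights and simple normalization; the function names, the weight `alpha`, and the "address the least dominant speaker" policy are illustrative assumptions, not details taken from the paper:

```python
def dominance_scores(speaking_time, word_count, alpha=0.5):
    """Dominance per participant: a weighted combination of the share of
    total speaking time and the share of total words spoken.
    `alpha` balances the two signals (the paper's exact weights are not given)."""
    total_time = sum(speaking_time.values()) or 1.0
    total_words = sum(word_count.values()) or 1.0
    return {
        p: alpha * speaking_time[p] / total_time
           + (1 - alpha) * word_count[p] / total_words
        for p in speaking_time
    }

def next_addressee(scores):
    """A simple moderation policy: address the least dominant participant,
    narrowing the dominance gap over repeated turns."""
    return min(scores, key=scores.get)

# Example: Ann has dominated the conversation so far.
time_s = {"Ann": 120.0, "Ben": 40.0, "Cara": 20.0}   # seconds spoken
words  = {"Ann": 300,   "Ben": 90,   "Cara": 60}     # words spoken
scores = dominance_scores(time_s, words)
print(next_addressee(scores))  # prints "Cara"
```

The paper additionally uses the Louvain community-detection algorithm to spot subgroups among participants; that step is omitted here, but a weighted interaction graph fed to an off-the-shelf Louvain implementation would play the analogous role.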
- Evaluating multi-party interactions with social robots using large language models and multi-modal systems
pp.: 422–476
Abstract: Managing conversational interactions with groups of people is still an open challenge in human-robot interaction, requiring a multi-modal combination of sensory inputs/outputs and dialogue systems. In this paper, we present the development of an integrated multi-modal system connecting a Large Language Model (LLM) with a social robot's perception and action modules for managing situated multi-party interactions. We describe and discuss the exploratory results of a system-wide performance evaluation via a within-subjects user study in which 27 unique pairs of participants interacted with a social robot under two conditions: a multi-party capable system and a baseline system with only single-party capabilities. Participants interacted with the two systems in a combination of task-based and open-ended scenarios, for a total of 108 interactions with each of the two systems. Our evaluation demonstrated a slight preference for the multi-party system and a more balanced interaction overall, and it highlights both the potential and the open challenges of integrating LLM capabilities into robotic conversational systems.
- If they disagree, will you conform?
Author(s): Giulia Pusceddu, Giulio Antonio Abbo, Francesco Rea, Tony Belpaeme and Alessandra Sciutti
pp.: 477–505
Abstract: This study investigates whether the opinions of robotic agents are more likely to influence human decision-making when the robots are perceived as value-aware (i.e., when they display an understanding of human principles). We designed an experiment in which participants interacted with two Furhat robots — one programmed to be Value-Aware and the other Non-Value-Aware — during a labeling task for images representing human values. Results indicate that participants distinguished the Value-Aware robot from the Non-Value-Aware one. Although their explicit choices did not indicate a clear preference for one robot over the other, participants directed their gaze more toward the Value-Aware robot. Additionally, the Value-Aware robot was perceived as more loyal, suggesting that value awareness in a social robot may enhance its perceived commitment to the group. Finally, when both robots disagreed with the participant, conformity occurred in about one out of four trials, and participants took longer to confirm their responses, suggesting that two robots expressing dissent may introduce hesitation in decision-making. On one hand, this highlights the potential risk that robots, if misused, could manipulate users for unethical purposes. On the other hand, it reinforces the idea that social robots might encourage reflection in ambiguous situations and help users avoid scams.
Volumes & issues
-
Volume 26 (2025)
-
Volume 25 (2024)
-
Volume 24 (2023)
-
Volume 23 (2022)
-
Volume 22 (2021)
-
Volume 21 (2020)
-
Volume 20 (2019)
-
Volume 19 (2018)
-
Volume 18 (2017)
-
Volume 17 (2016)
-
Volume 16 (2015)
-
Volume 15 (2014)
-
Volume 14 (2013)
-
Volume 13 (2012)
-
Volume 12 (2011)
-
Volume 11 (2010)
-
Volume 10 (2009)
-
Volume 9 (2008)
-
Volume 8 (2007)
-
Volume 7 (2006)
-
Volume 6 (2005)
-
Volume 5 (2004)