Interaction Studies - Volume 24, Issue 1, 2023
- Toward a multimodal and continuous approach of infant-adult interactions
  Author(s): Marianne Jover and Maya Gratier (pp. 5–47)
  Abstract: Adult-infant early dyadic interactions have been extensively explored by developmental psychologists. Around the age of 2 months, infants already demonstrate complex, delicate and very sensitive behaviors that seem to express their ability to interact and share emotions with their caregivers. This paper presents three pilot studies of parent-infant dyadic interaction in various set-ups. The first two present longitudinal data collected on two infants aged between 1 and 6 months and their mothers; we analyzed the development of coordination between them at the motor and vocal levels. The third pilot study explores interpersonal coordination in both vocal behavior and motor activity for one infant and his mother at 2, 4 and 6 months. These pilot studies, however, leave a number of questions open concerning developmental changes and infants’ progressive mastery of interaction. We identify areas worth examining and try to tease out specific issues that may help develop new methodological pathways for the study of early naturalistic social interaction. We assume that a continuous, rather than discrete, approach would better capture the changes taking place in the various communicative modalities, while also displaying each dyad’s specificity and the narrative dimension of social engagement between infants and caregivers.
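The abstract does not specify the continuous analysis algorithmically. As one illustration of what a continuous coordination measure could look like, the sketch below computes a windowed cross-correlation between two synchronously sampled signals (for example, an infant motor-activity envelope and a maternal vocal-intensity envelope). The signal names, sampling rate, window length and lag range are assumptions for illustration, not the authors' method.

```python
import numpy as np

def windowed_cross_correlation(infant_signal, adult_signal, fs, win_s=5.0, max_lag_s=2.0):
    """Peak normalized cross-correlation (and its lag) in sliding windows.

    infant_signal, adult_signal: 1-D arrays sampled at fs Hz (hypothetical
    continuous measures, e.g. motor activity and vocal intensity envelopes).
    Returns per-window peak correlation values and lags in seconds.
    """
    win = int(win_s * fs)
    max_lag = int(max_lag_s * fs)
    peaks, lags = [], []
    for start in range(0, len(infant_signal) - win, win):
        a = infant_signal[start:start + win]
        b = adult_signal[start:start + win]
        # z-score each window so the zero-lag value approximates Pearson r
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        xcorr = np.correlate(a, b, mode="full") / win
        mid = len(xcorr) // 2                       # zero-lag index
        segment = xcorr[mid - max_lag:mid + max_lag + 1]
        best = int(np.argmax(np.abs(segment)))
        peaks.append(segment[best])
        lags.append((best - max_lag) / fs)
    return np.array(peaks), np.array(lags)
```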
- The Puss in Boots effect
  Author(s): Jemma Forman, Louise Brown, Holly Root-Gutteridge, Graham Hole, Raffaela Lesch, Katarzyna Pisanski and David Reby (pp. 48–65)
  Abstract: Pet-directed speech (PDS) is often produced by humans when addressing dogs. Similar to infant-directed speech, PDS is marked by a relatively higher and more modulated fundamental frequency (f0) than is adult-directed speech. We tested the prediction that increasing eye size in dogs, one facial feature of neoteny (juvenilisation), would elicit exaggerated prosodic qualities of pet-directed speech. We experimentally manipulated eye size in photographs of twelve dog breeds by −15%, +15% and +30%. We first showed that dogs with larger eyes were indeed perceived as younger. We then recorded men and women speaking to these photographs; the same participants also rated the images for cuteness. Linear mixed-effects models demonstrated that increasing eye size by 15% significantly increased pitch range (f0 range) and variability (f0CV) among women only. Cuteness ratings did not vary with eye size, possibly due to a ceiling effect across eye sizes. Our results offer preliminary evidence that large eyes can elicit pet-directed speech and suggest that PDS may be modulated by perceived juvenility rather than cuteness. We discuss these findings in the context of inter-species vocal communication.
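The abstract reports linear mixed-effects models relating f0 measures to the eye-size manipulation, but not their exact specification. Below is a minimal sketch of how such a model could be fitted in Python with statsmodels; the data file and column names (speaker_id, sex, eye_size, f0_range) are hypothetical, not the authors' analysis code.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical long-format data: one row per recording, with the speaker's
# sex, the eye-size manipulation level, and the measured f0 range.
df = pd.read_csv("pds_recordings.csv")  # assumed columns: speaker_id, sex,
                                        # eye_size, f0_range

# Random intercepts per speaker; eye_size treated as a categorical factor
# (-15%, 0%, +15%, +30%) interacting with speaker sex.
model = smf.mixedlm("f0_range ~ C(eye_size) * sex", data=df,
                    groups=df["speaker_id"])
result = model.fit()
print(result.summary())
```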
- Full-duplex acoustic interaction system for cognitive experiments with cetaceans
  Author(s): Jörg Rychen, Julie Semoroz, Alexander Eckerle, Richard HR Hahnloser and Rébecca Kleinberger (pp. 66–87)
  Abstract: Cetaceans show high cognitive abilities and strong social bonds. Their primary sensory modality for communicating and sensing the environment is acoustics. Research on their echolocation and social vocalizations typically uses visual and tactile systems adapted from research on primates or birds. Such research would benefit from a purely acoustic communication system that better matches their natural capabilities. We argue that a full-duplex system, in which signals can flow in both directions simultaneously, is essential for communication research. We designed and implemented a full-duplex system to acoustically interact with cetaceans in the wild, featuring digital echo suppression. We pilot-tested the system in Arctic Norway and achieved an echo suppression of 18 dB. We discuss the limiting factors and how to improve the echo suppression further. The system enabled vocal interaction with the underwater acoustic scene by allowing experimenters to listen while producing sounds. We describe our motivations, then present our pilot deployment and give examples of initial explorative attempts to vocally interact with wild orcas and humpback whales.
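The abstract reports the achieved echo suppression (18 dB) but not the algorithm behind it. A common approach to digital echo suppression is an adaptive filter; the sketch below shows a normalized LMS (NLMS) echo canceller and how a suppression figure in dB could be computed, assuming the played-back signal and the hydrophone recording are available as time-aligned sample arrays. Function names and parameters are illustrative, not the authors' implementation.

```python
import numpy as np

def nlms_echo_canceller(played, recorded, n_taps=256, mu=0.5, eps=1e-8):
    """Normalized LMS adaptive filter: estimate the echo of the played signal
    present in the recording and subtract it, returning the residual."""
    w = np.zeros(n_taps)
    residual = np.zeros_like(recorded)
    for n in range(n_taps, len(recorded)):
        x = played[n - n_taps:n][::-1]        # most recent playback samples
        echo_est = w @ x                      # current echo estimate
        e = recorded[n] - echo_est            # residual after echo removal
        w += mu * e * x / (x @ x + eps)       # NLMS weight update
        residual[n] = e
    return residual

def suppression_db(recorded, residual):
    """Echo suppression as the power ratio before vs. after cancellation."""
    return 10.0 * np.log10(np.sum(recorded ** 2) / (np.sum(residual ** 2) + 1e-12))
```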
- Grapheme–phoneme correspondence learning in parrots
  Author(s): Jennifer M. Cunha, Ilyena Hirskyj-Douglas, Rébecca Kleinberger, Susan Clubb and Lynn K. Perry (pp. 88–130)
  Abstract: Symbolic representation acquisition is the complex cognitive process of learning to use a symbol to stand for something else. A variety of non-human animals can engage in symbolic representation learning. One particularly complex form of symbolic representation is the association between orthographic symbols and speech sounds, known as grapheme–phoneme correspondence. To date, there has been little evidence that animals can learn this form of symbolic representation. Here, we evaluated whether an Umbrella cockatoo (Cacatua alba) can learn letter-speech correspondence using English words. The bird participant was trained with phonics instruction and then tested on pairs of index cards while the experimenter spoke the word. The words were unknown to the bird and the experimenter was blinded to the correct card position. The cockatoo’s accuracy (M = 71%) was significantly above chance. Further, we found a strong correlation between the bird’s word-identification success and the number of overlapping letters between words: the more letters two words shared, the more likely the cockatoo was to answer incorrectly. Our results strongly suggest that parrots may have the ability to learn grapheme–phoneme correspondences.
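The abstract does not state which test was used to compare the 71% accuracy against chance; a standard choice for a two-card forced-choice task is an exact binomial test against 50%. The sketch below shows how such a comparison could be run with scipy; the trial count is a placeholder for illustration only, not the study's actual number of test trials.

```python
from scipy.stats import binomtest

# Placeholder values for illustration: the actual number of test trials is
# not given in the abstract.
n_trials = 100
n_correct = 71  # ~71% accuracy as reported

# Two-alternative forced choice: chance level is 0.5.
result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"accuracy = {n_correct / n_trials:.2f}, p = {result.pvalue:.4f}")
```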
- From vocal prosody to movement prosody, from HRI to understanding humans
  Author(s): Philip Scales, Véronique Aubergé and Olivier Aycard (pp. 131–168)
  Abstract: Human–Human and Human–Robot Interaction are known to be influenced by a variety of modalities and parameters. Nevertheless, it remains a challenge to anticipate how a given mobile robot’s navigation and appearance will impact how it is perceived by humans. Drawing a parallel with vocal prosody, we introduce the notion of movement prosody, which encompasses the spatio-temporal and appearance dimensions involved in a person’s perceptual experience of interacting with a mobile robot. We design a novel robot motion corpus covering variables related to the kinematics, gaze, and appearance of the robot, which we hypothesize are involved in movement prosody. Initial results of three perception experiments suggest that these variables significantly influence participants’ perceptions of the robot’s socio-affects and physical attributes.
- Quietly angry, loudly happy
  Author(s): Eric Bolo, Muhammad Samoul, Nicolas Seichepine and Mohamed Chetouani (pp. 169–193)
  Abstract: Phone calls are an essential communication channel in today’s contact centers, but they are more difficult to analyze than written or form-based interactions. Companies have therefore traditionally used surveys to gather feedback and gauge customer satisfaction. In this work, we study the relationship between self-reported customer satisfaction (CSAT) and automatic utterance-level indicators of emotion produced by affect recognition models, using a real dataset of contact center calls. We find (1) that positive valence is associated with higher CSAT scores, while the presence of anger is associated with lower CSAT scores; (2) that automatically detected affective events and CSAT response rate are linked, with calls containing anger or positive valence exhibiting a lower or higher response rate, respectively; (3) that the dynamics of detected emotions are linked with both CSAT scores and response rate, and that emotions detected at the end of the call carry greater weight in the relationship. These findings highlight a selection bias in self-reported CSAT, leading to an over-representation of positive affect and an under-representation of negative affect.
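The abstract describes call-level indicators derived from utterance-level affect predictions, with end-of-call emotions carrying more weight, but does not give the aggregation itself. The sketch below illustrates one possible aggregation under assumed column names (call_id, position, anger_prob, valence, csat); it is not the paper's implementation.

```python
import numpy as np
import pandas as pd

def call_level_features(utterances: pd.DataFrame) -> pd.Series:
    """Aggregate utterance-level affect scores for one call.

    Expected (hypothetical) columns: 'position' in [0, 1] from call start to
    end, and 'anger_prob' and 'valence' from an affect recognition model.
    """
    # Linearly increasing weights give more influence to end-of-call utterances.
    w = 0.5 + utterances["position"].to_numpy()
    w = w / w.sum()
    return pd.Series({
        "anger_any": float((utterances["anger_prob"] > 0.5).any()),
        "valence_weighted": float(np.dot(w, utterances["valence"])),
    })

# Usage sketch: df has columns call_id, position, anger_prob, valence, csat.
# features = df.groupby("call_id").apply(call_level_features)
# features.join(df.groupby("call_id")["csat"].first()).corr()
```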
Volumes & issues
- Volume 25 (2024)
- Volume 24 (2023)
- Volume 23 (2022)
- Volume 22 (2021)
- Volume 21 (2020)
- Volume 20 (2019)
- Volume 19 (2018)
- Volume 18 (2017)
- Volume 17 (2016)
- Volume 16 (2015)
- Volume 15 (2014)
- Volume 14 (2013)
- Volume 13 (2012)
- Volume 12 (2011)
- Volume 11 (2010)
- Volume 10 (2009)
- Volume 9 (2008)
- Volume 8 (2007)
- Volume 7 (2006)
- Volume 6 (2005)
- Volume 5 (2004)
