Gesture - Volume 21, Issue 2-3, 2022
- Gestures are modulated by social context
  Author(s): Lucien Brown, Hyunji Kim, Iris Hübscher and Bodo Winter (pp. 167–200)
  Abstract: This paper investigates gesture as a resource for marking politeness-related meanings. We asked 14 Korean and 14 Catalan participants to retell a cartoon, once to an unknown superior and once to a close friend. Speakers of both languages curtailed their gestures when addressing the socially distant superior: they produced fewer gestures, reduced their gesture space, encoded manner less often, and made less use of character-viewpoint gestures. We interpret the decrease in gesture frequency and the less frequent encoding of manner as indicators of lower levels of iconicity when talking with status superiors. Curtailing gesture marks a less playful communicative context and a more serious and deferential persona. Altogether, our research speaks to the importance of politeness in gesture production and to the social nature of gestures in human communication.
- Searching for the roots of signs in children’s early gestures
  Author(s): Olga Capirci, Morgana Proietti and Virginia Volterra (pp. 201–238)
  Abstract: A consolidated tendency considers ‘gestures’ and ‘signs’ as distinct categories separated by a ‘cataclysmic break’. According to a different approach, gestures and signs share a common origin in actions and are considered part of language. The aim of this study was to compare the productions of hearing preschool children who speak and deaf preschool children who sign in response to the same visual stimuli. The execution parameters and representational strategies observed in gestures and signs were analyzed using the same coding scheme. The results showed that hearing children exposed to Italian and deaf children exposed to Italian Sign Language are consistent in their productions of gestures and signs, respectively. Furthermore, for some items the hearing children’s gestures and the deaf children’s signs were produced with the same parameters and according to similar representational strategies. This indicates that these two forms of communication are not separate behaviors, but should rather be considered as a continuum.
- Co-speech gestures can interfere with learning foreign language words
  Author(s): Elena Nicoladis, Paula Marentette and Candace Lam (pp. 239–263)
  Abstract: Co-speech gestures can help the learning, processing, and memory of words and concepts, particularly motoric and spatial concepts such as verbs. The purpose of the present studies was to test whether co-speech gestures support the learning of words through gist traces of movement. We asked English monolinguals to learn 40 Cantonese words (20 verbs and 20 nouns). In two studies, we found support for the gist traces of congruent gestures being movement: participants who saw congruent gestures while hearing Cantonese words thought they had seen more verbs than participants in any other condition. However, gist traces were unrelated to the accurate recall of either nouns or verbs. In both studies, learning Cantonese words accompanied by congruent gestures tended to interfere with the learning of nouns (but not verbs). In Study 2, we ruled out the possibility that this interference was due either to gestures conveying representational information in another medium or to distraction from moving hands. We argue that gestures can interfere with learning foreign language words when they represent the referents (e.g., show shape or size), because learners must interpret the hands as something other than hands.
- The Raised Index Finger gesture in Hebrew multimodal interaction
  Author(s): Anna Inbar (pp. 264–295)
  Abstract: The present study examines the roles that the Raised Index Finger (RIF) gesture plays in Hebrew multimodal interaction. The study reveals that the RIF is associated with diverse linguistic phenomena and tends to appear in contexts in which the speaker presents a message or speech act that violates the hearer’s expectations (based on either general knowledge or prior discourse). The study suggests that the RIF serves the function of discourse deixis: speakers point to their message, creating a referent in the extralinguistic context that they treat as an object of their stance, evaluating the content of the utterance or speech act as unexpected by the hearer and displaying epistemic authority. Setting up such a frame by which the information is to be interpreted provides the basis for a swifter update of the common ground in situations of (assumed) differences between the assumptions of the speaker and the hearer.
- Iconic gestures serve as primes for both auditory and visual word forms
  Author(s): Iván Sánchez-Borges and Carlos J. Álvarez (pp. 296–319)
  Abstract: Previous studies using cross-modal semantic priming have found that iconic gestures prime target words that are related to the gestures. In the present study, two analogous experiments examined this priming effect while presenting primes and targets in high synchrony. In Experiment 1, participants performed an auditory primed lexical decision task in which target words (e.g., “push”) and pseudowords had to be discriminated, primed by overlapping iconic gestures that were either semantically related to the words (e.g., moving both hands forward) or unrelated. Experiment 2 was similar, but both gestures and words were presented visually. The grammatical category of the words was also manipulated: they were nouns and verbs. In both experiments, words related to gestures were recognized faster and with fewer errors than unrelated ones, and similarly for both types of words.
- Automatic tool to annotate smile intensities in conversational face-to-face interactions
  Author(s): Stéphane Rauzy and Mary Amoyal (pp. 320–364)
  Abstract: This study presents an automatic tool for tracing smile intensities along a video record of conversational face-to-face interactions. The processed output proposes a sequence of adjusted time intervals labeled following the Smiling Intensity Scale (Gironzetti, Attardo, and Pickering, 2016), a five-level scale ranging from neutral facial expression to laughing smile. The underlying statistical model, detailed in this study, is trained on a manually annotated corpus of conversations featuring spontaneous facial expressions. The tool can be used to advantage for annotating smiles in interaction. The results are twofold. First, the evaluation reveals an observed agreement of 68% between manual and automatic annotations. Second, manually correcting the labels and interval boundaries of the automatic outputs reduces the annotation time by a factor of 10 compared with the time spent manually annotating smile intensities without pretreatment. Our annotation engine makes use of the state-of-the-art toolbox OpenFace for tracking the face and for measuring the intensities of the facial Action Units of interest throughout the video. The documentation and the scripts of our tool, the SMAD software, are available for download at the HMAD open-source project page: https://github.com/srauzy/HMAD (last access 31 July 2023).
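  As a rough illustration of the kind of pipeline this abstract describes (OpenFace Action Unit intensities mapped to Smiling Intensity Scale levels), the Python sketch below reads an OpenFace FeatureExtraction CSV and assigns each frame a coarse SIS level. The column names (AU06_r, AU12_r, AU25_r, AU26_r) are standard OpenFace intensity outputs, but the thresholds and the heuristic mapping are purely illustrative assumptions; they are not the trained statistical model of the SMAD/HMAD tool.

  ```python
  # Illustrative sketch only: maps OpenFace Action Unit intensities to rough
  # smile-intensity labels. The thresholds and five-level mapping below are
  # assumptions for demonstration, NOT the trained model used by SMAD/HMAD.
  import pandas as pd

  # Smiling Intensity Scale (Gironzetti et al., 2016): 0 = neutral ... 4 = laughing smile
  SIS_LABELS = ["neutral", "closed-mouth smile", "open-mouth smile",
                "wide open-mouth smile", "laughing smile"]

  def label_frame(au06, au12, au25, au26):
      """Rough heuristic: AU12 (lip corner puller) drives smile intensity;
      AU06 (cheek raiser) and mouth opening (AU25/AU26) push it higher."""
      if au12 < 0.5:          # lip corners barely raised: treat as neutral
          return 0
      level = 1
      if au06 > 1.0:          # cheek raiser active
          level += 1
      if au25 > 1.0 or au26 > 1.0:  # mouth opening
          level += 1
      if au12 > 3.0 and au06 > 2.0:  # strong smile with Duchenne marker
          level += 1
      return min(level, 4)

  def annotate(openface_csv):
      df = pd.read_csv(openface_csv)
      df.columns = [c.strip() for c in df.columns]  # OpenFace CSVs may pad column names
      df["sis_level"] = [
          label_frame(r["AU06_r"], r["AU12_r"], r["AU25_r"], r["AU26_r"])
          for _, r in df.iterrows()
      ]
      df["sis_label"] = [SIS_LABELS[level] for level in df["sis_level"]]
      return df[["timestamp", "sis_level", "sis_label"]]

  if __name__ == "__main__":
      # Hypothetical path to an OpenFace FeatureExtraction output file
      print(annotate("video_openface_output.csv").head())
  ```

  A full pipeline like the one described in the article would additionally smooth these per-frame labels into adjusted time intervals; the sketch stops at frame-level labels.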
- Review of Galhano-Rodrigues, Galvão & Cruz-Santos (2019): Recent perspectives on gesture and multimodality
  Author(s): Xi Wang and Fangfei Lv (pp. 365–373)
  This article reviews Recent perspectives on gesture and multimodality.
- Review of Bressem (2021): Repetitions in Gesture: A Cognitive-Linguistic and Usage-Based Perspective
  Author(s): Zhibin Peng and Muhammad Afzaal (pp. 374–381)
  This article reviews Repetitions in Gesture: A Cognitive-Linguistic and Usage-Based Perspective.
Volumes & issues
- Volume 22 (2023)
- Volume 21 (2022)
- Volume 20 (2021)
- Volume 19 (2020)
- Volume 18 (2019)
- Volume 17 (2018)
- Volume 16 (2017)
- Volume 15 (2016)
- Volume 14 (2014)
- Volume 13 (2013)
- Volume 12 (2012)
- Volume 11 (2011)
- Volume 10 (2010)
- Volume 9 (2009)
- Volume 8 (2008)
- Volume 7 (2007)
- Volume 6 (2006)
- Volume 5 (2005)
- Volume 4 (2004)
- Volume 3 (2003)
- Volume 2 (2002)
- Volume 1 (2001)