Gesture - Volume 9, Issue 3, 2009
Dual viewpoint gestures
Author(s): Fey Parrill
pp. 271–289
This paper examines gestures that simultaneously express multiple physical perspectives, known as dual viewpoint gestures. These gestures were first discussed in McNeill’s 1992 book, Hand and mind. We examine a corpus of approximately fifteen hours of narrative data, and use these data to extend McNeill’s observations about the different possibilities for combining viewpoints. We also show that a phenomenon thought to be present only in the narrations of children is present in the narrations of adults. We discuss the significance of these gestures for theories of speech–gesture integration.
Gesture–speech integration in narrative: Are children less redundant than adults?
Author(s): Martha W. Alibali, Julia L. Evans, Autumn B. Hostetter, Kristin Ryan and Elina Mainela-Arnold
pp. 290–311
Speakers sometimes express information in gestures that they do not express in speech. In this research, we developed a system that could be used to assess the redundancy of gesture and speech in a narrative task. We then applied this system to examine whether children and adults produce non-redundant gesture–speech combinations at similar rates. The coding system was developed based on a sample of 30 children. A crucial feature of the system is that gesture meanings can be assessed based on form alone; thus, the meanings speakers express in gesture and speech can be assessed independently and compared. We then collected narrative data from a new sample of 17 children (ages 5–10), as well as a sample of 20 adults, and we determined the average proportion of non-redundant gesture–speech combinations produced by individuals in each group. Children produced more non-redundant gesture–speech combinations than adults, both at the clause level and at the word level. These findings suggest that gesture–speech integration is not constant over the life span, but instead appears to change with development.
The relationship between verbal and gestural contributions in conversation: A comparison of three methods
Author(s): Jennifer Gerwing and Meredith Allison
pp. 312–336
Gestures and their concurrent words are often said to be meaningfully related and co-expressive. Research has shown that gestures and words are each particularly suited to conveying different kinds of information. In this paper, we describe and compare three methods for investigating the relationship between gestures and words: (1) an analysis of deictic expressions referring to gestures, (2) an analysis of the redundancy between information presented in words vs. in gestures, and (3) an analysis of the semantic features represented in words and gestures. We also apply each of these three methods to one set of data, in which 22 pairs of participants used words and gestures to design the layout of an apartment. Each of the three analyses revealed a different picture of the complementary relationship between gesture and speech. According to the deictic analysis, speakers marked only a quarter of their gestures as providing essential information that was missing from the speech, but the redundancy analysis indicated that almost all gestures contributed information that was not in the words. The semantic feature analysis showed that participants conveyed spatial information in their gestures more often than in their words. A follow-up analysis showed that participants contributed categorical information (i.e., the name of each room) in their words. Of the three methods, the semantic feature analysis yielded the most complex picture of the data, and it served to generate additional analyses. We conclude that although analyses of deictic expressions and redundancy are useful for characterizing gesture use in differing conditions, the semantic feature method is best for exploring the complementary, semantic relationship between gesture and speech.
Repetition in infant-directed action depends on the goal structure of the object: Evidence for statistical regularities
Author(s): Rebecca J. Brand, Anna McGee, Jonathan F. Kominsky, Kristen Briggs, Aline Gruneisen and Tessa Orbach
pp. 337–353
Adults automatically adjust their speech and actions in a way that may facilitate infants’ processing (e.g., Brand, Baldwin, & Ashburn, 2002). This research examined whether mothers’ use of repetition for infants depended on whether the object being demonstrated required a series of actions in sequence in order to reach a salient goal (called an “enabling” sequence). Mothers (n = 39) demonstrated six objects, three with an enabling sequence and three with an arbitrary sequence, to their 6- to 8- or 11- to 13-month-olds. As predicted, in demonstrations of objects with an enabling sequence, mothers were more likely to repeat series of actions, whereas for those without such structure, mothers were more likely to repeat individual units of action. This may or may not have been deliberately pedagogical on mothers’ part, but nevertheless indicates another way in which input to infants is richly patterned to support their learning.
Volumes & issues
- Volume 22 (2023)
- Volume 21 (2022)
- Volume 20 (2021)
- Volume 19 (2020)
- Volume 18 (2019)
- Volume 17 (2018)
- Volume 16 (2017)
- Volume 15 (2016)
- Volume 14 (2014)
- Volume 13 (2013)
- Volume 12 (2012)
- Volume 11 (2011)
- Volume 10 (2010)
- Volume 9 (2009)
- Volume 8 (2008)
- Volume 7 (2007)
- Volume 6 (2006)
- Volume 5 (2005)
- Volume 4 (2004)
- Volume 3 (2003)
- Volume 2 (2002)
- Volume 1 (2001)
Most Read This Month
- Depicting by gesture
Author(s): Jürgen Streeck
- Home position
Author(s): Harvey Sacks and Emanuel A. Schegloff
- Some uses of the head shake
Author(s): Adam Kendon
- Linguistic influences on gesture’s form
Author(s): Jennifer Gerwing and Janet Bavelas