Interaction Studies - Volume 6, Issue 2, 2005
Interweaving protosign and protospeech: Further developments beyond the mirror
Author(s): Michael A. Arbib, pp. 145–171
We distinguish “language readiness” (biological) from “having language” (cultural) and outline a hypothesis for the evolution of the language-ready brain and language involving seven stages: S1: grasping; S2: a mirror system for grasping; S3: a simple imitation system for grasping, shared with the common ancestor of human and chimpanzee; S4: a complex imitation system for grasping; S5: protosign, breaking through the fixed repertoire of primate vocalizations to yield an open repertoire for communication; S6: protospeech, the open-ended production and perception of sequences of vocal gestures, without these sequences constituting a full language; and S7: a process of cultural evolution in Homo sapiens yielding full human languages. The present paper will examine the subhypothesis that protosign (S5) formed a scaffolding for protospeech (S6), but that the two interacted with each other in supporting the evolution of brain and body that made Homo sapiens “language-ready”.
The Frame/Content theory of evolution of speech: A comparison with a gestural-origins alternative
Author(s): Peter F. MacNeilage and Barbara L. Davis, pp. 173–199
The Frame/Content theory deals with how and why the first language evolved the present-day speech mode of programming syllable “Frame” structures with segmental (consonant and vowel) “Content” elements. The first words are considered, for biomechanical reasons, to have had the simple syllable frame structures of pre-speech babbling (e.g., “bababa”), and were perhaps parental terms, generated within the parent–infant dyad. Although all gestural-origins theories (including Arbib’s theory reviewed here) have iconicity as a plausible alternative hypothesis for the origin of the meaning–signal link for words, they all share the problems of how and why a fully fledged sign language, necessarily involving a structured phonology, changed to a spoken language.
Intentional communication and the anterior cingulate cortex
Author(s): Oana Benga, pp. 201–221
This paper presents arguments for considering the anterior cingulate cortex (ACC) as a critical structure in intentional communication. Different facets of intentionality are discussed in relationship to this neural structure. The macrostructural and microstructural characteristics of the ACC are proposed to sustain the uniqueness of its architecture, as an overlap region of cognitive, affective and motor components. At the functional level, roles played by this region in communication include social bonding in mammals, control of vocalization in humans, semantic and syntactic processing, and initiation of speech. The involvement of the ACC in social cognition is also suggested: in infants, joint attention skills are considered both prerequisites of social cognition and prelinguistic communication acts. Since the intentional dimension of gestural communication seems to be connected to a region previously equipped for vocalization, the ACC might well be a starting point for linguistic communication.
Gestural-vocal deixis and representational skills in early language development
Author(s): Elena Antinoro Pizzuto, Micaela Capobianco and Antonella Devescovi, pp. 223–252
This study explores the use of deictic gestures, vocalizations and words compared to content-loaded, or representational, gestures and words in children’s early one- and two-element utterances. We analyze the spontaneous production of four children, observed longitudinally from 10–12 to 24–25 months of age, focusing on the components of children’s utterances (deictic vs. representational), the information encoded, and the temporal relationship between gestures and vocalizations or words that were produced in combination. Results indicate that while the gestural and vocal modalities are meaningfully and temporally integrated from the earliest stages, deictic and representational elements are unevenly distributed in the gestural vs. the vocal modality, and in one- vs. two-element utterances. The findings suggest that while gestural deixis plays a primary role in allowing children to define and articulate their vocal productions, representational skills appear to be markedly more constrained in the gestural as compared to the vocal modality.
Building a talking baby robot: A contribution to the study of speech acquisition and evolution
Author(s): Jihène Serkhane, Jean-Luc Schwartz and Pierre Bessière, pp. 253–286
Speech is a perceptuo-motor system. A natural computational modeling framework is provided by cognitive robotics, or more precisely speech robotics, which is also based on embodiment, multimodality, development, and interaction. This paper describes the bases of a virtual baby robot, which consists of an articulatory model that integrates the non-uniform growth of the vocal tract, a set of sensors, and a learning model. The articulatory model delivers sagittal contour, lip shape and acoustic formants from seven input parameters that characterize the configurations of the jaw, the tongue, the lips and the larynx. To simulate the growth of the vocal tract from birth to adulthood, a process modifies the longitudinal dimension of the vocal tract shape as a function of age. The auditory system of the robot comprises a “phasic” system for event detection over time, and a “tonic” system to track formants. The model of visual perception specifies the basic lip characteristics: height, width, area and protrusion. The orosensorial channel, which provides the tactile sensation on the lips, the tongue and the palate, is elaborated as a model for the prediction of tongue–palatal contacts from articulatory commands. Learning involves Bayesian programming, in which there are two phases: (i) specification of the variables, decomposition of the joint distribution and identification of the free parameters through exploration of a learning set, and (ii) utilization, which relies on questions about the joint distribution. Two studies were performed with this system, each focusing on one of the two basic mechanisms that ought to be at work in the initial periods of speech acquisition, namely vocal exploration and vocal imitation. The first study attempted to assess infants’ motor skills before and at the beginning of canonical babbling. It used the model to infer the acoustic regions, the articulatory degrees of freedom and the vocal tract shapes that are likeliest to have been explored by actual infants, according to their vocalizations. Subsequently, the aim was to simulate data reported in the literature on early vocal imitation, in order to test whether and how the robot was able to reproduce them, and to gain some insights into the actual cognitive representations that might be involved in this behavior. Speech modeling in a robotics framework should contribute to a computational approach to sensori-motor interactions in speech communication, which seems crucial for future progress in the study of speech and language ontogeny and phylogeny.
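The two Bayesian-programming phases in the abstract above can be illustrated with a minimal sketch. Everything concrete here is invented for illustration (the variable values, the toy learning set, the relative-frequency estimator); the robot's actual model is far richer, but the workflow is the same: specify a decomposition of the joint distribution, identify its free parameters from a learning set, then answer questions by inverting the joint.

```python
from collections import Counter

# Phase (i), specification: two variables, a motor command M and an
# acoustic outcome A, jointly distributed as P(M, A).
# Phase (i), identification: estimate the free parameters (here, plain
# relative frequencies) by exploring a toy learning set of (M, A) pairs.
# All values below are hypothetical.
learning_set = [
    ("open_jaw", "low_F1"), ("open_jaw", "low_F1"), ("open_jaw", "high_F1"),
    ("spread_lips", "low_F1"),
    ("spread_lips", "high_F2"), ("spread_lips", "high_F2"),
]
counts = Counter(learning_set)
p_joint = {ma: n / len(learning_set) for ma, n in counts.items()}

def question(acoustic):
    """Phase (ii), utilization: ask P(M | A = acoustic) of the joint."""
    scores = {m: p for (m, a), p in p_joint.items() if a == acoustic}
    z = sum(scores.values())
    return {m: p / z for m, p in scores.items()}

# Inverting the model: which motor command likely produced a low F1?
posterior = question("low_F1")  # open_jaw is twice as likely as spread_lips
```

The same inversion is what lets the simulated robot reason from a heard sound back to the articulation that could have produced it, which is the crux of the vocal-imitation study.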
Aspects of descriptive, referential, and information structure in phrasal semantics: A construction-based model
Author(s): Peter F. Dominey, pp. 287–310
Phrasal semantics is concerned with how the meaning of a sentence is composed both from the meaning of the constituent words, and from extra meaning contained within the structural organization of the sentence itself. In this context, grammatical constructions correspond to form-meaning mappings that essentially capture this “extra” meaning and allow its representation. The current research examines how a computational model of language processing based on a construction grammar approach can account for aspects of descriptive, referential and information content of phrasal semantics.
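The idea of a construction as a form-meaning mapping can be sketched in a few lines. The templates, lexicon, and role labels below are invented for illustration and are not Dominey's model; the point they demonstrate is that word order (the "form") itself assigns the thematic roles, beyond the meanings of the individual words.

```python
# Each construction pairs a form template (a category sequence) with a
# meaning frame (a role assignment over the matched slots).
constructions = {
    # Active transitive "A V B" -> V(agent=A, patient=B)
    ("NP", "V", "NP"):
        lambda a, v, b: (v, {"agent": a, "patient": b}),
    # Passive "B was V by A" -> same event, different surface form
    ("NP", "was", "V", "by", "NP"):
        lambda b, _was, v, _by, a: (v, {"agent": a, "patient": b}),
}

# Toy lexicon mapping words to categories; function words match literally.
lexicon = {"dog": "NP", "cat": "NP", "chased": "V"}

def comprehend(tokens):
    """Map a token sequence to a predicate-argument meaning by matching
    its category pattern against a stored construction."""
    pattern = tuple(lexicon.get(t, t) for t in tokens)
    mapping = constructions.get(pattern)
    return mapping(*tokens) if mapping else None

# Both forms yield the same "who did what to whom" structure:
active = comprehend(["dog", "chased", "cat"])
passive = comprehend(["cat", "was", "chased", "by", "dog"])
```

The two sentences contain the same content words, yet only the construction matched by their word order determines who is agent and who is patient; that role assignment is the "extra" meaning the abstract refers to.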
First in, last out?: The evolution of aphasic lexical speech automatisms to agrammatism and the evolution of human communication
Author(s): Chris Code, pp. 311–334
Current work in the evolution of language and communication is emphasising a close relationship between the evolution of speech, language and gestural action. This paper briefly explores some evolutionary implications of a range of closely related impairments of speech, language and gesture arising from left frontal brain lesions. I discuss aphasic lexical speech automatisms (LSAs) and their resolution with some recovery into agrammatism with apraxia of speech, an impairment of speech planning and programming. I focus attention on the most common forms of LSAs, expletives and the pronoun+modal/aux subtype, and propose that further research into these phenomena can contribute to the debate. I briefly discuss recent studies of progressively degenerating neurological conditions resulting in progressive patterns of cognitive impairments, which promise to provide insight into the evolutionary relationships between speech, language and gesture.