
We present our work on the temporal integration of hierarchies of communicative actions: kinesic, prosodic, and discursive. We use the device of the ‘catchment’ as the locus around which this integration proceeds. We present a detailed case study of a gesture and speech elicitation experiment in which a subject describes her living space to an interlocutor. First, we process the video data to obtain the motion traces of both of the subject’s hands using the vector coherence mapping algorithm. Next, we code the gestures to identify the catchments. We then recover discourse purposes using a system of guided questions. Finally, we characterize prosody in terms of the ToBI system. The results of these analyses are compared against the computed motion traces to identify cues in the gestural and audio data that correlate well with the psycholinguistic analyses. The results show that motion, prosody, and discourse structure are integrated at each moment of speaking.

The electronic edition of this article includes audio-visual data.
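As one illustration of the kind of cue comparison the abstract describes, the sketch below correlates a hand-motion speed trace with a prosodic feature sampled at the same rate. This is an assumption-laden toy, not the paper's vector coherence mapping pipeline or its ToBI analysis: the hand trajectory and F0 contour are synthetic placeholders, and plain Pearson correlation stands in for the actual cross-level analysis.

```python
# Illustrative sketch only: correlate a hand-motion speed trace with a
# prosodic feature (here, a synthetic F0 contour) sampled at the same
# frame rate, to flag moments where kinesic and prosodic activity co-vary.
# The data and the correlation measure are placeholders, not the paper's method.
import math

def speed(trace):
    """Frame-to-frame speed of a 2-D hand position trace [(x, y), ...]."""
    return [math.hypot(x2 - x1, y2 - y1)
            for (x1, y1), (x2, y2) in zip(trace, trace[1:])]

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Synthetic example: the hand accelerates along x while pitch rises.
hand = [(0.1 * t * t, 0.0) for t in range(10)]  # accelerating x-position
f0 = [100.0 + 5.0 * t for t in range(9)]        # rising pitch contour (Hz)
r = pearson(speed(hand), f0)                    # near 1.0: cues co-vary
```

Both synthetic sequences grow linearly per frame, so the correlation comes out close to 1; on real elicitation data one would instead align the motion trace with annotated prosodic and discourse units before comparing.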