The Structure of Multimodal Dialogue II
- Editor(s): M. Martin Taylor (Defence and Civil Institute of Environmental Medicine, Toronto), Françoise Néel (LIMSI-CNRS, Orsay, France) and Don Bouwhuis (Institute for Perception Research (IPO), Eindhoven)
- Format: PDF
- Publication Date: March 2000
- e-Book ISBN: 9789027273871
- DOI: https://doi.org/10.1075/z.99
Most dialogues are multimodal. When people talk, they use not only their voices, but also facial expressions and other gestures, and perhaps even touch. When computers communicate with people, they use pictures and perhaps sounds, together with textual language, and when people communicate with computers, they are likely to use mouse “gestures” almost as much as words. How are such multimodal dialogues constructed? This is the main question addressed in this selection of papers of the second “Venaco Workshop”, sponsored by the NATO Research Study Group RSG-10 on Automatic Speech Processing, and by the European Speech Communication Association (ESCA).
Related Topics:
Natural language processing