The Structure of Multimodal Dialogue II
Most dialogues are multimodal. When people talk, they use not only their voices but also facial expressions and other gestures, and perhaps even touch. When computers communicate with people, they use pictures and perhaps sounds together with textual language, and when people communicate with computers, they are likely to use mouse “gestures” almost as much as words. How are such multimodal dialogues constructed? This is the main question addressed in this selection of papers from the second “Venaco Workshop”, sponsored by the NATO Research Study Group RSG-10 on Automatic Speech Processing and by the European Speech Communication Association (ESCA).