Volume 1, Issue 2
  • ISSN: 1569-2167
  • E-ISSN: 1569-9803

Abstract

Recent developments in speech, network, and embedded-computer technologies indicate that human–computer interfaces using speech as the sole or primary mode of interaction will become increasingly prevalent. Such interfaces must move beyond simple voice commands to support dialogue-based interaction if they are to meet common requirements such as description resolution, perceptual anchoring, and deixis. To support human–computer dialogue effectively, architectures must support active language understanding: that is, they must closely integrate dialogue planning and execution with general task planning and execution.

DOI: 10.1075/ijct.1.2.04fit
2002-01-01
2024-12-02