Features as an emergent product of computing perceptual cues relative to expectations
Speech perception must ultimately contrast discrete units of meaning, words, which are minimally distinguished by phonological features. While traditional approaches argued that discreteness is imposed by mechanisms like categorical perception, which discard within-category detail, recent research suggests that fine-grained detail is preserved throughout processing. We develop an alternative account in which discreteness emerges from processes that parse overlapping sources of variance from the signal. Such processes need not discard acoustic detail and may even make it more useful to listeners. We present a computational implementation (Computing Cues Relative to Expectations, C-CuRE) and test it on a corpus of vowel productions. We show how C-CuRE reveals underlying vowel features despite contextual variance, while simultaneously using that variance to better predict upcoming vowels.
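The core idea of recoding cues relative to expectations can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's model: all values are hypothetical, and the "expectation" here is simply each talker's mean cue value, standing in for a richer predictive model of contextual variance.

```python
# Minimal sketch of the C-CuRE idea (hypothetical values, not the paper's model):
# a raw acoustic cue (e.g., F1 in Hz) is re-expressed relative to what a
# listener expects given the context (here, the talker), and categorization
# then operates over the residuals.

from statistics import mean

# Hypothetical F1 measurements (Hz) for two vowels from two talkers.
# Talker B's shorter vocal tract shifts F1 upward, so B's /i/ tokens
# overlap A's /ae/ tokens in raw Hz: no single raw boundary separates them.
productions = [
    ("A", "i", 300), ("A", "i", 320), ("A", "ae", 650), ("A", "ae", 670),
    ("B", "i", 660), ("B", "i", 680), ("B", "ae", 1010), ("B", "ae", 1030),
]

# Expectation: each talker's mean F1 (a stand-in for a richer predictive model).
talkers = {t for t, _, _ in productions}
expected = {t: mean(f1 for tt, _, f1 in productions if tt == t) for t in talkers}

# C-CuRE-style recoding: express each cue relative to its expectation (residual).
residuals = [(t, v, f1 - expected[t]) for t, v, f1 in productions]

# After recoding, a single talker-independent boundary (residual > 0 -> "ae")
# classifies every token correctly, even though raw F1 ranges overlapped.
for t, v, r in residuals:
    predicted = "ae" if r > 0 else "i"
    print(t, v, round(r, 1), predicted)
```

Note that the residualization step removes talker variance from the cue without discarding it: the talker-specific expectations remain available and could themselves be used predictively, as the abstract describes.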