Aligning mispronounced words to meaning: Evidence from ERP and reaction time studies
Source: The Mental Lexicon, Volume 8, Issue 2, 2013, pp. 140-163
Abstract
Many models have been proposed to account for the role that the mental lexicon plays in the initial stages of speech perception. One fundamental disparity between these models is how speech is phonologically represented in the mental lexicon. Theories range from full specification and representation of all phonological information to sparse specification. We report on two perception experiments using context-independent mispronunciations (i.e., mispronunciations not governed by phonological rules) to test the predictions of the two most divergent models. Models assuming full specification and storage of all phonological information (e.g., exemplar models) predict symmetric acceptance or rejection of mispronunciations that differ from real words only in the place of articulation of the medial consonant (*temor-tenor, *inage-image). Models assuming that only contrasting phonological information is stored (as in the Featurally Underspecified Lexicon, FUL) predict asymmetric patterns of acceptance: mispronunciations of words with medial coronal consonants (*temor for tenor) will be better tolerated than mispronunciations of words with medial labial or velar consonants (*inage for image). Results of two experiments using lexical decision with semantic priming in British English reveal an asymmetry in the acceptance of mispronunciations for coronal vs. noncoronal consonants. Both reaction times and N400 amplitudes exhibit this asymmetry, supporting the notion of abstract, asymmetric lexical representation.
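To make the two competing predictions concrete, the following is a minimal illustrative sketch (ours, not from the paper) of FUL-style place matching in Python. The lexicon entries, feature labels, and the three-way match/mismatch/no-mismatch rule are simplified assumptions for illustration: only the place feature of the medial consonant is compared, and coronal place is stored as unspecified (None).

# Illustrative sketch of FUL-style place matching. Simplifying assumptions:
# only the place of articulation of the medial consonant is compared, and
# coronal place is underspecified, i.e. stored as None in the lexicon.

LEXICON = {
    "tenor": None,      # medial /n/ is coronal -> place left unspecified
    "image": "labial",  # medial /m/ is labial  -> place fully specified
}

def evaluate(surface_place, stored_place):
    """Three-way FUL-style evaluation of a surface place feature."""
    if stored_place is None:
        return "no-mismatch"  # underspecified entry tolerates any surface place
    return "match" if surface_place == stored_place else "mismatch"

# *temor for tenor: surface medial [m] is labial, stored place is unspecified.
print(evaluate("labial", LEXICON["tenor"]))   # -> no-mismatch (tolerated)
# *inage for image: surface medial [n] is coronal, stored place is labial.
print(evaluate("coronal", LEXICON["image"]))  # -> mismatch (rejected)

Under full specification (e.g., an exemplar model), both stored places would be specified, so both comparisons would return a mismatch and acceptance or rejection would be symmetric, which is the contrast the experiments test.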