The Mental Lexicon - Volume 5, Issue 3, 2010
- Measures of phonological typicality: Robust coherence and psychological validity
Author(s): Padraic Monaghan, Morten H. Christiansen, Thomas A. Farmer and Stanka A. Fitneva (pp. 281–299)
Phonological Typicality (PT) is a measure of the extent to which a word’s phonology is typical of other words in the lexical category to which it belongs. There is a general coherence among words from the same category in terms of speech sounds, and we have found that words that are phonologically typical of their category tend to be processed more quickly and accurately than words that are less typical. In this paper we describe in greater detail the operationalisation of measures of a word’s PT, and report validations of different parameterisations of the measure. For each variant of PT, we report the extent to which it reflects the coherence of the lexical categories of words in terms of their sound, as well as the extent to which the measure predicts naming and lexical decision response times from a database of monosyllabic word processing. We show that PT is robust to parameter variation, but that measures based on PT of uninflected words (lemmas) best predict response time data for naming and lexical decision of single words.
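The abstract does not spell out the operationalisation evaluated in the paper; as a minimal illustrative sketch, one plausible parameterisation scores a word by its distance from the centroid of its category's phonological feature vectors. The feature encoding, the distance metric, and all names below are assumptions for illustration, not the paper's actual measures.

```python
import numpy as np

def phonological_typicality(word_vec, category_vecs):
    """Toy phonological-typicality score: negative Euclidean distance
    from a word's phonological feature vector to the centroid of all
    words in its lexical category. Higher (less negative) = more
    typical. Illustrative only, not the paper's parameterisations."""
    centroid = np.mean(category_vecs, axis=0)
    return -float(np.linalg.norm(np.asarray(word_vec) - centroid))

# Hypothetical 3-dimensional phonological feature vectors for nouns
nouns = np.array([[0.9, 0.1, 0.2],
                  [0.8, 0.2, 0.1],
                  [0.7, 0.3, 0.2]])

typical = phonological_typicality([0.8, 0.2, 0.2], nouns)   # near centroid
atypical = phonological_typicality([0.1, 0.9, 0.9], nouns)  # far from it
assert typical > atypical
```

Under this kind of scheme, the paper's finding would correspond to words with higher scores (relative to their own category) being named and recognised faster.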
- Assessing language impairment in aphasia: Going beyond pencils and paper in the computer age
Author(s): Chris Westbury (pp. 300–323)
Language is complicated and so, therefore, is language assessment. One complication is that there are many reasons to undertake language assessments, each of which may have different methods and goals. In this article I focus on the specific difficulties faced in aphasia assessment, the assessment of acquired language deficits. As might be expected, the history of aphasia assessment closely mirrors the history of our understanding of the neurological underpinnings of language. Early assessment was based on classical disconnection theories, dating from the 19th century, that conceptualized language as consisting of independent, connected, modality-specific language centers that could be disconnected by brain damage. Although these models were recognized early on as being too simplistic, aphasia assessment instruments followed the models until quite recently due to the lack of any fully specified alternative language model. It was only in the 1990s, after aphasiology had come increasingly under the influence of experimental psycholinguistics, that attempts were made to create aphasia assessment instruments that did not explicitly follow disconnection models. The most successful of these is the Psycholinguistic Assessment of Language Processing in Aphasia (PALPA; Kay, Coltheart, & Lesser, 1992). These psycholinguistically influenced instruments conceptualize language as a complex multi-dimensional system consisting of many partially independent sub-systems that may be compromised to a greater or lesser degree. Aphasia assessment instruments have become longer and more detailed as a reflection of our growing understanding of the complexity of the language system. As they do, the problem of collating and integrating assessment information becomes more intractable.
The future of aphasia assessment will require increasing automation to deal with the large amounts of information that must now be synthesized to fully characterize an individual deficit. I discuss recent attempts to computerize aphasia assessment and the benefits they can offer over traditional pencil-and-paper instruments.
- Behavioral profiles: A fine-grained and quantitative approach in corpus-based lexical semantics
Author(s): Stefan Th. Gries (pp. 323–346)
This paper introduces a fairly recent corpus-based approach to lexical semantics, the Behavioral Profile (BP) approach. After a short review of traditional corpus-based work on lexical semantics and its shortcomings, I explain the logic and methodology of the BP approach and exemplify its application to different lexical relations (polysemy, synonymy, antonymy) in English and Russian with an eye to illustrating how the BP approach allows for the incorporation of different statistical techniques. Finally, I briefly discuss the first experimental approaches that validate the BP method, and outline its theoretical commitments and motivations.
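In broad strokes, a behavioral profile summarises a lexical item as the relative frequencies of annotation categories ("ID tags") across its corpus occurrences. A minimal sketch, assuming one categorical tag per occurrence (real BPs annotate many dimensions at once, and the tag names below are hypothetical):

```python
from collections import Counter

def behavioral_profile(tagged_occurrences):
    """Toy behavioral profile: given one categorical ID tag per corpus
    occurrence of a lexical item, return the relative frequency of each
    tag. Real BPs combine many annotation dimensions per occurrence."""
    counts = Counter(tagged_occurrences)
    total = sum(counts.values())
    return {tag: n / total for tag, n in counts.items()}

# Hypothetical syntactic tags for four corpus hits of a verb
profile = behavioral_profile(
    ["transitive", "transitive", "intransitive", "transitive"])
assert profile == {"transitive": 0.75, "intransitive": 0.25}
```

Profiles of this form are plain numeric vectors, which is what lets the approach feed directly into standard statistical techniques such as clustering or correlation.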
- Using a maze task to track lexical and sentence processing
Author(s): Kenneth I. Forster (pp. 347–357)
A word maze consists of a sequence of frames, each containing two alternatives. Subjects are required to select one of those alternatives according to some criterion defined by the experimenter. This simple technique can be used to investigate a wide range of issues. For example, if one alternative is a word and the other is a nonword, the subject may be required to press a key to indicate where the word is. This provides an interesting variant of the lexical decision task, since the difficulty of the lexical discrimination can be manipulated on a trial-by-trial basis by varying the properties of the nonword alternative. On the other hand, a version of a self-paced reading task is created if each successive frame contains a word that can continue a sentence, and the subject is required to identify which word that is. Once again, by manipulating the properties of the incorrect alternative one may be able to control the mode of processing adopted by the subject. Although this is a highly artificial form of reading, it does allow one to study sentence processing under more tightly controlled conditions.
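The sentence-maze procedure described above can be sketched as a small scoring loop: the trial advances frame by frame and ends at the first wrong choice, since the sentence can no longer be continued. This is a toy illustration under those assumptions, not the author's experimental software.

```python
def run_maze_trial(frames, choices):
    """Score one sentence-maze trial. `frames` is a list of
    (correct, distractor) word pairs shown one at a time; `choices`
    is the word the subject picked in each frame. Returns
    (completed, n_correct): the trial aborts at the first error."""
    n_correct = 0
    for (correct, _distractor), choice in zip(frames, choices):
        if choice != correct:
            return False, n_correct
        n_correct += 1
    return True, n_correct

# A maze for "the dog barked": each frame pairs the word that
# continues the sentence with an incorrect alternative.
frames = [("the", "zxqv"), ("dog", "of"), ("barked", "the")]
assert run_maze_trial(frames, ["the", "dog", "barked"]) == (True, 3)
assert run_maze_trial(frames, ["the", "of"]) == (False, 1)
```

In a real experiment the quantity of interest is the per-frame response time rather than accuracy alone, which is what lets the maze localise processing difficulty to a specific word.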
- Stimulus norming: It is too soon to close down brick-and-mortar labs
Author(s): Lee H. Wurm and Annmarie Cano (pp. 358–370)
Psycholinguists grapple with an ever-increasing list of control variables, in addition to any that are of theoretical interest. Some variables are subjective constructs like familiarity, concreteness, and semantic or affective connotations. Historically researchers approached these by having participants come to a laboratory and provide ratings for each stimulus, but the use of the Internet in data collection has increased in recent years and is likely to continue doing so. In the context of stimulus norms, the equivalence of lab-based and Internet methodologies has not been extensively examined. We discuss some of the pros and cons of online stimulus norming and touch on several issues to consider. We also highlight some important differences between norms obtained online and those obtained in person.
- Connectionism and the role of morphology in visual word recognition
Author(s): Jay Rueckl (pp. 371–400)
This paper provides a review of the connectionist perspective on the role of morphology in visual word recognition. Several computational models of morphological effects in reading are described, and relationships between these models, models of past tense production, and models of other aspects of word recognition are traced. Limitations of extant models are noted, as are some of the technical challenges that must be solved to develop the next generation of models. Finally, some directions for future research are identified.
- Towards a localist-connectionist model of word translation
Author(s): Ton Dijkstra and Steven Rekké (pp. 401–420)
Word translation is among the most sophisticated skills that bilinguals can perform. Brysbaert and Duyck (2010) have argued that the Revised Hierarchical Model (RHM; Kroll & Stewart, 1994), a verbal model for word translation in beginning and proficient bilinguals, should be abandoned in favor of connectionist models such as the Bilingual Interactive Activation Plus model (BIA+; Dijkstra & Van Heuven, 2002). However, the partially implemented BIA+ model for bilingual word recognition has neither been applied to bilinguals of different proficiency levels nor extended to the complex process of word translation. After considering a number of aspects of the RHM, a new localist-connectionist model, called Multilink, is formulated to account for the performance of bilinguals differing in their L2 proficiency in different tasks: lexical decision, language decision, and word translation.
- Chinese as a natural experiment
Author(s): James Myers (pp. 421–435)
The Chinese lexicon is characterized by its typologically unique one-to-one-to-one mapping of morphemes, syllables, and orthographic characters. This architecture poses practical difficulties for the psycholinguist wanting to study lexical processing in Chinese. More seriously, seen as a natural experiment, Chinese challenges assumptions that processing models traditionally make about the roles of phonemes, morphemes, lemmas, and words in lexical access. It is argued that cross-linguistic variation in lexical processing cannot be accommodated by simply modifying lexical processing models; instead, what is needed is a universal learning model. Suggestions are given for how such a model could be tested empirically by extending methods already used for testing language-specific lexical processing.
- Demythologizing the word frequency effect: A discriminative learning perspective
Author(s): R. H. Baayen (pp. 436–461)
This study starts from the hypothesis, first advanced by McDonald and Shillcock (2001), that the word frequency effect in large part reflects local syntactic co-occurrence. It is shown that the word frequency effect in the sense of pure repeated exposure indeed accounts for only a small proportion of the variance in lexical decision, and that local syntactic and morphological co-occurrence probabilities are what makes word frequency a powerful predictor of lexical decision latencies. A comparison of two computational models, the cascaded dual route model (Coltheart, Rastle, Perry, Langdon, & Ziegler, 2001) and the Naive Discriminative Reader (Baayen, Milin, Filipovic Durdjevic, Hendrix, & Marelli, 2010), indicates that only the latter model properly captures the quantitative weight of the latent dimensions of lexical variation as predictors of response times. Computational models that account for frequency of occurrence by some mechanism equivalent to a counter in the head therefore run the risk of overestimating the role of frequency as repetition, of overestimating the importance of words’ form properties, and of underestimating the importance of contextual learning during past experience in proficient reading.