
Implicit learning of non-adjacent dependencies

Abstract

Language and other higher cognitive functions require structured sequential behavior, including non-adjacent relations. A fundamental question in cognitive science is what computational machinery can support both the learning and the representation of such non-adjacencies, and what properties of the input facilitate such processes. Learning experiments using miniature languages with adults and infants have demonstrated the impact of both high variability (Gómez, 2003) and nil variability (Onnis, Christiansen, Chater, & Gómez, 2003, submitted) of intermediate elements on the learning of non-adjacent dependencies. Intriguingly, current associative measures cannot explain this U-shaped curve. In this chapter, extensive computer simulations using five different connectionist architectures reveal that Simple Recurrent Networks (SRNs) best capture the behavioral data, by superimposing local and distant information over their internal ‘mental’ states. These results provide the first mechanistic account of implicit associative learning of non-adjacent dependencies modulated by distributional properties of the input. We conclude that implicit statistical learning may be more powerful than previously anticipated.
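The distributional manipulation behind these experiments can be illustrated with a small sketch. The code below generates strings from a miniature a–X–b language in the style of Gómez (2003) and shows why simple adjacent statistics fail to track the non-adjacent frame: as the variability of the middle element grows, the adjacent transitional probability P(X | a) collapses while the non-adjacent P(b | a) stays perfectly predictive. The vocabulary (a1…b3, x0…) and pool sizes are illustrative assumptions, not the materials used in the original studies.

```python
import random
from collections import Counter

def make_corpus(n_middles, n_strings=1000, seed=0):
    """Generate a_i X b_i strings: each frame word a_i predicts b_i
    non-adjacently; the middle X is drawn from a pool whose size
    (variability) we manipulate. Vocabulary is hypothetical."""
    rng = random.Random(seed)
    frames = [("a1", "b1"), ("a2", "b2"), ("a3", "b3")]
    middles = [f"x{i}" for i in range(n_middles)]
    return [(a, rng.choice(middles), b)
            for a, b in (rng.choice(frames) for _ in range(n_strings))]

def adjacent_tp(corpus):
    """Mean forward transitional probability P(X | a) over the
    adjacent first-second pairs actually observed in the corpus."""
    pairs = Counter((a, x) for a, x, _ in corpus)
    firsts = Counter(a for a, _, _ in corpus)
    return sum(c / firsts[a] for (a, _), c in pairs.items()) / len(pairs)

def nonadjacent_tp(corpus):
    """Mean P(b | a) over the non-adjacent frame pairs."""
    pairs = Counter((a, b) for a, _, b in corpus)
    firsts = Counter(a for a, _, _ in corpus)
    return sum(c / firsts[a] for (a, _), c in pairs.items()) / len(pairs)

# Low vs. high middle-element variability:
low, high = make_corpus(2), make_corpus(24)
print(adjacent_tp(low), adjacent_tp(high))        # adjacent cue degrades
print(nonadjacent_tp(low), nonadjacent_tp(high))  # frame cue stays at 1.0
```

Under high variability the adjacent statistic is diluted across many middles, so a learner relying on it gains nothing, whereas the non-adjacent frame remains fully reliable; a purely adjacent associative measure therefore cannot account for the U-shaped learning curve described above.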
