Implicit learning of non-adjacent dependencies


Language and other higher cognitive functions require structured sequential behavior, including non-adjacent relations. A fundamental question in cognitive science is what computational machinery can support both the learning and the representation of such non-adjacencies, and what properties of the input facilitate these processes. Learning experiments using miniature languages with adults and infants have demonstrated the impact of both high variability (Gómez, 2003) and nil variability (Onnis, Christiansen, Chater, & Gómez, 2003, submitted) of the intermediate elements on the learning of non-adjacent dependencies. Intriguingly, current associative measures cannot explain this U-shaped curve. In this chapter, extensive computer simulations using five different connectionist architectures reveal that Simple Recurrent Networks (SRNs) best capture the behavioral data, by superimposing local and distant information over their internal 'mental' states. These results provide the first mechanistic account of implicit associative learning of non-adjacent dependencies modulated by distributional properties of the input. We conclude that implicit statistical learning might be more powerful than previously anticipated.
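To make the architectural point concrete, the following is a minimal sketch of an Elman-style SRN forward pass. All names, dimensions, and the toy vocabulary (A1, A2, X, B) are illustrative assumptions, not the chapter's actual simulations; it shows only how the recurrent hidden state can carry a distant first element across an intermediate element.

```python
import numpy as np

# Minimal Elman-style SRN forward pass (illustrative sketch, not the
# chapter's model). Vocabulary and sizes are hypothetical assumptions.
rng = np.random.default_rng(0)

n_in, n_hid, n_out = 4, 8, 4  # one-hot symbols: A1, A2, X, B (toy vocabulary)

W_ih = rng.normal(scale=0.5, size=(n_hid, n_in))   # input -> hidden
W_hh = rng.normal(scale=0.5, size=(n_hid, n_hid))  # context (prev. hidden) -> hidden
W_ho = rng.normal(scale=0.5, size=(n_out, n_hid))  # hidden -> next-symbol scores

def step(x, h_prev):
    """One SRN time step: hidden state blends current input with prior context."""
    h = np.tanh(W_ih @ x + W_hh @ h_prev)
    y = W_ho @ h  # unnormalised prediction of the next symbol
    return h, y

def run(sequence):
    h = np.zeros(n_hid)
    for sym in sequence:
        x = np.eye(n_in)[sym]
        h, y = step(x, h)
    return h, y

# Two sequences share the same intermediate element X (index 2)
# but differ in their first element (A1 = 0 vs A2 = 1).
h1, _ = run([0, 2])
h2, _ = run([1, 2])

# Even after processing X, the hidden states still differ: the SRN's
# internal state superimposes the distant first element on the local
# context, which is the representational basis for learning A_i ... B_i
# non-adjacent pairings.
```

In this sketch the distant-dependency information is not stored in a dedicated slot; it is smeared across the hidden vector together with the local context, which is what "superimposing local and distant information" refers to.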

  • Affiliations: 1: Nanyang Technological University; 2: Université Libre de Bruxelles; 3: Cornell University; 4: University of Warwick


