iMinerva: A Mathematical Model of Distributional Statistical Learning

Erik D. Thiessen & Philip I. Pavlik Jr.
Cognitive Science 37 (2):310-343 (2013)

Abstract

Statistical learning refers to the ability to identify structure in the input based on its statistical properties. For many linguistic structures, the relevant statistical features are distributional: They are related to the frequency and variability of exemplars in the input. These distributional regularities have been suggested to play a role in many different aspects of language learning, including learning phonetic categories, using phonemic distinctions in word learning, and discovering non-adjacent relations. On the surface, these aspects of learning share few commonalities. Despite this, we demonstrate that the same computational framework can account for learning in all of these tasks. These results support two conclusions. The first is that much, and perhaps all, of distributional statistical learning can be explained by the same underlying set of processes. The second is that some aspects of language can be learned due to domain-general characteristics of memory.
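
The framework described in the abstract builds on the MINERVA 2 family of exemplar memory models. As a rough, hedged illustration of how distributional information such as exemplar frequency and similarity can fall out of such a memory, the sketch below implements a generic MINERVA 2-style echo computation. This is a minimal sketch of the standard MINERVA 2 retrieval equations, not the iMinerva model itself; all function names, feature codings, and parameter values are illustrative rather than taken from the paper.

```python
# Minimal sketch of a MINERVA 2-style exemplar memory, the model family
# that iMinerva extends. Names and values are illustrative only.
import numpy as np

def similarity(probe, trace):
    """Similarity between a probe and one stored trace.
    Features take values in {-1, 0, +1}; zeros are treated as unknown."""
    relevant = np.count_nonzero((probe != 0) | (trace != 0))
    if relevant == 0:
        return 0.0
    return float(np.dot(probe, trace)) / relevant

def echo(probe, memory):
    """Return echo intensity and echo content for a probe.
    Each trace contributes in proportion to its similarity cubed, so
    frequent and highly similar exemplars dominate the echo."""
    activations = np.array([similarity(probe, t) ** 3 for t in memory])
    intensity = activations.sum()
    content = activations @ np.asarray(memory)
    return intensity, content

# Toy demonstration: a probe matching a frequently stored exemplar
# yields a stronger echo than a probe matching a rarely stored one.
rng = np.random.default_rng(0)
frequent = rng.choice([-1, 1], size=20)
rare = rng.choice([-1, 1], size=20)
memory = [frequent] * 8 + [rare] * 2   # frequency asymmetry in the input
print(echo(frequent, memory)[0] > echo(rare, memory)[0])  # True
```

The toy demonstration shows the kind of frequency sensitivity the abstract appeals to: exemplars that occur more often in the input are retrieved more strongly, without any task-specific learning mechanism.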

Similar books and articles

Which came first: Infants learning language or motherese? Heather Bortfeld - 2004 - Behavioral and Brain Sciences 27 (4):505-506.
Old ideas, new mistakes: All learning is relational. Stellan Ohlsson - 1997 - Behavioral and Brain Sciences 20 (1):79-80.
Statistical Learning Theory: A Tutorial. Sanjeev R. Kulkarni & Gilbert Harman - 2011 - Wiley Interdisciplinary Reviews: Computational Statistics 3 (6):543-556.
Model-based learning problem taxonomies. Richard M. Golden - 1997 - Behavioral and Brain Sciences 20 (1):73-74.
