Exploiting Multiple Sources of Information in Learning an Artificial Language: Human Data and Modeling

Cognitive Science 34 (2):255-285 (2010)

Abstract

This study investigates the joint influences of three factors on the discovery of new word‐like units in a continuous artificial speech stream: the statistical structure of the ongoing input, the initial word‐likeness of parts of the speech flow, and the contextual information provided by the earlier emergence of other word‐like units. Results of an experiment conducted with adult participants show that these sources of information have strong and interactive influences on word discovery. The authors then examine the ability of different models of word segmentation to account for these results. PARSER (Perruchet & Vinter, 1998) is compared both with the view that word segmentation relies on the exploitation of transitional probabilities between successive syllables, and with models based on the Minimum Description Length principle, such as INCDROP. The authors submit arguments suggesting that PARSER has the advantage of accounting for the whole pattern of data without ad hoc modifications, while relying exclusively on general‐purpose learning principles. This study strengthens the growing notion that nonspecific cognitive processes, mainly based on associative learning and memory principles, can account for a larger part of early language acquisition than previously assumed.
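The transitional-probability view mentioned in the abstract can be illustrated with a minimal sketch. This is not the authors' PARSER model, nor the exact procedure of any cited work: the syllable stream, the word forms (`tupiro`, `golabu`), and the boundary threshold of 0.8 are all illustrative assumptions. The idea shown is only the core mechanism: compute the forward transitional probability P(next syllable | current syllable) from co-occurrence counts, and posit a word boundary wherever that probability dips.

```python
from collections import Counter

def segment_by_tp(syllables, threshold=0.8):
    """Posit a word boundary wherever the forward transitional
    probability P(next | current) falls below `threshold`.
    The threshold is an arbitrary illustrative choice."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])  # denominators for P(b | a)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        tp = pair_counts[(a, b)] / first_counts[a]
        if tp < threshold:           # low TP -> likely word boundary
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# A stream built from two hypothetical trisyllabic "words",
# tu-pi-ro and go-la-bu, in a mixed order so that within-word
# TPs (1.0) exceed between-word TPs.
stream = ("tu pi ro go la bu tu pi ro tu pi ro "
          "go la bu go la bu tu pi ro go la bu").split()
print(segment_by_tp(stream))
# → ['tupiro', 'golabu', 'tupiro', 'tupiro', 'golabu', 'golabu', 'tupiro', 'golabu']
```

Within each word every syllable fully predicts the next (TP = 1.0), while across word boundaries the successor is variable (TP ≤ 0.75 here), so thresholding the TP recovers the two word forms.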
