Bayesian Word Learning in Multiple Language Environments

Cognitive Science 42 (S2):439-462 (2018)

Abstract

Infant language learners are faced with the difficult inductive problem of determining how new words map to novel or known objects in their environment. Bayesian inference models have been successful at using the sparse information available in natural child-directed speech to build candidate lexicons and infer speakers’ referential intentions. We begin by asking how a Bayesian model optimized for monolingual input generalizes to new monolingual or bilingual corpora and find that, especially in the case of the bilingual input, the model shows a significant decrease in performance. In the next experiment, we propose the ME Model, a modified Bayesian model, which approximates infants’ mutual exclusivity bias to support the differential demands of monolingual and bilingual learning situations. The extended model is assessed using the same corpora of real child-directed speech, showing that its performance is more robust against varying input and less dependent than the Intentional Model on optimization of its parsimony parameter. We argue that both monolingual and bilingual demands on word learning are important considerations for a computational model, as they can yield significantly different results than when only one such context is considered.
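The tension the abstract describes between a parsimony prior and a mutual exclusivity (ME) bias can be made concrete with a small sketch. The snippet below is a toy illustration only, not the paper's Intentional Model or ME Model: the miniature corpus, the scoring function, and every parameter value (gamma, me_penalty, hit/miss probabilities) are hypothetical. It scores a candidate lexicon by how well it explains word-object co-occurrences across situations, subtracts a parsimony cost per lexicon entry, and subtracts an ME penalty whenever one object carries more than one label, then compares a bilingual lexicon (dog and perro both labeling DOG) against a single-label lexicon as the ME penalty is varied.

```python
import math

# Toy situations: the words heard paired with the objects in view.
# All items and values here are illustrative, not from the paper's corpora.
corpus = [
    ({"dog", "ball"}, {"DOG", "BALL"}),
    ({"dog"}, {"DOG", "CUP"}),
    ({"perro", "ball"}, {"DOG", "BALL"}),  # a bilingual-style situation
]

def log_score(lexicon, gamma=1.0, me_penalty=2.0, hit=0.9, miss=0.1):
    """Log score of a lexicon, given as a set of (word, object) pairs.

    likelihood : each heard word should map to some object in view
    gamma      : parsimony prior, a cost per lexicon entry
    me_penalty : cost for every extra label attached to the same object
    """
    score = 0.0
    for heard, seen in corpus:
        for word in heard:
            referents = {obj for (w, obj) in lexicon if w == word}
            score += math.log(hit if referents & seen else miss)
    score -= gamma * len(lexicon)
    objs = [obj for (_, obj) in lexicon]
    score -= me_penalty * sum(objs.count(o) - 1 for o in set(objs))
    return score

bilingual = {("dog", "DOG"), ("perro", "DOG"), ("ball", "BALL")}
strict_me = {("dog", "DOG"), ("ball", "BALL")}  # refuses a second label for DOG

for penalty in (2.0, 0.5):
    a = log_score(bilingual, me_penalty=penalty)
    b = log_score(strict_me, me_penalty=penalty)
    winner = "bilingual lexicon" if a > b else "single-label lexicon"
    print(f"ME penalty {penalty}: bilingual={a:.2f}, single-label={b:.2f} -> {winner}")
```

Under these toy settings the strong penalty favors the single-label lexicon and leaves "perro" unmapped, while the relaxed penalty lets the bilingual lexicon win, which is the qualitative point the abstract makes about adapting the ME bias to bilingual input.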


Author Profiles

Fan Li
Universitat Pompeu Fabra
