A Bayesian Model of Biases in Artificial Language Learning: The Case of a Word‐Order Universal

Cognitive Science 36 (8):1468-1498 (2012)


In this article, we develop a hierarchical Bayesian model of learning in a general type of artificial language‐learning experiment in which learners are exposed to a mixture of grammars representing the variation present in real learners’ input, particularly at times of language change. The modeling goal is to formalize and quantify hypothesized learning biases. The test case is an experiment (Culbertson, Smolensky, & Legendre, 2012) targeting the learning of word‐order patterns in the nominal domain. The model identifies internal biases of the experimental participants, providing evidence that learners impose (possibly arbitrary) properties on the grammars they learn, potentially resulting in the cross‐linguistic regularities known as typological universals. Learners exposed to mixtures of artificial grammars tended to shift those mixtures in certain ways rather than others; the model reveals how learners’ inferences are systematically affected by specific prior biases. These biases are in line with a typological generalization—Greenberg's Universal 18—which bans a particular word‐order pattern relating nouns, adjectives, and numerals.
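As a rough illustration of the kind of inference the abstract describes (not the paper's actual hierarchical model, whose details are not given here), the following Beta-Binomial sketch shows how a prior bias can shift a learner's inferred grammar mixture away from the proportions in the input. All names and parameter values are hypothetical.

```python
# Hedged sketch, not the model from Culbertson et al. (2012):
# a minimal Beta-Binomial caricature of biased mixture learning.
#
# Suppose a learner hears n utterances, k of which follow a
# "preferred" word-order pattern (e.g. one consistent with
# Greenberg's Universal 18) and the rest a dispreferred one.
# The learner infers theta, the probability of the preferred
# pattern, under a Beta(a, b) prior; a > b encodes a prior
# bias toward that pattern.

def posterior_mean(k, n, a, b):
    """Posterior mean of theta under a Beta(a, b) prior and a
    Binomial(n, theta) likelihood: (k + a) / (n + a + b)."""
    return (k + a) / (n + a + b)

# Hypothetical input mixture: 60% preferred order in 20 utterances.
k, n = 12, 20
flat   = posterior_mean(k, n, 1.0, 1.0)  # uniform (unbiased) prior
biased = posterior_mean(k, n, 8.0, 2.0)  # prior favoring the pattern

print(f"input proportion:  {k / n:.2f}")   # 0.60
print(f"flat-prior mean:   {flat:.2f}")    # 0.59
print(f"biased-prior mean: {biased:.2f}")  # 0.67
```

With the biased prior, the inferred mixture is shifted above the input proportion, mirroring the abstract's claim that learners regularize mixtures in the direction of their prior biases rather than reproducing the input statistics exactly.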

Similar books and articles

Partha Niyogi & Robert C. Berwick - 1997 - Evolutionary consequences of language learning. Linguistics and Philosophy 20 (6):697-719.
Jukka Corander & Pekka Marttinen - 2006 - Bayesian model learning based on predictive entropy. Journal of Logic, Language and Information 15 (1-2):5-20.
