This interdisciplinary work explores one of the central theoretical problems in linguistics: learnability. The authors, who come from linguistics, philosophy, computer science, psychology, and cognitive science, explore the idea that language acquisition proceeds through general-purpose learning mechanisms, an approach that is broadly empiricist both methodologically and psychologically. Written by four researchers spanning the full range of relevant fields, the book sheds light on the central problems of learnability and language, and traces their implications for key questions of theoretical linguistics and the study of language acquisition.
Human languages vary in many ways but also show striking cross-linguistic universals. Why do these universals exist? Recent theoretical results demonstrate that Bayesian learners transmitting language to each other through iterated learning will converge on a distribution of languages that depends only on their prior biases about language and the quantity of data transmitted at each point; the structure of the world being communicated about plays no role (Griffiths & Kalish). We revisit these findings and show that when certain assumptions about the relationship between language and the world are abandoned, learners will converge to languages that depend on the structure of the world as well as their prior biases. These theoretical results are supported with a series of experiments showing that when human learners acquire language through iterated learning, the ultimate structure of those languages is shaped by the structure of the meanings to be communicated.
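The convergence result revisited above can be illustrated with a minimal simulation. The sketch below assumes a toy setting not taken from the paper: a two-language hypothesis space, a noise rate `eps`, and agents who sample a hypothesis from their posterior (the "sampler" case of the Griffiths & Kalish analysis, under which the chain's stationary distribution is the prior). All names and parameter values are illustrative.

```python
import random

def iterated_learning(prior, eps, n_data, n_generations, seed=0):
    """Chain of Bayesian agents: each samples a hypothesis from its
    posterior given data produced by the previous agent."""
    rng = random.Random(seed)
    hyps = [0, 1]

    def produce(h):
        # Language h emits token h with prob 1 - eps, the other with eps.
        return [h if rng.random() > eps else 1 - h for _ in range(n_data)]

    def posterior(data):
        # Prior times likelihood under each hypothesis, normalised.
        scores = []
        for h in hyps:
            p = prior[h]
            for d in data:
                p *= (1 - eps) if d == h else eps
            scores.append(p)
        z = sum(scores)
        return [s / z for s in scores]

    counts = [0, 0]
    h = 0  # start the chain in the *low*-prior language
    for _ in range(n_generations):
        data = produce(h)
        post = posterior(data)
        h = 0 if rng.random() < post[0] else 1  # sample, don't maximise
        counts[h] += 1
    return [c / n_generations for c in counts]

# Long-run hypothesis frequencies approach the prior (0.2 / 0.8),
# regardless of which language the chain started with.
freqs = iterated_learning(prior=[0.2, 0.8], eps=0.1, n_data=3,
                          n_generations=20000)
```

Because each agent sees only the previous agent's output, the chain is a Markov process over hypotheses, and nothing about the "world" enters the transition probabilities, which is exactly the property the paper's modified assumptions are designed to break.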
This article explores some of the philosophical implications of the Bayesian modeling paradigm. In particular, it focuses on the ramifications of the fact that Bayesian models pre-specify an inbuilt hypothesis space. To what extent does this pre-specification correspond to simply "building the solution in"? I argue that any learner must have a built-in hypothesis space in precisely the same sense that Bayesian models have one. This has implications for the nature of learning, Fodor's puzzle of concept acquisition, and the role of modeling in cognitive science.
Everyday reasoning requires more evidence than raw data alone can provide. We explore the idea that people can go beyond this data by reasoning about how the data was sampled. This idea is investigated through an examination of premise non-monotonicity, in which adding premises to a category-based argument weakens rather than strengthens it. Relevance theories explain this phenomenon in terms of people's sensitivity to the relationships among premise items. We show that a Bayesian model of category-based induction taking premise sampling assumptions and category similarity into account complements such theories and yields two important predictions: first, that sensitivity to premise relationships can be violated by inducing a weak sampling assumption; and second, that premise monotonicity should be restored as a result. We test these predictions with an experiment that manipulates people's assumptions in this regard, showing that people draw qualitatively different conclusions in each case.
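The contrast between sampling assumptions can be made concrete with a standard "number game" style model (in the spirit of Tenenbaum's size principle, not the paper's own stimuli). The sketch below assumes two illustrative hypotheses, uniform priors, and the usual likelihoods: under strong sampling, n consistent examples have likelihood 1/|h|^n; under weak sampling, any consistent hypothesis gets a constant likelihood.

```python
def generalization_prob(data, query, hypotheses, priors, sampling):
    """Posterior predictive P(query is in the concept | data)."""
    post = []
    for h, p in zip(hypotheses, priors):
        if not all(x in h for x in data):
            post.append(0.0)              # inconsistent hypothesis
        elif sampling == "strong":
            post.append(p / len(h) ** len(data))  # size principle
        else:
            post.append(p)                # weak sampling: flat likelihood
    z = sum(post)
    post = [s / z for s in post]
    return sum(s for h, s in zip(hypotheses, post) if query in h)

evens = set(range(2, 101, 2))    # broad hypothesis, |h| = 50
tens  = set(range(10, 101, 10))  # narrow hypothesis, |h| = 10
hyps, priors = [evens, tens], [0.5, 0.5]

# Query: does 88 (even, not a multiple of 10) belong to the concept?
one  = generalization_prob([20], 88, hyps, priors, "strong")
many = generalization_prob([20, 40, 60], 88, hyps, priors, "strong")
w1   = generalization_prob([20], 88, hyps, priors, "weak")
w3   = generalization_prob([20, 40, 60], 88, hyps, priors, "weak")
```

Under strong sampling, the extra premises 40 and 60 concentrate the posterior on the narrow hypothesis, so generalization to 88 *drops* (`many < one`), a premise non-monotonicity; under weak sampling the added premises leave it unchanged, mirroring the restored monotonicity the abstract predicts.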
Jones & Love (J&L) contend that the Bayesian approach should integrate process constraints with abstract computational analysis. We agree, but argue that the fundamentalist/enlightened dichotomy is a false one: Enlightened research is deeply intertwined with the basic, fundamental work upon which it is based.