Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute a posterior distribution over languages by combining a prior (representing their inductive biases) with the evidence provided by linguistic data. We show that when learners sample languages from this posterior distribution, iterated learning converges to a distribution over languages that is determined entirely by the prior. Under these conditions, iterated learning is a form of Gibbs sampling, a widely used Markov chain Monte Carlo algorithm. The consequences of iterated learning are more complicated when learners choose the language with maximum posterior probability, being affected by both the prior of the learners and the amount of information transmitted between generations. We show that in this case, iterated learning corresponds to another statistical inference algorithm, a variant of the expectation-maximization (EM) algorithm. These results clarify the role of iterated learning in explanations of linguistic universals and provide a formal connection between constraints on language acquisition and the languages that come to be spoken, suggesting that information transmitted via iterated learning will ultimately come to mirror the minds of the learners.
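The convergence-to-the-prior result for sampling learners can be illustrated with a toy simulation. This is a minimal sketch, not the authors' code: the two-language hypothesis space, the prior values, and the number of transmitted observations are all illustrative assumptions. Each generation samples a "language" (here just a Bernoulli rate) from the posterior given the previous generation's data, then produces data for the next generation; the long-run frequency of each language matches the prior.

```python
import random

random.seed(0)

HYPOTHESES = [0.2, 0.8]      # each "language" is a coin-flip rate (toy assumption)
PRIOR = [0.7, 0.3]           # the learners' shared inductive bias (illustrative)
N_DATA = 5                   # observations passed between generations

def likelihood(theta, data):
    """P(data | theta) for a sequence of 0/1 observations."""
    p = 1.0
    for x in data:
        p *= theta if x == 1 else 1.0 - theta
    return p

def posterior_sample(data):
    """Sample a hypothesis from P(h | data), proportional to P(data | h) P(h)."""
    weights = [likelihood(t, pr_data) * pr
               for t, pr, pr_data in [(t, pr, data) for t, pr in zip(HYPOTHESES, PRIOR)]]
    r = random.random() * sum(weights)
    for h, w in zip(HYPOTHESES, weights):
        r -= w
        if r <= 0:
            return h
    return HYPOTHESES[-1]

def generate(theta, n):
    """Produce n observations from language theta."""
    return [1 if random.random() < theta else 0 for _ in range(n)]

# Run the iterated-learning chain: each generation learns from the
# previous generation's data, then produces data for the next.
counts = {h: 0 for h in HYPOTHESES}
data = generate(HYPOTHESES[0], N_DATA)   # arbitrary starting language
for _ in range(20000):
    h = posterior_sample(data)
    counts[h] += 1
    data = generate(h, N_DATA)

freq = counts[HYPOTHESES[0]] / 20000
print(f"fraction of generations speaking language 0: {freq:.3f}")
print(f"prior probability of language 0:             {PRIOR[0]:.3f}")
```

The empirical fraction tracks the prior, not the starting language or the amount of data, which is the abstract's central claim for posterior-sampling learners.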
Determining the knowledge that guides human judgments is fundamental to understanding how people reason, make decisions, and form predictions. We use an experimental procedure called “iterated learning,” in which the responses that people give on one trial are used to generate the data they see on the next, to pinpoint the knowledge that informs people's predictions about everyday events (e.g., predicting the total box office gross of a movie from its current take). In particular, we use this method to discriminate between two models of human judgments: a simple Bayesian model (Griffiths & Tenenbaum, 2006) and a recently proposed alternative model that assumes people store only a few instances of each type of event in memory (MinK; Mozer, Pashler, & Homaei, 2008). Although testing these models using standard experimental procedures is difficult due to differences in the number of free parameters and the need to make assumptions about the knowledge of individual learners, we show that the two models make very different predictions about the outcome of iterated learning. The results of an experiment using this methodology provide a rich picture of how much people know about the distributions of everyday quantities, and they are inconsistent with the predictions of the MinK model. The results suggest that accurate predictions about everyday events reflect relatively sophisticated knowledge on the part of individuals.
Many of the problems studied in cognitive science are inductive problems, requiring people to evaluate hypotheses in the light of data. The key to solving these problems successfully is having the right inductive biases—assumptions about the world that make it possible to choose between hypotheses that are equally consistent with the observed data. This article explores a novel experimental method for identifying the biases that guide human inductive inferences. The idea behind this method is simple: This article uses the responses produced by a participant on one trial to generate the stimuli that either they or another participant will see on the next. A formal analysis of this “iterated learning” procedure, based on the assumption that the learners are Bayesian agents, predicts that it should reveal the inductive biases of these learners, as expressed in a prior probability distribution over hypotheses. This article presents a series of experiments using stimuli based on a well-studied set of category structures, demonstrating that iterated learning can be used to reveal the inductive biases of human learners.
Information changes as it is passed from person to person, with this process of cultural transmission allowing the minds of individuals to shape the information that they transmit. We present mathematical models of cultural transmission which predict that the amount of information passed from person to person should affect the rate at which that information changes. We tested this prediction using a function-learning task, in which people learn a functional relationship between two variables by observing the values of those variables. We varied the total number of observations and the number of those observations that take unique values. We found an effect of the number of observations, with functions transmitted using fewer observations changing form more quickly. We did not find an effect of the number of unique observations, suggesting that noise in perception or memory may have affected learning.
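The predicted effect of transmission bottleneck size can be sketched in a toy setting. This is an illustrative two-hypothesis simulation, not the paper's function-learning task or models: with Bayesian learners who sample from the posterior, chains that pass fewer observations between generations receive weaker evidence each step, so the transmitted hypothesis changes more often per generation.

```python
import random

random.seed(1)

HYPOTHESES = [0.2, 0.8]      # toy "functions": two Bernoulli rates (assumption)
PRIOR = [0.5, 0.5]           # flat prior, so change rate reflects data only

def likelihood(theta, data):
    """P(data | theta) for a sequence of 0/1 observations."""
    p = 1.0
    for x in data:
        p *= theta if x == 1 else 1.0 - theta
    return p

def posterior_sample(data):
    """Sample a hypothesis from P(h | data), proportional to P(data | h) P(h)."""
    weights = [likelihood(t, data) * pr for t, pr in zip(HYPOTHESES, PRIOR)]
    r = random.random() * sum(weights)
    for h, w in zip(HYPOTHESES, weights):
        r -= w
        if r <= 0:
            return h
    return HYPOTHESES[-1]

def change_rate(n_data, generations=5000):
    """Fraction of generations in which the transmitted hypothesis changes."""
    h = HYPOTHESES[0]
    changes = 0
    for _ in range(generations):
        data = [1 if random.random() < h else 0 for _ in range(n_data)]
        h_new = posterior_sample(data)
        if h_new != h:
            changes += 1
        h = h_new
    return changes / generations

print("change rate with  2 observations per generation:", change_rate(2))
print("change rate with 20 observations per generation:", change_rate(20))
```

With only two observations the posterior is often ambiguous and the chain switches hypotheses frequently; with twenty, the data pin down the current hypothesis and switches become rare, mirroring the qualitative prediction that less transmitted information means faster change.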
How can the impenetrability hypothesis be empirically tested? We comment on the role of signal detection measures, suggesting that context effects on discriminations for which post-perceptual cues are irrelevant, or on neural activity associated with early vision, would challenge impenetrability. We also note the great computational power of the proposed pre-perceptual attention processes and consider the implications for testability of the theory.