As adults age, their performance on many psychometric tests changes systematically, a finding that is widely taken to reveal that cognitive information-processing capacities decline across adulthood. Contrary to this, we suggest that older adults' changing performance reflects memory search demands, which escalate as experience grows. A series of simulations shows how the performance patterns observed across adulthood emerge naturally in learning models as they acquire knowledge. The simulations correctly identify greater variation in the cognitive performance of older adults, and successfully predict that older adults will show greater sensitivity to fine-grained differences in the properties of test stimuli than younger adults. Our results indicate that older adults' performance on cognitive tests reflects the predictable consequences of learning on information-processing, and not cognitive decline. We consider the implications of this for our scientific and cultural understanding of aging.
Symbols enable people to organize and communicate about the world. However, the ways in which symbolic knowledge is learned and then represented in the mind are poorly understood. We present a formal analysis of symbolic learning—in particular, word learning—in terms of prediction and cue competition, and we consider two possible ways in which symbols might be learned: by learning to predict a label from the features of objects and events in the world, and by learning to predict features from a label. This analysis predicts significant differences in symbolic learning depending on the sequencing of objects and labels. We report a computational simulation and two human experiments that confirm these differences, revealing the existence of Feature-Label-Ordering effects in learning. Discrimination learning is facilitated when objects predict labels, but not when labels predict objects. Our results and analysis suggest that the semantic categories people use to understand and communicate about the world can only be learned if labels are predicted from objects. We discuss the implications of this for our understanding of the nature of language and symbolic thought, and in particular, for theories of reference.
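The prediction-and-cue-competition analysis can be illustrated with a toy Rescorla-Wagner (delta-rule) simulation. The stimuli, the nonce labels ("wug", "dax"), and all parameter values below are illustrative assumptions, not the materials of the reported simulation or experiments:

```python
import random

def rw_train(trials, lr=0.1, lam=1.0, epochs=500, seed=0):
    """Rescorla-Wagner / delta-rule learning over (cues, outcomes) trials."""
    rng = random.Random(seed)
    all_outcomes = {o for _, outs in trials for o in outs}
    V = {}  # V[cue][outcome] -> associative strength
    schedule = list(trials) * epochs
    rng.shuffle(schedule)
    for cues, outs in schedule:
        for o in all_outcomes:
            total = sum(V.get(c, {}).get(o, 0.0) for c in cues)
            delta = lr * ((lam if o in outs else 0.0) - total)
            for c in cues:
                row = V.setdefault(c, {})
                row[o] = row.get(o, 0.0) + delta
    return V

# Two hypothetical categories share feature "s" and are discriminated
# by features "a" and "b"; "wug" and "dax" are nonce labels.
fl_trials = [({"s", "a"}, {"wug"}), ({"s", "b"}, {"dax"})]  # objects predict labels
lf_trials = [({"wug"}, {"s", "a"}), ({"dax"}, {"s", "b"})]  # labels predict objects

V_fl = rw_train(fl_trials)
V_lf = rw_train(lf_trials)

# Feature-to-label training: cue competition suppresses the shared
# feature, so the discriminating feature best predicts each label.
assert V_fl["a"]["wug"] > V_fl["s"]["wug"]

# Label-to-feature training: features are learned as independent
# outcomes of the label, so shared and discriminating features end up
# equally well predicted -- no discrimination learning.
assert abs(V_lf["wug"]["a"] - V_lf["wug"]["s"]) < 1e-6
```

In the feature-to-label direction the shared cue "s" must compete with the discriminating cues for predictive value; in the label-to-feature direction each feature is predicted independently from the label, so nothing distinguishes shared from discriminating features.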
Concepts allow us to treat different objects equivalently according to shared attributes, and hence to communicate about, draw inferences from, reason with, and explain these objects. Understanding how concepts are formed and used is thus essential to understanding and applying these basic processes, and the topic of similarity-based classification is central to psychology, artificial intelligence, statistics, and philosophy. This book brings together leading researchers, reflecting the key topics and important developments in the field, and provides a uniquely interdisciplinary overview of the subject.
A central goal of typological research is to characterize linguistic features in terms of both their functional role and their fit to social and cognitive systems. One long-standing puzzle concerns why certain languages employ grammatical gender. In an information theoretic analysis of German noun classification, Dye, Milin, Futrell, and Ramscar enumerated a number of important processing advantages gender confers. Yet this raises a further puzzle: If gender systems are so beneficial to processing, what does this mean for languages that make do without them? Here, we compare the communicative function of gender marking in German to that of prenominal adjectives in English, finding that despite their differences, both systems act to efficiently smooth information over discourse, making nouns more equally predictable in context. We examine why evolutionary pressures may favor one system over another and discuss the implications for compositional accounts of meaning and Gricean principles of communication.
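The idea that gender marking smooths information over discourse can be sketched with a toy surprisal calculation. The four-noun lexicon and its probabilities are invented for illustration and bear no relation to the corpus analyses the abstract reports:

```python
import math

def surprisal(p):
    """Information content of an event with probability p, in bits."""
    return -math.log2(p)

# Hypothetical lexicon: two masculine and two feminine German nouns
# with unequal probabilities in some context.
nouns = {"Hund": "m", "Tisch": "m", "Katze": "f", "Tasse": "f"}
p_noun = {"Hund": 0.4, "Tisch": 0.1, "Katze": 0.4, "Tasse": 0.1}

def unmarked(noun):
    """Per-word surprisal of a bare noun: all information on one word."""
    return [surprisal(p_noun[noun])]

def gender_marked(noun):
    """Per-word surprisal of article + noun: a gender-marked article
    carries part of the information, and conditioning on the article's
    gender class reduces the noun's own surprisal."""
    cls = nouns[noun]
    p_cls = sum(p for n, p in p_noun.items() if nouns[n] == cls)
    return [surprisal(p_cls), surprisal(p_noun[noun] / p_cls)]

for noun in ("Hund", "Tisch"):
    print(noun, unmarked(noun), gender_marked(noun))

# By the chain rule the total information is unchanged, but the peak
# per-word surprisal is lower when gender is marked: the same
# information is spread more evenly over the discourse.
assert max(gender_marked("Tisch")) < max(unmarked("Tisch"))
```

The effect mirrors the abstract's claim: the article pre-pays part of the noun's information, making nouns more equally predictable in context.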
The uncertainty associated with paradigmatic families has been shown to correlate with their phonetic characteristics in speech, suggesting that representations of complex sublexical relations between words are part of speaker knowledge. To better understand this, recent studies have used two-layer neural network models to examine the way paradigmatic uncertainty emerges in learning. However, to date this work has largely ignored the way choices about the representation of inflectional and grammatical functions (IFS) in models strongly influence what they subsequently learn. To explore the consequences of this, we investigate how representations of IFS in the input-output structures of learning models affect the capacity of uncertainty estimates derived from them to account for phonetic variability in speech. Specifically, we examine whether IFS are best represented as outputs to neural networks or as inputs by building models that embody both choices and examining their capacity to account for uncertainty effects in the formant trajectories of word-final [ɐ], which in German discriminates around sixty different IFS. Overall, we find that formants are enhanced as the uncertainty associated with IFS decreases. This result dovetails with a growing number of studies of morphological and inflectional families showing that enhancement is associated with lower uncertainty in context. Importantly, we also find that in models where IFS serve as inputs (as our theoretical analysis suggests they ought to), the uncertainty measures derived from them provide better fits to the empirical variance observed in [ɐ] formants than in models where IFS serve as outputs. This supports our suggestion that IFS serve as cognitive cues during speech production, and should be treated as such in modeling. It is also consistent with the idea that having IFS serve as inputs to a learning network maintains the distinction between those parts of the network that represent the message and those that represent the signal. We conclude by describing how maintaining a "signal-message-uncertainty distinction" can allow us to reconcile a range of apparently contradictory findings about the relationship between articulation and uncertainty in context.
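The input-versus-output choice the abstract describes can be sketched with a toy two-layer delta-rule network. The lexeme, the word forms, the two inflectional functions, and the entropy-based uncertainty measure below are simplifying assumptions for illustration, not the models fitted in the study:

```python
import math

def train(trials, lr=0.1, epochs=300):
    """Delta-rule training over (cues, outcomes) trials; returns the
    weight table and the set of outcome units."""
    outcomes = {o for _, outs in trials for o in outs}
    W = {}
    for _ in range(epochs):
        for cues, outs in trials:
            for o in outcomes:
                act = sum(W.get((c, o), 0.0) for c in cues)
                delta = lr * ((1.0 if o in outs else 0.0) - act)
                for c in cues:
                    W[(c, o)] = W.get((c, o), 0.0) + delta
    return W, outcomes

def uncertainty(W, cues, outcomes):
    """Shannon entropy (bits) over normalized output activations."""
    acts = [max(sum(W.get((c, o), 0.0) for c in cues), 1e-9) for o in outcomes]
    z = sum(acts)
    return -sum(a / z * math.log2(a / z) for a in acts)

# IFS as inputs: lexeme + inflectional function cue the word form (signal).
ifs_as_input = [({"LIEBE", "nom.sg"}, {"liebe"}),
                ({"LIEBE", "dat.pl"}, {"lieben"})]
# IFS as outputs: the word form cues lexeme + inflectional function.
ifs_as_output = [({"liebe"}, {"LIEBE", "nom.sg"}),
                 ({"lieben"}, {"LIEBE", "dat.pl"})]

W_in, outs_in = train(ifs_as_input)
W_out, outs_out = train(ifs_as_output)

u_in = uncertainty(W_in, {"LIEBE", "nom.sg"}, outs_in)
u_out = uncertainty(W_out, {"liebe"}, outs_out)
print(u_in, u_out)  # the two encodings yield different uncertainty estimates
```

Even in this minimal case, placing IFS on the input side keeps the output layer a pure signal (word-form) layer, and the uncertainty estimates the two architectures produce diverge, which is why the modeling choice matters for fitting phonetic variability.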
How are natural symbol systems best understood? Traditional "symbolic" approaches seek to understand cognition by analogy to highly structured, prescriptive computer programs. Here, we describe some problems the traditional computational metaphor inevitably leads to, and a very different approach to computation (Ramscar, Yarlett, Dye, Denny, & Thorpe, 2010; Turing, 1950) that allows these problems to be avoided. The way we conceive of natural symbol systems depends to a large degree on the computational metaphors we use to understand them, and machine learning suggests an understanding of symbolic thought that is very different to traditional views (Hummel, 2010). The empirical question then is: Which metaphor is best?
There is an old joke about a theoretical physicist who was charged with figuring out how to increase the milk production of cows. Although many farmers, biologists, and psychologists had tried and failed to solve the problem before him, the physicist had no trouble coming up with a solution on the spot. "First," he began, "we assume a spherical cow..." [Tenenbaum & Griffiths].