It is widely assumed that human learning and the structure of human languages are intimately related. This relationship is frequently suggested to derive from a language-specific biological endowment, which encodes universal, but communicatively arbitrary, principles of language structure (a Universal Grammar or UG). How might such a UG have evolved? We argue that UG could not have arisen either by biological adaptation or non-adaptationist genetic processes, resulting in a logical problem of language evolution. Specifically, as the processes of language change are much more rapid than processes of genetic change, language constitutes a “moving target” both over time and across different human populations, and, hence, cannot provide a stable environment to which language genes could have adapted. We conclude that a biologically determined UG is not evolutionarily viable. Instead, the original motivation for UG arises because language has been shaped to fit the human brain, rather than vice versa. Following Darwin, we view language itself as a complex and interdependent “organism,” which evolves under selectional pressures from human learning and processing mechanisms. That is, languages themselves are shaped by severe selectional pressure from each generation of language users and learners. This suggests that apparently arbitrary aspects of linguistic structure may result from general learning and processing biases deriving from the structure of thought processes, perceptuo-motor factors, cognitive limitations, and pragmatics.
Memory is fleeting. New material rapidly obliterates previous material. How, then, can the brain deal successfully with the continual deluge of linguistic input? We argue that, to deal with this “Now-or-Never” bottleneck, the brain must compress and recode linguistic input as rapidly as possible. This observation has strong implications for the nature of language processing: the language system must “eagerly” recode and compress linguistic input; as the bottleneck recurs at each new representational level, the language system must build a multilevel linguistic representation; and the language system must deploy all available information predictively to ensure that local linguistic ambiguities are dealt with “Right-First-Time”; once the original input is lost, there is no way for the language system to recover. This is “Chunk-and-Pass” processing. Similarly, language learning must also occur in the here and now, which implies that language acquisition is learning to process, rather than inducing, a grammar. Moreover, this perspective provides a cognitive foundation for grammaticalization and other aspects of language change. Chunk-and-Pass processing also helps explain a variety of core properties of language, including its multilevel representational structure and duality of patterning. This approach promises to create a direct relationship between psycholinguistics and linguistic theory. More generally, we outline a framework within which to integrate often disconnected inquiries into language processing, language acquisition, and language change and evolution.
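The core processing claim above can be made concrete with a small sketch. The following toy Python code is purely illustrative (it is not the authors' implementation, and the chunk inventory and buffer size are assumptions for the example): incoming tokens are eagerly grouped into known chunks and passed up as single compressed units before the raw input is lost from a severely limited buffer.

```python
# Illustrative sketch of "Chunk-and-Pass" processing (hypothetical code,
# not the target article's model). A tiny buffer forces eager recoding:
# recognized chunks are compressed and passed up; otherwise raw tokens
# must be passed on before they are overwritten.

KNOWN_CHUNKS = {("the", "cat"), ("sat", "on"), ("the", "mat")}  # assumed toy inventory

def chunk_and_pass(tokens, max_buffer=2):
    """Greedily recode a token stream into chunks, never holding more
    than `max_buffer` raw tokens at once (the Now-or-Never bottleneck)."""
    buffer, output = [], []
    for tok in tokens:
        buffer.append(tok)
        if tuple(buffer) in KNOWN_CHUNKS:
            output.append("_".join(buffer))  # compress and pass up a level
            buffer = []
        elif len(buffer) == max_buffer:
            output.append(buffer.pop(0))     # bottleneck: pass something on now
    output.extend(buffer)
    return output

print(chunk_and_pass(["the", "cat", "sat", "on", "the", "mat"]))
# → ['the_cat', 'sat_on', 'the_mat']
```

Note how greedy, incremental chunking stands in for the "Right-First-Time" idea: the model commits to a recoding as soon as one is available, since it cannot revisit the raw input later.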
The notion that the form of a word bears an arbitrary relation to its meaning accounts only partly for the attested relations between form and meaning in the languages of the world. Recent research suggests a more textured view of vocabulary structure, in which arbitrariness is complemented by iconicity (aspects of form resemble aspects of meaning) and systematicity (statistical regularities in forms predict function). Experimental evidence suggests these form-to-meaning correspondences serve different functions in language processing, development, and communication: systematicity facilitates category learning by means of phonological cues, iconicity facilitates word learning and communication by means of perceptuomotor analogies, and arbitrariness facilitates meaning individuation through distinctive forms. Processes of cultural evolution help to explain how these competing motivations shape vocabulary structure.
Why are children better language learners than adults despite being worse at a range of other cognitive tasks? Here, we explore the role of multiword sequences in explaining L1–L2 differences in learning. In particular, we propose that children and adults differ in their reliance on such multiword units (MWUs) in learning, and that this difference affects learning strategies and outcomes, and leads to difficulty in learning certain grammatical relations. In the first part, we review recent findings that suggest that MWUs play a facilitative role in learning. We then discuss the implications of these findings for L1–L2 differences: We hypothesize that adults are both less likely to extract MWUs and less capable of benefiting from them in the process of learning. In the next section, we draw on psycholinguistic, developmental, and computational findings to support these predictions. We end with a discussion of the relation between this proposal and other accounts of L1–L2 difficulty.
Recent research suggests that language evolution is a process of cultural change, in which linguistic structures are shaped through repeated cycles of learning and use by domain-general mechanisms. This paper draws out the implications of this viewpoint for understanding the problem of language acquisition, which is cast in a new, and much more tractable, form. In essence, the child faces a problem of induction, where the objective is to coordinate with others (C-induction), rather than to model the structure of the natural world (N-induction). We argue that, of the two, C-induction is dramatically easier. More broadly, we argue that understanding the acquisition of any cultural form, whether linguistic or otherwise, during development, requires considering the corresponding question of how that cultural form arose through processes of cultural evolution. This perspective helps resolve the “logical” problem of language acquisition and has far-reaching implications for evolutionary psychology.
The ability to convey our thoughts using an infinite number of linguistic expressions is one of the hallmarks of human language. Understanding the nature of the psychological mechanisms and representations that give rise to this unique productivity is a fundamental goal for the cognitive sciences. A long-standing hypothesis is that single words and rules form the basic building blocks of linguistic productivity, with multiword sequences being treated as units only in peripheral cases such as idioms. The new millennium, however, has seen a shift toward construing multiword linguistic units not as linguistic rarities, but as important building blocks for language acquisition and processing. This shift—which originated within theoretical approaches that emphasize language learning and use—has far-reaching implications for theories of language representation, processing, and acquisition. Incorporating multiword units as integral building blocks blurs the distinction between grammar and lexicon; calls for models of production and comprehension that can accommodate and give rise to the effect of multiword information on processing; and highlights the importance of such units to learning. In this special topic, we bring together cutting-edge work on multiword sequences in theoretical linguistics, first-language acquisition, psycholinguistics, computational modeling, and second-language learning to present a comprehensive overview of the prominence and importance of such units in language, their possible role in explaining differences between first- and second-language learning, and the challenges the combined findings pose for theories of language.
Previous research on lexical development has aimed to identify the factors that enable accurate initial word-referent mappings based on the assumption that the accuracy of initial word-referent associations is critical for word learning. The present study challenges this assumption. Adult English speakers learned an artificial language within a cross-situational learning paradigm. Visual fixation data were used to assess the direction of visual attention. Participants whose longest fixations in the initial trials fell more often on distracter images performed significantly better at test than participants whose longest fixations fell more often on referent images. Thus, inaccurate initial word-referent mappings may actually benefit learning.
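The logic of cross-situational learning is easy to see in a minimal simulation. The Python sketch below is hypothetical (it is not the study's paradigm code, and the nonce words and referents are invented for illustration): although each trial is individually ambiguous, tallying word-referent co-occurrences across trials converges on the correct mappings.

```python
from collections import defaultdict
import itertools

# Hypothetical minimal model of cross-situational word learning:
# count every word-referent pairing within each ambiguous trial,
# then pick the referent with the highest cumulative count.

def learn(trials):
    counts = defaultdict(int)
    for words, referents in trials:
        for w, r in itertools.product(words, referents):
            counts[(w, r)] += 1
    return counts

def best_referent(counts, word, referents):
    return max(referents, key=lambda r: counts[(word, r)])

# Three ambiguous trials; only co-occurrence statistics disambiguate.
trials = [
    (["bliv", "dax"], ["DOG", "CUP"]),
    (["bliv", "tam"], ["DOG", "BALL"]),
    (["dax", "tam"], ["CUP", "BALL"]),
]
counts = learn(trials)
print(best_referent(counts, "bliv", ["DOG", "CUP", "BALL"]))  # → DOG
```

Note that this simple associative tally treats all fixated referents alike; the study's eye-tracking result concerns which images learners attend to within such trials, not the counting scheme itself.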
Second-language learners rarely arrive at native proficiency in a number of linguistic domains, including morphological and syntactic processing. Previous approaches to understanding the different outcomes of first- versus second-language learning have focused on cognitive and neural factors. In contrast, we explore the possibility that children and adults may rely on different linguistic units throughout the course of language learning, with specific focus on the granularity of those units. Following recent psycholinguistic evidence for the role of multiword chunks in online language processing, we explore the hypothesis that children rely more heavily on multiword units in language learning than do adults learning a second language. To this end, we take an initial step toward using large-scale, corpus-based computational modeling as a tool for exploring the granularity of speakers' linguistic units. Employing a computational model of language learning, the Chunk-Based Learner, we compare the usefulness of chunk-based knowledge in accounting for the speech of second-language learners versus children and adults speaking their first language. Our findings suggest that while multiword units are likely to play a role in second-language learning, adults may learn less useful chunks, rely on them to a lesser extent, and arrive at them through different means than children learning a first language.
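The general idea of extracting multiword chunks from corpus statistics can be sketched in a few lines. The Python example below is a deliberately simplified illustration in the spirit of chunk-based learning, not the actual Chunk-Based Learner model; the threshold, the bigram-only window, and the toy corpus are all assumptions made for the example.

```python
from collections import Counter

# Simplified sketch of frequency-based chunk extraction (hypothetical;
# not the Chunk-Based Learner itself): word pairs that recur often
# enough in the input are stored as multiword chunks.

def extract_chunks(utterances, threshold=2):
    """Return the set of bigrams occurring at least `threshold` times."""
    bigrams = Counter()
    for utt in utterances:
        words = utt.split()
        bigrams.update(zip(words, words[1:]))
    return {bg for bg, n in bigrams.items() if n >= threshold}

corpus = [
    "you want more juice",
    "do you want the ball",
    "you want it",
]
print(extract_chunks(corpus))  # → {('you', 'want')}
```

Under this kind of scheme, the chunk inventory a learner ends up with depends directly on the distributional statistics of the input they receive, which is what makes corpus-based comparison of L1 and L2 learners' units possible.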
Our understanding of language, its origins and subsequent evolution, is shaped not only by data and theories from the language sciences, but also fundamentally by the biological sciences. Recent developments in genetics and evolutionary theory offer not only very strong constraints on what scenarios of language evolution are possible and probable, but also exciting opportunities for understanding otherwise puzzling phenomena. Due to the breathtaking rate of advancement in these fields, and the complexity, subtlety, and sometimes apparent non-intuitiveness of the phenomena discovered, some of these recent developments have either been completely missed by language scientists or misperceived and misrepresented. In this short paper, we offer an update on some of these findings and theoretical developments through a selection of illustrative examples and discussions that cast new light on current debates in the language sciences. The main message of our paper is that life is much more complex and nuanced than anybody could have predicted even a few decades ago, and that we need to be flexible in our theorizing instead of embracing a priori dogmas and trying to patch paradigms that are no longer satisfactory.
It is widely assumed that language in some form or other originated by piggybacking on pre-existing learning mechanisms not dedicated to language. Using evolutionary connectionist simulations, we explore the implications of such assumptions by determining the effect of constraints derived from an earlier evolved mechanism for sequential learning on the interaction between biological and linguistic adaptation across generations of language learners. Artificial neural networks were initially allowed to evolve “biologically” to improve their sequential learning abilities, after which language was introduced into the population. We compared the relative contribution of biological and linguistic adaptation by allowing both networks and language to change over time. The simulation results support two main conclusions: First, over generations, a consistent head-ordering emerged due to linguistic adaptation. This is consistent with previous studies suggesting that some apparently arbitrary aspects of linguistic structure may arise from cognitive constraints on sequential learning. Second, when networks were selected to maintain a good level of performance on the sequential learning task, language learnability was significantly improved by linguistic adaptation but not by biological adaptation. Indeed, the pressure toward maintaining a high level of sequential learning performance prevented biological assimilation of language-specific knowledge from occurring.
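The asymmetry driving these simulations is that languages can change far faster than genes, so the language adapts to the learner rather than the reverse. The toy Python sketch below illustrates only that asymmetry (it is hypothetical and much simpler than the paper's connectionist simulations; the bit-string "language", fixed bias vector, and mutation rate are all assumptions for the example): a language evolving under selection for learnability converges toward a fixed learner bias.

```python
import random

# Toy illustration of linguistic adaptation to a fixed cognitive bias
# (hypothetical; not the paper's evolutionary connectionist model).
# The "language" is a bit string; its fitness is how well it matches
# a learner bias that, like genes, does not change on this timescale.

random.seed(0)
BIAS = [1, 0, 1, 1, 0, 1, 0, 0]          # assumed fixed learning bias

def learnability(lang):
    """How many features of the language match the learner's bias."""
    return sum(l == b for l, b in zip(lang, BIAS))

def mutate(lang, rate=0.2):
    """Flip each feature independently with probability `rate`."""
    return [bit ^ (random.random() < rate) for bit in lang]

lang = [0] * len(BIAS)                    # initially a poor fit to the bias
for generation in range(50):
    variants = [mutate(lang) for _ in range(10)] + [lang]
    lang = max(variants, key=learnability)  # learners propagate learnable forms

print(learnability(lang), "of", len(BIAS), "features fit the bias")
```

Because only the language is allowed to vary here, improvement can come only from linguistic adaptation; in the paper's richer setup, the networks' genomes could also evolve, and the question is which channel does the adapting.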
Psychologists have used experimental methods to study language for more than a century. However, only with the recent availability of large-scale linguistic databases has a more complete picture begun to emerge of how language is actually used, and what information is available as input to language acquisition. Analyses of such “big data” have resulted in reappraisals of key assumptions about the nature of language. As an example, we focus on corpus-based research that has shed new light on the arbitrariness of the sign: the longstanding assumption that the relationship between the sound of a word and its meaning is arbitrary. The results reveal a systematic relationship between the sound of a word and its meaning, which is stronger for early acquired words. Moreover, the analyses further uncover a systematic relationship between words and their lexical categories—nouns and verbs sound different from each other—affecting how we learn new words and use them in sentences. Together, these results point to a division of labor between arbitrariness and systematicity in sound-meaning mappings. We conclude by arguing in favor of including “big data” analyses in the language scientist's methodological toolbox.
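What it means for nouns and verbs to "sound different" can be shown with a toy example. The Python sketch below is hypothetical (it is not the corpus analyses themselves; the nonce lexicon and the final-vowel cue are invented to build the regularity in): if lexical categories differ systematically in sound, a simple phonological cue predicts category membership.

```python
# Hypothetical toy illustration of phonological systematicity across
# lexical categories (not the article's large-scale corpus analysis).

LEXICON = {  # assumed toy data with a built-in sound regularity
    "blicket": "noun", "tupin": "noun", "gorpin": "noun",
    "flimmo": "verb", "dakko": "verb", "tesso": "verb",
}

def predict_category(word):
    # cue: in this toy lexicon, verbs end in a vowel and nouns do not
    return "verb" if word[-1] in "aeiou" else "noun"

accuracy = sum(predict_category(w) == cat
               for w, cat in LEXICON.items()) / len(LEXICON)
print(accuracy)  # → 1.0 on this toy lexicon
```

Real vocabularies show a much weaker, statistical version of this regularity, which is why detecting it requires the large-scale corpus methods the abstract advocates.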
Intuitively, the accuracy of initial word-referent mappings should be positively correlated with the outcome of learning. Yet recent evidence suggests an inverse effect of initial accuracy in adults, whereby greater accuracy of initial mappings is associated with poorer outcomes in a cross-situational learning task. Here, we examine the impact of initial accuracy on 4-year-olds, 10-year-olds, and adults. For half of the participants most word-referent mappings were initially correct and for the other half most mappings were initially incorrect. Initial accuracy was positively related to learning outcomes in 4-year-olds, had no effect on 10-year-olds' learning, and was inversely related to learning outcomes in adults. Examination of item learning patterns revealed item interdependence for adults and 4-year-olds but not 10-year-olds. These findings point to a qualitative change in language learning processes over development.
If human language must be squeezed through a narrow cognitive bottleneck, what are the implications for language processing, acquisition, change, and structure? In our target article, we suggested that the implications are far-reaching and form the basis of an integrated account of many apparently unconnected aspects of language and language processing, as well as suggesting revision of many existing theoretical accounts. With some exceptions, commentators were generally supportive both of the existence of the bottleneck and its potential implications. Many commentators suggested additional theoretical and linguistic nuances and extensions, links with prior work, and relevant computational and neuroscientific considerations; some argued for related but distinct viewpoints; a few, though, felt traditional perspectives were being abandoned too readily. Our response attempts to build on the many suggestions raised by the commentators and to engage constructively with challenges to our approach.
We agree with Caplan & Waters that there are problems with the single-resource theory of sentence comprehension. However, we challenge their dual-resource alternative on theoretical and empirical grounds and point to a more coherent solution that abandons the notion of working memory resources.