This article presents a sobering view of the discipline of cognitive neuropsychology as practiced over the last three or four decades. Our judgment is that, although the study of abnormal cognition resulting from brain injury or disease in previously normal adults has produced a catalogue of fascinating and highly selective deficits, it has yielded relatively little advance in understanding how the brain accomplishes its cognitive business. We question the wisdom of the following three “choices” in mainstream cognitive neuropsychology: (a) single‐case methodology, (b) dissociation between functions as the most important source of evidence, and (c) a central goal of diagramming the functional architecture of cognition rather than specifying how its components work. These choices may all stem from an excessive commitment to strict and fine‐grained modularity. Although different brain regions are undoubtedly specialized for different functions, we argue that parallel and interactive processing is a better assumption about cognitive processing. The essential goal of specifying representations and processes can, we claim, be significantly assisted by computational modeling which, by its very nature, requires such specification.
Rumelhart and McClelland's chapter about learning the past tense created a degree of controversy extraordinary even in the adversarial culture of modern science. It also stimulated a vast amount of research that advanced the understanding of the past tense, inflectional morphology in English and other languages, the nature of linguistic representations, relations between language and other phenomena such as reading and object recognition, the properties of artificial neural networks, and other topics. We examine the impact of the Rumelhart and McClelland model with the benefit of 25 years of hindsight. It is not clear who “won” the debate. It is clear, however, that the core ideas that the model instantiated have been assimilated into many areas in the study of language, changing the focus of research from abstract characterizations of linguistic competence to an emphasis on the role of the statistical structure of language in acquisition and processing.
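The core idea the model instantiated can be illustrated with a toy delta-rule pattern associator (a hedged sketch only: the feature codes and sizes here are made up for illustration, not the Wickelfeature scheme of the original model). Distributed "stem" codes plus a bias unit are mapped to targets that copy the stem and switch on a final "suffix" unit, standing in for the regular "-ed" pattern that dominates the statistics of the training set:

```python
import numpy as np

# Hypothetical toy: delta-rule learning of a statistically regular
# stem -> past-tense mapping (not the original model's encoding).
rng = np.random.default_rng(0)
n_items, n_feat = 20, 8
stems = rng.integers(0, 2, size=(n_items, n_feat)).astype(float)
X = np.hstack([stems, np.ones((n_items, 1))])   # stem code + bias unit
T = np.hstack([stems, np.ones((n_items, 1))])   # copy stem, add "suffix" unit

W = np.zeros((n_feat + 1, n_feat + 1))
lr, losses = 0.1, []
for _ in range(500):
    out = X @ W                                 # linear outputs
    err = T - out
    losses.append((err ** 2).mean())
    W += lr * X.T @ err / n_items               # delta rule (LMS update)

print(losses[0] > losses[-1])                   # error falls with training
```

Because the regularity holds across the whole training set, the network absorbs it directly from the statistics of its input, with no explicitly stated rule: the same point the model pressed against rule-based accounts.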
Page's proposal to stipulate representations in which individual units correspond to meaningful entities is too unconstrained to support effective theorizing. An approach combining general computational principles with domain-specific assumptions, in which learning is used to discover representations that are effective in solving tasks, provides more insight into why cognitive and neural systems are organized the way they are.
We (Patterson & Plaut, 2009) argued that cognitive neuropsychology has had a limited impact on cognitive science due to a nearly exclusive reliance on (a) single‐case studies, (b) dissociations in cognitive performance, and (c) shallow, box‐and‐arrow theorizing, and we advocated adopting a case‐series methodology, considering associations as well as dissociations, and employing explicit computational modeling in studying “how the brain does its cognitive business.” In reply, Coltheart (2010) claims that our concern is misplaced because cognitive neuropsychology is concerned only with studying the mind, in terms of its “functional architecture,” without regard to how this is implemented in the brain. In this response, we do not dispute his characterization of cognitive neuropsychology as it has typically been practiced over the last 40 years, but we suggest that our understanding of brain structure and function has advanced to the point where studying the mind without regard to the brain is unwise and perpetuates the field’s isolation.
We share with Anderson & Lebiere (A&L) (and with Newell before them) the goal of developing a domain-general framework for modeling cognition, and we take seriously the issue of evaluation criteria. We advocate a more focused approach than the one reflected in Newell's criteria, based on analysis of failures as well as successes of models brought into close contact with experimental data. A&L attribute the shortcomings of our parallel-distributed processing framework to a failure to acknowledge a symbolic level of thought. Our framework does acknowledge a symbolic level, contrary to their claim. What we deny is that the symbolic level is the level at which the principles of cognitive processing should be formulated. Models cast at a symbolic level are sometimes useful as high-level approximations of the underlying mechanisms of thought. The adequacy of this approximation will continue to increase as symbolic modelers continue to incorporate principles of parallel distributed processing.
Connectionist models offer concrete mechanisms for cognitive processes. When these models mimic the performance of human subjects they can offer insights into the computations which might underlie human cognition. We illustrate this with the performance of a recurrent connectionist network which produces the meaning of words in response to their spelling pattern. It mimics a paradoxical pattern of errors produced by people trying to read degraded words. The reason why the network produces the surprising error pattern lies in the nature of the attractors which it develops as it learns to map spelling patterns to semantics. The key role of attractor structure in the successful simulation suggests that the normal adult semantic reading route may involve attractor dynamics, and thus the paradoxical error pattern is explained.
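The attractor dynamics at issue can be sketched with a minimal Hopfield-style network (an illustrative stand-in under simplifying assumptions, not the reading model itself): stored patterns, here orthogonal "semantic" codes built from Hadamard rows, become point attractors, and a degraded probe settles back onto the nearest stored pattern. When degradation is severe enough to cross a basin boundary, the network outputs a different stored pattern, an error shaped entirely by attractor structure:

```python
import numpy as np

# Minimal Hopfield-style attractor network (hypothetical toy example).
H2 = np.array([[1.0, 1.0], [1.0, -1.0]])
H16 = np.kron(np.kron(np.kron(H2, H2), H2), H2)  # 16 orthogonal +/-1 rows
patterns = H16[[1, 2, 4]]                        # three stored "meanings"

n = patterns.shape[1]
W = sum(np.outer(p, p) for p in patterns) / n    # Hebbian weight matrix
np.fill_diagonal(W, 0.0)

def settle(state, steps=10):
    state = state.copy()
    for _ in range(steps):                       # synchronous sign updates
        state = np.where(W @ state >= 0, 1.0, -1.0)
    return state

probe = patterns[0].copy()
probe[[3, 11]] *= -1                             # degrade two of 16 units
out = settle(probe)
print(np.array_equal(out, patterns[0]))          # True: basin recovered
```

With this geometry, "cleanup" of degraded input and the characteristic error pattern are two faces of the same mechanism: both are determined by which basin of attraction the input lands in.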
Sibley, Kello, Plaut, and Elman (2008) proposed the sequence encoder as a model that learns fixed‐width distributed representations of variable‐length sequences. In doing so, the sequence encoder overcomes problems that have restricted models of word reading and recognition to processing only monosyllabic words. Bowers and Davis (2009) recently claimed that the sequence encoder does not actually overcome the relevant problems, and hence it is not a useful component of large‐scale word‐reading models. In this reply, it is noted that the sequence encoder has facilitated the creation of large‐scale word‐reading models. The reasons for this success are explained and stand as counterarguments to claims made by Bowers and Davis. 
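The architectural idea behind a sequence encoder can be sketched as follows (a hypothetical toy, not the Sibley et al. implementation): a simple recurrent network reads a letter string one character at a time, and its final hidden state is a fixed-width vector regardless of the word's length. In the full model this code is learned; here the weights are random, which is enough to show the variable-length to fixed-width mapping:

```python
import numpy as np

# Hypothetical toy sequence encoder: an Elman-style recurrent update
# compresses any letter string into one fixed-width hidden vector.
rng = np.random.default_rng(0)
n_letters, n_hidden = 26, 32
W_in = rng.standard_normal((n_hidden, n_letters)) * 0.3
W_rec = rng.standard_normal((n_hidden, n_hidden)) * 0.3

def encode(word):
    h = np.zeros(n_hidden)
    for ch in word:                        # one letter per time step
        x = np.zeros(n_letters)
        x[ord(ch) - ord('a')] = 1.0        # one-hot letter input
        h = np.tanh(W_in @ x + W_rec @ h)  # Elman-style recurrent update
    return h                               # fixed-width code

codes = {w: encode(w) for w in ["cat", "cathedral", "dog"]}
print({v.shape for v in codes.values()})   # every word -> same width
```

Because every word, monosyllabic or not, arrives as a vector of the same width, downstream components of a word-reading model need not be redesigned around word length: the property at stake in the exchange with Bowers and Davis.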
The search for a universal theory of reading is misguided. Instead, theories should articulate general principles of neural computation that interact with language-specific learning environments to explain the full diversity of observed reading-related phenomena across the world's languages.