  • Sequence Encoders Enable Large‐Scale Lexical Modeling: Reply to Bowers and Davis (2009). Daragh E. Sibley, Christopher T. Kello, David C. Plaut & Jeffrey L. Elman - 2009 - Cognitive Science 33 (7):1187-1191.
    Sibley, Kello, Plaut, and Elman (2008) proposed the sequence encoder as a model that learns fixed‐width distributed representations of variable‐length sequences. In doing so, the sequence encoder overcomes problems that have restricted models of word reading and recognition to processing only monosyllabic words. Bowers and Davis (2009) recently claimed that the sequence encoder does not actually overcome the relevant problems, and hence it is not a useful component of large‐scale word‐reading models. In this reply, it is noted that the sequence (...)
  • Large‐Scale Modeling of Wordform Learning and Representation. Daragh E. Sibley, Christopher T. Kello, David C. Plaut & Jeffrey L. Elman - 2008 - Cognitive Science 32 (4):741-754.
    The forms of words as they appear in text and speech are central to theories and models of lexical processing. Nonetheless, current methods for simulating their learning and representation fail to approach the scale and heterogeneity of real wordform lexicons. A connectionist architecture termed the sequence encoder is used to learn nearly 75,000 wordform representations through exposure to strings of stress‐marked phonemes or letters. First, the mechanisms and efficacy of the sequence encoder are demonstrated and shown to overcome problems with traditional slot‐based (...)
  • A distributed, developmental model of word recognition and naming. Mark S. Seidenberg & James L. McClelland - 1989 - Psychological Review 96 (4):523-568.
  • Parallel Distributed Processing at 25: Further Explorations in the Microstructure of Cognition. Timothy T. Rogers & James L. McClelland - 2014 - Cognitive Science 38 (6):1024-1077.
    This paper introduces a special issue of Cognitive Science initiated on the 25th anniversary of the publication of Parallel Distributed Processing (PDP), a two-volume work that introduced the use of neural network models as vehicles for understanding cognition. The collection surveys the core commitments of the PDP framework, the key issues the framework has addressed, and the debates the framework has spawned, and presents viewpoints on the current status of these issues. The articles focus on both historical roots and contemporary (...)
  • Understanding normal and impaired word reading: Computational principles in quasi-regular domains. David C. Plaut, James L. McClelland, Mark S. Seidenberg & Karalyn Patterson - 1996 - Psychological Review 103 (1):56-115.
  • Stipulating versus discovering representations. David C. Plaut & James L. McClelland - 2000 - Behavioral and Brain Sciences 23 (4):489-491.
    Page's proposal to stipulate representations in which individual units correspond to meaningful entities is too unconstrained to support effective theorizing. An approach combining general computational principles with domain-specific assumptions, in which learning is used to discover representations that are effective in solving tasks, provides more insight into why cognitive and neural systems are organized the way they are.
  • On language and connectionism: Analysis of a parallel distributed processing model of language acquisition. Steven Pinker & Alan Prince - 1988 - Cognition 28 (1-2):73-193.
  • The primacy model: A new model of immediate serial recall. Michael P. A. Page & Dennis Norris - 1998 - Psychological Review 105 (4):761-781.
  • Letting structure emerge: connectionist and dynamical systems approaches to cognition. James L. McClelland, Matthew M. Botvinick, David C. Noelle, David C. Plaut, Timothy T. Rogers, Mark S. Seidenberg & Linda B. Smith - 2010 - Trends in Cognitive Sciences 14 (8):348-356.
  • Emergence in Cognitive Science. James L. McClelland - 2010 - Topics in Cognitive Science 2 (4):751-770.
    The study of human intelligence was once dominated by symbolic approaches, but over the last 30 years an alternative approach has arisen. Symbols and processes that operate on them are often seen today as approximate characterizations of the emergent consequences of sub- or nonsymbolic processes, and a wide range of constructs in cognitive science can be understood as emergents. These include representational constructs (units, structures, rules), architectural constructs (central executive, declarative memory), and developmental processes and outcomes (stages, sensitive periods, neurocognitive (...)
  • An interactive activation model of context effects in letter perception: I. An account of basic findings. James L. McClelland & David E. Rumelhart - 1981 - Psychological Review 88 (5):375-407.
  • Distributed representations of structure: A theory of analogical access and mapping. John E. Hummel & Keith J. Holyoak - 1997 - Psychological Review 104 (3):427-466.
  • Dynamic binding in a neural network for shape recognition. John E. Hummel & Irving Biederman - 1992 - Psychological Review 99 (3):480-517.
  • How does a brain build a cognitive code? Stephen Grossberg - 1980 - Psychological Review 87 (1):1-51.
  • Connectionism and cognitive architecture: A critical analysis. Jerry A. Fodor & Zenon W. Pylyshyn - 1988 - Cognition 28 (1-2):3-71.
    This paper explores the difference between Connectionist proposals for cognitive architecture and the sorts of models that have traditionally been assumed in cognitive science. We claim that the (...)
  • The spatial coding model of visual word identification. Colin J. Davis - 2010 - Psychological Review 117 (3):713-758.
  • DRC: A dual route cascaded model of visual word recognition and reading aloud. Max Coltheart, Kathleen Rastle, Conrad Perry, Robyn Langdon & Johannes Ziegler - 2001 - Psychological Review 108 (1):204-256.
  • Why do some neurons in cortex respond to information in a selective manner? Insights from artificial neural networks. Jeffrey S. Bowers, Ivan I. Vankov, Markus F. Damian & Colin J. Davis - 2016 - Cognition 148 (C):47-63.
  • Postscript: More problems with Botvinick and Plaut’s (2006) PDP model of short-term memory. Jeffrey S. Bowers, Markus F. Damian & Colin J. Davis - 2009 - Psychological Review 116 (4):995-997.
  • On the biological plausibility of grandmother cells: Implications for neural network theories in psychology and neuroscience. Jeffrey S. Bowers - 2009 - Psychological Review 116 (1):220-251.
    A fundamental claim associated with parallel distributed processing theories of cognition is that knowledge is coded in a distributed manner in mind and brain. This approach rejects the claim that knowledge is coded in a localist fashion, with words, objects, and simple concepts, that is, coded with their own dedicated representations. One of the putative advantages of this approach is that the theories are biologically plausible. Indeed, advocates of the PDP approach often highlight the close parallels between distributed representations learned (...)
  • Neural networks learn highly selective representations in order to overcome the superposition catastrophe. Jeffrey S. Bowers, Ivan I. Vankov, Markus F. Damian & Colin J. Davis - 2014 - Psychological Review 121 (2):248-261.
  • Learning Representations of Wordforms With Recurrent Networks: Comment on Sibley, Kello, Plaut, & Elman (2008). Jeffrey S. Bowers & Colin J. Davis - 2009 - Cognitive Science 33 (7):1183-1186.
    Sibley et al. (2008) report a recurrent neural network model designed to learn wordform representations suitable for written and spoken word identification. The authors claim that their sequence encoder network overcomes a key limitation associated with models that code letters by position (e.g., CAT might be coded as C‐in‐position‐1, A‐in‐position‐2, T‐in‐position‐3). The problem with coding letters by position (slot‐coding) is that it is difficult to generalize knowledge across positions; for example, the overlap between CAT and TOMCAT is lost. Although we (...)
  • A fundamental limitation of the conjunctive codes learned in PDP models of cognition: Comment on Botvinick and Plaut (2006). Jeffrey S. Bowers, Markus F. Damian & Colin J. Davis - 2009 - Psychological Review 116 (4):986-995.
  • The Computational and Neural Basis of Cognitive Control: Charted Territory and New Frontiers. Matthew M. Botvinick - 2014 - Cognitive Science 38 (6):1249-1285.
    Cognitive control has long been one of the most active areas of computational modeling work in cognitive science. The focus on computational models as a medium for specifying and developing theory predates the PDP books, and cognitive control was not one of the areas on which they focused. However, the framework they provided has injected work on cognitive control with new energy and new ideas. On the occasion of the books' anniversary, we review computational modeling in the study of cognitive (...)
  • Short-term memory for serial order: A recurrent neural network model. Matthew M. Botvinick & David C. Plaut - 2006 - Psychological Review 113 (2):201-233.