  • Similarity of referents influences the learning of phonological word forms: Evidence from concurrent word learning. Libo Zhao, Stephanie Packard, Bob McMurray & Prahlad Gupta - 2019 - Cognition 190 (C):42-60.
  • Learning Orthographic Structure With Sequential Generative Neural Networks. Alberto Testolin, Ivilin Stoianov, Alessandro Sperduti & Marco Zorzi - 2016 - Cognitive Science 40 (3):579-606.
    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine, a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual (...)
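The abstract above builds on the restricted Boltzmann machine as an unsupervised generative learner. As a rough illustration of that building block only, here is a minimal NumPy sketch of a binary RBM trained with one step of contrastive divergence (CD-1); it does not implement the sequential/recurrent extension the paper investigates, and all names and sizes are illustrative assumptions.

```python
# Minimal sketch of a binary RBM with one-step contrastive divergence (CD-1).
# Illustrative only; not the sequential model described in the paper above.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class RBM:
    def __init__(self, n_visible, n_hidden, lr=0.05):
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)
        self.b_h = np.zeros(n_hidden)
        self.lr = lr

    def cd1(self, v0):
        # Positive phase: hidden activations (and samples) given the data.
        p_h0 = sigmoid(v0 @ self.W + self.b_h)
        h0 = (rng.random(p_h0.shape) < p_h0).astype(float)
        # Negative phase: one reconstruction step.
        p_v1 = sigmoid(h0 @ self.W.T + self.b_v)
        p_h1 = sigmoid(p_v1 @ self.W + self.b_h)
        # Approximate gradient and parameter update.
        self.W += self.lr * (v0.T @ p_h0 - p_v1.T @ p_h1) / len(v0)
        self.b_v += self.lr * (v0 - p_v1).mean(axis=0)
        self.b_h += self.lr * (p_h0 - p_h1).mean(axis=0)

# Usage: toy binary vectors standing in for sensory input patterns.
data = rng.integers(0, 2, size=(100, 20)).astype(float)
rbm = RBM(n_visible=20, n_hidden=10)
for _ in range(50):
    rbm.cd1(data)
```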
  • Sequence Encoders Enable Large‐Scale Lexical Modeling: Reply to Bowers and Davis (2009). Daragh E. Sibley, Christopher T. Kello, David C. Plaut & Jeffrey L. Elman - 2009 - Cognitive Science 33 (7):1187-1191.
    Sibley, Kello, Plaut, and Elman (2008) proposed the sequence encoder as a model that learns fixed‐width distributed representations of variable‐length sequences. In doing so, the sequence encoder overcomes problems that have restricted models of word reading and recognition to processing only monosyllabic words. Bowers and Davis (2009) recently claimed that the sequence encoder does not actually overcome the relevant problems, and hence it is not a useful component of large‐scale word‐reading models. In this reply, it is noted that the sequence (...)
  • Large‐Scale Modeling of Wordform Learning and Representation. Daragh E. Sibley, Christopher T. Kello, David C. Plaut & Jeffrey L. Elman - 2008 - Cognitive Science 32 (4):741-754.
    The forms of words as they appear in text and speech are central to theories and models of lexical processing. Nonetheless, current methods for simulating their learning and representation fail to approach the scale and heterogeneity of real wordform lexicons. A connectionist architecture termed the sequence encoder is used to learn nearly 75,000 wordform representations through exposure to strings of stress‐marked phonemes or letters. First, the mechanisms and efficacy of the sequence encoder are demonstrated and shown to overcome problems with traditional slot‐based (...)
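The sequence encoder described above learns fixed-width distributed representations of variable-length wordforms. A minimal sketch in that spirit, assuming PyTorch, is a recurrent encoder-decoder autoencoder whose final hidden state serves as the fixed-width wordform code; the architecture and sizes here are illustrative assumptions, not the model reported in the paper.

```python
# Minimal sketch: variable-length letter strings -> fixed-width codes.
# Illustrative assumption of an encoder-decoder GRU autoencoder; not the
# sequence encoder architecture of Sibley et al. (2008).
import torch
import torch.nn as nn

class SequenceAutoencoder(nn.Module):
    def __init__(self, n_symbols, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(n_symbols, 32)
        self.encoder = nn.GRU(32, hidden, batch_first=True)
        self.decoder = nn.GRU(32, hidden, batch_first=True)
        self.readout = nn.Linear(hidden, n_symbols)

    def forward(self, seq):                      # seq: (batch, length) symbol ids
        emb = self.embed(seq)
        _, h = self.encoder(emb)                 # h: fixed-width code, (1, batch, hidden)
        out, _ = self.decoder(emb, h)            # teacher-forced reconstruction
        return self.readout(out), h.squeeze(0)   # logits and the wordform code

# Usage: train with cross-entropy to reconstruct the input string; the
# fixed-width vector then serves as the wordform representation.
model = SequenceAutoencoder(n_symbols=27)
letters = torch.tensor([[3, 1, 20]])             # e.g. a three-letter word as indices
logits, code = model(letters)
print(code.shape)                                # torch.Size([1, 128])
```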
  • Parallel Distributed Processing at 25: Further Explorations in the Microstructure of Cognition. Timothy T. Rogers & James L. McClelland - 2014 - Cognitive Science 38 (6):1024-1077.
    This paper introduces a special issue of Cognitive Science initiated on the 25th anniversary of the publication of Parallel Distributed Processing (PDP), a two-volume work that introduced the use of neural network models as vehicles for understanding cognition. The collection surveys the core commitments of the PDP framework, the key issues the framework has addressed, and the debates the framework has spawned, and presents viewpoints on the current status of these issues. The articles focus on both historical roots and contemporary (...)
  • The Place of Modeling in Cognitive Science. James L. McClelland - 2009 - Topics in Cognitive Science 1 (1):11-38.
    I consider the role of cognitive modeling in cognitive science. Modeling, and the computers that enable it, are central to the field, but the role of modeling is often misunderstood. Models are not intended to capture fully the processes they attempt to elucidate. Rather, they are explorations of ideas about the nature of cognitive processes. In these explorations, simplification is essential—through simplification, the implications of the central ideas become more transparent. This is not to say that simplification has no downsides; (...)
  • Protein Analysis Meets Visual Word Recognition: A Case for String Kernels in the Brain. Thomas Hannagan & Jonathan Grainger - 2012 - Cognitive Science 36 (4):575-606.
    It has been recently argued that some machine learning techniques known as Kernel methods could be relevant for capturing cognitive and neural mechanisms (Jäkel, Schölkopf, & Wichmann, 2009). We point out that “String kernels,” initially designed for protein function prediction and spam detection, are virtually identical to one contending proposal for how the brain encodes orthographic information during reading. We suggest some reasons for this connection and we derive new ideas for visual word recognition that are successfully put to the (...)
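To make concrete what a string kernel over letter strings looks like, here is a minimal sketch using ordered letter pairs ("open bigrams"), the style of orthographic code the abstract above connects to reading. The particular bigram window and normalization are illustrative assumptions, not the authors' formulation.

```python
# Minimal sketch of a string kernel over ordered letter pairs (open bigrams).
# The window size and normalization are illustrative assumptions.
from itertools import combinations
from math import sqrt
from collections import Counter

def open_bigrams(word, max_gap=3):
    """Ordered letter pairs whose positions differ by at most max_gap."""
    pairs = Counter()
    for i, j in combinations(range(len(word)), 2):
        if j - i <= max_gap:
            pairs[word[i] + word[j]] += 1
    return pairs

def kernel(a, b):
    """Normalized inner product of open-bigram count vectors."""
    pa, pb = open_bigrams(a), open_bigrams(b)
    dot = sum(pa[k] * pb[k] for k in pa)
    norm = sqrt(sum(v * v for v in pa.values())) * sqrt(sum(v * v for v in pb.values()))
    return dot / norm if norm else 0.0

print(kernel("cat", "tomcat"))   # > 0: shared ordered pairs like "ca", "at", "ct" survive
print(kernel("cat", "dog"))      # 0.0: no shared ordered letter pairs
```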
  • Learning Representations of Wordforms With Recurrent Networks: Comment on Sibley, Kello, Plaut, & Elman (2008). Jeffrey S. Bowers & Colin J. Davis - 2009 - Cognitive Science 33 (7):1183-1186.
    Sibley et al. (2008) report a recurrent neural network model designed to learn wordform representations suitable for written and spoken word identification. The authors claim that their sequence encoder network overcomes a key limitation associated with models that code letters by position (e.g., CAT might be coded as C‐in‐position‐1, A‐in‐position‐2, T‐in‐position‐3). The problem with coding letters by position (slot‐coding) is that it is difficult to generalize knowledge across positions; for example, the overlap between CAT and TOMCAT is lost. Although we (...)
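The slot-coding problem described in the abstract above is easy to see in code. A minimal sketch (the feature naming and overlap measure are illustrative assumptions) shows that position-specific letter features give CAT and TOMCAT no overlap at all.

```python
# Minimal sketch of slot-based letter coding and the loss of overlap
# between CAT and TOMCAT noted in the abstract above. Illustrative only.

def slot_code(word):
    """Encode a word as a set of letter-in-position features."""
    return {f"{letter}-in-position-{i + 1}" for i, letter in enumerate(word)}

def overlap(a, b):
    """Proportion of shared features relative to the shorter word."""
    fa, fb = slot_code(a), slot_code(b)
    return len(fa & fb) / min(len(fa), len(fb))

print(slot_code("CAT"))          # features like 'C-in-position-1', 'A-in-position-2', ...
print(overlap("CAT", "TOMCAT"))  # 0.0 -- CAT's letters occupy different slots in TOMCAT
```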