21 found
  1.  Towards a universal model of reading. Ram Frost, Christina Behme, Madeleine El Beveridge, Thomas H. Bak, Jeffrey S. Bowers, Max Coltheart, Stephen Crain, Colin J. Davis, S. Hélène Deacon & Laurie Beth Feldman - 2012 - Behavioral and Brain Sciences 35 (5):263.
    In the last decade, reading research has seen a paradigmatic shift. A new wave of computational models of orthographic processing that offer various forms of noisy position or context-sensitive coding have revolutionized the field of visual word recognition. The influx of such models stems mainly from consistent findings, coming mostly from European languages, regarding an apparent insensitivity of skilled readers to letter order. Underlying the current revolution is the theoretical assumption that the insensitivity of readers to letter order reflects the (...)
    25 citations
  2.  On the biological plausibility of grandmother cells: Implications for neural network theories in psychology and neuroscience. Jeffrey S. Bowers - 2009 - Psychological Review 116 (1):220-251.
    A fundamental claim associated with parallel distributed processing theories of cognition is that knowledge is coded in a distributed manner in mind and brain. This approach rejects the claim that knowledge is coded in a localist fashion, that is, with words, objects, and simple concepts coded with their own dedicated representations. One of the putative advantages of this approach is that the theories are biologically plausible. Indeed, advocates of the PDP approach often highlight the close parallels between distributed representations learned (...)
    26 citations
  3.  Deep problems with neural network models of human vision. Jeffrey S. Bowers, Gaurav Malhotra, Marin Dujmović, Milton Llera Montero, Christian Tsvetkov, Valerio Biscione, Guillermo Puebla, Federico Adolfi, John E. Hummel, Rachel F. Heaton, Benjamin D. Evans, Jeffrey Mitchell & Ryan Blything - 2023 - Behavioral and Brain Sciences 46:e385.
    Deep neural networks (DNNs) have had extraordinary successes in classifying photographic images of objects and are often described as the best models of biological vision. This conclusion is largely based on three sets of findings: (1) DNNs are more accurate than any other model in classifying images taken from various datasets, (2) DNNs do the best job in predicting the pattern of human errors in classifying objects taken from various behavioral datasets, and (3) DNNs do the best job in predicting (...)
    4 citations
  4.  The practical and principled problems with educational neuroscience. Jeffrey S. Bowers - 2016 - Psychological Review 123 (5):600-612.
  5.  Interfering neighbours: The impact of novel word learning on the identification of visually similar words. Jeffrey S. Bowers, Colin J. Davis & Derek A. Hanley - 2005 - Cognition 97 (3):B45-B54.
    13 citations
  6.  Why do some neurons in cortex respond to information in a selective manner? Insights from artificial neural networks. Jeffrey S. Bowers, Ivan I. Vankov, Markus F. Damian & Colin J. Davis - 2016 - Cognition 148 (C):47-63.
    4 citations
  7.  Neural networks learn highly selective representations in order to overcome the superposition catastrophe. Jeffrey S. Bowers, Ivan I. Vankov, Markus F. Damian & Colin J. Davis - 2014 - Psychological Review 121 (2):248-261.
  8.  Priming is not all bias: Commentary on Ratcliff and McKoon (1997). Jeffrey S. Bowers - 1999 - Psychological Review 106 (3):582-596.
    6 citations
  9.  A fundamental limitation of the conjunctive codes learned in PDP models of cognition: Comment on Botvinick and Plaut (2006). Jeffrey S. Bowers, Markus F. Damian & Colin J. Davis - 2009 - Psychological Review 116 (4):986-995.
    4 citations
  10.  Learning Representations of Wordforms With Recurrent Networks: Comment on Sibley, Kello, Plaut, & Elman (2008). Jeffrey S. Bowers & Colin J. Davis - 2009 - Cognitive Science 33 (7):1183-1186.
    Sibley et al. (2008) report a recurrent neural network model designed to learn wordform representations suitable for written and spoken word identification. The authors claim that their sequence encoder network overcomes a key limitation associated with models that code letters by position (e.g., CAT might be coded as C‐in‐position‐1, A‐in‐position‐2, T‐in‐position‐3). The problem with coding letters by position (slot‐coding) is that it is difficult to generalize knowledge across positions; for example, the overlap between CAT and TOMCAT is lost. Although we (...)
    4 citations
  11.  Age-of-acquisition effects in visual word recognition: evidence from expert vocabularies. Hans Stadthagen-Gonzalez, Jeffrey S. Bowers & Markus F. Damian - 2004 - Cognition 93 (1):B11-B26.
  12.  Clarifying status of DNNs as models of human vision. Jeffrey S. Bowers, Gaurav Malhotra, Marin Dujmović, Milton L. Montero, Christian Tsvetkov, Valerio Biscione, Guillermo Puebla, Federico Adolfi, John E. Hummel, Rachel F. Heaton, Benjamin D. Evans, Jeffrey Mitchell & Ryan Blything - 2023 - Behavioral and Brain Sciences 46:e415.
    On several key issues we agree with the commentators. Perhaps most importantly, everyone seems to agree that psychology has an important role to play in building better models of human vision, and (most) everyone agrees (including us) that deep neural networks (DNNs) will play an important role in modelling human vision going forward. But there are also disagreements about what models are for, how DNN–human correspondences should be evaluated, the value of alternative modelling approaches, and the impact of marketing hype in (...)
  13.  Further arguments in support of localist coding in connectionist networks. Jeffrey S. Bowers - 2000 - Behavioral and Brain Sciences 23 (4):471-471.
    Two additional sources of evidence are provided in support of localist coding within connectionist networks. First, only models with localist codes can currently represent multiple pieces of information simultaneously or represent order among a set of items on-line. Second, recent priming data appear problematic for theories that rely on distributed representations. However, a faulty argument advanced by Page is also pointed out.
  14.  Grossberg and colleagues solved the hyperonym problem over a decade ago. Jeffrey S. Bowers - 1999 - Behavioral and Brain Sciences 22 (1):38-39.
    Levelt et al. describe a model of speech production in which lemma access is achieved via input from nondecompositional conceptual representations. They claim that existing decompositional theories are unable to account for lexical retrieval because of the so-called hyperonym problem. However, existing decompositional models have solved a formally equivalent problem.
    1 citation
  15.  More varieties of Bayesian theories, but no enlightenment. Jeffrey S. Bowers & Colin J. Davis - 2011 - Behavioral and Brain Sciences 34 (4):193-194.
    We argue that Bayesian models are best categorized as methodological or theoretical. That is, models are used as tools to constrain theories, with no commitment to the processes that mediate cognition, or models are intended to approximate the underlying algorithmic solutions. We argue that both approaches are flawed, and that the Enlightened Bayesian approach is unlikely to help.
  16.  Position-invariant letter identification is a key component of any universal model of reading. Jeffrey S. Bowers - 2012 - Behavioral and Brain Sciences 35 (5):281-282.
    A universal property of visual word identification is position-invariant letter identification, such that the letter is coded in the same way in CAT and ACT. This should provide a fundamental constraint on theories of word identification, and, indeed, it inspired some of the theories that Frost has criticized. I show how the spatial coding scheme of Colin Davis can, in principle, account for contrasting transposed letter priming effects, and at the same time, position-invariant letter identification.
  17.  Postscript: More problems with Botvinick and Plaut’s (2006) PDP model of short-term memory. Jeffrey S. Bowers, Markus F. Damian & Colin J. Davis - 2009 - Psychological Review 116 (4):995-997.
  18.  Postscript: Some final thoughts on grandmother cells, distributed representations, and PDP models of cognition. Jeffrey Bowers - 2010 - Psychological Review 117 (1):306-308.
  19.  Researchers Keep Rejecting Grandmother Cells after Running the Wrong Experiments: The Issue Is How Familiar Stimuli Are Identified. Jeffrey S. Bowers, Nicolas D. Martin & Ella M. Gale - 2019 - Bioessays 41 (8):1800248.
    There is widespread agreement in neuroscience and psychology that the visual system identifies objects and faces based on a pattern of activation over many neurons, each neuron being involved in representing many different categories. The hypothesis that the visual system includes finely tuned neurons for specific objects or faces for the sake of identification, so‐called “grandmother cells”, is widely rejected. Here it is argued that the rejection of grandmother cells is premature. Grandmother cells constitute a hypothesis of how familiar visual (...)
  20.  The visual categories for letters and words reside outside any informationally encapsulated perceptual system. Jeffrey S. Bowers - 1999 - Behavioral and Brain Sciences 22 (3):368-369.
    According to Pylyshyn, the early visual system is able to categorize perceptual inputs into shape classes based on visual similarity criteria; it is also suggested that written words may be categorized within early vision. This speculation is contradicted by the fact that visually unrelated exemplars of a given letter (e.g., a/A) or word (e.g., read/READ) map onto common visual categories.
  21.  Psychology, not educational neuroscience, is the way forward for improving educational outcomes for all children: Reply to Gabrieli (2016) and Howard-Jones et al. (2016). [REVIEW] Jeffrey S. Bowers - 2016 - Psychological Review 123 (5):628-635.
    2 citations