  • Testable or bust: theoretical lessons for predictive processing. Marcin Miłkowski & Piotr Litwin - 2022 - Synthese 200 (6):1-18.
    The predictive processing (PP) account of action, cognition, and perception is one of the most influential approaches to unifying research in cognitive science. However, its promises of grand unification will remain unfulfilled unless the account becomes theoretically robust. In this paper, we focus on the empirical commitments of PP, since they are necessary both for its theoretical status to be established and for explanations of individual phenomena to be falsifiable. First, we argue that PP is a varied research tradition, which may employ (...)
  • A predictive coding model of the N400. Samer Nour Eddine, Trevor Brothers, Lin Wang, Michael Spratling & Gina R. Kuperberg - 2024 - Cognition 246 (C):105755.
  • The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence. David Watson - 2019 - Minds and Machines 29 (3):417-440.
    Artificial intelligence has historically been conceptualized in anthropomorphic terms. Some algorithms deploy biomimetic designs in a deliberate attempt to effect a sort of digital isomorphism of the human brain. Others leverage more general learning strategies that happen to coincide with popular theories of cognitive science and social epistemology. In this paper, I challenge the anthropomorphic credentials of the neural network algorithm, whose similarities to human cognition I argue are vastly overstated and narrowly construed. I submit that three alternative supervised learning (...)
  • Monotone Quantifiers Emerge via Iterated Learning. Fausto Carcassi, Shane Steinert-Threlkeld & Jakub Szymanik - 2021 - Cognitive Science 45 (8):e13027.
    Natural languages exhibit many semantic universals, that is, properties of meaning shared across all languages. In this paper, we develop an explanation of one very prominent semantic universal, the monotonicity universal. While existing work has shown that quantifiers satisfying the monotonicity universal are easier to learn, we provide a more complete explanation by considering the emergence of quantifiers from the perspective of cultural evolution. In particular, we show that quantifiers satisfying the monotonicity universal evolve reliably in an iterated learning paradigm (...)