Citations of:

Finding Structure in Time

Cognitive Science 14 (2):179-211 (1990)

  • The Story Gestalt: A Model Of Knowledge‐Intensive Processes in Text Comprehension.Mark F. St. John - 1992 - Cognitive Science 16 (2):271-306.
  • Perceptual Inference Through Global Lexical Similarity.Brendan T. Johns & Michael N. Jones - 2012 - Topics in Cognitive Science 4 (1):103-120.
    The literature contains a disconnect between accounts of how humans learn lexical semantic representations for words. Theories generally propose that lexical semantics are learned either through perceptual experience or through exposure to regularities in language. We propose here a model to integrate these two information sources. Specifically, the model uses the global structure of memory to exploit the redundancy between language and perception in order to generate inferred perceptual representations for words with which the model has no perceptual experience. We (...)
  • Genes, development, and the “innate” structure of the mind.Timothy D. Johnston - 1994 - Behavioral and Brain Sciences 17 (4):721-722.
  • Letting Structure Emerge: Connectionist and Dynamical Systems Approaches to Cognition.James L. McClelland, Matthew M. Botvinick, David C. Noelle, David C. Plaut, Timothy T. Rogers, Mark S. Seidenberg & Linda B. Smith - 2010 - Trends in Cognitive Sciences 14 (8):348.
  • Are there interactive processes in speech perception?James L. McClelland, Daniel Mirman & Lori L. Holt - 2006 - Trends in Cognitive Sciences 10 (8):363.
  • The flow of narrative in the mind unmoored: An account of narrative processing.Elspeth Jajdelska - 2019 - Philosophical Psychology 32 (4):560-583.
    Verbal narratives provide incomplete information and can be very long, yet readers and hearers often effortlessly fill in the gaps and make connections across long stretches of text, sometimes even finding this immersive. How is this done? In the last few decades, event-indexing situation modeling and complementary accounts of narrative emotion have suggested answers. Despite this progress, comparisons between real-life perception and narrative experience might underplay the way narrative processing modifies our world model, as well as the role of the (...)
  • Alignment as a consequence of expectation adaptation: Syntactic priming is affected by the prime’s prediction error given both prior and recent experience.T. Florian Jaeger & Neal E. Snider - 2013 - Cognition 127 (1):57-83.
  • What is classical conditioning?W. J. Jacobs - 1989 - Behavioral and Brain Sciences 12 (1):146-146.
  • Statistically Induced Chunking Recall: A Memory‐Based Approach to Statistical Learning.Erin S. Isbilen, Stewart M. McCauley, Evan Kidd & Morten H. Christiansen - 2020 - Cognitive Science 44 (7):e12848.
    The computations involved in statistical learning have long been debated. Here, we build on work suggesting that a basic memory process, chunking, may account for the processing of statistical regularities into larger units. Drawing on methods from the memory literature, we developed a novel paradigm to test statistical learning by leveraging a robust phenomenon observed in serial recall tasks: that short‐term memory is fundamentally shaped by long‐term distributional learning. In the statistically induced chunking recall (SICR) task, participants are exposed to (...)
  • An interference account of the missing-VP effect.Jana Häussler & Markus Bader - 2015 - Frontiers in Psychology 6.
  • Structured Semantic Knowledge Can Emerge Automatically from Predicting Word Sequences in Child-Directed Speech.Philip A. Huebner & Jon A. Willits - 2018 - Frontiers in Psychology 9.
  • A parallel architecture perspective on pre-activation and prediction in language processing.Falk Huettig, Jenny Audring & Ray Jackendoff - 2022 - Cognition 224:105050.
  • The Logical Problem of Language Acquisition: A Probabilistic Perspective.Anne S. Hsu & Nick Chater - 2010 - Cognitive Science 34 (6):972-1016.
    Natural language is full of patterns that appear to fit with general linguistic rules but are ungrammatical. There has been much debate over how children acquire these “linguistic restrictions,” and whether innate language knowledge is needed. Recently, it has been shown that restrictions in language can be learned asymptotically via probabilistic inference using the minimum description length (MDL) principle. Here, we extend the MDL approach to give a simple and practical methodology for estimating how much linguistic data are required to (...)
  • Language Learning From Positive Evidence, Reconsidered: A Simplicity-Based Approach.Anne S. Hsu, Nick Chater & Paul Vitányi - 2013 - Topics in Cognitive Science 5 (1):35-55.
    Children learn their native language by exposure to their linguistic and communicative environment, but apparently without requiring that their mistakes be corrected. Such learning from “positive evidence” has been viewed as raising “logical” problems for language acquisition. In particular, without correction, how is the child to recover from conjecturing an over-general grammar, which will be consistent with any sentence that the child hears? There have been many proposals concerning how this “logical problem” can be dissolved. In this study, we review (...)
  • Development, learning, and consciousness.Mark L. Howe & F. Michael Rabinowitz - 1994 - Behavioral and Brain Sciences 17 (3):407-407.
  • Constructing Semantic Representations From a Gradually Changing Representation of Temporal Context.Marc W. Howard, Karthik H. Shankar & Udaya K. K. Jagadisan - 2011 - Topics in Cognitive Science 3 (1):48-73.
    Computational models of semantic memory exploit information about co-occurrences of words in naturally occurring text to extract information about the meaning of the words that are present in the language. Such models implicitly specify a representation of temporal context. Depending on the model, words are said to have occurred in the same context if they are presented within a moving window, within the same sentence, or within the same document. The temporal context model (TCM), which specifies a particular definition of (...)
  • Preparatory response hypotheses: A muddle of causal and functional analyses.Karen L. Hollis - 1989 - Behavioral and Brain Sciences 12 (1):145-146.
  • Implicit assumptions about implicit learning.Keith J. Holyoak & Merideth Gattis - 1994 - Behavioral and Brain Sciences 17 (3):406-407.
  • Concepts, control, and context: A connectionist account of normal and disordered semantic cognition.Paul Hoffman, James L. McClelland & Matthew A. Lambon Ralph - 2018 - Psychological Review 125 (3):293-328.
  • Where Do Features Come From?Geoffrey Hinton - 2014 - Cognitive Science 38 (6):1078-1101.
    It is possible to learn multiple layers of non-linear features by backpropagating error derivatives through a feedforward neural network. This is a very effective learning procedure when there is a huge amount of labeled training data, but for many learning tasks very few labeled examples are available. In an effort to overcome the need for labeled data, several different generative models were developed that learned interesting features by modeling the higher order statistical structure of a set of input vectors. One (...)
  • Is stiffness the mainspring of posture and movement?Z. Hasan - 1992 - Behavioral and Brain Sciences 15 (4):756-758.
  • Spoken word recognition without a TRACE.Thomas Hannagan, James S. Magnuson & Jonathan Grainger - 2013 - Frontiers in Psychology 4.
  • Representational redescription, memory, and connectionism.P. J. Hampson - 1994 - Behavioral and Brain Sciences 17 (4):721-721.
  • Introduction to the Issue on Computational Models of Natural Language.John Hale & David Reitter - 2013 - Topics in Cognitive Science 5 (3):388-391.
  • The 'explicit-implicit' distinction.Robert F. Hadley - 1995 - Minds and Machines 5 (2):219-242.
    Much of traditional AI exemplifies the explicit representation paradigm, and during the late 1980s a heated debate arose between the classical and connectionist camps as to whether beliefs and rules receive an explicit or implicit representation in human cognition. In a recent paper, Kirsh (1990) questions the coherence of the fundamental distinction underlying this debate. He argues that our basic intuitions concerning explicit and implicit representations are not only confused but inconsistent. Ultimately, Kirsh proposes a new formulation of the distinction, (...)
  • Systematicity in connectionist language learning.Robert F. Hadley - 1994 - Mind and Language 9 (3):247-272.
  • Systematicity revisited.Robert F. Hadley - 1994 - Mind and Language 9 (4):431-444.
  • Cognition, systematicity, and nomic necessity.Robert F. Hadley - 1997 - Mind and Language 12 (2):137-153.
    In their provocative 1988 paper, Fodor and Pylyshyn issued a formidable challenge to connectionists, i.e. to provide a non‐classical explanation of the empirical phenomenon of systematicity in cognitive agents. Since the appearance of F&P's challenge, a number of connectionist systems have emerged which prima facie meet this challenge. However, Fodor and McLaughlin (1990) advance an argument, based upon a general principle of nomological necessity, to show that one of these systems (Smolensky's) could not satisfy the Fodor‐Pylyshyn challenge. Yet, if Fodor (...)
  • Direct Associations or Internal Transformations? Exploring the Mechanisms Underlying Sequential Learning Behavior.Todd M. Gureckis & Bradley C. Love - 2010 - Cognitive Science 34 (1):10-50.
  • Theoretical and computational analysis of skill learning, repetition priming, and procedural memory.Prahlad Gupta & Neal J. Cohen - 2002 - Psychological Review 109 (2):401-448.
  • Language Usage and Second Language Morphosyntax: Effects of Availability, Reliability, and Formulaicity.Rundi Guo & Nick C. Ellis - 2021 - Frontiers in Psychology 12.
    A large body of psycholinguistic research demonstrates that both language processing and language acquisition are sensitive to the distributions of linguistic constructions in usage. Here we investigate how statistical distributions at different linguistic levels – morphological and lexical, and phrasal – contribute to the ease with which morphosyntax is processed and produced by second language learners. We analyze Chinese ESL learners’ knowledge of four English inflectional morphemes: -ed, -ing, and third-person -s on verbs, and plural -s on nouns. In Elicited (...)
  • Particularism, Analogy, and Moral Cognition.Marcello Guarini - 2010 - Minds and Machines 20 (3):385-422.
    ‘Particularism’ and ‘generalism’ refer to families of positions in the philosophy of moral reasoning, with the former playing down the importance of principles, rules or standards, and the latter stressing their importance. Part of the debate has taken an empirical turn, and this turn has implications for AI research and the philosophy of cognitive modeling. In this paper, Jonathan Dancy’s approach to particularism (arguably one of the best known and most radical approaches) is questioned both on logical and empirical grounds. (...)
  • Beyond connectionist versus classical Al: A control theoretic perspective on development and cognitive science.Rick Grush - 1994 - Behavioral and Brain Sciences 17 (4):720-720.
  • Semantic learning in autonomously active recurrent neural networks.Claudius Gros & Gregor Kaczor - 2010 - Logic Journal of the IGPL 18 (5):686-704.
    The human brain is autonomously active, being characterized by a self-sustained neural activity which would be present even in the absence of external sensory stimuli. Here we study the interrelation between the self-sustained activity in autonomously active recurrent neural nets and external sensory stimuli. There is no a priori semantical relation between the influx of external stimuli and the patterns generated internally by the autonomous and ongoing brain dynamics. The question then arises when and how are semantic correlations between internal (...)
  • Classical conditioning: The role of interdisciplinary theory.Stephen Grossberg - 1989 - Behavioral and Brain Sciences 12 (1):144-145.
  • Using Category Structures to Test Iterated Learning as a Method for Identifying Inductive Biases.Thomas L. Griffiths, Brian R. Christian & Michael L. Kalish - 2008 - Cognitive Science 32 (1):68-107.
    Many of the problems studied in cognitive science are inductive problems, requiring people to evaluate hypotheses in the light of data. The key to solving these problems successfully is having the right inductive biases—assumptions about the world that make it possible to choose between hypotheses that are equally consistent with the observed data. This article explores a novel experimental method for identifying the biases that guide human inductive inferences. The idea behind this method is simple: This article uses the responses (...)
  • Dissociation, self-attribution, and redescription.George Graham - 1994 - Behavioral and Brain Sciences 17 (4):719-719.
  • Learning to divide the labor: an account of deficits in light and heavy verb production.Jean K. Gordon & Gary S. Dell - 2003 - Cognitive Science 27 (1):1-40.
    Theories of sentence production that involve a convergence of activation from conceptual‐semantic and syntactic‐sequential units inspired a connectionist model that was trained to produce simple sentences. The model used a learning algorithm that resulted in a sharing of responsibility (or “division of labor”) between syntactic and semantic inputs for lexical activation according to their predictive power. Semantically rich, or “heavy”, verbs in the model came to rely on semantic cues more than on syntactic cues, whereas semantically impoverished, or “light”, verbs (...)
  • Influence of Perceptual Saliency Hierarchy on Learning of Language Structures: An Artificial Language Learning Experiment.Tao Gong, Yau W. Lam & Lan Shuai - 2016 - Frontiers in Psychology 7.
  • Artificial grammar learning by 1-year-olds leads to specific and abstract knowledge.Rebecca L. Gomez & LouAnn Gerken - 1999 - Cognition 70 (2):109-135.
  • Do you have to be right to redescribe?Susan Goldin-Meadow & Martha Wagner Alibali - 1994 - Behavioral and Brain Sciences 17 (4):718-719.
  • Are rules and instances subserved by separate systems?Robert L. Goldstone & John K. Kruschke - 1994 - Behavioral and Brain Sciences 17 (3):405-405.
  • A Bayesian framework for word segmentation: Exploring the effects of context.Sharon Goldwater, Thomas L. Griffiths & Mark Johnson - 2009 - Cognition 112 (1):21-54.
  • Smolensky's proper treatment of connectionism: Having it both ways.Vinod Goel - 1990 - Behavioral and Brain Sciences 13 (2):400-401.
  • Lexical and Sublexical Units in Speech Perception.Ibrahima Giroux & Arnaud Rey - 2009 - Cognitive Science 33 (2):260-272.
    Saffran, Newport, and Aslin (1996a) found that human infants are sensitive to statistical regularities corresponding to lexical units when hearing an artificial spoken language. Two sorts of segmentation strategies have been proposed to account for this early word‐segmentation ability: bracketing strategies, in which infants are assumed to insert boundaries into continuous speech, and clustering strategies, in which infants are assumed to group certain speech sequences together into units (Swingley, 2005). In the present study, we test the predictions of two computational (...)
  • Linguistic complexity: locality of syntactic dependencies.Edward Gibson - 1998 - Cognition 68 (1):1-76.
  • Developmental motifs reveal complex structure in cell lineages.Nicholas Geard, Seth Bullock, Rolf Lohaus, Ricardo B. R. Azevedo & Janet Wiles - 2011 - Complexity 16 (4):48-57.
    Many natural and technological systems are complex, with organizational structures that exhibit characteristic patterns but defy concise description. One effective approach to analyzing such systems is in terms of repeated topological motifs. Here, we extend the motif concept to characterize the dynamic behavior of complex systems by introducing developmental motifs, which capture patterns of system growth. As a proof of concept, we use developmental motifs to analyze the developmental cell lineage of the nematode Caenorhabditis elegans, revealing a new perspective on (...)
  • Why Can Computers Understand Natural Language?Juan Luis Gastaldi - 2020 - Philosophy and Technology 34 (1):149-214.
    The present paper intends to draw the conception of language implied in the technique of word embeddings that supported the recent development of deep neural network models in computational linguistics. After a preliminary presentation of the basic functioning of elementary artificial neural networks, we introduce the motivations and capabilities of word embeddings through one of its pioneering models, word2vec. To assess the remarkable results of the latter, we inspect the nature of its underlying mechanisms, which have been characterized as the (...)