Citations of:

Finding Structure in Time

Jeffrey L. Elman - Cognitive Science 14 (2):179-211 (1990)
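The cited paper introduces the simple recurrent network (SRN), in which a copy of the previous hidden state is fed back as "context" input at each time step, letting the network's predictions depend on temporal structure. A minimal forward-pass sketch in Python with NumPy follows; the layer sizes and random weights are illustrative assumptions, not values from the paper.

```python
# Minimal forward pass of a simple recurrent network (SRN).
# All dimensions and weights are hypothetical, chosen for illustration.
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 4, 8, 4           # assumed toy sizes
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))      # input -> hidden
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # context -> hidden
W_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))     # hidden -> output

def srn_forward(xs):
    """Run a sequence of input vectors through the SRN.

    Each new hidden state combines the current input with the previous
    hidden state (the context layer), so outputs reflect sequence
    history, not just the current symbol.
    """
    h = np.zeros(n_hidden)                # context starts empty
    outputs = []
    for x in xs:
        h = np.tanh(W_xh @ x + W_hh @ h)  # new state = f(input, context)
        y = W_hy @ h                      # logits, e.g. next-symbol scores
        outputs.append(y)
    return outputs, h

# Usage: a toy sequence of three one-hot "symbols".
seq = [np.eye(n_in)[i] for i in (0, 2, 1)]
ys, h_final = srn_forward(seq)
```

In the paper's prediction task the output layer is trained to anticipate the next element of the sequence; here the untrained weights only illustrate the data flow.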

  • Cognitive science: Emerging perspectives and approaches.Narayanan Srinivasan - 2011 - In Girishwar Misra (ed.), Handbook of psychology in India. New Delhi: Oxford University Press. pp. 46--57.
  • Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition.Matt Jones & Bradley C. Love - 2011 - Behavioral and Brain Sciences 34 (4):169-188.
    The prominence of Bayesian modeling of cognition has increased recently largely because of mathematical advances in specifying and deriving predictions from complex probabilistic models. Much of this research aims to demonstrate that cognitive behavior can be explained from rational principles alone, without recourse to psychological or neurological processes and representations. We note commonalities between this rational approach and other movements in psychology – namely, Behaviorism and evolutionary psychology – that set aside mechanistic explanations or make use of optimality assumptions. Through (...)
  • A predictive coding model of the N400.Samer Nour Eddine, Trevor Brothers, Lin Wang, Michael Spratling & Gina R. Kuperberg - 2024 - Cognition 246 (C):105755.
  • What levels of explanation in the behavioural sciences?Giuseppe Boccignone & Roberto Cordeschi (eds.) - 2015 - Frontiers Media SA.
    Complex systems are to be seen as typically having multiple levels of organization. For instance, in the behavioural and cognitive sciences, there has been a long-lasting trend, promoted by the seminal work of David Marr, focusing on three distinct levels of analysis: the computational level, accounting for the What and Why issues, and the algorithmic and implementational levels, specifying the How problem. However, the tremendous developments in neuroscience knowledge about processes at different scales of organization together with the (...)
  • Conceptual mapping through keyword coupled clustering.Zvika Marx & Ido Dagan - 2001 - Mind and Society 2 (2):59-85.
    This paper introduces coupled clustering—a novel computational framework for detecting corresponding themes in unstructured data. Gaining its inspiration from the structure mapping theory, our framework utilizes unsupervised statistical learning tools for automatic construction of aligned representations reflecting the context of the particular mapping being made. The coupled clustering algorithm is demonstrated and evaluated through detecting conceptual correspondences in textual corpora. In its current phase, the method is primarily oriented towards context-dependent feature-based similarity. However, it is preliminarily demonstrated how it could (...)
  • Five Ways in Which Computational Modeling Can Help Advance Cognitive Science: Lessons From Artificial Grammar Learning.Willem Zuidema, Robert M. French, Raquel G. Alhama, Kevin Ellis, Timothy J. O'Donnell, Tim Sainburg & Timothy Q. Gentner - 2020 - Topics in Cognitive Science 12 (3):925-941.
    Zuidema et al. illustrate how empirical AGL studies can benefit from computational models and techniques. Computational models can help clarify theories, and thus delineate research questions, but also facilitate experimental design, stimulus generation, and data analysis. The authors show, with a series of examples, how computational modeling can be integrated with empirical AGL approaches, and how model selection techniques can indicate the most likely model to explain experimental outcomes.
  • Modeling language and cognition with deep unsupervised learning: a tutorial overview.Marco Zorzi, Alberto Testolin & Ivilin P. Stoianov - 2013 - Frontiers in Psychology 4.
  • The construction of 'reality' in the robot: Constructivist perspectives on situated artificial intelligence and adaptive robotics. [REVIEW]Tom Ziemke - 2001 - Foundations of Science 6 (1-3):163-233.
    This paper discusses different approaches in cognitive science and artificial intelligence research from the perspective of radical constructivism, addressing especially their relation to the biologically based theories of von Uexküll, Piaget as well as Maturana and Varela. In particular, recent work in New AI and adaptive robotics on situated and embodied intelligence is examined, and we discuss in detail the role of constructive processes as the basis of situatedness in both robots and living organisms.
  • Tuning in to non-adjacencies: Exposure to learnable patterns supports discovering otherwise difficult structures.Martin Zettersten, Christine E. Potter & Jenny R. Saffran - 2020 - Cognition 202 (C):104283.
  • From the decline of development to the ascent of consciousness.Philip David Zelazo - 1994 - Behavioral and Brain Sciences 17 (4):731-732.
  • A theory of eye movements during target acquisition.Gregory J. Zelinsky - 2008 - Psychological Review 115 (4):787-835.
  • Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence.Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from (...)
  • Learning the generative principles of a symbol system from limited examples.Lei Yuan, Violet Xiang, David Crandall & Linda Smith - 2020 - Cognition 200 (C):104243.
  • State‐Trace Analysis: Dissociable Processes in a Connectionist Network?Fayme Yeates, Andy J. Wills, Fergal W. Jones & Ian P. L. McLaren - 2015 - Cognitive Science 39 (5):1047-1061.
    Some argue that the common practice of inferring multiple processes or systems from a dissociation is flawed. One proposed solution is state-trace analysis, which involves plotting, across two or more conditions of interest, performance measured by either two dependent variables, or two conditions of the same dependent measure. The resulting analysis is considered to show either that a single process underlies performance or that more than one process is involved. This article reports simulations using the simple recurrent network in (...)
  • Word Order Typology Interacts With Linguistic Complexity: A Cross‐Linguistic Corpus Study.Himanshu Yadav, Ashwini Vaidya, Vishakha Shukla & Samar Husain - 2020 - Cognitive Science 44 (4):e12822.
    Much previous work has suggested that word order preferences across languages can be explained by the dependency distance minimization constraint (Ferrer‐i Cancho, 2008, 2015; Hawkins, 1994). Consistent with this claim, corpus studies have shown that the average distance between a head (e.g., verb) and its dependent (e.g., noun) tends to be short cross‐linguistically (Ferrer‐i Cancho, 2014; Futrell, Mahowald, & Gibson, 2015; Liu, Xu, & Liang, 2017). This implies that, on average, languages avoid inefficient or complex structures in favor of simpler ones. But (...)
  • Quality Prediction Model Based on Novel Elman Neural Network Ensemble.Lan Xu & Yuting Zhang - 2019 - Complexity 2019:1-11.
  • Finding Structure in Time: Visualizing and Analyzing Behavioral Time Series.Tian Linger Xu, Kaya de Barbaro, Drew H. Abney & Ralf F. A. Cox - 2020 - Frontiers in Psychology 11:521451.
    The temporal structure of behavior contains a rich source of information about its dynamic organization, origins, and development. Today, advances in sensing and data storage allow researchers to collect multiple dimensions of behavioral data at a fine temporal scale both in and out of the laboratory, leading to the curation of massive multimodal corpora of behavior. However, along with these new opportunities come new challenges. Theories are often underspecified as to the exact nature of these unfolding interactions, and psychologists have (...)
  • On the creation of classification systems of memory.Daniel B. Willingham - 1994 - Behavioral and Brain Sciences 17 (3):426-427.
  • Classical conditioning and the placebo effect.Ian Wickram - 1989 - Behavioral and Brain Sciences 12 (1):160-161.
  • Reinforcement learning of non-Markov decision processes.Steven D. Whitehead & Long-Ji Lin - 1995 - Artificial Intelligence 73 (1-2):271-306.
  • Classical conditioning: A manifestation of Bayesian neural learning.James Christopher Westland & Manfred Kochen - 1989 - Behavioral and Brain Sciences 12 (1):160-160.
  • Turing's Analysis of Computation and Theories of Cognitive Architecture.A. J. Wells - 1998 - Cognitive Science 22 (3):269-294.
    Turing's analysis of computation is a fundamental part of the background of cognitive science. In this paper it is argued that a re‐interpretation of Turing's work is required to underpin theorizing about cognitive architecture. It is claimed that the symbol systems view of the mind, which is the conventional way of understanding how Turing's work impacts on cognitive science, is deeply flawed. There is an alternative interpretation that is more faithful to Turing's original insights, avoids the criticisms made of the (...)
  • Pigeons acquire multiple categories in parallel via associative learning: A parallel to human word learning?Edward A. Wasserman, Daniel I. Brooks & Bob McMurray - 2015 - Cognition 136 (C):99-122.
  • Directions in Connectionist Research: Tractable Computations Without Syntactically Structured Representations.Jonathan Waskan & William Bechtel - 1997 - Metaphilosophy 28 (1‐2):31-62.
    Figure 1: A prototypical example of a three-layer feedforward network, used by Plunkett and Marchman (1991) to simulate learning the past tense of English verbs. The input units encode representations of the three phonemes of the present tense of the artificial words used in this simulation. The network is trained to produce a representation of the phonemes employed in the past tense form and the suffix (/d/, /ed/, or /t/) (...)
  • Statistical Learning of Unfamiliar Sounds as Trajectories Through a Perceptual Similarity Space.Felix Hao Wang, Elizabeth A. Hutton & Jason D. Zevin - 2019 - Cognitive Science 43 (8):e12740.
    In typical statistical learning studies, researchers define sequences in terms of the probability of the next item in the sequence given the current item (or items), and they show that high probability sequences are treated as more familiar than low probability sequences. Existing accounts of these phenomena all assume that participants represent statistical regularities more or less as they are defined by the experimenters—as sequential probabilities of symbols in a string. Here we offer an alternative, or possibly supplementary, hypothesis. Specifically, (...)
  • Finding Structure in One Child's Linguistic Experience.Wentao Wang, Wai Keen Vong, Najoung Kim & Brenden M. Lake - 2023 - Cognitive Science 47 (6):e13305.
    Neural network models have recently made striking progress in natural language processing, but they are typically trained on orders of magnitude more language input than children receive. What can these neural networks, which are primarily distributional learners, learn from a naturalistic subset of a single child's experience? We examine this question using a recent longitudinal dataset collected from a single child, consisting of egocentric visual data paired with text transcripts. We train both language-only and vision-and-language neural networks and analyze the (...)
  • Advantages of Combining Factorization Machine with Elman Neural Network for Volatility Forecasting of Stock Market.Fang Wang, Sai Tang & Menggang Li - 2021 - Complexity 2021:1-12.
    With a focus on the financial market, stock market dynamics forecasting has received much attention. Predicting stock market fluctuations is usually challenging due to the nonlinear and nonstationary time series of stock prices. The Elman recurrent network is renowned for its capability of dealing with dynamic information, which has made it successful in prediction tasks. We developed a hybrid approach which combined an Elman recurrent network with the factorization machine technique, i.e., the FM-Elman neural network, to predict stock market volatility. In (...)
  • Multilevel Exemplar Theory.Michael Walsh, Bernd Möbius, Travis Wade & Hinrich Schütze - 2010 - Cognitive Science 34 (4):537-582.
  • Is there an implicit level of representation?Annie Vinter & Pierre Perruchet - 1994 - Behavioral and Brain Sciences 17 (4):730-731.
  • A self-organized sentence processing theory of gradience: The case of islands.Sandra Villata & Whitney Tabor - 2022 - Cognition 222 (C):104943.
  • Is recursion language-specific? Evidence of recursive mechanisms in the structure of intentional action.Giuseppe Vicari & Mauro Adenzato - 2014 - Consciousness and Cognition 26:169-188.
    In their 2002 seminal paper Hauser, Chomsky and Fitch hypothesize that recursion is the only human-specific and language-specific mechanism of the faculty of language. While debate focused primarily on the meaning of recursion in the hypothesis and on the human-specific and syntax-specific character of recursion, the present work focuses on the claim that recursion is language-specific. We argue that there are recursive structures in the domain of motor intentionality by way of extending John R. Searle’s analysis of intentional action. We (...)
  • Smolensky's theory of mind.Paul F. M. J. Verschure - 1990 - Behavioral and Brain Sciences 13 (2):407-407.
  • Symbolic and nonsymbolic pathways of number processing.Tom Verguts & Wim Fias - 2008 - Philosophical Psychology 21 (4):539 – 554.
    Recent years have witnessed an enormous increase in behavioral and neuroimaging studies of numerical cognition. Particular interest has been devoted toward unraveling properties of the representational medium on which numbers are thought to be represented. We have argued that a correct inference concerning these properties requires distinguishing between different input modalities and different decision/output structures. To back up this claim, we have trained computational models with either symbolic or nonsymbolic input and with different task requirements, and showed that this allowed (...)
  • Similarity and Rules United: Similarity‐ and Rule‐Based Processing in a Single Neural Network.Tom Verguts & Wim Fias - 2009 - Cognitive Science 33 (2):243-259.
    A central controversy in cognitive science concerns the roles of rules versus similarity. To gain some leverage on this problem, we propose that rule‐ versus similarity‐based processes can be characterized as extremes in a multidimensional space that is composed of at least two dimensions: the number of features (Pothos, 2005) and the physical presence of features. The transition of similarity‐ to rule‐based processing is conceptualized as a transition in this space. To illustrate this, we show how a neural network model (...)
  • On observing emergent properties and their compositions.Francisco T. Varela & Vicente Sanchez-Leighton - 1990 - Behavioral and Brain Sciences 13 (2):401-402.
  • Criteria for the Design and Evaluation of Cognitive Architectures.Sashank Varma - 2011 - Cognitive Science 35 (7):1329-1351.
    Cognitive architectures are unified theories of cognition that take the form of computational formalisms. They support computational models that collectively account for large numbers of empirical regularities using small numbers of computational mechanisms. Empirical coverage and parsimony are the most prominent criteria by which architectures are designed and evaluated, but they are not the only ones. This paper considers three additional criteria that have been comparatively undertheorized. (a) Successful architectures possess subjective and intersubjective meaning, making cognition comprehensible to individual cognitive (...)
  • The dynamical hypothesis in cognitive science.Tim van Gelder - 1998 - Behavioral and Brain Sciences 21 (5):615-28.
    According to the dominant computational approach in cognitive science, cognitive agents are digital computers; according to the alternative approach, they are dynamical systems. This target article attempts to articulate and support the dynamical hypothesis. The dynamical hypothesis has two major components: the nature hypothesis (cognitive agents are dynamical systems) and the knowledge hypothesis (cognitive agents can be understood dynamically). A wide range of objections to this hypothesis can be rebutted. The conclusion is that cognitive systems may well be dynamical systems, (...)
  • Embodied Language Comprehension Requires an Enactivist Paradigm of Cognition.Michiel van Elk, Marc Slors & Harold Bekkering - 2010 - Frontiers in Psychology 1.
  • Exploring What Is Encoded in Distributional Word Vectors: A Neurobiologically Motivated Analysis.Akira Utsumi - 2020 - Cognitive Science 44 (6):e12844.
    The pervasive use of distributional semantic models or word embeddings for both cognitive modeling and practical application is because of their remarkable ability to represent the meanings of words. However, relatively little effort has been made to explore what types of information are encoded in distributional word vectors. Knowing the internal knowledge embedded in word vectors is important for cognitive modeling using distributional semantic models. Therefore, in this paper, we attempt to identify the knowledge encoded in word vectors by conducting (...)
  • Classical conditioning beyond the reflex: An uneasy rebirth.Jaylan Sheila Turkkan - 1989 - Behavioral and Brain Sciences 12 (1):161-179.
  • Classical conditioning: The new hegemony.Jaylan Sheila Turkkan - 1989 - Behavioral and Brain Sciences 12 (1):121-137.
    Converging data from different disciplines are showing the role of classical conditioning processes in the elaboration of human and animal behavior to be larger than previously supposed. Restricted views of classically conditioned responses as merely secretory, reflexive, or emotional are giving way to a broader conception that includes problem-solving and other rule-governed behavior thought to be the exclusive province of either operant conditioning or cognitive psychology. These new views have been accompanied by changes in the way conditioning is conducted and (...)
  • No need to forget, just keep the balance: Hebbian neural networks for statistical learning.Ángel Eugenio Tovar & Gert Westermann - 2023 - Cognition 230 (C):105176.
  • The cerebellum and memory.Richard F. Thompson - 1992 - Behavioral and Brain Sciences 15 (4):801-802.
  • Darwin and the golden rule: how to distinguish differences of degree from differences of kind using mechanisms.Paul Thagard - 2022 - Biology and Philosophy 37 (6):1–18.
    Darwin claimed that human and animal minds differ in degree but not in kind, and that ethical principles such as the Golden Rule are just an extension of thinking found in animals. Both claims are false. The best way to distinguish differences in degree from differences in kind is by identifying mechanisms that have emergent properties. Recursive thinking is an emergent capability found in humans but not in other animals. The Golden Rule and some other ethical principles such as Kant’s (...)
  • Learning Orthographic Structure With Sequential Generative Neural Networks.Alberto Testolin, Ivilin Stoianov, Alessandro Sperduti & Marco Zorzi - 2016 - Cognitive Science 40 (3):579-606.
    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine, a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual (...)
  • Are infants human?H. S. Terrace - 1994 - Behavioral and Brain Sciences 17 (3):425-426.
  • Mapping sensorimotor sequences to word sequences: A connectionist model of language acquisition and sentence generation.Martin Takac, Lubica Benuskova & Alistair Knott - 2012 - Cognition 125 (2):288-308.
  • Dynamical Models of Sentence Processing.Whitney Tabor & Michael K. Tanenhaus - 1999 - Cognitive Science 23 (4):491-515.
  • Fractal Analysis Illuminates the Form of Connectionist Structural Gradualness.Whitney Tabor, Pyeong Whan Cho & Emily Szkudlarek - 2013 - Topics in Cognitive Science 5 (3):634-667.
    We examine two connectionist networks—a fractal learning neural network (FLNN) and a Simple Recurrent Network (SRN)—that are trained to process center-embedded symbol sequences. Previous work provides evidence that connectionist networks trained on infinite-state languages tend to form fractal encodings. Most such work focuses on simple counting recursion cases (e.g., anbn), which are not comparable to the complex recursive patterns seen in natural language syntax. Here, we consider exponential state growth cases (including mirror recursion), describe a new training scheme that seems (...)
  • Birth of an Abstraction: A Dynamical Systems Account of the Discovery of an Elsewhere Principle in a Category Learning Task.Whitney Tabor, Pyeong W. Cho & Harry Dankowicz - 2013 - Cognitive Science 37 (7):1193-1227.
    Human participants and recurrent (“connectionist”) neural networks were both trained on a categorization system abstractly similar to natural language systems involving irregular (“strong”) classes and a default class. Both the humans and the networks exhibited staged learning and a generalization pattern reminiscent of the Elsewhere Condition (Kiparsky, 1973). Previous connectionist accounts of related phenomena have often been vague about the nature of the networks’ encoding systems. We analyzed our network using dynamical systems theory, revealing topological and geometric properties that can (...)