Citations of:

Finding Structure in Time. Cognitive Science 14 (2):179-211 (1990)

  • Connectionist and Memory‐Array Models of Artificial Grammar Learning.Zoltan Dienes - 1992 - Cognitive Science 16 (1):41-79.
    Subjects exposed to strings of letters generated by a finite state grammar can later classify grammatical and nongrammatical test strings, even though they cannot adequately say what the rules of the grammar are (e.g., Reber, 1989). The MINERVA 2 (Hintzman, 1986) and Medin and Schaffer (1978) memory‐array models and a number of connectionist autoassociator models are tested against experimental data by deriving mainly parameter‐free predictions from the models of the rank order of classification difficulty of test strings. The importance of (...)
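A note on the modeling approach named in the Dienes (1992) entry above: MINERVA 2 derives classification predictions by storing each study string as a feature-vector trace and summing cubed probe-trace similarities ("echo intensity"). The sketch below is only a toy illustration of that computation, not Dienes's simulation code; the letter encoding and the toy strings are assumptions, while the cubing of similarity follows Hintzman (1986).

```python
# Toy sketch of MINERVA 2 echo intensity for grammaticality classification.
import numpy as np

def encode(string, alphabet="MTVRX", length=8):
    """Hypothetical encoding: one +1/-1 block per string position."""
    vec = np.zeros(length * len(alphabet))
    for pos, ch in enumerate(string[:length]):
        block = np.full(len(alphabet), -1.0)
        block[alphabet.index(ch)] = 1.0
        vec[pos * len(alphabet):(pos + 1) * len(alphabet)] = block
    return vec

def echo_intensity(probe, traces):
    # Similarity = normalized dot product (simplified from Hintzman's N_R term);
    # activation = similarity cubed; echo intensity = summed activation.
    sims = traces @ probe / np.count_nonzero(probe)
    return np.sum(sims ** 3)

traces = np.vstack([encode(s) for s in ["MTV", "MTTV", "VXRVX"]])  # study strings (toy)
tests = ["MTV", "XXR"]                                             # test strings (toy)
ranked = sorted(tests, key=lambda s: -echo_intensity(encode(s), traces))
print(ranked)  # higher echo intensity ~ predicted to be easier to accept as grammatical
```

On this kind of account, rank orders of classification difficulty fall out of the stored exemplars alone, with essentially no free parameters, which is what makes the model comparison described in the entry possible.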
  • Interactive Effects of Explicit Emergent Structure: A Major Challenge for Cognitive Computational Modeling.Robert M. French & Elizabeth Thomas - 2015 - Topics in Cognitive Science 7 (2):206-216.
    David Marr's (1982) three‐level analysis of computational cognition argues for three distinct levels of cognitive information processing—namely, the computational, representational, and implementational levels. But Marr's levels are—and were meant to be—descriptive, rather than interactive and dynamic. For this reason, we suggest that, had Marr been writing today, he might well have gone even farther in his analysis, including the emergence of structure—in particular, explicit structure at the conceptual level—from lower levels, and the effect of explicit emergent structures on the level (...)
  • Predicting Age of Acquisition for Children's Early Vocabulary in Five Languages Using Language Model Surprisal.Eva Portelance, Yuguang Duan, Michael C. Frank & Gary Lupyan - 2023 - Cognitive Science 47 (9):e13334.
    What makes a word easy to learn? Early‐learned words are frequent and tend to name concrete referents. But words typically do not occur in isolation. Some words are predictable from their contexts; others are less so. Here, we investigate whether predictability relates to when children start producing different words (age of acquisition; AoA). We operationalized predictability in terms of a word's surprisal in child‐directed speech, computed using n‐gram and long‐short‐term‐memory (LSTM) language models. Predictability derived from LSTMs was generally a better (...)
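The Portelance et al. entry above operationalizes predictability as surprisal, the negative log probability of a word given its context, estimated from child-directed speech with n-gram and LSTM language models. As a rough illustration only (the toy corpus, add-one smoothing, and bigram context below are my assumptions, not the authors' pipeline), per-word average surprisal can be computed like this:

```python
# Sketch: average bigram surprisal (in bits) per word in a toy corpus.
# surprisal(w_t | w_{t-1}) = -log2 P(w_t | w_{t-1})
import math
from collections import Counter, defaultdict

corpus = "look at the doggy look at the ball the doggy runs".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))
vocab_size = len(unigrams)

def surprisal(prev, word):
    p = (bigrams[(prev, word)] + 1) / (unigrams[prev] + vocab_size)  # add-one smoothing
    return -math.log2(p)

totals, counts = defaultdict(float), Counter()
for prev, word in zip(corpus, corpus[1:]):
    totals[word] += surprisal(prev, word)
    counts[word] += 1

avg_surprisal = {w: totals[w] / counts[w] for w in totals}
print(sorted(avg_surprisal.items(), key=lambda kv: kv[1]))  # lower surprisal = more predictable
```

In the study itself, the question is whether such predictability scores, alongside frequency and concreteness, predict each word's age of acquisition.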
  • What Are You Waiting For? Real‐Time Integration of Cues for Fricatives Suggests Encapsulated Auditory Memory.Marcus E. Galle, Jamie Klein-Packard, Kayleen Schreiber & Bob McMurray - 2019 - Cognitive Science 43 (1):e12700.
    Speech unfolds over time, and the cues for even a single phoneme are rarely available simultaneously. Consequently, to recognize a single phoneme, listeners must integrate material over several hundred milliseconds. Prior work contrasts two accounts: (a) a memory buffer account in which listeners accumulate auditory information in memory and only access higher level representations (i.e., lexical representations) when sufficient information has arrived; and (b) an immediate integration scheme in which lexical representations can be partially activated on the basis of early (...)
  • Predictive Movements and Human Reinforcement Learning of Sequential Action.Roy de Kleijn, George Kachergis & Bernhard Hommel - 2018 - Cognitive Science 42 (S3):783-808.
    Sequential action makes up the bulk of human daily activity, and yet much remains unknown about how people learn such actions. In one motor learning paradigm, the serial reaction time (SRT) task, people are taught a consistent sequence of button presses by cueing them with the next target response. However, the SRT task only records keypress response times to a cued target, and thus it cannot reveal the full time‐course of motion, including predictive movements. This paper describes a mouse movement (...)
  • Models of Chinese Reading: Review and Analysis.Erik D. Reichle & Lili Yu - 2018 - Cognitive Science 42 (S4):1154-1165.
    Our understanding of the cognitive processes involved in reading has been advanced by computational models that simulate those processes. Unfortunately, most of these models have been developed to explain the reading of English and other alphabetic languages, with relatively fewer efforts to examine whether or not the assumptions of these models also explain what has been learned from other languages and, in particular, non-alphabetic writing systems like Chinese. In this article, we will review those computational models that have been developed (...)
  • Redefining “Learning” in Statistical Learning: What Does an Online Measure Reveal About the Assimilation of Visual Regularities?Noam Siegelman, Louisa Bogaerts, Ofer Kronenfeld & Ram Frost - 2018 - Cognitive Science 42 (S3):692-727.
    From a theoretical perspective, most discussions of statistical learning have focused on the possible “statistical” properties that are the object of learning. Much less attention has been given to defining what “learning” is in the context of “statistical learning.” One major difficulty is that SL research has been monitoring participants’ performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative-forced-choice questions, which follow a brief visual or auditory familiarization (...)
  • A Neurocomputational Model of the N400 and the P600 in Language Processing.Harm Brouwer, Matthew W. Crocker, Noortje J. Venhuizen & John C. J. Hoeks - 2017 - Cognitive Science 41 (S6):1318-1352.
    Ten years ago, researchers using event-related brain potentials to study language comprehension were puzzled by what looked like a Semantic Illusion: Semantically anomalous, but structurally well-formed sentences did not affect the N400 component—traditionally taken to reflect semantic integration—but instead produced a P600 effect, which is generally linked to syntactic processing. This finding led to a considerable amount of debate, and a number of complex processing models have been proposed as an explanation. What these models have in common is that they (...)
  • Quasiregularity and Its Discontents: The Legacy of the Past Tense Debate.Mark S. Seidenberg & David C. Plaut - 2014 - Cognitive Science 38 (6):1190-1228.
    Rumelhart and McClelland's chapter about learning the past tense created a degree of controversy extraordinary even in the adversarial culture of modern science. It also stimulated a vast amount of research that advanced the understanding of the past tense, inflectional morphology in English and other languages, the nature of linguistic representations, relations between language and other phenomena such as reading and object recognition, the properties of artificial neural networks, and other topics. We examine the impact of the Rumelhart and McClelland (...)
  • Parallel Distributed Processing at 25: Further Explorations in the Microstructure of Cognition.Timothy T. Rogers & James L. McClelland - 2014 - Cognitive Science 38 (6):1024-1077.
    This paper introduces a special issue of Cognitive Science initiated on the 25th anniversary of the publication of Parallel Distributed Processing (PDP), a two-volume work that introduced the use of neural network models as vehicles for understanding cognition. The collection surveys the core commitments of the PDP framework, the key issues the framework has addressed, and the debates the framework has spawned, and presents viewpoints on the current status of these issues. The articles focus on both historical roots and contemporary (...)
  • Connecting Conscious and Unconscious Processing.Axel Cleeremans - 2014 - Cognitive Science 38 (6):1286-1315.
    Consciousness remains a mystery—“a phenomenon that people do not know how to think about—yet” (Dennett, p. 21). Here, I consider how the connectionist perspective on information processing may help us progress toward the goal of understanding the computational principles through which conscious and unconscious processing differ. I begin by delineating the conceptual challenges associated with classical approaches to cognition insofar as understanding unconscious information processing is concerned, and by highlighting several contrasting computational principles that are constitutive of the connectionist (...)
  • Where Do Features Come From?Geoffrey Hinton - 2014 - Cognitive Science 38 (6):1078-1101.
    It is possible to learn multiple layers of non-linear features by backpropagating error derivatives through a feedforward neural network. This is a very effective learning procedure when there is a huge amount of labeled training data, but for many learning tasks very few labeled examples are available. In an effort to overcome the need for labeled data, several different generative models were developed that learned interesting features by modeling the higher order statistical structure of a set of input vectors. One (...)
  • Incrementality and Prediction in Human Sentence Processing.Gerry T. M. Altmann & Jelena Mirković - 2009 - Cognitive Science 33 (4):583-609.
    We identify a number of principles with respect to prediction that, we argue, underpin adult language comprehension: (a) comprehension consists in realizing a mapping between the unfolding sentence and the event representation corresponding to the real‐world event being described; (b) the realization of this mapping manifests as the ability to predict both how the language will unfold, and how the real‐world event would unfold if it were being experienced directly; (c) concurrent linguistic and nonlinguistic inputs, and the prior internal states (...)
  • On the Meaning of Words and Dinosaur Bones: Lexical Knowledge Without a Lexicon.Jeffrey L. Elman - 2009 - Cognitive Science 33 (4):547-582.
    Although for many years a sharp distinction has been made in language research between rules and words—with primary interest on rules—this distinction is now blurred in many theories. If anything, the focus of attention has shifted in recent years in favor of words. Results from many different areas of language research suggest that the lexicon is representationally rich, that it is the source of much productive behavior, and that lexically specific information plays a critical and early role in the interpretation (...)
  • Lexical Organization and Competition in First and Second Languages: Computational and Neural Mechanisms.Ping Li - 2009 - Cognitive Science 33 (4):629-664.
    How does a child rapidly acquire and develop a structured mental organization for the vast number of words in the first years of life? How does a bilingual individual deal with the even more complicated task of learning and organizing two lexicons? It is only recently that we have started to examine the lexicon as a dynamical system with regard to its acquisition, representation, and organization. In this article, I outline a proposal based on our research that takes the dynamical (...)
  • Regularity Extraction Across Species: Associative Learning Mechanisms Shared by Human and Non‐Human Primates.Arnaud Rey, Laure Minier, Raphaëlle Malassis, Louisa Bogaerts & Joël Fagot - 2019 - Topics in Cognitive Science 11 (3):573-586.
    One of the themes that has been widely addressed in both the implicit learning and statistical learning literatures is that of rule learning. While it is widely agreed that the extraction of regularities from the environment is a fundamental facet of cognition, there is still debate about the nature of rule learning. Rey and colleagues show that the comparison between human and non‐human primates can contribute important insights to this debate.
  • Event‐Predictive Cognition: A Root for Conceptual Human Thought.Martin V. Butz, Asya Achimova, David Bilkey & Alistair Knott - 2021 - Topics in Cognitive Science 13 (1):10-24.
    Butz, Achimova, Bilkey, and Knott provide a topic overview and discuss whether the special issue contributions may imply that event‐predictive abilities constitute a root for conceptual human thought, because they enable complex, mutually beneficial, but also intricately competitive, social interactions and language communication.
  • Statistical Learning of Unfamiliar Sounds as Trajectories Through a Perceptual Similarity Space.Felix Hao Wang, Elizabeth A. Hutton & Jason D. Zevin - 2019 - Cognitive Science 43 (8):e12740.
    In typical statistical learning studies, researchers define sequences in terms of the probability of the next item in the sequence given the current item (or items), and they show that high probability sequences are treated as more familiar than low probability sequences. Existing accounts of these phenomena all assume that participants represent statistical regularities more or less as they are defined by the experimenters—as sequential probabilities of symbols in a string. Here we offer an alternative, or possibly supplementary, hypothesis. Specifically, (...)
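For contrast with the trajectory account in the Wang, Hutton, and Zevin entry above, the "sequential probabilities of symbols in a string" that statistical-learning experiments typically assume are forward transitional probabilities, TP(B|A) = count(A followed by B) / count(A). A minimal sketch (the syllable stream is an invented example, not the authors' stimuli):

```python
# Sketch: forward transitional probabilities over a toy syllable stream.
from collections import Counter

stream = "tu pi ro go la bu tu pi ro pa do ti go la bu tu pi ro".split()
pair_counts = Counter(zip(stream, stream[1:]))
first_counts = Counter(stream[:-1])

tp = {(a, b): n / first_counts[a] for (a, b), n in pair_counts.items()}
for (a, b), p in sorted(tp.items(), key=lambda kv: -kv[1]):
    print(f"TP({b} | {a}) = {p:.2f}")  # high-TP transitions fall inside statistical "words"
```

The entry above asks whether learners actually represent these probabilities as such, or instead track trajectories through a perceptual similarity space.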
  • Adjacent and Non‐Adjacent Word Contexts Both Predict Age of Acquisition of English Words: A Distributional Corpus Analysis of Child‐Directed Speech.Lucas M. Chang & Gedeon O. Deák - 2020 - Cognitive Science 44 (11):e12899.
    Children show a remarkable degree of consistency in learning some words earlier than others. What patterns of word usage predict variations among words in age of acquisition? We use distributional analysis of a naturalistic corpus of child‐directed speech to create quantitative features representing natural variability in word contexts. We evaluate two sets of features: One set is generated from the distribution of words into frames defined by the two adjacent words. These features primarily encode syntactic aspects of word usage. The (...)
  • Statistically Induced Chunking Recall: A Memory‐Based Approach to Statistical Learning.Erin S. Isbilen, Stewart M. McCauley, Evan Kidd & Morten H. Christiansen - 2020 - Cognitive Science 44 (7):e12848.
    The computations involved in statistical learning have long been debated. Here, we build on work suggesting that a basic memory process, chunking, may account for the processing of statistical regularities into larger units. Drawing on methods from the memory literature, we developed a novel paradigm to test statistical learning by leveraging a robust phenomenon observed in serial recall tasks: that short‐term memory is fundamentally shaped by long‐term distributional learning. In the statistically induced chunking recall (SICR) task, participants are exposed to (...)
  • Exploring What Is Encoded in Distributional Word Vectors: A Neurobiologically Motivated Analysis.Akira Utsumi - 2020 - Cognitive Science 44 (6):e12844.
    The pervasive use of distributional semantic models or word embeddings for both cognitive modeling and practical application stems from their remarkable ability to represent the meanings of words. However, relatively little effort has been made to explore what types of information are encoded in distributional word vectors. Knowing the internal knowledge embedded in word vectors is important for cognitive modeling using distributional semantic models. Therefore, in this paper, we attempt to identify the knowledge encoded in word vectors by conducting (...)
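One common way to ask what a distributional word vector encodes, in the spirit of the Utsumi entry above, is a probing analysis: train a simple classifier to decode a semantic property from the vectors and check whether it beats chance. The sketch below uses synthetic vectors and a made-up "animacy" label purely for illustration; it is not the paper's neurobiologically motivated feature set or analysis.

```python
# Probing-style sketch: can a linear classifier read a property out of word vectors?
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_words, dim = 200, 50
labels = rng.integers(0, 2, n_words)          # synthetic property: 1 = "animate"
vectors = rng.normal(size=(n_words, dim))
vectors[:, 0] += 1.5 * labels                 # plant a recoverable signal in one direction

probe = LogisticRegression(max_iter=1000)
scores = cross_val_score(probe, vectors, labels, cv=5)
print(f"decoding accuracy: {scores.mean():.2f}")  # well above 0.5 => property is linearly decodable
```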
  • Word Order Typology Interacts With Linguistic Complexity: A Cross‐Linguistic Corpus Study.Himanshu Yadav, Ashwini Vaidya, Vishakha Shukla & Samar Husain - 2020 - Cognitive Science 44 (4):e12822.
    Much previous work has suggested that word order preferences across languages can be explained by the dependency distance minimization constraint (Ferrer‐i Cancho, 2008, 2015; Hawkins, 1994). Consistent with this claim, corpus studies have shown that the average distance between a head (e.g., verb) and its dependent (e.g., noun) tends to be short cross‐linguistically (Ferrer‐i Cancho, 2014; Futrell, Mahowald, & Gibson, 2015; Liu, Xu, & Liang, 2017). This implies that, on average, languages avoid inefficient or complex structures in favor of simpler ones. But (...)
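The dependency distance minimization constraint in the Yadav et al. entry above is typically quantified as the mean linear distance between each word and its syntactic head in a dependency parse. A minimal sketch (the sentence and the head indices are an invented toy parse):

```python
# Sketch: mean dependency distance for one parsed sentence.
# heads[i] holds the 1-based index of word i+1's syntactic head (0 marks the root).
words = ["the", "dog", "chased", "a", "very", "fast", "cat"]
heads = [2, 3, 0, 7, 6, 7, 3]

distances = [abs((i + 1) - h) for i, h in enumerate(heads) if h != 0]
mean_dd = sum(distances) / len(distances)
print(f"mean dependency distance: {mean_dd:.2f}")
```

Cross-linguistic corpus studies of the kind cited in the entry compare such averages against random or alternative word-order baselines.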
  • Efficient Communication in Written and Performed Music.Laurent Bonnasse-Gahot - 2020 - Cognitive Science 44 (4):e12826.
    Since its inception, Shannon's information theory has attracted interest for the study of language and music. Recently, a wide range of converging studies have shown how efficient communication pervades language, from phonetics to syntax. Efficient principles imply that more resources should be assigned to highly informative items. For instance, average information content was shown to be a better predictor of word length than frequency, revisiting the famous Zipf's law. However, in spite of the success of the efficient communication framework in (...)
  • Symbolically speaking: a connectionist model of sentence production.Franklin Chang - 2002 - Cognitive Science 26 (5):609-651.
    The ability to combine words into novel sentences has been used to argue that humans have symbolic language production abilities. Critiques of connectionist models of language often center on the inability of these models to generalize symbolically (Fodor & Pylyshyn, 1988; Marcus, 1998). To address these issues, a connectionist model of sentence production was developed. The model had variables (role‐concept bindings) that were inspired by spatial representations (Landau & Jackendoff, 1993). In order to take advantage of these variables, a novel (...)
  • Predictive Modeling of Individual Human Cognition: Upper Bounds and a New Perspective on Performance.Nicolas Riesterer, Daniel Brand & Marco Ragni - 2020 - Topics in Cognitive Science 12 (3):960-974.
    Syllogisms (e.g. “All A are B; All B are C; What is true about A and C?”) are a long‐studied area of human reasoning. Riesterer, Brand, and Ragni compare a variety of models to human performance and show that not only do current models have a lot of room for improvement, but more importantly a large part of this improvement must come from examining individual differences in performance.
  • Five Ways in Which Computational Modeling Can Help Advance Cognitive Science: Lessons From Artificial Grammar Learning.Willem Zuidema, Robert M. French, Raquel G. Alhama, Kevin Ellis, Timothy J. O'Donnell, Tim Sainburg & Timothy Q. Gentner - 2020 - Topics in Cognitive Science 12 (3):925-941.
    Zuidema et al. illustrate how empirical AGL studies can benefit from computational models and techniques. Computational models can help in clarifying theories, and thus in delineating research questions, as well as in facilitating experimental design, stimulus generation, and data analysis. The authors show, with a series of examples, how computational modeling can be integrated with empirical AGL approaches, and how model selection techniques can indicate the most likely model to explain experimental outcomes.
  • Conceptual mapping through keyword coupled clustering.Zvika Marx & Ido Dagan - 2001 - Mind and Society 2 (2):59-85.
    This paper introduces coupled clustering—a novel computational framework for detecting corresponding themes in unstructured data. Gaining its inspiration from the structure mapping theory, our framework utilizes unsupervised statistical learning tools for automatic construction of aligned representations reflecting the context of the particular mapping being made. The coupled clustering algorithm is demonstrated and evaluated through detecting conceptual correspondences in textual corpora. In its current phase, the method is primarily oriented towards context-dependent feature-based similarity. However, it is preliminarily demonstrated how it could (...)
  • Modeling language and cognition with deep unsupervised learning: a tutorial overview.Marco Zorzi, Alberto Testolin & Ivilin P. Stoianov - 2013 - Frontiers in Psychology 4.
  • The construction of 'reality' in the robot: Constructivist perspectives on situated artificial intelligence and adaptive robotics. [REVIEW]Tom Ziemke - 2001 - Foundations of Science 6 (1-3):163-233.
    This paper discusses different approaches in cognitive science and artificial intelligence research from the perspective of radical constructivism, addressing especially their relation to the biologically based theories of von Uexküll, Piaget as well as Maturana and Varela. In particular, recent work in New AI and adaptive robotics on situated and embodied intelligence is examined, and we discuss in detail the role of constructive processes as the basis of situatedness in both robots and living organisms.
  • Tuning in to non-adjacencies: Exposure to learnable patterns supports discovering otherwise difficult structures.Martin Zettersten, Christine E. Potter & Jenny R. Saffran - 2020 - Cognition 202 (C):104283.
  • From the decline of development to the ascent of consciousness.Philip David Zelazo - 1994 - Behavioral and Brain Sciences 17 (4):731-732.
  • A theory of eye movements during target acquisition.Gregory J. Zelinsky - 2008 - Psychological Review 115 (4):787-835.
  • Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence.Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
    Many of the computing systems programmed using Machine Learning are opaque: it is difficult to know why they do what they do or how they work. Explainable Artificial Intelligence aims to develop analytic techniques that render opaque computing systems transparent, but lacks a normative framework with which to evaluate these techniques’ explanatory successes. The aim of the present discussion is to develop such a framework, paying particular attention to different stakeholders’ distinct explanatory requirements. Building on an analysis of “opacity” from (...)
  • Learning the generative principles of a symbol system from limited examples.Lei Yuan, Violet Xiang, David Crandall & Linda Smith - 2020 - Cognition 200 (C):104243.
  • State‐Trace Analysis: Dissociable Processes in a Connectionist Network?Fayme Yeates, Andy J. Wills, Fergal W. Jones & Ian P. L. McLaren - 2015 - Cognitive Science 39 (5):1047-1061.
    Some argue the common practice of inferring multiple processes or systems from a dissociation is flawed. One proposed solution is state-trace analysis, which involves plotting, across two or more conditions of interest, performance measured by either two dependent variables, or two conditions of the same dependent measure. The resulting analysis is considered to provide evidence that either a single process underlies performance or there is evidence for more than one process. This article reports simulations using the simple recurrent network in (...)
  • Quality Prediction Model Based on Novel Elman Neural Network Ensemble.Lan Xu & Yuting Zhang - 2019 - Complexity 2019:1-11.
  • Finding Structure in Time: Visualizing and Analyzing Behavioral Time Series.Tian Linger Xu, Kaya de Barbaro, Drew H. Abney & Ralf F. A. Cox - 2020 - Frontiers in Psychology 11.
  • On the creation of classification systems of memory.Daniel B. Willingham - 1994 - Behavioral and Brain Sciences 17 (3):426-427.
  • Classical conditioning and the placebo effect.Ian Wickram - 1989 - Behavioral and Brain Sciences 12 (1):160-161.
  • Reinforcement learning of non-Markov decision processes.Steven D. Whitehead & Long-Ji Lin - 1995 - Artificial Intelligence 73 (1-2):271-306.
  • Classical conditioning: A manifestation of Bayesian neural learning.James Christopher Westland & Manfred Kochen - 1989 - Behavioral and Brain Sciences 12 (1):160-160.
  • Turing's Analysis of Computation and Theories of Cognitive Architecture.A. J. Wells - 1998 - Cognitive Science 22 (3):269-294.
  • Pigeons acquire multiple categories in parallel via associative learning: A parallel to human word learning?Edward A. Wasserman, Daniel I. Brooks & Bob McMurray - 2015 - Cognition 136 (C):99-122.
  • Directions in Connectionist Research: Tractable Computations Without Syntactically Structured Representations.Jonathan Waskan & William Bechtel - 1997 - Metaphilosophy 28 (1‐2):31-62.
    Figure 1: A prototypical example of a three-layer feedforward network, used by Plunkett and Marchman (1991) to simulate learning the past-tense of English verbs. The input units encode representations of the three phonemes of the present tense of the artificial words used in this simulation. The network is trained to produce a representation of the phonemes employed in the past tense form and the suffix (/d/, /ed/, or /t/) (...)
  • Finding Structure in One Child's Linguistic Experience.Wentao Wang, Wai Keen Vong, Najoung Kim & Brenden M. Lake - 2023 - Cognitive Science 47 (6):e13305.
    Neural network models have recently made striking progress in natural language processing, but they are typically trained on orders of magnitude more language input than children receive. What can these neural networks, which are primarily distributional learners, learn from a naturalistic subset of a single child's experience? We examine this question using a recent longitudinal dataset collected from a single child, consisting of egocentric visual data paired with text transcripts. We train both language-only and vision-and-language neural networks and analyze the (...)
  • Advantages of Combining Factorization Machine with Elman Neural Network for Volatility Forecasting of Stock Market.Fang Wang, Sai Tang & Menggang Li - 2021 - Complexity 2021:1-12.
    With a focus on the financial market, stock market dynamics forecasting has received much attention. Predicting stock market fluctuations is usually challenging due to the nonlinear and nonstationary time series of stock prices. The Elman recurrent network is renowned for its capability of dealing with dynamic information, which has made it a successful approach to prediction. We developed a hybrid approach which combined the Elman recurrent network with the factorization machine technique, i.e., the FM-Elman neural network, to predict stock market volatility. In (...)
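Several entries above (Yeates et al.'s state-trace simulations, Xu and Zhang's ensemble, and the FM-Elman model just described) build on the simple recurrent network introduced in the target article. Its defining feature is that a copy of the previous hidden state is fed back in as context input at every time step. The forward pass below is a bare NumPy sketch with random, untrained weights; training (e.g., by backpropagation of next-element prediction error) is omitted.

```python
# Sketch of the forward pass of a simple recurrent (Elman) network:
# hidden_t = tanh(W_xh @ x_t + W_hh @ hidden_{t-1} + b); output_t = W_hy @ hidden_t
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hidden, n_out = 5, 16, 5
W_xh = rng.normal(scale=0.1, size=(n_hidden, n_in))
W_hh = rng.normal(scale=0.1, size=(n_hidden, n_hidden))  # context (recurrent) connections
W_hy = rng.normal(scale=0.1, size=(n_out, n_hidden))
b = np.zeros(n_hidden)

def run(sequence):
    hidden = np.zeros(n_hidden)            # context units start empty
    outputs = []
    for x in sequence:
        hidden = np.tanh(W_xh @ x + W_hh @ hidden + b)
        outputs.append(W_hy @ hidden)      # e.g., a prediction of the next element
    return outputs

sequence = [np.eye(n_in)[i] for i in [0, 2, 1, 3, 4]]  # one-hot toy inputs
print(run(sequence)[-1])
```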
  • Multilevel Exemplar Theory.Michael Walsh, Bernd Möbius, Travis Wade & Hinrich Schütze - 2010 - Cognitive Science 34 (4):537-582.
  • Is there an implicit level of representation?Annie Vinter & Pierre Perruchet - 1994 - Behavioral and Brain Sciences 17 (4):730-731.