Results for 'Simple recurrent network'

1000+ found
  1.
    Finite state automata and simple recurrent networks.Axel Cleeremans & David Servan-Schreiber - unknown
    We explore a network architecture introduced by Elman (1988) for predicting successive elements of a sequence. The network uses the pattern of activation over a set of hidden units from time-step t-1, together with element t, to predict element t + 1. When the network is trained with strings from a particular finite-state grammar, it can learn to be a perfect finite-state recognizer for the grammar. When the network has a minimal number of hidden units, patterns (...)
    22 citations
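    The architecture described in this abstract is concrete enough to sketch. Below is a minimal, illustrative Python/NumPy version of an Elman-style simple recurrent network (not the authors' code; sizes, weights, and the toy sequence are invented): the hidden state carried over from time-step t-1 is combined with a one-hot coding of element t to yield a distribution over element t+1.

```python
import numpy as np

# Minimal Elman-style SRN: the hidden ("context") state from t-1, together
# with a one-hot coding of element t, is used to predict element t+1.
rng = np.random.default_rng(0)
n_sym, n_hid = 7, 3                          # toy alphabet size, hidden units

W_xh = rng.normal(0, 0.1, (n_hid, n_sym))    # input -> hidden
W_hh = rng.normal(0, 0.1, (n_hid, n_hid))    # context (hidden at t-1) -> hidden
W_hy = rng.normal(0, 0.1, (n_sym, n_hid))    # hidden -> predicted next element

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def srn_predict(sequence):
    """Return, for each position t, a distribution over element t+1."""
    h = np.zeros(n_hid)                      # context units start at rest
    predictions = []
    for t in sequence:                       # t is an integer symbol index
        x = np.eye(n_sym)[t]                 # one-hot code for element t
        h = sigmoid(W_xh @ x + W_hh @ h)     # combines input t with hidden state from t-1
        predictions.append(softmax(W_hy @ h))
    return predictions

print(srn_predict([0, 3, 2, 5])[-1])         # predicted successors of the last element
```

    Training would adjust the three weight matrices from the prediction error at each step; the abstract's point is that, trained on strings from a finite-state grammar, the hidden-unit patterns can come to function as the grammar's states.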
  2.
    Simple recurrent networks can distinguish non-occurring from ungrammatical sentences given appropriate task structure: reply to Marcus.Douglas L. T. Rohde & David C. Plaut - 1999 - Cognition 73 (3):297-300.
    1 citation
  3.
    A Dual Simple Recurrent Network Model for Chunking and Abstract Processes in Sequence Learning.Lituan Wang, Yangqin Feng, Qiufang Fu, Jianyong Wang, Xunwei Sun, Xiaolan Fu, Lei Zhang & Zhang Yi - 2021 - Frontiers in Psychology 12.
    Although many studies have provided evidence that abstract knowledge can be acquired in artificial grammar learning, it remains unclear how abstract knowledge can be attained in sequence learning. To address this issue, we proposed a dual simple recurrent network model that includes a surface SRN encoding and predicting the surface properties of stimuli and an abstract SRN encoding and predicting the abstract properties of stimuli. The results of Simulations 1 and 2 showed that the DSRN model can (...)
  4.
    Language acquisition in the absence of explicit negative evidence: can simple recurrent networks obviate the need for domain-specific learning devices?Gary F. Marcus - 1999 - Cognition 73 (3):293-296.
    1 citation
  5.
    Convolutional Recurrent Neural Network for Fault Diagnosis of High-Speed Train Bogie.Kaiwei Liang, Na Qin, Deqing Huang & Yuanzhe Fu - 2018 - Complexity 2018:1-13.
    Timely detection and efficient recognition of faults are challenging for the bogies of high-speed trains, owing to the fact that different types of fault signals have similar characteristics in the same frequency range. Notice that convolutional neural networks are powerful in extracting high-level local features and that recurrent neural networks are capable of learning long-term context dependencies in vibration signals. In this paper, by combining CNN and RNN, a so-called convolutional recurrent neural network is proposed to diagnose (...)
    2 citations
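    As a rough illustration of the combination the abstract describes (a sketch only, not the paper's model, layer sizes, or data), the snippet below stacks a 1-D convolutional feature extractor on a recurrent layer: the convolutions pick up local features of a vibration signal, and a GRU integrates them over time before fault classes are scored.

```python
import torch
import torch.nn as nn

# Illustrative CNN + RNN hybrid for 1-D vibration signals; all sizes are invented.
class ConvRecurrentNet(nn.Module):
    def __init__(self, n_classes=7):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, stride=2, padding=4),   # local features
            nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=9, stride=2, padding=4),
            nn.ReLU(),
        )
        self.rnn = nn.GRU(input_size=32, hidden_size=64, batch_first=True)  # longer-range context
        self.head = nn.Linear(64, n_classes)                        # fault-class scores

    def forward(self, x):                  # x: (batch, 1, signal_length)
        feats = self.conv(x)               # (batch, 32, reduced_length)
        feats = feats.transpose(1, 2)      # (batch, reduced_length, 32)
        _, h_last = self.rnn(feats)        # final hidden state: (1, batch, 64)
        return self.head(h_last.squeeze(0))

logits = ConvRecurrentNet()(torch.randn(4, 1, 1024))   # four toy signals
print(logits.shape)                                     # torch.Size([4, 7])
```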
  6. Sequential Expectations: The Role of Prediction‐Based Learning in Language.Jennifer B. Misyak, Morten H. Christiansen & J. Bruce Tomblin - 2010 - Topics in Cognitive Science 2 (1):138-153.
    Prediction‐based processes appear to play an important role in language. Few studies, however, have sought to test the relationship within individuals between prediction learning and natural language processing. This paper builds upon existing statistical learning work using a novel paradigm for studying the on‐line learning of predictive dependencies. Within this paradigm, a new “prediction task” is introduced that provides a sensitive index of individual differences for developing probabilistic sequential expectations. Across three interrelated experiments, the prediction task and results thereof are (...)
    27 citations
  7.
    Learning Orthographic Structure With Sequential Generative Neural Networks.Alberto Testolin, Ivilin Stoianov, Alessandro Sperduti & Marco Zorzi - 2016 - Cognitive Science 40 (3):579-606.
    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine, a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can (...)
  8.
    Phenomenology, dynamical neural networks and brain function.Donald Borrett, Sean D. Kelly & Hon Kwan - 2000 - Philosophical Psychology 13 (2):213-228.
    Current cognitive science models of perception and action assume that the objects that we move toward and perceive are represented as determinate in our experience of them. A proper phenomenology of perception and action, however, shows that we experience objects indeterminately when we are perceiving them or moving toward them. This indeterminacy, as it relates to simple movement and perception, is captured in the proposed phenomenologically based recurrent network models of brain function. These models provide a possible (...)
    3 citations
  9.
    State‐Trace Analysis: Dissociable Processes in a Connectionist Network?Fayme Yeates, Andy J. Wills, Fergal W. Jones & Ian P. L. McLaren - 2015 - Cognitive Science 39 (5):1047-1061.
    Some argue the common practice of inferring multiple processes or systems from a dissociation is flawed. One proposed solution is state-trace analysis, which involves plotting, across two or more conditions of interest, performance measured by either two dependent variables, or two conditions of the same dependent measure. The resulting analysis is considered to provide evidence that either a single process underlies performance or there is evidence for more than one process. This article reports simulations using the simple recurrent (...)
    1 citation
  10.
    Fractal Analysis Illuminates the Form of Connectionist Structural Gradualness.Whitney Tabor, Pyeong Whan Cho & Emily Szkudlarek - 2013 - Topics in Cognitive Science 5 (3):634-667.
    We examine two connectionist networks—a fractal learning neural network (FLNN) and a Simple Recurrent Network (SRN)—that are trained to process center-embedded symbol sequences. Previous work provides evidence that connectionist networks trained on infinite-state languages tend to form fractal encodings. Most such work focuses on simple counting recursion cases (e.g., aⁿbⁿ), which are not comparable to the complex recursive patterns seen in natural language syntax. Here, we consider exponential state growth cases (including mirror recursion), describe a (...)
    3 citations
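    For concreteness, here is a toy generator (invented for illustration, not taken from the paper) for the two sequence types the abstract contrasts: counting recursion, where aⁿ is followed by bⁿ, and mirror recursion, where a string is followed by its reversal, so the number of states needed to track the dependencies grows exponentially with depth.

```python
import random

# Toy generators for counting recursion (a^n b^n) and mirror recursion
# (a string followed by its reverse); both yield center-embedded dependencies.
def counting_recursion(n):
    return "a" * n + "b" * n                  # e.g. "aaabbb"

def mirror_recursion(n, alphabet="ab", seed=None):
    rng = random.Random(seed)
    first = "".join(rng.choice(alphabet) for _ in range(n))
    return first + first[::-1]                # e.g. "abba", "baaaab"

print(counting_recursion(3))                  # aaabbb
print(mirror_recursion(3, seed=2))            # a mirror-recursion string of depth 3
```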
  11. The application of artificial neural networks to forecast financial time series.D. González-Cortés, E. Onieva, I. Pastor & J. Wu - forthcoming - Logic Journal of the IGPL.
    The amount of information that is produced on a daily basis in the financial markets is vast and complex; consequently, the development of systems that simplify decision-making is an essential endeavor. In this article, several intelligent systems are proposed and tested to predict the closing price of the IBEX 35 index using more than ten years of historical data and five distinct architectures for neural networks. A multi-layer perceptron was the first step, followed by a simple recurrent neural (...)
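    A minimal sketch of the simple-recurrent step in such a pipeline might look as follows (illustrative only; the article's architectures, features, and IBEX 35 data are not reproduced here): a recurrent layer reads a window of past closing prices and a linear layer outputs the predicted next close.

```python
import torch
import torch.nn as nn

# Toy simple-recurrent forecaster: a window of past closes -> next close.
class SimpleRecurrentForecaster(nn.Module):
    def __init__(self, hidden=32):
        super().__init__()
        self.rnn = nn.RNN(input_size=1, hidden_size=hidden, batch_first=True)
        self.out = nn.Linear(hidden, 1)

    def forward(self, window):                 # window: (batch, n_days, 1)
        _, h_last = self.rnn(window)           # final hidden state
        return self.out(h_last.squeeze(0))     # predicted next close: (batch, 1)

model = SimpleRecurrentForecaster()
past_closes = torch.randn(8, 30, 1)            # eight invented 30-day windows
prediction = model(past_closes)
loss = nn.functional.mse_loss(prediction, torch.randn(8, 1))   # toy targets
loss.backward()                                # one illustrative training step
```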
  12.
    Direct Associations or Internal Transformations? Exploring the Mechanisms Underlying Sequential Learning Behavior.Todd M. Gureckis & Bradley C. Love - 2010 - Cognitive Science 34 (1):10-50.
    8 citations
  13. Recurrent networks: learning algorithms.Kenji Doya - 2002 - In The Handbook of Brain Theory and Neural Networks. pp. 955--960.
  14.
    A recurrent network that performs a context-sensitive prediction task.Peter Grünwald - 1996 - In Garrison W. Cottrell (ed.), Proceedings of the Eighteenth Annual Conference of the Cognitive Science Society. Lawrence Erlbaum. pp. 18--335.
  15.
    Are feedforward and recurrent networks systematic? Analysis and implications for a connectionist cognitive architecture.S. Phillips - unknown
    Human cognition is said to be systematic: cognitive ability generalizes to structurally related behaviours. The connectionist approach to cognitive theorizing has been strongly criticized for its failure to explain systematicity. Demonstrations of generalization notwithstanding, I show that two widely used networks (feedforward and recurrent) do not support systematicity under the condition of local input/output representations. For a connectionist explanation of systematicity, these results leave two choices, either: (1) develop models capable of systematicity under local input/output representations; or (2) justify (...)
    6 citations
  16.
    Comparing direct and indirect measures of sequence learning.Axel Cleeremans - unknown
    Comparing the relative sensitivity of direct and indirect measures of learning is proposed as the best way to provide evidence for unconscious learning when both conceptual and operative definitions of awareness are lacking. This approach was first proposed by Reingold & Merikle (1988) in the context of subliminal perception. In this paper, we apply it to a choice reaction time task in which the material is generated based on a probabilistic finite-state grammar (Cleeremans, 1993). We show (1) that participants progressively (...)
    16 citations
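    The stimulus-generation side of such studies is easy to sketch. Below is a toy probabilistic finite-state grammar generator (states, symbols, and transition probabilities are invented and are not those of Cleeremans, 1993): each trial's stimulus is the symbol emitted on a probabilistically chosen arc out of the current state.

```python
import random

# Invented probabilistic finite-state grammar: state -> [(symbol, next state, p), ...]
GRAMMAR = {
    "S0": [("A", "S1", 0.5), ("B", "S2", 0.5)],
    "S1": [("C", "S0", 0.8), ("D", "S2", 0.2)],
    "S2": [("A", "S0", 0.3), ("D", "S1", 0.7)],
}

def generate(length, start="S0", seed=None):
    """Emit a symbol sequence by walking the grammar probabilistically."""
    rng = random.Random(seed)
    state, symbols = start, []
    for _ in range(length):
        r, cum = rng.random(), 0.0
        for symbol, nxt, p in GRAMMAR[state]:
            cum += p
            if r < cum:
                symbols.append(symbol)
                state = nxt
                break
    return symbols

print("".join(generate(20, seed=1)))     # e.g. a 20-trial stimulus sequence
```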
  17.
    Case Classification, Similarities, Spaces of Reasons, and Coherences.Marcello Guarini - unknown
    A simple recurrent artificial neural network is used to classify situations as permissible or impermissible. The trained ANN can be understood as having set up a similarity space of cases at the level of its internal or hidden units. An analysis of the network’s internal representations is undertaken using a new visualization technique for state space approaches to understanding similarity. Insights from the literature on moral philosophy pertaining to contributory standards will be used to interpret the (...)
    5 citations
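    The internal-representation analysis described here can be sketched as follows (the case names and activation vectors below are invented): collect the trained network's hidden-unit vector for each classified case, then inspect the similarity space those vectors form, for example via pairwise cosine similarities and a two-dimensional PCA projection.

```python
import numpy as np

# Invented hidden-unit activation vectors, one per classified case.
case_names = ["case_A", "case_B", "case_C", "case_D"]
hidden = np.array([
    [0.9, 0.1, 0.8],
    [0.8, 0.2, 0.7],
    [0.1, 0.9, 0.2],
    [0.2, 0.8, 0.1],
])

# Pairwise cosine similarities: cases the network treats alike sit close together.
unit = hidden / np.linalg.norm(hidden, axis=1, keepdims=True)
print(np.round(unit @ unit.T, 2))

# A 2-D view of the similarity space via PCA on the centred activations.
centred = hidden - hidden.mean(axis=0)
_, _, vt = np.linalg.svd(centred, full_matrices=False)
coords = centred @ vt[:2].T
for name, (x, y) in zip(case_names, coords):
    print(f"{name}: ({x:+.2f}, {y:+.2f})")
```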
  18.
    Implicit sequence learning: The truth is in the details.Axel Cleeremans & L. Jiménez - 1998 - In Michael A. Stadler & Peter A. Frensch (eds.), Handbook of Implicit Learning. Newbury Park, CA: Sage.
    Over the past decade, sequence learning has gradually become a central paradigm through which to study implicit learning. In this chapter, we start by briefly summarizing the results obtained with different variants of the sequence learning paradigm. We distinguish three subparadigms in terms of whether the stimulus material is generated either by following a fixed and repeating sequence (e.g., Nissen & Bullemer, 1987), by relying on a complex set of rules from which one can produce several alternative deterministic sequences (e.g., (...)
    10 citations
  19. Supervised learning in recurrent networks.Kenji Doya - 1995 - In Michael A. Arbib (ed.), Handbook of Brain Theory and Neural Networks. MIT Press.
  20.
    Two ways of learning associations.Luke Boucher & Zoltán Dienes - 2003 - Cognitive Science 27 (6):807-842.
    How people learn chunks or associations between adjacent items in sequences was modelled. Two previously successful models of how people learn artificial grammars were contrasted: the CCN, a network version of the competitive chunker of Servan‐Schreiber and Anderson [J. Exp. Psychol.: Learn. Mem. Cogn. 16 (1990) 592], which produces local and compositionally‐structured chunk representations acquired incrementally; and the simple recurrent network (SRN) of Elman [Cogn. Sci. 14 (1990) 179], which acquires distributed representations through error correction. The (...)
    14 citations
  21.
    Rules vs. statistics in implicit learning of biconditional grammars.Axel Cleeremans - unknown
    A significant part of everyday learning occurs incidentally — a process typically described as implicit learning. A central issue in this domain and others, such as language acquisition, is the extent to which performance depends on the acquisition and deployment of abstract rules. Shanks and colleagues [22], [11] have suggested (1) that discrimination between grammatical and ungrammatical instances of a biconditional grammar requires the acquisition and use of abstract rules, and (2) that training conditions — in particular whether instructions orient (...)
    3 citations
  22.
    Out of control: An associative account of congruency effects in sequence learning.Tom Beesley, Fergal W. Jones & David R. Shanks - 2012 - Consciousness and Cognition 21 (1):413-421.
    The demonstration of a sequential congruency effect in sequence learning has been offered as evidence for control processes that act to inhibit automatic response tendencies via unconscious conflict monitoring. Here we propose an alternative interpretation of this effect based on the associative learning of chains of sequenced contingencies. This account is supported by simulations with a Simple Recurrent Network, an associative model of sequence learning. We argue that the control- and associative-based accounts differ in their predictions concerning (...)
  23.
    Incrementality and Prediction in Human Sentence Processing.Gerry T. M. Altmann & Jelena Mirković - 2009 - Cognitive Science 33 (4):583-609.
    We identify a number of principles with respect to prediction that, we argue, underpin adult language comprehension: (a) comprehension consists in realizing a mapping between the unfolding sentence and the event representation corresponding to the real‐world event being described; (b) the realization of this mapping manifests as the ability to predict both how the language will unfold, and how the real‐world event would unfold if it were being experienced directly; (c) concurrent linguistic and nonlinguistic inputs, and the prior internal states (...)
    47 citations
  24.
    Incremental Sequence Learning.Axel Cleeremans - unknown
    As linguistic competence so clearly illustrates, processing sequences of events is a fundamental aspect of human cognition. For this reason perhaps, sequence learning behavior currently attracts considerable attention in both cognitive psychology and computational theory. In typical sequence learning situations, participants are asked to react to each element of sequentially structured visual sequences of events. An important issue in this context is to determine whether essentially associative processes are sufficient to understand human performance, or whether more powerful learning mechanisms are (...)
  25. Comparing direct and indirect measures of sequence learning.Luis Jiménez, Cástor Méndez & Axel Cleeremans - 1996 - Journal of Experimental Psychology 22 (4):948-969.
    Comparing the relative sensitivity of direct and indirect measures of learning is proposed as the best way to provide evidence for unconscious learning when both conceptual and operative definitions of awareness are lacking. This approach was first proposed by Reingold & Merikle (1988) in the context of subliminal perception. In this paper, we apply it to a choice reaction time task in which the material is generated based on a probabilistic finite-state grammar (Cleeremans, 1993). We show (1) that participants progressively (...)
    9 citations
  26.
    On the Meaning of Words and Dinosaur Bones: Lexical Knowledge Without a Lexicon.Jeffrey L. Elman - 2009 - Cognitive Science 33 (4):547-582.
    Although for many years a sharp distinction has been made in language research between rules and words—with primary interest on rules—this distinction is now blurred in many theories. If anything, the focus of attention has shifted in recent years in favor of words. Results from many different areas of language research suggest that the lexicon is representationally rich, that it is the source of much productive behavior, and that lexically specific information plays a critical and early role in the interpretation (...)
    58 citations
  27.
    Learning Representations of Wordforms With Recurrent Networks: Comment on Sibley, Kello, Plaut, & Elman (2008).Jeffrey S. Bowers & Colin J. Davis - 2009 - Cognitive Science 33 (7):1183-1186.
    Sibley et al. (2008) report a recurrent neural network model designed to learn wordform representations suitable for written and spoken word identification. The authors claim that their sequence encoder network overcomes a key limitation associated with models that code letters by position (e.g., CAT might be coded as C‐in‐position‐1, A‐in‐position‐2, T‐in‐position‐3). The problem with coding letters by position (slot‐coding) is that it is difficult to generalize knowledge across positions; for example, the overlap between CAT and TOMCAT is (...)
    4 citations
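    The slot-coding limitation the comment refers to is easy to show concretely (a toy illustration, not the models' actual representations): when letters are coded by absolute position, CAT and TOMCAT share no letter-in-position units, so whatever is learned about "CAT" in positions 1-3 does not transfer to the same letters in positions 4-6.

```python
# Slot coding: each letter is bound to its absolute position in the word.
def slot_code(word):
    return {f"{letter}@{i}" for i, letter in enumerate(word, start=1)}

cat, tomcat = slot_code("CAT"), slot_code("TOMCAT")
print(cat)              # the three letter-in-position units of CAT
print(cat & tomcat)     # set(): no shared units, despite TOMCAT containing CAT
```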
  28.
    Systematicity: Psychological evidence with connectionist implications.S. Phillips & G. S. Halford - unknown
    At root, the systematicity debate over classical versus connectionist explanations for cognitive architecture turns on quantifying the degree to which human cognition is systematic. We introduce into the debate recent psychological data that provides strong support for the purely structure-based generalizations claimed by Fodor and Pylyshyn (1988). We then show, via simulation, that two widely used connectionist models (feedforward and simple recurrent networks) do not capture the same degree of generalization as human subjects. However, we show that this (...)
    4 citations
  29.
    A Connectionist Model of Phonological Representation in Speech Perception.M. Gareth Gaskell, Mary Hare & William D. Marslen-Wilson - 1995 - Cognitive Science 19 (4):407-439.
    A number of recent studies have examined the effects of phonological variation on the perception of speech. These studies show that both the lexical representations of words and the mechanisms of lexical access are organized so that natural, systematic variation is tolerated by the perceptual system, while a general intolerance of random deviation is maintained. Lexical abstraction distinguishes between phonetic features that form the invariant core of a word and those that are susceptible to variation. Phonological inference relies on the (...)
    4 citations
  30.
    Rules vs. Statistics in Implicit Learning of Biconditional Grammars.Bert Timmermans - unknown
    A significant part of everyday learning occurs incidentally — a process typically described as implicit learning. A central issue in this domain and others, such as language acquisition, is the extent to which performance depends on the acquisition and deployment of abstract rules. Shanks and colleagues [22], [11] have suggested (1) that discrimination between grammatical and ungrammatical instances of a biconditional grammar requires the acquisition and use of abstract rules, and (2) that training conditions — in particular whether instructions orient (...)
    2 citations
  31. Constructive processes in immediate serial recall: A recurrent network model of the bigram frequency effect.M. Botvinick & D. C. Plaut - 2003 - In B. Kokinov & W. Hirst (eds.), Constructive Memory. New Bulgarian University. pp. 129--137.
  32.
    Rules versus Statistics in Biconditional Grammar Learning: A Simulation based on Shanks et al. (1997).Bert Timmermans - unknown
    A significant part of everyday learning occurs incidentally — a process typically described as implicit learning. A central issue in this and germane domains such as language acquisition is the extent to which performance depends on the acquisition and deployment of abstract rules. In an attempt to address this question, we show that the apparent use of such rules in a simple categorisation task of artificial grammar strings, as reported by Shanks, Johnstone, and Staggs (1997), can be simulated by (...)
    1 citation
  33.
    Large‐Scale Modeling of Wordform Learning and Representation.Daragh E. Sibley, Christopher T. Kello, David C. Plaut & Jeffrey L. Elman - 2008 - Cognitive Science 32 (4):741-754.
    The forms of words as they appear in text and speech are central to theories and models of lexical processing. Nonetheless, current methods for simulating their learning and representation fail to approach the scale and heterogeneity of real wordform lexicons. A connectionist architecture termed the sequence encoder is used to learn nearly 75,000 wordform representations through exposure to strings of stress‐marked phonemes or letters. First, the mechanisms and efficacy of the sequence encoder are demonstrated and shown to overcome problems with traditional slot‐based (...)
    9 citations
  34.
    Large‐Scale Modeling of Wordform Learning and Representation.Daragh E. Sibley, Christopher T. Kello, David C. Plaut & Jeffrey L. Elman - 2008 - Cognitive Science 32 (4):741-754.
    The forms of words as they appear in text and speech are central to theories and models of lexical processing. Nonetheless, current methods for simulating their learning and representation fail to approach the scale and heterogeneity of real wordform lexicons. A connectionist architecture termed the sequence encoder is used to learn nearly 75,000 wordform representations through exposure to strings of stress‐marked phonemes or letters. First, the mechanisms and efficacy of the sequence encoder are demonstrated and shown to overcome problems with traditional slot‐based (...)
    8 citations
  35.
    Two apparent 'counterexamples' to Marcus: A closer look. [REVIEW]Marius Vilcu & Robert F. Hadley - 2005 - Minds and Machines 15 (3-4):359-382.
    Marcus et al.’s experiment (1999) concerning infant ability to distinguish between differing syntactic structures has prompted connectionists to strive to show that certain types of neural networks can mimic the infants’ results. In this paper we take a closer look at two such attempts: Shultz and Bale [Shultz, T.R. and Bale, A.C. (2001), Infancy 2, pp. 501–536] Altmann and Dienes [Altmann, G.T.M. and Dienes, Z. (1999) Science 248, p. 875a]. We were not only interested in how well these two models (...)
    2 citations
  36.
    Biophysical approach to modeling reflection: basis, methods, results.S. I. Bartsev, G. M. Markova & A. I. Matveeva - 2023 - Philosophical Problems of IT and Cyberspace (PhilIT&C) 2:120-139.
    The approach used by physics, based on the identification and study of ideal objects (which is also the basis of biophysics), combined with von Neumann heuristic modeling and functional fractionation according to R. Rosen, is discussed as a tool for studying the properties of consciousness. The object of the study is a line of analog systems: the human brain, the vertebrate brain, the invertebrate brain, and artificial neural networks capable of reflection, which is a key property characteristic (...)
  37.
    Recurrent neural network-based models for recognizing requisite and effectuation parts in legal texts.Truong-Son Nguyen, Le-Minh Nguyen, Satoshi Tojo, Ken Satoh & Akira Shimazu - 2018 - Artificial Intelligence and Law 26 (2):169-199.
    This paper proposes several recurrent neural network-based models for recognizing requisite and effectuation parts in Legal Texts. Firstly, we propose a modification of BiLSTM-CRF model that allows the use of external features to improve the performance of deep learning models in case large annotated corpora are not available. However, this model can only recognize RE parts which are not overlapped. Secondly, we propose two approaches for recognizing overlapping RE parts including the cascading approach which uses the sequence of (...)
    10 citations
  38. Particularism, Analogy, and Moral Cognition.Marcello Guarini - 2010 - Minds and Machines 20 (3):385-422.
    ‘Particularism’ and ‘generalism’ refer to families of positions in the philosophy of moral reasoning, with the former playing down the importance of principles, rules or standards, and the latter stressing their importance. Part of the debate has taken an empirical turn, and this turn has implications for AI research and the philosophy of cognitive modeling. In this paper, Jonathan Dancy’s approach to particularism (arguably one of the best known and most radical approaches) is questioned both on logical and empirical grounds. (...)
    11 citations
  39.
    Flexibility and decoupling in Simple Temporal Networks.Michel Wilson, Tomas Klos, Cees Witteveen & Bob Huisman - 2014 - Artificial Intelligence 214 (C):26-44.
    1 citation
  40.
    Training Recurrent Neural Networks Using Optimization Layer-by-Layer Recursive Least Squares Algorithm for Vibration Signals System Identification and Fault Diagnostic Analysis.S.-Y. Cho, T. W. S. Chow & Y. Fang - 2001 - Journal of Intelligent Systems 11 (2):125-154.
  41.
    Recurrent quantum neural network and its applications.Laxmidhar Behera, Indrani Kar & Avshalom C. Elitzur - 2006 - In J. Tuszynski (ed.), The Emerging Physics of Consciousness. Springer Verlag. pp. 327--350.
  42.
    Biophysical approach to modeling reflection: basis, methods, results.S. I. Bartsev, G. M. Markova & A. I. Matveeva - forthcoming - Philosophical Problems of IT and Cyberspace (PhilIT&C).
    The approach used by physics, based on the identification and study of ideal objects (which is also the basis of biophysics), combined with von Neumann heuristic modeling and functional fractionation according to R. Rosen, is discussed as a tool for studying the properties of consciousness. The object of the study is a line of analog systems: the human brain, the vertebrate brain, the invertebrate brain, and artificial neural networks capable of reflection, which is a key property characteristic (...)
  43.
    A Recurrent Neural Network for Attenuating Non-cognitive Components of Pupil Dynamics.Sharath Koorathota, Kaveri Thakoor, Linbi Hong, Yaoli Mao, Patrick Adelman & Paul Sajda - 2021 - Frontiers in Psychology 12.
    There is increasing interest in how the pupil dynamics of the eye reflect underlying cognitive processes and brain states. Problematic, however, is that pupil changes can be due to non-cognitive factors, for example luminance changes in the environment, accommodation and movement. In this paper we consider how by modeling the response of the pupil in real-world environments we can capture the non-cognitive related changes and remove these to extract a residual signal which is a better index of cognition and performance. (...)
  44. A simple dynamic model for recurrent choice.D. G. S. Davis & E. R. Staddon - 1991 - Bulletin of the Psychonomic Society 29 (6):481-481.
  45.
    Can Recurrent Neural Networks Validate Usage-Based Theories of Grammar Acquisition?Ludovica Pannitto & Aurelie Herbelot - 2022 - Frontiers in Psychology 13.
    It has been shown that Recurrent Artificial Neural Networks automatically acquire some grammatical knowledge in the course of performing linguistic prediction tasks. The extent to which such networks can actually learn grammar is still an object of investigation. However, being mostly data-driven, they provide a natural testbed for usage-based theories of language acquisition. This mini-review gives an overview of the state of the field, focusing on the influence of the theoretical framework in the interpretation of results.
  46.
    A Simple Metric for Ad Hoc Network Adaptation.Stephen F. Bush - 2005 - IEEE Journal on Selected Areas in Communications 23 (12):2272--2287.
    This paper examines flexibility in ad hoc networks and suggests that, even with cross-layer design as a mechanism to improve adaptation, a fundamental limitation exists in the ability of a single optimization function, defined a priori, to adapt the network to meet all quality-of-service requirements. Thus, code implementing multiple algorithms will have to be positioned within the network. Active networking and programmable networking enable unprecedented autonomy and flexibility for ad hoc communication networks. However, in order to best leverage (...)
    1 citation
  47. Analogy as relational priming: A developmental and computational perspective on the origins of a complex cognitive skill.Robert Leech, Denis Mareschal & Richard P. Cooper - 2008 - Behavioral and Brain Sciences 31 (4):357-378.
    The development of analogical reasoning has traditionally been understood in terms of theories of adult competence. This approach emphasizes structured representations and structure mapping. In contrast, we argue that by taking a developmental perspective, analogical reasoning can be viewed as the product of a substantially different cognitive ability – relational priming. To illustrate this, we present a computational (here connectionist) account where analogy arises gradually as a by-product of pattern completion in a recurrent network. Initial exposure to a (...)
    10 citations
  48.
    Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.Courtney J. Spoerer, Patrick McClure & Nikolaus Kriegeskorte - 2017 - Frontiers in Psychology 8.
    2 citations
  49.
    Corrigendum: Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.Courtney J. Spoerer, Patrick McClure & Nikolaus Kriegeskorte - 2018 - Frontiers in Psychology 9.
  50.
    Neural networks discover a near-identity relation to distinguish simple syntactic forms.Thomas R. Shultz & Alan C. Bale - 2006 - Minds and Machines 16 (2):107-139.
    Computer simulations show that an unstructured neural-network model [Shultz, T. R., & Bale, A. C. (2001). Infancy, 2, 501–536] covers the essential features of infant learning of simple grammars in an artificial language [Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Science, 283, 77–80], and generalizes to examples both outside and inside of the range of training sentences. Knowledge-representation analyses confirm that these networks discover that duplicate words in the sentences are nearly identical and (...)
    1 citation
1 — 50 / 1000