Results for 'Incremental learning'

987 found
  1.
    Incremental learning of gestures for human–robot interaction. Shogo Okada, Yoichi Kobayashi, Satoshi Ishibashi & Toyoaki Nishida - 2010 - AI and Society 25 (2):155-168.
    For a robot to cohabit with people, it should be able to learn people’s nonverbal social behavior from experience. In this paper, we propose a novel machine learning method for recognizing gestures used in interaction and communication. Our method enables robots to learn gestures incrementally during human–robot interaction in an unsupervised manner, and it allows the user to leave the number and types of gestures undefined prior to learning. The proposed method (HB-SOINN) is based on a self-organizing incremental neural network and the hidden Markov model. We have added an interactive learning mechanism to HB-SOINN to prevent a single cluster from failing because of polysemy, that is, from being assigned more than one meaning. For example, the sentence “Keep on going left slowly” carries three meanings: “keep on” (1), “going left” (2), and “slowly” (3). We experimentally tested the clustering performance of the proposed method on gesture data recorded with a motion capture device. The results show that the classification performance of HB-SOINN exceeds that of conventional clustering approaches, and that the interactive learning function further improves the learning performance of HB-SOINN.
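    The HB-SOINN abstract describes the core move of SOINN-style incremental clustering: each incoming observation either updates its nearest existing cluster or, if sufficiently novel, spawns a new one, so the number of gesture classes need not be fixed in advance. Below is a minimal sketch of that idea, not the published HB-SOINN algorithm; the novelty threshold and learning rate are illustrative assumptions.

```python
import numpy as np

class IncrementalClusterer:
    """Toy SOINN-flavored clusterer: grow a node on novel input,
    otherwise nudge the nearest node toward the input."""

    def __init__(self, novelty_threshold=2.0, lr=0.1):
        self.nodes = []      # cluster prototypes, created on the fly
        self.threshold = novelty_threshold
        self.lr = lr

    def partial_fit(self, x):
        x = np.asarray(x, dtype=float)
        if self.nodes:
            dists = [np.linalg.norm(x - n) for n in self.nodes]
            i = int(np.argmin(dists))
            if dists[i] <= self.threshold:
                # Familiar input: adapt the winning prototype.
                self.nodes[i] += self.lr * (x - self.nodes[i])
                return i
        # Novel input: create a new cluster.
        self.nodes.append(x.copy())
        return len(self.nodes) - 1

clusterer = IncrementalClusterer()
for frame in np.random.randn(200, 3):   # stand-in for motion-capture features
    clusterer.partial_fit(frame)
print(len(clusterer.nodes), "gesture clusters so far")
```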
  2.
    Incremental learning from multiple analogies. Mark H. Burstein - 1988 - In Armand Prieditis (ed.), Analogica. Morgan Kaufmann Publishers. pp. 37-62.
  3.
    Integrating Incremental Learning and Episodic Memory Models of the Hippocampal Region. M. Meeter, C. E. Myers & M. A. Gluck - 2005 - Psychological Review 112 (3):560-585.
  4.
    An Incremental Learning Ensemble Strategy for Industrial Process Soft Sensors. Huixin Tian, Minwei Shuai, Kun Li & Xiao Peng - 2019 - Complexity 2019:1-12.
    1 citation
  5.
    Incremental learning with partial instance memory. Marcus A. Maloof & Ryszard S. Michalski - 2004 - Artificial Intelligence 154 (1-2):95-126.
  6.
    Incremental Learning in Terms of Output Attributes. Sheng-Uei Guan & Peng Li - 2004 - Journal of Intelligent Systems 13 (2):95-122.
    1 citation
  7.
    The dark side of incremental learning: A model of cumulative semantic interference during lexical access in speech production. Gary M. Oppenheim, Gary S. Dell & Myrna F. Schwartz - 2010 - Cognition 114 (2):227-252.
    35 citations
  8.
    The dark side of incremental learning: A model of cumulative semantic interference during lexical access in speech production. Gary M. Oppenheim, Gary S. Dell & Myrna F. Schwartz - 2010 - Cognition 114 (2):227-252.
    39 citations
  9.
    All-or-none versus incremental learning. Joan E. Jones - 1962 - Psychological Review 69 (2):156-160.
  10.
    A Hierarchical Incremental Learning Approach to Task Decomposition. Sheng-Uei Guan & Peng Li - 2002 - Journal of Intelligent Systems 12 (3):201-226.
  11.
    Interference effects of phonological similarity in word production arise from competitive incremental learning. Qingqing Qu, Chen Feng & Markus F. Damian - 2021 - Cognition 212 (C):104738.
  12.
    Incremental Bayesian Category Learning From Natural Language. Lea Frermann & Mirella Lapata - 2016 - Cognitive Science 40 (6):1333-1381.
    Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper, we focus on categories acquired from natural language stimuli, that is, words. We present a Bayesian model that, unlike previous work, learns both categories and their features in a single process. We model category induction as two interrelated subproblems: the acquisition of features that discriminate among categories, and the grouping of concepts into categories based on those (...)
    3 citations
  13.
    Incremental Sequence Learning. Axel Cleeremans - unknown
    As linguistic competence so clearly illustrates, processing sequences of events is a fundamental aspect of human cognition. For this reason perhaps, sequence learning behavior currently attracts considerable attention in both cognitive psychology and computational theory. In typical sequence learning situations, participants are asked to react to each element of sequentially structured visual sequences of events. An important issue in this context is to determine whether essentially associative processes are sufficient to understand human performance, or whether more powerful (...) mechanisms are necessary. To address this issue, we explore how well human participants and connectionist models are capable of learning sequential material that involves complex, disjoint, long-distance contingencies. We show that the popular Simple Recurrent Network model (Elman, 1990), which has otherwise been shown to account for a variety of empirical findings (Cleeremans, 1993), fails to account for human performance in several experimental situations meant to test the model’s specific predictions. In previous research (Cleeremans, 1993), briefly described in this paper, the structure of center-embedded sequential structures was manipulated to be strictly identical or probabilistically different as a function of the elements surrounding the embedding. While the SRN could only learn in the second case, human subjects were found to be insensitive to the manipulation. In the new experiment described in this paper, we tested the idea that performance benefits from “starting small effects” (Elman, 1993) by contrasting two conditions in which the training regimen was either incremental or not. Again, while the SRN is only capable of learning in the first case, human subjects were able to learn in both. We suggest an alternative model based on Maskara & Noetzel’s (1991) Auto-Associative Recurrent Network as a way to overcome the SRN model’s failure to account for the empirical findings.
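    The Simple Recurrent Network that Cleeremans tests (Elman, 1990) is compact enough to sketch: the previous hidden state is fed back as context, and the network is trained to predict the next element of the sequence. A bare-bones numpy version follows; layer sizes and the learning rate are arbitrary choices, and backpropagation is truncated to one step for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid = 4, 8                      # one-hot symbols, hidden units

W_xh = rng.normal(0, 0.1, (n_hid, n_in))
W_hh = rng.normal(0, 0.1, (n_hid, n_hid))   # context (recurrent) weights
W_hy = rng.normal(0, 0.1, (n_in, n_hid))
lr = 0.1

def one_hot(i):
    v = np.zeros(n_in); v[i] = 1.0; return v

def softmax(z):
    e = np.exp(z - z.max()); return e / e.sum()

sequence = [0, 1, 2, 3] * 50            # toy deterministic sequence
h = np.zeros(n_hid)
for t in range(len(sequence) - 1):
    x, target = one_hot(sequence[t]), sequence[t + 1]
    h_new = np.tanh(W_xh @ x + W_hh @ h)     # hidden state with context
    y = softmax(W_hy @ h_new)                # next-element prediction
    dy = y.copy(); dy[target] -= 1.0         # cross-entropy gradient
    dh = (W_hy.T @ dy) * (1 - h_new ** 2)    # one-step truncated backprop
    W_hy -= lr * np.outer(dy, h_new)
    W_xh -= lr * np.outer(dh, x)
    W_hh -= lr * np.outer(dh, h)
    h = h_new
print("P(next | last step):", np.round(y, 2))
```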
  14.
    Incremental implicit learning of bundles of statistical patterns. Ting Qian, T. Florian Jaeger & Richard N. Aslin - 2016 - Cognition 157 (C):156-173.
    5 citations
  15.
    Incremental language learning: two and three year olds' acquisition of adjectives. T. Mintz & L. Gleitman - 1998 - In M. A. Gernsbacher & S. J. Derry (eds.), Proceedings of the 20th Annual Conference of the Cognitive Science Society. Lawrence Erlbaum.
  16.
    Distal learning of the incremental capacity curve of a LiFePO4 battery. Luciano Sánchez, José Otero, Manuela González, David Anseán, Alana A. Zülke & Inés Couso - 2022 - Logic Journal of the IGPL 30 (2):301-313.
    An intelligent model of the incremental capacity curve of an automotive lithium-ferrophosphate battery is presented. The relative heights of the two major peaks of the IC curve can be acquired from high-current discharges, enabling state-of-health estimation while the vehicle is being operated; in certain cases, aging mechanisms can also be suggested. Our model has been validated using a large dataset representing different degradation scenarios, obtained from a recently available open-source database.
  17.
    Learning and incremental dynamic programming. Andrew G. Barto - 1991 - Behavioral and Brain Sciences 14 (1):94-95.
    1 citation
  18.
    Hierarchical Incremental Class Learning with Output Parallelism. Sheng-Uei Guan & Kai Wang - 2007 - Journal of Intelligent Systems 16 (2):167-193.
  19.
    The Dynamics of Perceptual Learning: An Incremental Reweighting Model. Alexander A. Petrov, Barbara Anne Dosher & Zhong-Lin Lu - 2005 - Psychological Review 112 (4):715-743.
  20.
    Effects of increments of reinforcement in human probability learning. Maynard W. Shelly - 1960 - Journal of Experimental Psychology 59 (6):345.
  21. A probabilistic incremental model of word learning in the presence of referential uncertainty. Afsaneh Fazly, Afra Alishahi & Suzanne Stevenson - 2008 - In B. C. Love, K. McRae & V. M. Sloutsky (eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society. Cognitive Science Society.
  22.
    Two-Phase Incremental Kernel PCA for Learning Massive or Online Datasets. Feng Zhao, Islem Rekik, Seong-Whan Lee, Jing Liu, Junying Zhang & Dinggang Shen - 2019 - Complexity 2019:1-17.
    1 citation
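    The two-phase incremental kernel PCA proposed here is not reproduced below, but the underlying motif — updating a low-dimensional subspace from data that arrives in chunks rather than all at once — can be illustrated with scikit-learn's linear IncrementalPCA. The kernelization is the paper's contribution; chunk and component counts below are arbitrary.

```python
import numpy as np
from sklearn.decomposition import IncrementalPCA

rng = np.random.default_rng(0)
ipca = IncrementalPCA(n_components=5)

# Stream the data in chunks, as if the full matrix were too large for memory.
for _ in range(20):
    chunk = rng.normal(size=(100, 50))   # stand-in for one batch of samples
    ipca.partial_fit(chunk)             # updates the subspace incrementally

print(ipca.transform(rng.normal(size=(10, 50))).shape)   # -> (10, 5)
```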
  23.
    Sleep-Dependent Memory Consolidation and Incremental Sentence Comprehension: Computational Dependencies during Language Learning as Revealed by Neuronal Oscillations. Zachariah R. Cross, Mark J. Kohler, Matthias Schlesewsky, M. G. Gaskell & Ina Bornkessel-Schlesewsky - 2018 - Frontiers in Human Neuroscience 12.
  24.
    Incremental Adaptive Control of a Class of Nonlinear Nonaffine Systems. Yizhao Zhan, Shengxiang Zou, Xiongxiong He & Mingxuan Sun - 2022 - Complexity 2022:1-19.
    As a familiar class of nonlinear systems, nonaffine systems are frequently encountered in practical applications. Currently, in the context of learning control, there is a lack of research results on this general class of nonlinear systems, especially for the case of performing infinite-interval tasks. This article focuses on incremental adaptive control for nonlinear systems in nonaffine form, without requiring periodicity or repeatability. Instead of using the integral adaptation, incremental adaptive mechanisms are developed and the corresponding (...)
  25.
    Different strategy of hand choice after learning of constant and incremental dynamical perturbation in arm reaching. Chie Habagishi, Shoko Kasuga, Yohei Otaka, Meigen Liu & Junichi Ushiba - 2014 - Frontiers in Human Neuroscience 8.
  26. Learning a Generative Probabilistic Grammar of Experience: A Process‐Level Model of Language Acquisition. Oren Kolodny, Arnon Lotem & Shimon Edelman - 2014 - Cognitive Science 38 (4):227-267.
    We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The grammar constructed (...)
    5 citations
  27.
    Incremental acquisition of paired-associate lists. George Mandler - 1970 - Journal of Experimental Psychology 84 (1):185.
  28.
    Learning a Generative Probabilistic Grammar of Experience: A Process‐Level Model of Language Acquisition. Oren Kolodny, Arnon Lotem & Shimon Edelman - 2015 - Cognitive Science 39 (2):227-267.
    We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural‐language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The grammar constructed (...)
    2 citations
  29. Deep learning and synthetic media. Raphaël Millière - 2022 - Synthese 200 (3):1-27.
    Deep learning algorithms are rapidly changing the way in which audiovisual media can be produced. Synthetic audiovisual media generated with deep learning—often subsumed colloquially under the label “deepfakes”—have a number of impressive characteristics; they are increasingly trivial to produce, and can be indistinguishable from real sounds and images recorded with a sensor. Much attention has been dedicated to ethical concerns raised by this technological development. Here, I focus instead on a set of issues related to the notion of (...)
    2 citations
  30.
    In AI We Trust Incrementally: a Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions. Andrea Ferrario, Michele Loi & Eleonora Viganò - 2020 - Philosophy and Technology 33 (3):523-539.
    Real engines of the artificial intelligence revolution, machine learning models and algorithms are embedded nowadays in many services and products around us. We argue that, as a society, it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs, in order to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. (...)
    15 citations
  31. Is It That Difficult to Find a Good Preference Order for the Incremental Algorithm? Emiel Krahmer, Ruud Koolen & Mariët Theune - 2012 - Cognitive Science 36 (5):837-841.
    In a recent article published in this journal (van Deemter, Gatt, van der Sluis, & Power, 2012), the authors criticize the Incremental Algorithm (a well-known algorithm for the generation of referring expressions due to Dale & Reiter, 1995, also in this journal) because of its strong reliance on a pre-determined, domain-dependent Preference Order. The authors argue that there are potentially many different Preference Orders that could be considered, while often no evidence is available to determine which is a good (...)
    3 citations
  32.
    Social learning and teaching in chimpanzees. Richard Moore - 2013 - Biology and Philosophy 28 (6):879-901.
    There is increasing evidence that some behavioural differences between groups of chimpanzees can be attributed neither to genetic nor to ecological variation. Such differences are likely to be maintained by social learning. While humans teach their offspring, and acquire cultural traits through imitative learning, there is little evidence of such behaviours in chimpanzees. However, by appealing only to incremental changes in motivation, attention and attention-soliciting behaviour, and without expensive changes in cognition, we can hypothesise the possible emergence (...)
    10 citations
  33.
    Examining the Effects of Incremental Case Presentation and Forecasting Outcomes on Case-Based Ethics Instruction. Alexandra E. MacDougall, Lauren N. Harkrider, Zhanna Bagdasarov, James F. Johnson, Chase E. Thiel, Juandre Peacock, Michael D. Mumford, Lynn D. Devenport & Shane Connelly - 2014 - Ethics and Behavior 24 (2):126-150.
    Case-based reasoning has long been used to facilitate instructional effectiveness. Although much remains to be known concerning the most beneficial way to present case material, recent literature suggests that simplifying case material is favorable. Accordingly, the current study manipulated two instructional techniques, incremental case presentation and forecasting outcomes, in a training environment in an attempt to better understand the utility of simplified versus complicated case presentation for learning. Findings suggest that pairing these two cognitively demanding techniques reduces satisfaction (...)
    8 citations
  34.
    Machine learning by imitating human learning. Chang Kuo-Chin, Hong Tzung-Pei & Tseng Shian-Shyong - 1996 - Minds and Machines 6 (2):203-228.
    Learning general concepts in imperfect environments is difficult since training instances often include noisy data, inconclusive data, incomplete data, unknown attributes, unknown attribute values and other barriers to effective learning. It is well known that people can learn effectively in imperfect environments, and can manage to process very large amounts of data. Imitating human learning behavior therefore provides a useful model for machine learning in real-world applications. This paper proposes a new, more effective way to represent (...)
  35.
    Mental Magnitudes and Increments of Mental Magnitudes. Matthew Katz - 2013 - Review of Philosophy and Psychology 4 (4):675-703.
    There is at present a lively debate in cognitive psychology concerning the origin of natural number concepts. At the center of this debate is the system of mental magnitudes, an innately given cognitive mechanism that represents cardinality and that performs a variety of arithmetical operations. Most participants in the debate argue that this system cannot be the sole source of natural number concepts, because they take it to represent cardinality approximately while natural number concepts are precise. In this paper, I (...)
    1 citation
  36. Assessing the Incremental Algorithm: A Response to Krahmer et al. Kees van Deemter, Albert Gatt, Ielka van der Sluis & Richard Power - 2012 - Cognitive Science 36 (5):842-845.
    This response discusses the experiment reported in Krahmer et al.’s Letter to the Editor of Cognitive Science. We observe that their results do not tell us whether the Incremental Algorithm is better or worse than its competitors, and we speculate about implications for reference in complex domains, and for learning from “normal” (i.e., non-semantically-balanced) corpora.
  37.
    Representational trajectories in connectionist learning. Andy Clark - 1994 - Minds and Machines 4 (3):317-32.
    The paper considers the problems involved in getting neural networks to learn about highly structured task domains. A central problem concerns the tendency of networks to learn only a set of shallow (non-generalizable) representations for the task, i.e., to miss the deep organizing features of the domain. Various solutions are examined, including task specific network configuration and incremental learning. The latter strategy is the more attractive, since it holds out the promise of a task-independent solution to the problem. (...)
    2 citations
  38. Relational learning re-examined. Chris Thornton & Andy Clark - 1997 - Behavioral and Brain Sciences 20 (1):83-83.
    We argue that existing learning algorithms are often poorly equipped to solve problems involving a certain type of important and widespread regularity that we call “type-2 regularity.” The solution in these cases is to trade achieved representation against computational search. We investigate several ways in which such a trade-off may be pursued, including simple incremental learning, modular connectionism, and the developmental hypothesis of “representational redescription”.
  39.
    Dynamic Assessment of Reading Difficulties: Predictive and Incremental Validity on Attitude toward Reading and the Use of Dialogue/Participation Strategies in Classroom Activities. Juan-José Navarro & Laura Lara - 2017 - Frontiers in Psychology 8:230315.
    Dynamic Assessment (DA) has been shown to have more predictive value than conventional tests for academic performance. However, in relation to reading difficulties, further research is needed to determine the predictive validity of DA for specific aspects of the different processes involved in reading and the differential validity of DA for different subgroups of students with an academic disadvantage. This paper analyzes the implementation of a DA device that evaluates processes involved in reading (EDPL) among 60 students with reading comprehension (...)
    1 citation
  40.
    Learning to Attend: A Connectionist Model of Situated Language Comprehension. Marshall R. Mayberry, Matthew W. Crocker & Pia Knoeferle - 2009 - Cognitive Science 33 (3):449-496.
    Evidence from numerous studies using the visual world paradigm has revealed both that spoken language can rapidly guide attention in a related visual scene and that scene information can immediately influence comprehension processes. These findings motivated the coordinated interplay account (Knoeferle & Crocker, 2006) of situated comprehension, which claims that utterance‐mediated attention crucially underlies this closely coordinated interaction of language and scene processing. We present a recurrent sigma‐pi neural network that models the rapid use of scene information, exploiting an utterance‐mediated (...)
    9 citations
  41.
    Trading spaces: Computation, representation, and the limits of uninformed learning. Andy Clark & Chris Thornton - 1997 - Behavioral and Brain Sciences 20 (1):57-66.
    Some regularities enjoy only an attenuated existence in a body of training data. These are regularities whose statistical visibility depends on some systematic recoding of the data. The space of possible recodings is, however, infinitely large – it is the space of applicable Turing machines. As a result, mappings that pivot on such attenuated regularities cannot, in general, be found by brute-force search. The class of problems that present such mappings we call the class of “type-2 problems.” Type-1 problems, by (...)
    44 citations
  42.
    Two ways of learning associations. Luke Boucher & Zoltán Dienes - 2003 - Cognitive Science 27 (6):807-842.
    How people learn chunks or associations between adjacent items in sequences was modelled. Two previously successful models of how people learn artificial grammars were contrasted: the CCN, a network version of the competitive chunker of Servan‐Schreiber and Anderson [J. Exp. Psychol.: Learn. Mem. Cogn. 16 (1990) 592], which produces local and compositionally‐structured chunk representations acquired incrementally; and the simple recurrent network (SRN) of Elman [Cogn. Sci. 14 (1990) 179], which acquires distributed representations through error correction. The models' susceptibility to two (...)
    14 citations
  43.
    A Probabilistic Computational Model of Cross-Situational Word Learning. Afsaneh Fazly, Afra Alishahi & Suzanne Stevenson - 2010 - Cognitive Science 34 (6):1017-1063.
    Words are the essence of communication: They are the building blocks of any language. Learning the meaning of words is thus one of the most important aspects of language acquisition: Children must first learn words before they can combine them into complex utterances. Many theories have been developed to explain the impressive efficiency of young children in acquiring the vocabulary of their language, as well as the developmental patterns observed in the course of lexical acquisition. A major source of (...)
    26 citations
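    Fazly, Alishahi & Stevenson's model maintains, for each word, a probability distribution over candidate meanings and refines it incrementally from every ambiguous scene. Below is a drastically simplified sketch of cross-situational learning via normalized co-occurrence counts; the published model uses an incremental, EM-style alignment, and all names here are illustrative.

```python
from collections import defaultdict

# cooc[word][meaning] accumulates evidence across ambiguous scenes.
cooc = defaultdict(lambda: defaultdict(float))

def observe(words, meanings):
    """Each word shares its evidence across all candidate
    meanings present in the current scene."""
    for w in words:
        for m in meanings:
            cooc[w][m] += 1.0 / len(meanings)

def meaning_probs(word):
    total = sum(cooc[word].values())
    return {m: v / total for m, v in cooc[word].items()}

observe(["ball"], {"BALL", "DOG"})   # ambiguous first exposure
observe(["ball"], {"BALL", "CUP"})   # ambiguity resolves across scenes
print(meaning_probs("ball"))         # BALL now has the highest probability
```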
  44. Semantic Holism and Language Learning. Martin L. Jönsson - 2014 - Journal of Philosophical Logic 43 (4):725-759.
    Holistic theories of meaning have, at least since Dummett’s Frege: The Philosophy of Language, been assumed to be problematic from the perspective of the incremental nature of natural language learning. In this essay I argue that the general relationship between holism and language learning is in fact the opposite of that claimed by Dummett. It is only given a particular form of language learning, and a particular form of holism, that there is a problem at all; (...)
    9 citations
  45.
    Stubborn learning. Jean-François Laslier & Bernard Walliser - 2015 - Theory and Decision 79 (1):51-93.
    The paper studies a specific adaptive learning rule when each player faces a unidimensional strategy set. The rule states that a player keeps on incrementing her strategy in the same direction if her utility increased and reverses direction if it decreased. The paper concentrates on games on the square [0,1]×[0,1] as mixed extensions of 2×2 games. We study in general (...)
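    The adaptive rule Laslier and Walliser study is fully specified by the abstract: keep incrementing your strategy in the same direction while utility rises, and reverse direction when it falls. Below is a single-agent sketch on a toy payoff; the step size and the payoff function are illustrative assumptions.

```python
def stubborn_learning(utility, x0=0.2, step=0.05, rounds=60):
    """Increment the strategy in the current direction if utility
    improved on the last round, otherwise reverse direction."""
    x, direction = x0, +1
    last_u = utility(x)
    for _ in range(rounds):
        x = min(1.0, max(0.0, x + direction * step))   # stay in [0, 1]
        u = utility(x)
        if u < last_u:
            direction = -direction   # payoff fell: turn around
        last_u = u
    return x

# Toy concave payoff peaking at x = 0.7; the rule settles near the peak.
print(round(stubborn_learning(lambda x: -(x - 0.7) ** 2), 2))
```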
  46.
    EARSHOT: A Minimal Neural Network Model of Incremental Human Speech Recognition. James S. Magnuson, Heejo You, Sahil Luthra, Monica Li, Hosung Nam, Monty Escabí, Kevin Brown, Paul D. Allopenna, Rachel M. Theodore, Nicholas Monto & Jay G. Rueckl - 2020 - Cognitive Science 44 (4):e12823.
    Despite the lack of invariance problem (the many‐to‐many mapping between acoustics and percepts), human listeners experience phonetic constancy and typically perceive what a speaker intends. Most models of human speech recognition (HSR) have side‐stepped this problem, working with abstract, idealized inputs and deferring the challenge of working with real speech. In contrast, carefully engineered deep learning networks allow robust, real‐world automatic speech recognition (ASR). However, the complexities of deep learning architectures and training regimens make it difficult to use (...)
    4 citations
  47.
    Trading Spaces: Connectionism and the Limits of Uninformed Learning. Andy Clark & Chris Thornton - unknown
    It is widely appreciated that the difficulty of a particular computation varies according to how the input data are presented. What is less understood is the effect of this computation/representation tradeoff within familiar learning paradigms. We argue that existing learning algorithms are often poorly equipped to solve problems involving a certain type of important and widespread regularity, which we call 'type-2' regularity. The solution in these cases is to trade achieved representation against computational search. We investigate several ways (...)
    3 citations
  48.
    Robot Motion Planning Method Based on Incremental High-Dimensional Mixture Probabilistic Model. Fusheng Zha, Yizhou Liu, Xin Wang, Fei Chen, Jingxuan Li & Wei Guo - 2018 - Complexity 2018:1-14.
    Sampling-based motion planners are the mainstream method for solving the motion planning problem in high-dimensional spaces. In the process of exploring the robot's configuration space, this type of algorithm needs to perform collision queries on a large number of samples, which greatly limits planning efficiency. Therefore, this paper uses machine learning methods to establish a probabilistic model of the obstacle region in configuration space by learning from a large number of labeled samples. Based on this, the high-dimensional samples’ (...)
  49.
    Under What Conditions Can Recursion Be Learned? Effects of Starting Small in Artificial Grammar Learning of Center‐Embedded Structure. Fenna H. Poletiek, Christopher M. Conway, Michelle R. Ellefson, Jun Lai, Bruno R. Bocanegra & Morten H. Christiansen - 2018 - Cognitive Science 42 (8):2855-2889.
    It has been suggested that external and/or internal limitations paradoxically may lead to superior learning, that is, the concepts of starting small and less is more (Elman, 1993; Newport, 1990). In this paper, we explore the type of incremental ordering during training that might help learning, and what mechanism explains this facilitation. We report four artificial grammar learning experiments with human participants. In Experiments 1a and 1b we found a beneficial effect of starting small using two (...)
    2 citations
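    "Starting small" here means ordering training items from simple to complex, for instance presenting center-embedded strings of depth 1 before depths 2 and 3. Below is a sketch of such an incremental regimen for an AnBn-style center-embedded grammar; the vocabulary and pairing scheme are illustrative, not the materials of these experiments.

```python
import random

PAIRS = [("a1", "b1"), ("a2", "b2"), ("a3", "b3")]   # dependent A-B pairs

def center_embedded(depth, rng):
    """Build e.g. 'a1 a2 b2 b1': the last-opened A is closed first,
    giving nested (center-embedded) dependencies."""
    chosen = [rng.choice(PAIRS) for _ in range(depth)]
    return [a for a, _ in chosen] + [b for _, b in reversed(chosen)]

rng = random.Random(0)
# Starting-small curriculum: all depth-1 items, then depth 2, then depth 3.
curriculum = [center_embedded(d, rng) for d in (1, 2, 3) for _ in range(4)]
for item in curriculum:
    print(" ".join(item))
```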
  50.
    A comparison of distributed machine learning methods for the support of “many labs” collaborations in computational modeling of decision making. Lili Zhang, Himanshu Vashisht, Andrey Totev, Nam Trinh & Tomas Ward - 2022 - Frontiers in Psychology 13.
    Deep learning models are powerful tools for representing the complex learning processes and decision-making strategies used by humans. Such neural network models make fewer assumptions about the underlying mechanisms, thus providing experimental flexibility in terms of applicability. However, this comes at the cost of involving a larger number of parameters, requiring significantly more data for effective learning. This presents practical challenges given that most cognitive experiments involve relatively small numbers of subjects. Laboratory collaborations are a natural way (...)
1 — 50 / 987