Results for 'Probabilistic learning'

977 found
  1. Probabilistic Learning and Psychological Similarity. Nina Poth - 2023 - Entropy 25 (10).
    The notions of psychological similarity and probabilistic learning are key posits in cognitive, computational, and developmental psychology and in machine learning. However, their explanatory relationship is rarely made explicit within and across these research fields. This opinionated review critically evaluates how these notions can mutually inform each other within computational cognitive science. Using probabilistic models of concept learning as a case study, I argue that two notions of psychological similarity offer important normative constraints to guide (...)
  2. Probabilistic Learning Models. Peter M. Williams - 2001 - In David Corfield & Jon Williamson (eds.), Foundations of Bayesianism. Kluwer Academic Publishers. pp. 117-134.
  3. What is in the feedback? Effect of induced happiness vs. sadness on probabilistic learning with vs. without exploration. Jasmina Bakic, Rudi De Raedt, Marieke Jepma & Gilles Pourtois - 2015 - Frontiers in Human Neuroscience 9.
  4. A Probabilistic Computational Model of Cross-Situational Word Learning. Afsaneh Fazly, Afra Alishahi & Suzanne Stevenson - 2010 - Cognitive Science 34 (6):1017-1063.
    Words are the essence of communication: They are the building blocks of any language. Learning the meaning of words is thus one of the most important aspects of language acquisition: Children must first learn words before they can combine them into complex utterances. Many theories have been developed to explain the impressive efficiency of young children in acquiring the vocabulary of their language, as well as the developmental patterns observed in the course of lexical acquisition. A major source of (...)
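    The cross-situational idea behind this entry (each word-scene pairing is ambiguous on its own, but aggregated co-occurrence evidence disambiguates word meanings) can be sketched in a few lines. This is only an illustrative toy, not the model of Fazly et al.; the prior value, update rule, and vocabulary below are assumptions made for the example.

    ```python
    # Toy cross-situational word learner (illustrative sketch only).
    PRIOR = 0.01   # small initial association for newly seen pairs (assumed)
    assoc = {}     # assoc[word][meaning] -> accumulated evidence

    def meaning_prob(word, meaning):
        """Current P(meaning | word), normalised over known meanings."""
        table = assoc[word]
        return table[meaning] / sum(table.values())

    def observe(words, meanings):
        """Update associations from one utterance paired with one scene."""
        for w in words:
            for m in meanings:
                assoc.setdefault(w, {}).setdefault(m, PRIOR)
        for m in meanings:
            # Credit each word in proportion to current belief that it means m.
            weights = {w: meaning_prob(w, m) for w in words}
            z = sum(weights.values())
            for w in words:
                assoc[w][m] += weights[w] / z

    # Each scene alone is ambiguous; together they disambiguate.
    observe(["ball", "dog"], ["BALL", "DOG"])
    observe(["ball", "cat"], ["BALL", "CAT"])
    observe(["dog", "cat"], ["DOG", "CAT"])

    best = max(assoc["ball"], key=lambda m: meaning_prob("ball", m))
    print(best)  # BALL
    ```

    After three ambiguous scenes, "ball" already favours BALL over DOG and CAT, which is the aggregate disambiguation effect the abstract describes.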
  5. The Probabilistic Foundations of Rational Learning. Simon M. Huttegger - 2017 - Cambridge University Press.
    According to Bayesian epistemology, rational learning from experience is consistent learning; that is, learning should incorporate new information consistently into one's old system of beliefs. Simon M. Huttegger argues that this core idea can be transferred to situations where the learner's informational inputs are much more limited than Bayesianism assumes, thereby significantly expanding the reach of a Bayesian type of epistemology. What results from this is a unified account of probabilistic learning in the tradition of (...)
  6. Probabilistic rule-based argumentation for norm-governed learning agents. Régis Riveret, Antonino Rotolo & Giovanni Sartor - 2012 - Artificial Intelligence and Law 20 (4):383-420.
    This paper proposes an approach to investigate norm-governed learning agents which combines a logic-based formalism with an equation-based counterpart. This dual formalism enables us to describe the reasoning of such agents and their interactions using argumentation, and, at the same time, to capture systemic features using equations. The approach is applied to norm emergence and internalisation in systems of learning agents. The logical formalism is rooted in a probabilistic defeasible logic instantiating Dung’s argumentation framework. Rules of this (...)
  7. Tracking probabilistic truths: a logic for statistical learning. Alexandru Baltag, Soroush Rafiee Rad & Sonja Smets - 2021 - Synthese 199 (3-4):9041-9087.
    We propose a new model for forming and revising beliefs about unknown probabilities. To go beyond what is known with certainty and represent the agent’s beliefs about probability, we consider a plausibility map, associating to each possible distribution a plausibility ranking. Beliefs are defined as in Belief Revision Theory, in terms of truth in the most plausible worlds. We consider two forms of conditioning or belief update, corresponding to the acquisition of two types of information: learning observable evidence obtained (...)
  8. Probabilistic discrimination learning. W. K. Estes, C. J. Burke, R. C. Atkinson & J. P. Frankmann - 1957 - Journal of Experimental Psychology 54 (4):233.
  9. Learning a Generative Probabilistic Grammar of Experience: A Process-Level Model of Language Acquisition. Oren Kolodny, Arnon Lotem & Shimon Edelman - 2014 - Cognitive Science 38 (4):227-267.
    We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The grammar constructed (...)
  10. Probabilistic discrimination learning in the pigeon. Charles P. Shimp - 1973 - Journal of Experimental Psychology 97 (3):292.
  11. Reinforcement Learning with Probabilistic Boolean Network Models of Smart Grid Devices. Pedro Juan Rivera Torres, Carlos Gershenson García, María Fernanda Sánchez Puig & Samir Kanaan Izquierdo - 2022 - Complexity 2022:1-15.
    The area of smart power grids needs to constantly improve its efficiency and resilience, to provide high quality electrical power in a resilient grid, while managing faults and avoiding failures. Achieving this requires high component reliability, adequate maintenance, and a studied failure occurrence. Correct system operation involves those activities and novel methodologies to detect, classify, and isolate faults and failures and model and simulate processes with predictive algorithms and analytics. In this paper, we showcase the application of a complex-adaptive, self-organizing (...)
  12. Learning as Hypothesis Testing: Learning Conditional and Probabilistic Information. Jonathan Vandenburgh - manuscript
    Complex constraints like conditionals ('If A, then B') and probabilistic constraints ('The probability that A is p') pose problems for Bayesian theories of learning. Since these propositions do not express constraints on outcomes, agents cannot simply conditionalize on the new information. Furthermore, a natural extension of conditionalization, relative information minimization, leads to many counterintuitive predictions, evidenced by the sundowners problem and the Judy Benjamin problem. Building on the notion of a `paradigm shift' and empirical research in psychology and (...)
  13. Implicit probabilistic sequence learning is independent of explicit awareness. Sunbin Song, James H. Howard Jr & Darlene V. Howard - 2007 - Learning and Memory 14 (1-6):167-176.
  14. Concept learning in a probabilistic language-of-thought. How is it possible and what does it presuppose? Matteo Colombo - 2023 - Behavioral and Brain Sciences 46:e271.
    Where does a probabilistic language-of-thought (PLoT) come from? How can we learn new concepts based on probabilistic inferences operating on a PLoT? Here, I explore these questions, sketching a traditional circularity objection to LoT and canvassing various approaches to addressing it. I conclude that PLoT-based cognitive architectures can support genuine concept learning; but, currently, it is unclear that they enjoy more explanatory breadth in relation to concept learning than alternative architectures that do not posit any LoT.
  15. Learning abstract visual concepts via probabilistic program induction in a Language of Thought. Matthew C. Overlan, Robert A. Jacobs & Steven T. Piantadosi - 2017 - Cognition 168 (C):320-334.
  16. Learning a Generative Probabilistic Grammar of Experience: A Process-Level Model of Language Acquisition. Oren Kolodny, Arnon Lotem & Shimon Edelman - 2015 - Cognitive Science 39 (2):227-267.
  17. Probabilistic models of cognitive development: Towards a rational constructivist approach to the study of learning and development. Fei Xu & Thomas L. Griffiths - 2011 - Cognition 120 (3):299-301.
  18. Expectancy Learning from Probabilistic Input by Infants. Alexa R. Romberg & Jenny R. Saffran - 2012 - Frontiers in Psychology 3.
  19. Learning to plan probabilistically from neural networks. R. Sun - unknown
    Different from existing reinforcement learning algorithms that generate only reactive policies, and existing probabilistic planning algorithms that require a substantial amount of a priori knowledge in order to plan, we devise a two-stage bottom-up learning-to-plan process, in which first reinforcement learning/dynamic programming is applied, without the use of a priori domain-specific knowledge, to acquire a reactive policy, and then explicit plans are extracted from the learned reactive policy (...)
     
  20. Probabilistic Motor Sequence Yields Greater Offline and Less Online Learning than Fixed Sequence. Yue Du, Shikha Prashad, Ilana Schoenbrun & Jane E. Clark - 2016 - Frontiers in Human Neuroscience 10.
  21. Supplementary report: Discrimination learning with probabilistic reinforcement schedules. R. C. Atkinson, W. H. Bogartz & R. N. Turner - 1959 - Journal of Experimental Psychology 57 (5):349.
  22. A probabilistic incremental model of word learning in the presence of referential uncertainty. Afsaneh Fazly, Afra Alishahi & Suzanne Stevenson - 2008 - In B. C. Love, K. McRae & V. M. Sloutsky (eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society. Cognitive Science Society.
  23. A probabilistic successor representation for context-dependent learning. Jesse P. Geerts, Samuel J. Gershman, Neil Burgess & Kimberly L. Stachenfeld - 2024 - Psychological Review 131 (2):578-597.
  24. Can amnesic patients learn without awareness? New evidence comparing deterministic and probabilistic sequence learning. Muriel Vandenberghe, Nicolas Schmidt, Patrick Fery & Axel Cleeremans - 2006 - Neuropsychologia 44 (10):1629-1641.
    Can associative learning take place without awareness? We explore this issue in a sequence learning paradigm with amnesic and control participants, who were simply asked to react to one of four possible stimuli on each trial. Unknown to them, successive stimuli occurred in a sequence. We manipulated the extent to which stimuli followed the sequence in a deterministic manner (noiseless condition) or only probabilistically so (noisy condition). Through this paradigm, we aimed at addressing two central issues: first, we (...)
  25. Biases in probabilistic category learning in relation to social anxiety. Anna Abraham & Christiane Hermann - 2015 - Frontiers in Psychology 6.
  26. From Exemplar to Grammar: A Probabilistic Analogy-Based Model of Language Learning. Rens Bod - 2009 - Cognitive Science 33 (5):752-793.
    While rules and exemplars are usually viewed as opposites, this paper argues that they form end points of the same distribution. By representing both rules and exemplars as (partial) trees, we can take into account the fluid middle ground between the two extremes. This insight is the starting point for a new theory of language learning that is based on the following idea: If a language learner does not know which phrase‐structure trees should be assigned to initial sentences, s/he (...)
  27. Learning experiences and the value of knowledge. Simon M. Huttegger - 2014 - Philosophical Studies 171 (2):279-288.
    Generalized probabilistic learning takes place in a black-box where present probabilities lead to future probabilities by way of a hidden learning process. The idea that generalized learning can be partially characterized by saying that it doesn’t foreseeably lead to harmful decisions is explored. It is shown that a martingale principle follows for finite probability spaces.
  28. Radical probabilism and Bayesian conditioning. Richard Bradley - 2005 - Philosophy of Science 72 (2):342-364.
    Richard Jeffrey espoused an antifoundationalist variant of Bayesian thinking that he termed ‘Radical Probabilism’. Radical Probabilism denies both the existence of an ideal, unbiased starting point for our attempts to learn about the world and the dogma of classical Bayesianism that the only justified change of belief is one based on the learning of certainties. Probabilistic judgment is basic and irreducible. Bayesian conditioning is appropriate when interaction with the environment yields new certainty of belief in some proposition but (...)
  29. Error-monitoring During Complex Probabilistic Association Learning in Adults with ADHD. Matthew Gerathy & Jacqueline Rushby - 2015 - Frontiers in Human Neuroscience 9.
  30. A Stochastic EM Learning Algorithm for Structured Probabilistic Neural Networks. Gerhard Paass - 1990 - In G. Dorffner (ed.), Konnektionismus in Artificial Intelligence Und Kognitionsforschung. Berlin: Springer-Verlag. pp. 196-201.
  31. Sensitivity to frequency in probabilistic category learning. A. F. Smith - 1990 - Bulletin of the Psychonomic Society 28 (6):493-493.
  32. Probabilistic models of language processing and acquisition. Nick Chater & Christopher D. Manning - 2006 - Trends in Cognitive Sciences 10 (7):335-344.
    Probabilistic methods are providing new explanatory approaches to fundamental cognitive science questions of how humans structure, process and acquire language. This review examines probabilistic models defined over traditional symbolic structures. Language comprehension and production involve probabilistic inference in such models; and acquisition involves choosing the best model, given innate constraints and linguistic and other input. Probabilistic models can account for the learning and processing of language, while maintaining the sophistication of symbolic models. A recent burgeoning (...)
  33. A unified account of abstract structure and conceptual change: Probabilistic models and early learning mechanisms. Alison Gopnik - 2011 - Behavioral and Brain Sciences 34 (3):129-130.
    We need not propose, as Carey does, a radical discontinuity between core cognition, which is responsible for abstract structure, and language and (...), which are responsible for learning and conceptual change. From a probabilistic models view, conceptual structure and learning reflect the same principles, and they are both in place from the beginning.
  34. Monitoring of perception systems: Deterministic, probabilistic, and learning-based fault detection and identification. Pasquale Antonante, Heath G. Nilsen & Luca Carlone - 2023 - Artificial Intelligence 325 (C):103998.
  35. Hyperbolic Secant representation of the logistic function: Application to probabilistic Multiple Instance Learning for CT intracranial hemorrhage detection. Francisco M. Castro-Macías, Pablo Morales-Álvarez, Yunan Wu, Rafael Molina & Aggelos K. Katsaggelos - 2024 - Artificial Intelligence 331 (C):104115.
  36. Hypothesis behavior in a concept-learning task with probabilistic feedback. Steven P. Rogers & Robert C. Haygood - 1968 - Journal of Experimental Psychology 76 (1, Pt. 1):160.
  37. Intention, attention, and consciousness in probabilistic sequence learning. Luis Jimenez - 2003 - In Attention and Implicit Learning. John Benjamins.
  38. Effect of different stimulus frequencies on discrimination learning with probabilistic reinforcement. Juliet Popper Shaffer - 1963 - Journal of Experimental Psychology 65 (3):265.
  39. Probabilistic abstract argumentation: an investigation with Boltzmann machines. Régis Riveret, Dimitrios Korkinof, Moez Draief & Jeremy Pitt - 2015 - Argument and Computation 6 (2):178-218.
    Probabilistic argumentation and neuro-argumentative systems offer new computational perspectives for the theory and applications of argumentation, but their principled construction involves two entangled problems. On the one hand, probabilistic argumentation aims at combining the quantitative uncertainty addressed by probability theory with the qualitative uncertainty of argumentation, but probabilistic dependences amongst arguments as well as learning are usually neglected. On the other hand, neuro-argumentative systems offer the opportunity to couple the computational advantages of learning and massive (...)
  40. Making Probabilistic Relational Categories Learnable. Wookyoung Jung & John E. Hummel - 2015 - Cognitive Science 39 (6):1259-1291.
    Theories of relational concept acquisition based on structured intersection discovery predict that relational concepts with a probabilistic structure ought to be extremely difficult to learn. We report four experiments testing this prediction by investigating conditions hypothesized to facilitate the learning of such categories. Experiment 1 showed that changing the task from a category-learning task to choosing the “winning” object in each stimulus greatly facilitated participants' ability to learn probabilistic relational categories. Experiments 2 and 3 further investigated (...)
  41. Bayes rules all: On the equivalence of various forms of learning in a probabilistic setting. Balazs Gyenis - unknown
    Jeffrey conditioning is said to provide a more general method of assimilating uncertain evidence than Bayesian conditioning. We show that Jeffrey learning is merely a particular type of Bayesian learning if we accept either of the following two observations: (1) learning comprises both probability kinematics and proposition kinematics; (2) what can be updated is not the same as what can do the updating; the set of the latter is richer than the set of the former. We address (...)
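    The contrast this abstract draws can be made concrete on a finite probability space: Jeffrey's rule rescales the prior inside each cell of a partition to match new cell weights q_i, and with a degenerate weight vector (some q_i = 1) it collapses to ordinary Bayesian conditioning. A minimal sketch; the worlds, numbers, and function names below are hypothetical choices for the example, not taken from the paper.

    ```python
    # Jeffrey conditioning on a finite space, with Bayesian
    # conditioning recovered as the degenerate special case.
    def jeffrey_update(p, partition, q):
        """p: dict world -> prob; partition: list of sets of worlds;
        q: new probabilities for the partition cells (summing to 1)."""
        new_p = {}
        for cell, qi in zip(partition, q):
            mass = sum(p[w] for w in cell)          # prior mass of the cell
            for w in cell:
                new_p[w] = p[w] * qi / mass         # rescale within the cell
        return new_p

    # Worlds: (rain?, umbrella?) with a joint prior.
    p = {("rain", "umb"): 0.3, ("rain", "no"): 0.1,
         ("dry", "umb"): 0.2, ("dry", "no"): 0.4}
    rain = {("rain", "umb"), ("rain", "no")}
    dry = {("dry", "umb"), ("dry", "no")}

    # Uncertain evidence: a glance outside raises P(rain) to 0.7.
    p1 = jeffrey_update(p, [rain, dry], [0.7, 0.3])
    print(p1[("rain", "umb")])  # 0.3 * 0.7 / 0.4 = 0.525

    # Degenerate weights recover Bayesian conditioning on "rain".
    p2 = jeffrey_update(p, [rain, dry], [1.0, 0.0])
    print(p2[("rain", "umb")])  # 0.3 / 0.4 = 0.75
    ```

    Within each cell the relative odds of worlds are preserved (probability kinematics); only the cell weights move, which is why conditioning is the special case where one cell takes all the weight.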
  42. Predicting Protein Interactions Using a Deep Learning Method-Stacked Sparse Autoencoder Combined with a Probabilistic Classification Vector Machine. Yanbin Wang, Zhuhong You, Liping Li, Li Cheng, Xi Zhou, Libo Zhang, Xiao Li & Tonghai Jiang - 2018 - Complexity 2018:1-12.
  43. Debunking Debunking: Explanationism, Probabilistic Sensitivity, and Why There is No Specifically Metacognitive Debunking Principle. David Bourget & Angela Mendelovici - 2023 - Midwest Studies in Philosophy 47:25-52.
    On explanationist accounts of genealogical debunking, roughly, a belief is debunked when its explanation is not suitably related to its content. We argue that explanationism cannot accommodate cases in which beliefs are explained by factors unrelated to their contents but are nonetheless independently justified. Justification-specific versions of explanationism face an iteration of the problem. The best account of debunking is a probabilistic account according to which subject S’s justification J for their belief that P is debunked when S learns (...)
  44. Interaction of phonological biases and frequency in learning a probabilistic language pattern. Hanbyul Song & James White - 2022 - Cognition 226 (C):105170.
  45. Probabilistic and Causal Inference: The Works of Judea Pearl. Hector Geffner, Rina Dechter & Joseph Halpern (eds.) - 2022 - ACM Books.
    Professor Judea Pearl won the 2011 Turing Award "for fundamental contributions to artificial intelligence through the development of a calculus for probabilistic and causal reasoning." This book contains the original articles that led to the award, as well as other seminal works, divided into four parts: heuristic search, probabilistic reasoning, causality, first period (1988-2001), and causality, recent period (2002-2020). Each of these parts starts with an introduction written by Judea Pearl. The volume also contains original, contributed articles by (...)
  46. Probabilistic Induction and Hume’s Problem: Reply to Lange. Samir Okasha - 2003 - Philosophical Quarterly 53 (212):419-424.
    Marc Lange has criticized my assertion that relative to a Bayesian conception of inductive reasoning, Hume's argument for inductive scepticism cannot be run. I reply that the way in which Lange suggests one should run the Humean argument in a Bayesian framework ignores the fact that in Bayesian models of learning from experience, the domain of an agent's probability measure is exogenously determined. I also show that Lange is incorrect to equate probability distributions which 'support inductive inferences' with probability (...)
  47. Probabilistic models as theories of children's minds. Alison Gopnik - 2011 - Behavioral and Brain Sciences 34 (4):200-201.
    My research program proposes that children have representations and learning mechanisms that can be characterized as causal models of the world (...)
  48. Uncertainty, Learning, and the “Problem” of Dilation. Seamus Bradley & Katie Siobhan Steele - 2014 - Erkenntnis 79 (6):1287-1303.
    Imprecise probabilism—which holds that rational belief/credence is permissibly represented by a set of probability functions—apparently suffers from a problem known as dilation. We explore whether this problem can be avoided or mitigated by one of the following strategies: (a) modifying the rule by which the credal state is updated, (b) restricting the domain of reasonable credal states to those that preclude dilation.
  49. Learning Orthographic Structure With Sequential Generative Neural Networks. Alberto Testolin, Ivilin Stoianov, Alessandro Sperduti & Marco Zorzi - 2016 - Cognitive Science 40 (3):579-606.
    Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine, a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and (...)
  50. Learning from Conditionals. Benjamin Eva, Stephan Hartmann & Soroush Rafiee Rad - 2020 - Mind 129 (514):461-508.
    In this article, we address a major outstanding question of probabilistic Bayesian epistemology: how should a rational Bayesian agent update their beliefs upon learning an indicative conditional? A number of authors have recently contended that this question is fundamentally underdetermined by Bayesian norms, and hence that there is no single update procedure that rational agents are obliged to follow upon learning an indicative conditional. Here we resist this trend and argue that a core set of widely accepted (...)
Results 1-50 of 977