Results for 'bayesian learning'

988 found
  1. Bayesian Learning Models of Pain: A Call to Action. Abby Tabor & Christopher Burr - 2019 - Current Opinion in Behavioral Sciences 26:54-61.
    Learning is fundamentally about action, enabling the successful navigation of a changing and uncertain environment. The experience of pain is central to this process, indicating the need for a change in action so as to mitigate potential threat to bodily integrity. This review considers the application of Bayesian models of learning in pain that inherently accommodate uncertainty and action, which, we shall propose, are essential in understanding learning in both acute and persistent cases of pain.
    3 citations
  2. General properties of Bayesian learning as statistical inference determined by conditional expectations. Zalán Gyenis & Miklós Rédei - 2017 - Review of Symbolic Logic 10 (4):719-755.
    We investigate the general properties of general Bayesian learning, where “general Bayesian learning” means inferring a state from another that is regarded as evidence, and where the inference is conditionalizing the evidence using the conditional expectation determined by a reference probability measure representing the background subjective degrees of belief of a Bayesian Agent performing the inference. States are linear functionals that encode probability measures by assigning expectation values to random variables via integrating them with respect (...)
    17 citations
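    For orientation, an editorial gloss (not the paper's own notation): ordinary Bayesian conditionalization on an event E is the familiar special case of the conditional-expectation construction the abstract describes.

```latex
% Editorial gloss, not the paper's formalism: classical conditionalization on an
% event E, with p the background (reference) probability measure of the agent.
\[
  p_{\mathrm{new}}(A) \;=\; p(A \mid E) \;=\; \frac{p(A \cap E)}{p(E)}, \qquad p(E) > 0 .
\]
% The paper works at a higher level of generality: states are linear functionals,
% and the update is given by the conditional expectation determined by p.
```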
  3. Locally Bayesian learning with applications to retrospective revaluation and highlighting. John K. Kruschke - 2006 - Psychological Review 113 (4):677-699.
  4. Bayesian learning for cooperation in multi-agent systems. Mair Allen-Williams & Nicholas R. Jennings - 2009 - In L. Magnani (ed.), Computational Intelligence. pp. 321-360.
  5. Bayesian learning and the psychology of rule induction. Ansgar D. Endress - 2013 - Cognition 127 (2):159-176.
  6. Bayesian learning models with revision of evidence. William Harper - 1978 - Philosophia 7 (2):357-367.
  7. General properties of general Bayesian learning. Miklós Rédei & Zalán Gyenis - unknown
    We investigate the general properties of general Bayesian learning, where ``general Bayesian learning'' means inferring a state from another that is regarded as evidence, and where the inference is conditionalizing the evidence using the conditional expectation determined by a reference probability measure representing the background subjective degrees of belief of a Bayesian Agent performing the inference. States are linear functionals that encode probability measures by assigning expectation values to random variables via integrating them with respect (...)
    2 citations
  8. Belief propagation and locally Bayesian learning. Adam N. Sanborn & Ricardo Silva - 2009 - In N. A. Taatgen & H. van Rijn (eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society. p. 31.
    1 citation
  9. Teaching-Learning Model of Structure-Constructivism Based on Piagetian Propositional Logic and Bayesian Causational Inference. 은은숙 - 2020 - Journal of the New Korean Philosophical Association 99:191-217.
    The aim of this study is to develop a new integrated teaching-learning model grounded in the convergence, pursued over roughly the past twenty years, of Piaget's propositional-logical theory of learning and the Bayesian probabilistic theory of learning. The author names this new teaching-learning model the "Bayesian structure-constructivist Model of Teaching-learning" (hereafter BMT). The paper first analyzes, from a historical-critical and a formal point of view, the theory of learning as interpreted in Piaget's propositional-logical learning model and as interpreted in the Bayesian probabilistic inference model; on that basis, the latter half of the paper presents in detail the key features of a new teaching-learning model, the BMT, which integrates the two theories of learning from a pedagogical point of view. To mention only a few key points: first, the BMT (...)
  10. Is Trust the Result of Bayesian Learning? Bernd Lahno - 2004 - Jahrbuch für Handlungs- und Entscheidungstheorie 3:47-68.
  11. Bayesian Teaching Model of Image Based on Image Recognition by Deep Learning. 은은숙 - 2020 - Journal of the New Korean Philosophical Association 102:271-296.
    This paper brings together the principles of image recognition in deep learning and in young children, and proposes a new teaching-learning model for image-concept learning: the "Bayesian Structure-constructivist Teaching-learning Model" (BSTM). In other words, it aims to construct a new teaching model for young children's image-concept learning on the basis of the synergy obtained by comparing the principles of machine learning with those of human learning. In this context the paper proceeds on three levels. First, it reviews, from a historical-critical perspective, the historically important theories of children's image learning, including the "whole-object hypothesis", the "taxonomic hypothesis", the "exclusivity hypothesis", and the "basic-level-category hypothesis". Second, (...)
  12. A Bayesian Theory of Sequential Causal Learning and Abstract Transfer. Hongjing Lu, Randall R. Rojas, Tom Beckers & Alan L. Yuille - 2016 - Cognitive Science 40 (2):404-439.
    Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely (...)
    3 citations
  13. Bayesians Still Don't Learn from Conditionals. Mario Günther & Borut Trpin - 2022 - Acta Analytica 38 (3):439-451.
    One of the open questions in Bayesian epistemology is how to rationally learn from indicative conditionals (Douven, 2016). Eva et al. (Mind 129(514):461–508, 2020) propose a strategy to resolve this question. They claim that their strategy provides a “uniquely rational response to any given learning scenario”. We show that their updating strategy is neither very general nor always rational. Even worse, we generalize their strategy and show that it still fails. Bad news for the Bayesians.
    1 citation
  14. Bayesian model learning based on predictive entropy. Jukka Corander & Pekka Marttinen - 2006 - Journal of Logic, Language and Information 15 (1-2):5-20.
    The Bayesian paradigm has been widely acknowledged as a coherent approach to learning putative probability model structures from a finite class of candidate models. Bayesian learning is based on measuring the predictive ability of a model in terms of the corresponding marginal data distribution, which equals the expectation of the likelihood with respect to a prior distribution for model parameters. The main controversy related to this learning method stems from the necessity of specifying proper prior distributions (...)
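    As an editorial gloss on the quantity the abstract calls the marginal data distribution (the notation is mine, not the paper's):

```latex
% Marginal likelihood of data D under model M: the expectation of the likelihood
% with respect to the prior over the model's parameters (editorial notation).
\[
  p(D \mid M) \;=\; \int p(D \mid \theta, M)\, p(\theta \mid M)\, d\theta
  \;=\; \mathbb{E}_{\theta \sim p(\theta \mid M)}\bigl[\, p(D \mid \theta, M) \bigr].
\]
% Comparing candidate models by this score is the learning method whose dependence
% on proper prior distributions the abstract identifies as the main controversy.
```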
  15. A Bayesian Model of Biases in Artificial Language Learning: The Case of a Word‐Order Universal. Jennifer Culbertson & Paul Smolensky - 2012 - Cognitive Science 36 (8):1468-1498.
    In this article, we develop a hierarchical Bayesian model of learning in a general type of artificial language‐learning experiment in which learners are exposed to a mixture of grammars representing the variation present in real learners’ input, particularly at times of language change. The modeling goal is to formalize and quantify hypothesized learning biases. The test case is an experiment (Culbertson, Smolensky, & Legendre, 2012) targeting the learning of word‐order patterns in the nominal domain. The (...)
    11 citations
  16. bayesvl: Visually learning the graphical structure of Bayesian networks and performing MCMC with ‘Stan’. Viet-Phuong La & Quan-Hoang Vuong - 2019 - Vienna, Austria: The Comprehensive R Archive Network (CRAN).
    2 citations
  17. Bayesian updating when what you learn might be false. Richard Pettigrew - 2023 - Erkenntnis 88 (1):309-324.
    Rescorla (Erkenntnis, 2020) has recently pointed out that the standard arguments for Bayesian Conditionalization assume that whenever I become certain of something, it is true. Most people would reject this assumption. In response, Rescorla offers an improved Dutch Book argument for Bayesian Conditionalization that does not make this assumption. My purpose in this paper is two-fold. First, I want to illuminate Rescorla’s new argument by giving a very general Dutch Book argument that applies to many cases of updating (...)
    5 citations
  18. Word learning as Bayesian inference. Fei Xu & Joshua B. Tenenbaum - 2007 - Psychological Review 114 (2):245-272.
  19. Bayesian collective learning emerges from heuristic social learning. P. M. Krafft, Erez Shmueli, Thomas L. Griffiths, Joshua B. Tenenbaum & Alex “Sandy” Pentland - 2021 - Cognition 212 (C):104469.
  20. Incremental Bayesian Category Learning From Natural Language. Lea Frermann & Mirella Lapata - 2016 - Cognitive Science 40 (6):1333-1381.
    Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper, we focus on categories acquired from natural language stimuli, that is, words. We present a Bayesian model that, unlike previous work, learns both categories and their features in a single process. We model category induction as two interrelated subproblems: the acquisition of features that discriminate among categories, and the grouping of concepts into categories based on (...)
    3 citations
  21. Learning the Form of Causal Relationships Using Hierarchical Bayesian Models. Christopher G. Lucas & Thomas L. Griffiths - 2010 - Cognitive Science 34 (1):113-147.
  22. Language Evolution by Iterated Learning With Bayesian Agents. Thomas L. Griffiths & Michael L. Kalish - 2007 - Cognitive Science 31 (3):441-480.
    Languages are transmitted from person to person and generation to generation via a process of iterated learning: people learn a language from other people who once learned that language themselves. We analyze the consequences of iterated learning for learning algorithms based on the principles of Bayesian inference, assuming that learners compute a posterior distribution over languages by combining a prior (representing their inductive biases) with the evidence provided by linguistic data. We show that when learners sample (...)
    58 citations
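    A minimal simulation sketch of the iterated-learning chain the abstract describes; the two-language setup, the noise model, and all parameter values are illustrative assumptions, not taken from the paper. Each learner samples a language from its posterior and produces data for the next learner.

```python
import random

# Minimal sketch (assumptions, not the paper's model): iterated learning with
# Bayesian agents who SAMPLE a language from their posterior. Two hypothetical
# languages; an utterance matches the speaker's language with prob. 1 - NOISE.

PRIOR = {"L0": 0.7, "L1": 0.3}   # assumed inductive bias (prior over languages)
NOISE = 0.2                      # assumed production noise
N_UTTERANCES = 5                 # utterances passed to the next generation


def produce(language):
    """Generate utterances; each matches the language with probability 1 - NOISE."""
    other = "L1" if language == "L0" else "L0"
    return [language if random.random() > NOISE else other
            for _ in range(N_UTTERANCES)]


def likelihood(data, language):
    """P(data | language) under the simple noise model."""
    p = 1.0
    for utterance in data:
        p *= (1 - NOISE) if utterance == language else NOISE
    return p


def learn(data):
    """Sample a language from P(language | data), proportional to likelihood * prior."""
    weights = {lang: likelihood(data, lang) * PRIOR[lang] for lang in PRIOR}
    r = random.random() * sum(weights.values())
    acc = 0.0
    for lang, w in weights.items():
        acc += w
        if r <= acc:
            return lang
    return lang  # numerical safety net


def iterate(generations=20000):
    """Run the transmission chain and track how often each language is acquired."""
    counts = dict.fromkeys(PRIOR, 0)
    language = "L0"
    for _ in range(generations):
        language = learn(produce(language))
        counts[language] += 1
    return {lang: count / generations for lang, count in counts.items()}


if __name__ == "__main__":
    # With posterior sampling, the long-run frequencies approach the prior
    # (roughly 0.7 / 0.3 here), matching the convergence result the abstract reports.
    print(iterate())
```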
  23. Bayesian Word Learning in Multiple Language Environments. Benjamin D. Zinszer, Sebi V. Rolotti, Fan Li & Ping Li - 2018 - Cognitive Science 42 (S2):439-462.
    Infant language learners are faced with the difficult inductive problem of determining how new words map to novel or known objects in their environment. Bayesian inference models have been successful at using the sparse information available in natural child-directed speech to build candidate lexicons and infer speakers’ referential intentions. We begin by asking how a Bayesian model optimized for monolingual input generalizes to new monolingual or bilingual corpora and find that, especially in the case of the bilingual input, (...)
  24. Learning the Structure of Bayesian Networks: A Quantitative Assessment of the Effect of Different Algorithmic Schemes. Stefano Beretta, Mauro Castelli, Ivo Gonçalves, Roberto Henriques & Daniele Ramazzotti - 2018 - Complexity 2018:1-12.
    One of the most challenging tasks when adopting Bayesian networks is learning their structure from data. This task is complicated by the huge search space of possible solutions and by the fact that the problem is NP-hard. Hence, a full enumeration of all the possible solutions is not always feasible and approximations are often required. However, to the best of our knowledge, a quantitative analysis of the performance and characteristics of the different heuristics to solve this (...)
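    Because the abstract concerns heuristic search over structures, here is a minimal score-based hill-climbing sketch for binary variables. It is an editorial illustration of the general technique, not one of the paper's benchmarked algorithmic schemes; the toy data and all names are assumptions.

```python
import itertools
import math
import random

# Minimal sketch of score-based Bayesian-network structure learning over binary
# variables: greedy edge additions guided by a BIC score (illustrative only).

random.seed(0)


def bic_family(data, child, parents):
    """BIC contribution of one node given its parent set (binary variables)."""
    n = len(data)
    score = 0.0
    for assignment in itertools.product((0, 1), repeat=len(parents)):
        rows = [r for r in data
                if all(r[p] == v for p, v in zip(parents, assignment))]
        counts = [sum(1 for r in rows if r[child] == v) for v in (0, 1)]
        total = sum(counts)
        score += sum(c * math.log(c / total) for c in counts if c > 0)
    # Complexity penalty: one free parameter per parent configuration.
    return score - 0.5 * math.log(n) * (2 ** len(parents))


def bic(data, variables, parents_of):
    """Decomposable BIC score of a whole structure."""
    return sum(bic_family(data, v, parents_of[v]) for v in variables)


def creates_cycle(parents_of, child, parent):
    """Adding parent -> child creates a cycle iff child is an ancestor of parent."""
    stack, seen = [parent], set()
    while stack:
        node = stack.pop()
        if node == child:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(parents_of[node])
    return False


def greedy_search(data, variables):
    """Repeatedly add any edge that improves the score until no addition helps."""
    parents_of = {v: [] for v in variables}
    best = bic(data, variables, parents_of)
    improved = True
    while improved:
        improved = False
        for child, parent in itertools.permutations(variables, 2):
            if parent in parents_of[child] or creates_cycle(parents_of, child, parent):
                continue
            parents_of[child].append(parent)
            score = bic(data, variables, parents_of)
            if score > best + 1e-9:
                best, improved = score, True
            else:
                parents_of[child].remove(parent)
    return parents_of, best


# Toy data sampled from an assumed chain A -> B -> C.
data = []
for _ in range(500):
    a = random.random() < 0.5
    b = random.random() < (0.9 if a else 0.1)
    c = random.random() < (0.9 if b else 0.1)
    data.append({"A": int(a), "B": int(b), "C": int(c)})

structure, score = greedy_search(data, ["A", "B", "C"])
print(structure, round(score, 1))
```

    Exact search is infeasible in general because the number of directed acyclic graphs grows super-exponentially with the number of variables, which is why heuristic schemes of this kind are the subject of the paper's comparison.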
  25. Conditional Learning Through Causal Models. Jonathan Vandenburgh - 2020 - Synthese (1-2):2415-2437.
    Conditional learning, where agents learn a conditional sentence ‘If A, then B,’ is difficult to incorporate into existing Bayesian models of learning. This is because conditional learning is not uniform: in some cases, learning a conditional requires decreasing the probability of the antecedent, while in other cases, the antecedent probability stays constant or increases. I argue that how one learns a conditional depends on the causal structure relating the antecedent and the consequent, leading to a (...)
    7 citations
  26. bayesvl: Visually Learning the Graphical Structure of Bayesian Networks and Performing MCMC with 'Stan'. Quan-Hoang Vuong & Viet-Phuong La - 2019 - Open Science Framework 2019:01-47.
  27. Bayesian generic priors for causal learning. Hongjing Lu, Alan L. Yuille, Mimi Liljeholm, Patricia W. Cheng & Keith J. Holyoak - 2008 - Psychological Review 115 (4):955-984.
  28. Associative learning or Bayesian inference? Revisiting backwards blocking reasoning in adults. Deon T. Benton & David H. Rakison - 2023 - Cognition 241 (C):105626.
  29. A Bayesian metric for evaluating machine learning algorithms. Lucas Hope & Kevin Korb - unknown
  30. Predicting Verbal Learning and Memory Assessments of Older Adults Using Bayesian Hierarchical Models. Endris Assen Ebrahim & Mehmet Ali Cengiz - 2022 - Frontiers in Psychology 13.
    Verbal learning and memory summaries of older adults have usually been used to describe neuropsychiatric complaints. Bayesian hierarchical models are modern and appropriate approaches for predicting repeated-measures data, where information exchangeability is assumed and the independence assumption of classical statistics is violated. Such models are complex models for clustered data that account for distributions of hyper-parameters for fixed-term parameters in Bayesian computations. Repeated measures are inherently clustered and typically occur in clinical trials, education, cognitive psychology, (...)
  31. Learning Bayesian networks from data: An information-theory based approach. Jie Cheng, Russell Greiner, Jonathan Kelly, David Bell & Weiru Liu - 2002 - Artificial Intelligence 137 (1-2):43-90.
  32. Bayesian argumentation and the value of logical validity. Benjamin Eva & Stephan Hartmann - 2018 - Psychological Review 125 (5):806-821.
    According to the Bayesian paradigm in the psychology of reasoning, the norms by which everyday human cognition is best evaluated are probabilistic rather than logical in character. Recently, the Bayesian paradigm has been applied to the domain of argumentation, where the fundamental norms are traditionally assumed to be logical. Here, we present a major generalisation of extant Bayesian approaches to argumentation that utilizes a new class of Bayesian learning methods that are better suited to modelling (...)
    23 citations
  33. A Bayesian approach to (online) transfer learning: Theory and algorithms. Xuetong Wu, Jonathan H. Manton, Uwe Aickelin & Jingge Zhu - 2023 - Artificial Intelligence 324 (C):103991.
  34. Extending Bayesian concept learning to deal with representational complexity and adaptation. Michael D. Lee - 2001 - Behavioral and Brain Sciences 24 (4):685-686.
    While Tenenbaum and Griffiths impressively consolidate and extend Shepard's research in the areas of stimulus representation and generalization, there is a need for complexity measures to be developed to control the flexibility of their “hypothesis space” approach to representation. It may also be possible to extend their concept learning model to consider the fundamental issue of representational adaptation. [Tenenbaum & Griffiths].
  35. Learning tractable Bayesian networks in the space of elimination orders. Marco Benjumeda, Concha Bielza & Pedro Larrañaga - 2019 - Artificial Intelligence 274 (C):66-90.
  36. A Bayesian Mixed-Methods Analysis of Basic Psychological Needs Satisfaction through Outdoor Learning and Its Influence on Motivational Behavior in Science Class. Ulrich Dettweiler, Gabriele Lauterbach, Christoph Becker & Perikles Simon - 2017 - Frontiers in Psychology 8.
  37. Learning discrete Bayesian network parameters from continuous data streams: What is the best strategy? Parot Ratnapinda & Marek J. Druzdzel - 2015 - Journal of Applied Logic 13 (4):628-642.
  38. Learning Bayesian network parameters under equivalence constraints. Tiansheng Yao, Arthur Choi & Adnan Darwiche - 2017 - Artificial Intelligence 244 (C):239-257.
  39. Iterated learning in populations of Bayesian agents. Kenny Smith - 2009 - In N. A. Taatgen & H. van Rijn (eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society. pp. 697-702.
    7 citations
  40. Beyond the discussion between Learning Theory of Piagetian Propositional Logic and that of Bayesian Causational Inference. 은은숙 - 2019 - Journal of the New Korean Philosophical Association 97:247-266.
    The Bayesian probabilistic inference model, which emerged about fifteen years ago, has become a powerful central thesis dominating academic research in statistics, philosophy of science, psychology, cognitive science, computer science, and neuroscience, and is now making a strong impact even in education and logic. Owing to this diversification, according to Good, Bayesianism comes in 46,656 varieties. Among these many models, the scholars who apply the Bayesian probabilistic model to learning theory claim that a learner's learning process always follows exactly the process of Bayesian probabilistic inference. It is clear, moreover, that this probabilistic model inherits Piaget's constructivist outlook, for the scholars who support the model themselves call their scholarly movement a "new rational constructivism" (...)
  41. Bayesian reverse-engineering considered as a research strategy for cognitive science. Carlos Zednik & Frank Jäkel - 2016 - Synthese 193 (12):3951-3985.
    Bayesian reverse-engineering is a research strategy for developing three-level explanations of behavior and cognition. Starting from a computational-level analysis of behavior and cognition as optimal probabilistic inference, Bayesian reverse-engineers apply numerous tweaks and heuristics to formulate testable hypotheses at the algorithmic and implementational levels. In so doing, they exploit recent technological advances in Bayesian artificial intelligence, machine learning, and statistics, but also consider established principles from cognitive psychology and neuroscience. Although these tweaks and heuristics are highly (...)
    21 citations
  42. Bayesian associative learning. David R. Shanks - 2006 - Trends in Cognitive Sciences 10 (11):477-478.
  43. Confidence biases and learning among intuitive Bayesians. Louis Lévy-Garboua, Muniza Askari & Marco Gazel - 2018 - Theory and Decision 84 (3):453-482.
    We design a double-or-quits game to compare the speed of learning one’s specific ability with the speed of rising confidence as the task gets increasingly difficult. We find that people on average learn to be overconfident faster than they learn their true ability and we present an intuitive-Bayesian model of confidence which integrates confidence biases and learning. Uncertainty about one’s true ability to perform a task in isolation can be responsible for large and stable confidence biases, namely (...)
  44. Learning from games: Inductive bias and Bayesian inference. Michael H. Coen & Yue Gao - 2009 - In N. A. Taatgen & H. van Rijn (eds.), Proceedings of the 31st Annual Conference of the Cognitive Science Society. pp. 2729-2734.
    1 citation
  45. What hinge epistemology and Bayesian epistemology can learn from each other. Olav Benjamin Vassend - 2023 - Asian Journal of Philosophy 2 (2):1-21.
    Hinge epistemology and Bayesianism are two prominent approaches in contemporary epistemology, but the relationship between these approaches has not been systematically studied. This paper formalizes the central commitments of hinge epistemology in a Bayesian framework and argues for the following two theses: (1) many of the types of claims that are treated as paradigmatic hinges in the hinge epistemology literature, such as the claim that there exists an external world of physical objects, are not capable of enabling rational inquiry, (...)
  46. Non-Bayesian Inference: Causal Structure Trumps Correlation. Bénédicte Bes, Steven Sloman, Christopher G. Lucas & Éric Raufaste - 2012 - Cognitive Science 36 (7):1178-1203.
    The study tests the hypothesis that conditional probability judgments can be influenced by causal links between the target event and the evidence even when the statistical relations among variables are held constant. Three experiments varied the causal structure relating three variables and found that (a) the target event was perceived as more probable when it was linked to evidence by a causal chain than when both variables shared a common cause; (b) predictive chains in which evidence is a cause of (...)
    9 citations
  47. What the Bayesian framework has contributed to understanding cognition: Causal learning as a case study. Keith J. Holyoak & Hongjing Lu - 2011 - Behavioral and Brain Sciences 34 (4):203-204.
    The field of causal learning and reasoning (largely overlooked in the target article) provides an illuminating case study of how the modern Bayesian framework has deepened theoretical understanding, resolved long-standing controversies, and guided development of new and more principled algorithmic models. This progress was guided in large part by the systematic formulation and empirical comparison of multiple alternative Bayesian models.
  48. Theory-based Bayesian models of inductive learning and reasoning. Joshua B. Tenenbaum, Thomas L. Griffiths & Charles Kemp - 2006 - Trends in Cognitive Sciences 10 (7):309-318.
  49. The Bayesian and the Dogmatist. Brian Weatherson - 2007 - Proceedings of the Aristotelian Society 107 (1pt2):169-185.
    Dogmatism is sometimes thought to be incompatible with Bayesian models of rational learning. I show that the best model for updating imprecise credences is compatible with dogmatism.
    72 citations
  50. A Bayesian generative model for learning semantic hierarchies. Roni Mittelman, Min Sun, Benjamin Kuipers & Silvio Savarese - 2014 - Frontiers in Psychology 5.
Showing 1–50 of 988