Results for 'Hierarchical Bayesian inference'

1000+ found
  1. A Hierarchical Bayesian Model of Human Decision‐Making on an Optimal Stopping Problem. Michael D. Lee - 2006 - Cognitive Science 30 (3):1-26.
    We consider human performance on an optimal stopping problem where people are presented with a list of numbers independently chosen from a uniform distribution. People are told how many numbers are in the list, and how they were chosen. People are then shown the numbers one at a time, and are instructed to choose the maximum, subject to the constraint that they must choose a number at the time it is presented, and any choice below the maximum is incorrect. We (...)
    7 citations
  2. A Hierarchical Bayesian Modeling Approach to Searching and Stopping in Multi-Attribute Judgment. Don van Ravenzwaaij, Chris P. Moore, Michael D. Lee & Ben R. Newell - 2014 - Cognitive Science 38 (7):1384-1405.
    In most decision-making situations, there is a plethora of information potentially available to people. Deciding what information to gather and what to ignore is no small feat. How do decision makers determine in what sequence to collect information and when to stop? In two experiments, we administered a version of the German cities task developed by Gigerenzer and Goldstein (1996), in which participants had to decide which of two cities had the larger population. Decision makers were not provided with the (...)
  3. Generalized Bayesian Inference Nets Model and Diagnosis of Cardiovascular Diseases. Jiayi Dou, Mingchui Dong & Booma Devi Sekar - 2011 - Journal of Intelligent Systems 20 (3):209-225.
    A generalized Bayesian inference nets model (GBINM) is proposed to aid researchers in constructing Bayesian inference nets for various applications. The benefit of such a model is well demonstrated by applying GBINM in constructing hierarchical Bayesian fuzzy inference nets (HBFIN) to diagnose five important types of cardiovascular diseases. Patients' medical records with doctors' confirmed diagnostic results, obtained from two hospitals in China, are used to design and verify HBFIN. Bayes' theorem is used (...)
  4. The structure and dynamics of scientific theories: a hierarchical Bayesian perspective. Leah Henderson, Noah D. Goodman, Joshua B. Tenenbaum & James F. Woodward - 2010 - Philosophy of Science 77 (2):172-200.
    Hierarchical Bayesian models (HBMs) provide an account of Bayesian inference in a hierarchically structured hypothesis space. Scientific theories are plausibly regarded as organized into hierarchies in many cases, with higher levels sometimes called ‘paradigms’ and lower levels encoding more specific or concrete hypotheses. Therefore, HBMs provide a useful model for scientific theory change, showing how higher-level theory change may be driven by the impact of evidence on lower levels. HBMs capture features described in the Kuhnian (...)
    32 citations
  5. Probabilistic Inference: Task Dependency and Individual Differences of Probability Weighting Revealed by Hierarchical Bayesian Modeling. Moritz Boos, Caroline Seer, Florian Lange & Bruno Kopp - 2016 - Frontiers in Psychology 7.
  7. Exemplars, Prototypes, Similarities, and Rules in Category Representation: An Example of Hierarchical Bayesian Analysis. Michael D. Lee & Wolf Vanpaemel - 2008 - Cognitive Science 32 (8):1403-1424.
    This article demonstrates the potential of using hierarchical Bayesian methods to relate models and data in the cognitive sciences. This is done using a worked example that considers an existing model of category representation, the Varying Abstraction Model (VAM), which attempts to infer the representations people use from their behavior in category learning tasks. The VAM allows for a wide variety of category representations to be inferred, but this article shows how a hierarchical Bayesian analysis can (...)
    6 citations
  8. Learning the Form of Causal Relationships Using Hierarchical Bayesian Models. Christopher G. Lucas & Thomas L. Griffiths - 2010 - Cognitive Science 34 (1):113-147.
  9. A Bayesian Account of Psychopathy: A Model of Lacks Remorse and Self-Aggrandizing. Aaron Prosser, Karl Friston, Nathan Bakker & Thomas Parr - 2018 - Computational Psychiatry 2:92-140.
    This article proposes a formal model that integrates cognitive and psychodynamic psychotherapeutic models of psychopathy to show how two major psychopathic traits called lacks remorse and self-aggrandizing can be understood as a form of abnormal Bayesian inference about the self. This model draws on the predictive coding (i.e., active inference) framework, a neurobiologically plausible explanatory framework for message passing in the brain that is formalized in terms of hierarchical Bayesian inference. In summary, this model (...)
    2 citations
  10. A Bayesian Model of Biases in Artificial Language Learning: The Case of a Word‐Order Universal. Jennifer Culbertson & Paul Smolensky - 2012 - Cognitive Science 36 (8):1468-1498.
    In this article, we develop a hierarchical Bayesian model of learning in a general type of artificial language‐learning experiment in which learners are exposed to a mixture of grammars representing the variation present in real learners’ input, particularly at times of language change. The modeling goal is to formalize and quantify hypothesized learning biases. The test case is an experiment (Culbertson, Smolensky, & Legendre, 2012) targeting the learning of word‐order patterns in the nominal domain. The model identifies internal (...)
    11 citations
  11. Biased belief in the Bayesian brain: A deeper look at the evidence. Ben M. Tappin & Stephen Gadsby - 2019 - Consciousness and Cognition 68:107-114.
    A recent critique of hierarchical Bayesian models of delusion argues that, contrary to a key assumption of these models, belief formation in the healthy (i.e., neurotypical) mind is manifestly non-Bayesian. Here we provide a deeper examination of the empirical evidence underlying this critique. We argue that this evidence does not convincingly refute the assumption that belief formation in the neurotypical mind approximates Bayesian inference. Our argument rests on two key points. First, evidence that purports to (...)
    8 citations
  12. Content and misrepresentation in hierarchical generative models. Alex Kiefer & Jakob Hohwy - 2018 - Synthese 195 (6):2387-2415.
    In this paper, we consider how certain longstanding philosophical questions about mental representation may be answered on the assumption that cognitive and perceptual systems implement hierarchical generative models, such as those discussed within the prediction error minimization (PEM) framework. We build on existing treatments of representation via structural resemblance, such as those in Gładziejewski (Synthese 193:559–582, 2016) and Gładziejewski and Miłkowski, to argue for a representationalist interpretation of the PEM framework. We further motivate the proposed approach to content by arguing that (...)
    56 citations
  13. A possibilistic hierarchical model for behaviour under uncertainty. Gert de Cooman & Peter Walley - 2002 - Theory and Decision 52 (4):327-374.
    Hierarchical models are commonly used for modelling uncertainty. They arise whenever there is a 'correct' or 'ideal' uncertainty model but the modeller is uncertain about what it is. Hierarchical models which involve probability distributions are widely used in Bayesian inference. Alternative models which involve possibility distributions have been proposed by several authors, but these models do not have a clear operational meaning. This paper describes a new hierarchical model which is mathematically equivalent to some of (...)
    1 citation
  14. Hallucinations and perceptual inference. Karl J. Friston - 2005 - Behavioral and Brain Sciences 28 (6):764-766.
    This commentary takes a closer look at how “constructive models of subjective perception,” referred to by Collerton et al. (sect. 2), might contribute to the Perception and Attention Deficit (PAD) model. It focuses on the neuronal mechanisms that could mediate hallucinations, or false inference – in particular, the role of cholinergic systems in encoding uncertainty in the context of hierarchical Bayesian models of perceptual inference (Friston 2002b; Yu & Dayan 2002).
    9 citations
  15. Experienced wholeness: integrating insights from Gestalt theory, cognitive neuroscience, and predictive processing. Wanja Wiese - 2018 - Cambridge, Massachusetts: The MIT Press.
    An interdisciplinary account of phenomenal unity, investigating how experiential wholes can be characterized and how such characterizations can be analyzed computationally. How can we account for phenomenal unity? That is, how can we characterize and explain our experience of objects and groups of objects, bodily experiences, successions of events, and the attentional structure of consciousness as wholes? In this book, Wanja Wiese develops an interdisciplinary account of phenomenal unity, investigating how experiential wholes can be characterized and how such characterization can (...)
    5 citations
  16. Distrusting the present. Jakob Hohwy, Bryan Paton & Colin Palmer - 2016 - Phenomenology and the Cognitive Sciences 15 (3):315-335.
    We use the hierarchical nature of Bayesian perceptual inference to explain a fundamental aspect of the temporality of experience, namely the phenomenology of temporal flow. The explanation says that the sense of temporal flow in conscious perception stems from probabilistic inference that the present cannot be trusted. The account begins by describing hierarchical inference under the notion of prediction error minimization, and exemplifies distrust of the present within bistable visual perception and action initiation. Distrust (...)
    34 citations
  17. Hierarchical Bayesian models of delusion. Daniel Williams - 2018 - Consciousness and Cognition 61:129-147.
  18. Bayesian inferences about the self: A review. Michael Moutoussis, Pasco Fearon, Wael El-Deredy, Raymond J. Dolan & Karl J. Friston - 2014 - Consciousness and Cognition 25:67-76.
    Viewing the brain as an organ of approximate Bayesian inference can help us understand how it represents the self. We suggest that inferred representations of the self have a normative function: to predict and optimise the likely outcomes of social interactions. Technically, we cast this predict-and-optimise as maximising the chance of favourable outcomes through active inference. Here the utility of outcomes can be conceptualised as prior beliefs about final states. Actions based on interpersonal representations can therefore be (...)
    18 citations
  19. Delusion: Cognitive Approaches—Bayesian Inference and Compartmentalisation. Martin Davies & Andy Egan - 2013 - In K. W. M. Fulford, Martin Davies, Richard G. T. Gipps, George Graham, John Z. Sadler, Giovanni Stanghellini & Tim Thornton (eds.), The Oxford Handbook of Philosophy and Psychiatry. Oxford University Press. pp. 689-727.
    Cognitive approaches contribute to our understanding of delusions by providing an explanatory framework that extends beyond the personal level to the subpersonal level of information-processing systems. According to one influential cognitive approach, two factors are required to account for the content of a delusion, its initial adoption as a belief, and its persistence. This chapter reviews Bayesian developments of the two-factor framework.
    11 citations
  20. Hierarchical Bayesian models as formal models of causal reasoning. York Hagmayer & Ralf Mayrhofer - 2013 - Argument and Computation 4 (1):36-45.
    doi: 10.1080/19462166.2012.700321
    1 citation
  21. Universal Bayesian inference? David Dowe & Graham Oppy - 2001 - Behavioral and Brain Sciences 24 (4):662-663.
    We criticise Shepard's notions of “invariance” and “universality,” and the incorporation of Shepard's work on inference into the general framework of his paper. We then criticise Tenenbaum and Griffiths' account of Shepard (1987b), including the attributed likelihood function, and the assumption of “weak sampling.” Finally, we endorse Barlow's suggestion that minimum message length (MML) theory has useful things to say about the Bayesian inference problems discussed by Shepard and Tenenbaum and Griffiths. [Barlow; Shepard; Tenenbaum & Griffiths].
    1 citation
  22. Generalization, similarity, and Bayesian inference. Joshua B. Tenenbaum & Thomas L. Griffiths - 2001 - Behavioral and Brain Sciences 24 (4):629-640.
    Shepard has argued that a universal law should govern generalization across different domains of perception and cognition, as well as across organisms from different species or even different planets. Starting with some basic assumptions about natural kinds, he derived an exponential decay function as the form of the universal generalization gradient, which accords strikingly well with a wide range of empirical data. However, his original formulation applied only to the ideal case of generalization from a single encountered stimulus to a (...)
    115 citations
  23. Bayesian inference, predictive coding and delusions. Rick A. Adams, Harriet R. Brown & Karl J. Friston - 2014 - Avant: Trends in Interdisciplinary Studies 5 (3):51-88.
  24. Non-Bayesian Inference: Causal Structure Trumps Correlation. Bénédicte Bes, Steven Sloman, Christopher G. Lucas & Éric Raufaste - 2012 - Cognitive Science 36 (7):1178-1203.
    The study tests the hypothesis that conditional probability judgments can be influenced by causal links between the target event and the evidence even when the statistical relations among variables are held constant. Three experiments varied the causal structure relating three variables and found that (a) the target event was perceived as more probable when it was linked to evidence by a causal chain than when both variables shared a common cause; (b) predictive chains in which evidence is a cause of (...)
    9 citations
  25. Learning to Learn Functions. Michael Y. Li, Fred Callaway, William D. Thompson, Ryan P. Adams & Thomas L. Griffiths - 2023 - Cognitive Science 47 (4):e13262.
    Humans can learn complex functional relationships between variables from small amounts of data. In doing so, they draw on prior expectations about the form of these relationships. In three experiments, we show that people learn to adjust these expectations through experience, learning about the likely forms of the functions they will encounter. Previous work has used Gaussian processes—a statistical framework that extends Bayesian nonparametric approaches to regression—to model human function learning. We build on this work, modeling the process of (...)
  26. Bayesian inference given data 'significant at α': Tests of point hypotheses. D. J. Johnstone & D. V. Lindley - 1995 - Theory and Decision 38 (1):51-60.
  27. Hierarchical Bayesian narrative-making under variable uncertainty. Alex Jinich-Diamant & Leonardo Christov-Moore - 2023 - Behavioral and Brain Sciences 46:e97.
    While Conviction Narrative Theory correctly criticizes utility-based accounts of decision-making, it unfairly reduces probabilistic models to point estimates and treats affect and narrative as mechanistically opaque yet explanatorily sufficient modules. Hierarchically nested Bayesian accounts offer a mechanistically explicit and parsimonious alternative incorporating affect into a single biologically plausible precision-weighted mechanism that tunes decision-making toward narrative versus sensory dependence under varying uncertainty levels.
  28. Novelty and Inductive Generalization in Human Reinforcement Learning. Samuel J. Gershman & Yael Niv - 2015 - Topics in Cognitive Science 7 (3):391-415.
    In reinforcement learning, a decision maker searching for the most rewarding option is often faced with the question: What is the value of an option that has never been tried before? One way to frame this question is as an inductive problem: How can I generalize my previous experience with one set of options to a novel option? We show how hierarchical Bayesian inference can be used to solve this problem, and we describe an equivalence between the (...)
    1 citation
  29. Bayesian Inference and Contractualist Justification on Interstate 95. Arthur Isak Applbaum - 2014 - In Andrew I. Cohen & Christopher H. Wellman (eds.), Contemporary Debates in Applied Ethics. Wiley-Blackwell. p. 219.
    3 citations
  30. Performing Bayesian inference with exemplar models. Lei Shi, Naomi H. Feldman & Thomas L. Griffiths - 2008 - In B. C. Love, K. McRae & V. M. Sloutsky (eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society. Cognitive Science Society. pp. 745-750.
    8 citations
  31. Too Many Cooks: Bayesian Inference for Coordinating Multi‐Agent Collaboration. Sarah A. Wu, Rose E. Wang, James A. Evans, Joshua B. Tenenbaum, David C. Parkes & Max Kleiman-Weiner - 2021 - Topics in Cognitive Science 13 (2):414-432.
    Collaboration requires agents to coordinate their behavior on the fly, sometimes cooperating to solve a single task together and other times dividing it up into sub‐tasks to work on in parallel. Underlying the human ability to collaborate is theory‐of‐mind (ToM), the ability to infer the hidden mental states that drive others to act. Here, we develop Bayesian Delegation, a decentralized multi‐agent learning mechanism with these abilities. Bayesian Delegation enables agents to rapidly infer the hidden intentions of others by (...)
    4 citations
  32. Word learning as Bayesian inference. Fei Xu & Joshua B. Tenenbaum - 2007 - Psychological Review 114 (2):245-272.
  33. Simplifying Bayesian Inference: The General Case. Stefan Krauß, Laura Martignon & Ulrich Hoffrage - 1999 - In L. Magnani, N. J. Nersessian & P. Thagard (eds.), Model-Based Reasoning in Scientific Discovery. Kluwer/Plenum. p. 165.
  34. Picturing classical and quantum Bayesian inference. Bob Coecke & Robert W. Spekkens - 2012 - Synthese 186 (3):651-696.
    We introduce a graphical framework for Bayesian inference that is sufficiently general to accommodate not just the standard case but also recent proposals for a theory of quantum Bayesian inference wherein one considers density operators rather than probability distributions as representative of degrees of belief. The diagrammatic framework is stated in the graphical language of symmetric monoidal categories and of compact structures and Frobenius structures therein, in which Bayesian inversion boils down to transposition with respect (...)
    2 citations
  35. Vision as Bayesian inference: analysis by synthesis? Alan Yuille & Daniel Kersten - 2006 - Trends in Cognitive Sciences 10 (7):301-308.
  36. A Bayesian inference model for metamemory. Xiao Hu, Jun Zheng, Ningxin Su, Tian Fan, Chunliang Yang, Yue Yin, Stephen M. Fleming & Liang Luo - 2021 - Psychological Review 128 (5):824-855.
  37. The Bayesian sampler: Generic Bayesian inference causes incoherence in human probability judgments. Jian-Qiao Zhu, Adam N. Sanborn & Nick Chater - 2020 - Psychological Review 127 (5):719-748.
    9 citations
  38. Hierarchical inductive inference methods. Moshe Koppel - 1989 - Logique Et Analyse 32 (128):285-295.
  39. Bayesian Inference with Indeterminate Probabilities. Stephen Spielman - 1976 - PSA: Proceedings of the Biennial Meeting of the Philosophy of Science Association 1976:185-196.
    The theory of personal probability needs to be developed as a logic of credibility in order to provide an adequate basis for the theories of scientific inference and rational decision making. But standard systems of personal probability impose formal structures on probability relationships which are too restrictive to qualify them as logics of credibility. Moreover, some rules for conditional probability have no justification as principles of credibility. A formal system of qualitative probability which is free of these defects and (...)
  40. New Semantics for Bayesian Inference: The Interpretive Problem and Its Solutions. Olav Benjamin Vassend - 2019 - Philosophy of Science 86 (4):696-718.
    Scientists often study hypotheses that they know to be false. This creates an interpretive problem for Bayesians because the probability assigned to a hypothesis is typically interpreted as the probability that the hypothesis is true. I argue that solving the interpretive problem requires coming up with a new semantics for Bayesian inference. I present and contrast two new semantic frameworks, and I argue that both of them support the claim that there is pervasive pragmatic encroachment on whether a (...)
    5 citations
  41. A Survey of Model Evaluation Approaches With a Tutorial on Hierarchical Bayesian Methods. Richard M. Shiffrin, Michael D. Lee, Woojae Kim & Eric-Jan Wagenmakers - 2008 - Cognitive Science 32 (8):1248-1284.
    This article reviews current methods for evaluating models in the cognitive sciences, including theoretically based approaches, such as Bayes factors and minimum description length measures; simulation approaches, including model mimicry evaluations; and practical approaches, such as validation and generalization measures. This article argues that, although often useful in specific settings, most of these approaches are limited in their ability to give a general assessment of models. This article argues that hierarchical methods, generally, and hierarchical Bayesian methods, specifically, (...)
    26 citations
  42. Conditional Degree of Belief and Bayesian Inference. Jan Sprenger - 2020 - Philosophy of Science 87 (2):319-335.
    Why are conditional degrees of belief in an observation E, given a statistical hypothesis H, aligned with the objective probabilities expressed by H? After showing that standard replies are not satisfactory, I develop a suppositional analysis of conditional degree of belief, transferring Ramsey’s classical proposal to statistical inference. The analysis saves the alignment, explains the role of chance-credence coordination, and rebuts the charge of arbitrary assessment of evidence in Bayesian inference. Finally, I explore the implications of this (...)
    9 citations
  43. Visual shape perception as Bayesian inference of 3D object-centered shape representations. Goker Erdogan & Robert A. Jacobs - 2017 - Psychological Review 124 (6):740-761.
    7 citations
  44. Computational Neuropsychology and Bayesian Inference. Thomas Parr, Geraint Rees & Karl J. Friston - 2018 - Frontiers in Human Neuroscience 12.
  45. Enactivism and predictive processing: A non-representational view. Michael David Kirchhoff & Ian Robertson - 2018 - Philosophical Explorations 21 (2):264-281.
    This paper starts by considering an argument for thinking that predictive processing (PP) is representational. This argument suggests that the Kullback–Leibler (KL)-divergence provides an accessible measure of misrepresentation, and therefore, a measure of representational content in hierarchical Bayesian inference. The paper then argues that while the KL-divergence is a measure of information, it does not establish a sufficient measure of representational content. We argue that this follows from the fact that the KL-divergence is a measure of relative (...)
    33 citations
  46. Interpretations of Probability and Bayesian Inference—an Overview. Peter Lukan - 2020 - Acta Analytica 35 (1):129-146.
    In this article, I first give a short outline of the different interpretations of the concept of probability that emerged in the twentieth century. In what follows, I give an overview of the main problems and problematic concepts from the philosophy of probability and show how they relate to Bayesian inference. In this overview, I emphasise that the understanding of the main concepts related to different interpretations of probability influences the understanding and status of Bayesian inference. (...)
  47. A dual approach to Bayesian inference and adaptive control. Leigh Tesfatsion - 1982 - Theory and Decision 14 (2):177-194.
    Probability updating via Bayes' rule often entails extensive informational and computational requirements. In consequence, relatively few practical applications of Bayesian adaptive control techniques have been attempted. This paper discusses an alternative approach to adaptive control, Bayesian in spirit, which shifts attention from the updating of probability distributions via transitional probability assessments to the direct updating of the criterion function, itself, via transitional utility assessments. Results are illustrated in terms of an adaptive reinvestment two-armed bandit problem.
    1 citation
  48. Children’s quantitative Bayesian inferences from natural frequencies and number of chances. Stefania Pighin, Vittorio Girotto & Katya Tentori - 2017 - Cognition 168 (C):164-175.
    2 citations
  49. Associative learning or Bayesian inference? Revisiting backwards blocking reasoning in adults. Deon T. Benton & David H. Rakison - 2023 - Cognition 241 (C):105626.
  50. Trivalent Conditionals: Stalnaker's Thesis and Bayesian Inference. Paul Égré, Lorenzo Rossi & Jan Sprenger - manuscript.
    This paper develops a trivalent semantics for indicative conditionals and extends it to a probabilistic theory of valid inference and inductive learning with conditionals. On this account, (i) all complex conditionals can be rephrased as simple conditionals, connecting our account to Adams's theory of p-valid inference; (ii) we obtain Stalnaker's Thesis as a theorem while avoiding the well-known triviality results; (iii) we generalize Bayesian conditionalization to an updating principle for conditional sentences. The final result is a unified (...)
Showing 1–50 of 1000+