Results for 'Bayesian computation'

1000+ found
  1. Bayesian computation and mechanism: Theoretical pluralism drives scientific emergence. David K. Sewell, Daniel R. Little & Stephan Lewandowsky - 2011 - Behavioral and Brain Sciences 34 (4):212-213.
    The breadth-first search adopted by Bayesian researchers to map out the conceptual space and identify what the framework can do is beneficial for science and reflective of its collaborative and incremental nature. Theoretical pluralism among researchers facilitates refinement of models within various levels of analysis, which ultimately enables effective cross-talk between different levels of analysis.
  2. Bayesian Computation Methods for Inference in Stochastic Kinetic Models. Eugenia Koblents, Inés P. Mariño & Joaquín Míguez - 2019 - Complexity 2019:1-15.
  3. Parameter Inference for Computational Cognitive Models with Approximate Bayesian Computation. Antti Kangasrääsiö, Jussi P. P. Jokinen, Antti Oulasvirta, Andrew Howes & Samuel Kaski - 2019 - Cognitive Science 43 (6):e12738.
    This paper addresses a common challenge with computational cognitive models: identifying parameter values that are both theoretically plausible and generate predictions that match well with empirical data. While computational models can offer deep explanations of cognition, they are computationally complex and often out of reach of traditional parameter fitting methods. Weak methodology may lead to premature rejection of valid models or to acceptance of models that might otherwise be falsified. Mathematically robust fitting methods are, therefore, essential to the progress of (...)
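    The approximate Bayesian computation (ABC) setting described in this abstract can be illustrated with a minimal rejection-ABC sketch. The simulator, prior, summary statistics, and tolerance below are illustrative assumptions, not the authors' model or method.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(theta, n=100):
    # Hypothetical simulator standing in for a cognitive model:
    # it maps a parameter to synthetic behavioural data.
    return rng.normal(loc=theta, scale=1.0, size=n)

def summary(data):
    # Summary statistics used to compare simulated and observed data.
    return np.array([data.mean(), data.std()])

def rejection_abc(observed, prior_sampler, n_draws=20000, tol=0.1):
    """Keep prior draws whose simulated summaries fall within `tol`
    (Euclidean distance) of the observed summaries."""
    obs_s = summary(observed)
    accepted = []
    for _ in range(n_draws):
        theta = prior_sampler()
        if np.linalg.norm(summary(simulate(theta)) - obs_s) < tol:
            accepted.append(theta)
    return np.array(accepted)

# Illustrative run: "observed" data generated with theta = 1.5.
observed = simulate(1.5)
posterior_draws = rejection_abc(observed, prior_sampler=lambda: rng.uniform(-5, 5))
print(posterior_draws.size, posterior_draws.mean())
```

    The accepted draws approximate the posterior over the parameter without ever evaluating a likelihood, which is what makes ABC attractive for simulator-only models.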
  4. Bayes and Darwin: How replicator populations implement Bayesian computations. Dániel Czégel, Hamza Giaffar, Joshua B. Tenenbaum & Eörs Szathmáry - 2022 - Bioessays 44 (4):2100255.
    Bayesian learning theory and evolutionary theory both formalize adaptive competition dynamics in possibly high‐dimensional, varying, and noisy environments. What do they have in common and how do they differ? In this paper, we discuss structural and dynamical analogies and their limits, both at a computational and an algorithmic‐mechanical level. We point out mathematical equivalences between their basic dynamical equations, generalizing the isomorphism between Bayesian update and replicator dynamics. We discuss how these mechanisms provide analogous answers to the challenge (...)
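    A compact way to state the isomorphism mentioned in this abstract (the notation here is generic, not necessarily the authors'): treating hypotheses as replicator types and the likelihood of the current datum as fitness, the Bayesian update

        p_{t+1}(h) = \frac{p_t(h)\, P(d_t \mid h)}{\sum_{h'} p_t(h')\, P(d_t \mid h')}

    has the same form as the discrete replicator equation

        x_{t+1}(i) = \frac{x_t(i)\, f_i}{\sum_{j} x_t(j)\, f_j},

    with posterior probability playing the role of type frequency and likelihood playing the role of fitness.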
  5. Bayesian Nets and Causality: Philosophical and Computational Foundations. Jon Williamson - 2004 - Oxford, England: Oxford University Press.
    Bayesian nets are widely used in artificial intelligence as a calculus for causal reasoning, enabling machines to make predictions, perform diagnoses, take decisions and even to discover causal relationships. This book, aimed at researchers and graduate students in computer science, mathematics and philosophy, brings together two important research topics: how to automate reasoning in artificial intelligence, and the nature of causality and probability in philosophy.
    70 citations
  6. The computational complexity of probabilistic inference using Bayesian belief networks. Gregory F. Cooper - 1990 - Artificial Intelligence 42 (2-3):393-405.
  7. Computational Neuropsychology and Bayesian Inference. Thomas Parr, Geraint Rees & Karl J. Friston - 2018 - Frontiers in Human Neuroscience 12.
  8. The Computations Underlying Religious Conversion: A Bayesian Decision Model. Francesco Rigoli - 2023 - Journal of Cognition and Culture 23 (1-2):241-257.
    Inspired by recent Bayesian interpretations about the psychology underlying religion, the paper introduces a theory proposing that religious conversion is shaped by three factors: (i) novel relevant information, experienced in perceptual or in social form (e.g., following interaction with missionaries); (ii) changes in the utility (e.g., expressed in an opportunity to raise in social rank) associated with accepting a new religious creed; and (iii) prior beliefs, favouring religious faiths that, although new, still remain consistent with entrenched cultural views (resulting (...)
  9. Mental models, computational explanation and Bayesian cognitive science: Commentary on Knauff and Gazzo Castañeda (2023). Mike Oaksford - 2023 - Thinking and Reasoning 29 (3):371-382.
    Knauff and Gazzo Castañeda (2022) object to using the term “new paradigm” to describe recent developments in the psychology of reasoning. This paper concedes that the Kuhnian term “paradigm” may be queried. What cannot is that the work subsumed under this heading is part of a new, progressive movement that spans the brain and cognitive sciences: Bayesian cognitive science. Sampling algorithms and Bayes nets used to explain biases in JDM can implement the Bayesian new paradigm approach belying any (...)
    1 citation
  10. Comprehension and computation in Bayesian problem solving. Eric D. Johnson & Elisabet Tubau - 2015 - Frontiers in Psychology 6:137658.
    Humans have long been characterized as poor probabilistic reasoners when presented with explicit numerical information. Bayesian word problems provide a well-known example of this, where even highly educated and cognitively skilled individuals fail to adhere to mathematical norms. It is widely agreed that natural frequencies can facilitate Bayesian reasoning relative to normalized formats (e.g. probabilities, percentages), both by clarifying logical set-subset relations and by simplifying numerical calculations. Nevertheless, between-study performance on “transparent” Bayesian problems varies widely, and generally (...)
    16 citations
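    A standard textbook-style illustration of the facilitation effect described in this abstract (the numbers are hypothetical, not taken from the paper): with normalized probabilities P(disease) = 0.01, P(positive | disease) = 0.8, and P(positive | no disease) = 0.096, Bayes' rule gives

        P(\mathrm{disease} \mid +) = \frac{0.8 \times 0.01}{0.8 \times 0.01 + 0.096 \times 0.99} \approx 0.078.

    Restated as natural frequencies, out of 1,000 people 10 have the disease, 8 of them test positive, and about 95 of the 990 healthy people also test positive, so the same answer reduces to the set-subset ratio 8 / (8 + 95) ≈ 0.078.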
  11. Computing pure Bayesian-Nash equilibria in games with finite actions and continuous types. Zinovi Rabinovich, Victor Naroditskiy, Enrico H. Gerding & Nicholas R. Jennings - 2013 - Artificial Intelligence 195 (C):106-139.
  12. Intelligent Computing in Bioinformatics-An Efficient Attribute Ordering Optimization in Bayesian Networks for Prognostic Modeling of the Metabolic Syndrome. Han-Saem Park & Sung-Bae Cho - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 4115--381.
  13. Children can solve Bayesian problems: the role of representation in mental computation. Liqi Zhu & Gerd Gigerenzer - 2006 - Cognition 98 (3):287-308.
  14. Jon Williamson. Bayesian nets and causality: Philosophical and computational foundations. Kevin B. Korb - 2007 - Philosophia Mathematica 15 (3):389-396.
    Bayesian networks are computer programs which represent probabilistic relationships graphically as directed acyclic graphs, and which can use those graphs to reason probabilistically, often at relatively low computational cost. Almost every expert system in the past tried to support probabilistic reasoning, but because of the computational difficulties they took approximating short-cuts, such as those afforded by MYCIN's certainty factors. That all changed with the publication of Judea Pearl's Probabilistic Reasoning in Intelligent Systems, in 1988, which synthesized a decade (...)
  15. Review: Bayesian Nets and Causality: Philosophical and Computational Foundations. [REVIEW] S. Choi - 2006 - Mind 115 (458):502-506.
  16. Bayesian Psychiatry and the Social Focus of Delusions. Daniel Williams & Marcella Montagnese - manuscript
    A large and growing body of research in computational psychiatry draws on Bayesian modelling to illuminate the dysfunctions and aberrations that underlie psychiatric disorders. After identifying the chief attractions of this research programme, we argue that its typical focus on abstract, domain-general inferential processes is likely to obscure many of the distinctive ways in which the human mind can break down and malfunction. We illustrate this by appeal to psychosis and the social phenomenology of delusions.
    1 citation
  17. Bayesian reverse-engineering considered as a research strategy for cognitive science. Carlos Zednik & Frank Jäkel - 2016 - Synthese 193 (12):3951-3985.
    Bayesian reverse-engineering is a research strategy for developing three-level explanations of behavior and cognition. Starting from a computational-level analysis of behavior and cognition as optimal probabilistic inference, Bayesian reverse-engineers apply numerous tweaks and heuristics to formulate testable hypotheses at the algorithmic and implementational levels. In so doing, they exploit recent technological advances in Bayesian artificial intelligence, machine learning, and statistics, but also consider established principles from cognitive psychology and neuroscience. Although these tweaks and heuristics are highly pragmatic (...)
    21 citations
  18. A transformation of Bayesian statistics: Computation, prediction, and rationality. Johannes Lenhard - 2022 - Studies in History and Philosophy of Science Part A 92 (C):144-151.
  19. Jon Williamson, Bayesian nets and causality: Philosophical and computational foundations. [REVIEW] Bradford McCall - 2008 - Minds and Machines 18 (2):301-302.
  20. Self-evaluation of decision-making: A general Bayesian framework for metacognitive computation. Stephen M. Fleming & Nathaniel D. Daw - 2017 - Psychological Review 124 (1):91-114.
    39 citations
  21. Bayesian Intractability Is Not an Ailment That Approximation Can Cure. Johan Kwisthout, Todd Wareham & Iris van Rooij - 2011 - Cognitive Science 35 (5):779-784.
    Bayesian models are often criticized for postulating computations that are computationally intractable (e.g., NP-hard) and therefore implausibly performed by our resource-bounded minds/brains. Our letter is motivated by the observation that Bayesian modelers have been claiming that they can counter this charge of “intractability” by proposing that Bayesian computations can be tractably approximated. We would like to make the cognitive science community aware of the problematic nature of such claims. We cite mathematical proofs from the computer science literature (...)
    25 citations
  22. A Bayesian model of the jumping-to-conclusions bias and its relationship to psychopathology. Nicole Tan, Yiyun Shou, Junwen Chen & Bruce K. Christensen - forthcoming - Cognition and Emotion.
    The mechanisms by which delusion and anxiety affect the tendency to make hasty decisions (Jumping-to-Conclusions bias) remain unclear. This paper proposes a Bayesian computational model that explores the assignment of evidence weights as a potential explanation of the Jumping-to-Conclusions bias using the Beads Task. We also investigate the Beads Task as a repeated measure by varying the key aspects of the paradigm. The Bayesian model estimations from two online studies showed that higher delusional ideation promoted reduced belief updating (...)
  23. Bayesian merging of opinions and algorithmic randomness. Francesca Zaffora Blando - forthcoming - British Journal for the Philosophy of Science.
    We study the phenomenon of merging of opinions for computationally limited Bayesian agents from the perspective of algorithmic randomness. When they agree on which data streams are algorithmically random, two Bayesian agents beginning the learning process with different priors may be seen as having compatible beliefs about the global uniformity of nature. This is because the algorithmically random data streams are of necessity globally regular: they are precisely the sequences that satisfy certain important statistical laws. By virtue of (...)
    1 citation
  24. Improving Bayesian statistics understanding in the age of Big Data with the bayesvl R package. Quan-Hoang Vuong, Viet-Phuong La, Minh-Hoang Nguyen, Manh-Toan Ho, Manh-Tung Ho & Peter Mantello - 2020 - Software Impacts 4 (1):100016.
    The exponential growth of social data both in volume and complexity has increasingly exposed many of the shortcomings of the conventional frequentist approach to statistics. The scientific community has called for careful usage of the approach and its inference. Meanwhile, the alternative method, Bayesian statistics, still faces considerable barriers toward a more widespread application. The bayesvl R package is an open program, designed for implementing Bayesian modeling and analysis using the Stan language’s no-U-turn (NUTS) sampler. The package combines (...)
    3 citations
  25. Bayesian Evidence Test for Precise Hypotheses. Julio Michael Stern - 2003 - Journal of Statistical Planning and Inference 117 (2):185-198.
    The full Bayesian significance test (FBST) for precise hypotheses is presented, with some illustrative applications. In the FBST we compute the evidence against the precise hypothesis. We discuss some of the theoretical properties of the FBST, and provide an invariant formulation for coordinate transformations, provided a reference density has been established. This evidence is the probability of the highest relative surprise set, “tangential” to the sub-manifold (of the parameter space) that defines the null hypothesis.
    14 citations
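    The construction sketched in this abstract can be written out explicitly (notation adapted from the FBST literature, not quoted from the paper). Given a posterior density $p(\theta \mid x)$ on parameter space $\Theta$, a null set $\Theta_0 \subset \Theta$, and a reference density $r(\theta)$, define the relative surprise $s(\theta) = p(\theta \mid x)/r(\theta)$ and its supremum over the null, $s^{*} = \sup_{\theta \in \Theta_0} s(\theta)$. The tangential (highest relative surprise) set is $T = \{\theta \in \Theta : s(\theta) > s^{*}\}$, the evidence against $H_0$ is $\overline{\mathrm{ev}}(H_0) = \int_T p(\theta \mid x)\, d\theta$, and the evidence supporting $H_0$ is $\mathrm{ev}(H_0) = 1 - \overline{\mathrm{ev}}(H_0)$.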
  26. Bayesian perspectives on mathematical practice. James Franklin - 2020 - Handbook of the History and Philosophy of Mathematical Practice.
    Mathematicians often speak of conjectures as being confirmed by evidence that falls short of proof. For their own conjectures, evidence justifies further work in looking for a proof. Those conjectures of mathematics that have long resisted proof, such as the Riemann hypothesis, have had to be considered in terms of the evidence for and against them. In recent decades, massive increases in computer power have permitted the gathering of huge amounts of numerical evidence, both for conjectures in pure mathematics and (...)
    2 citations
  27. Bayesian Cognitive Science. Matteo Colombo - 2023 - Routledge Encyclopaedia of Philosophy.
    Bayesian cognitive science is a research programme that relies on modelling resources from Bayesian statistics for studying and understanding mind, brain, and behaviour. Conceiving of mental capacities as computing solutions to inductive problems, Bayesian cognitive scientists develop probabilistic models of mental capacities and evaluate their adequacy based on behavioural and neural data generated by humans (or other cognitive agents) performing a pertinent task. The overarching goal is to identify the mathematical principles, algorithmic procedures, and causal mechanisms that (...)
  28. A Bayesian Theory of Sequential Causal Learning and Abstract Transfer. Hongjing Lu, Randall R. Rojas, Tom Beckers & Alan L. Yuille - 2016 - Cognitive Science 40 (2):404-439.
    Two key research issues in the field of causal learning are how people acquire causal knowledge when observing data that are presented sequentially, and the level of abstraction at which learning takes place. Does sequential causal learning solely involve the acquisition of specific cause-effect links, or do learners also acquire knowledge about abstract causal constraints? Recent empirical studies have revealed that experience with one set of causal cues can dramatically alter subsequent learning and performance with entirely different cues, suggesting that (...)
    3 citations
  29. A Bayesian Account of Psychopathy: A Model of Lacks Remorse and Self-Aggrandizing. Aaron Prosser, Karl Friston, Nathan Bakker & Thomas Parr - 2018 - Computational Psychiatry 2:92-140.
    This article proposes a formal model that integrates cognitive and psychodynamic psychotherapeutic models of psychopathy to show how two major psychopathic traits called lacks remorse and self-aggrandizing can be understood as a form of abnormal Bayesian inference about the self. This model draws on the predictive coding (i.e., active inference) framework, a neurobiologically plausible explanatory framework for message passing in the brain that is formalized in terms of hierarchical Bayesian inference. In summary, this model proposes that these two (...)
    2 citations
  30. Bayesian Word Learning in Multiple Language Environments. Benjamin D. Zinszer, Sebi V. Rolotti, Fan Li & Ping Li - 2018 - Cognitive Science 42 (S2):439-462.
    Infant language learners are faced with the difficult inductive problem of determining how new words map to novel or known objects in their environment. Bayesian inference models have been successful at using the sparse information available in natural child-directed speech to build candidate lexicons and infer speakers’ referential intentions. We begin by asking how a Bayesian model optimized for monolingual input generalizes to new monolingual or bilingual corpora and find that, especially in the case of the bilingual input, (...)
  31. Profit-Driven Corporate Social Responsibility as a Bayesian Real Option in Green Computing. Hemantha S. B. Herath, Tejaswini C. Herath & Paul Dunn - 2019 - Journal of Business Ethics 158 (2):387-402.
    The idea that socially responsible investments can be viewed in terms of real options is relatively new. We expand on this notion by demonstrating how real option theory, within a Bayesian decision-making framework, can be used by managers to help when making green technology investment decisions. The Bayesian decision framework provides a more flexible approach to investment decision making because it adjusts for new information. Responding to a call for multidisciplinary and multifaceted research in environmental sustainability, this paper (...)
  32. Full Bayesian Significance Test Applied to Multivariate Normal Structure Models. Marcelo de Souza Lauretto, Carlos Alberto de Braganca Pereira, Julio Michael Stern & Shelemiahu Zacks - 2003 - Brazilian Journal of Probability and Statistics 17:147-168.
    The Full Bayesian Significance Test (FBST) for precise hypotheses is applied to a Multivariate Normal Structure (MNS) model. In the FBST we compute the evidence against the precise hypothesis. This evidence is the probability of the Highest Relative Surprise Set (HRSS) tangent to the sub-manifold (of the parameter space) that defines the null hypothesis. The MNS model we present appears when testing equivalence conditions for genetic expression measurements, using micro-array technology.
    2 citations
  33. Bayesian Confirmation or Ordinary Confirmation? Yongfeng Yuan - 2020 - Studia Logica 108 (3):425-449.
    This article reveals one general scheme for creating counterexamples to Bayesian confirmation theory. The reason for the problems is that in daily life the degree of confirmation is affected not only by probability but also by some non-probabilistic factors, e.g., structural similarity, quantity of evidence, and marginal utility, while Bayesian confirmation theory considers only probabilities to measure the degree of confirmation. This article resolves these problems after some detailed analyses, and proposes a new confirmation measure based on (...)
    1 citation
  34. A Context‐Dependent Bayesian Account for Causal‐Based Categorization. Nicolás Marchant, Tadeg Quillien & Sergio E. Chaigneau - 2023 - Cognitive Science 47 (1):e13240.
    The causal view of categories assumes that categories are represented by features and their causal relations. To study the effect of causal knowledge on categorization, researchers have used Bayesian causal models. Within that framework, categorization may be viewed as dependent on a likelihood computation (i.e., the likelihood of an exemplar with a certain combination of features, given the category's causal model) or as a posterior computation (i.e., the probability that the exemplar belongs to the category, given its (...)
    1 citation
  35. Bayesian Surprise Predicts Human Event Segmentation in Story Listening. Manoj Kumar, Ariel Goldstein, Sebastian Michelmann, Jeffrey M. Zacks, Uri Hasson & Kenneth A. Norman - 2023 - Cognitive Science 47 (10):e13343.
    Event segmentation theory posits that people segment continuous experience into discrete events and that event boundaries occur when there are large transient increases in prediction error. Here, we set out to test this theory in the context of story listening, by using a deep learning language model (GPT‐2) to compute the predicted probability distribution of the next word, at each point in the story. For three stories, we used the probability distributions generated by GPT‐2 to compute the time series of (...)
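    The per-word prediction signal described in this abstract can be sketched with a publicly available GPT-2 checkpoint. This is an illustrative surprisal computation only, not the authors' pipeline (their Bayesian-surprise measure and word-level alignment involve further steps), and the sample sentence is invented.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

text = "The old lighthouse keeper climbed the stairs and opened the door."
ids = tokenizer(text, return_tensors="pt").input_ids

with torch.no_grad():
    logits = model(ids).logits  # shape: (1, seq_len, vocab_size)

# Surprisal of token t is -log p(token_t | tokens_<t).
log_probs = torch.log_softmax(logits[:, :-1], dim=-1)
targets = ids[:, 1:]
surprisal = -log_probs.gather(2, targets.unsqueeze(-1)).squeeze(-1)

for tok, s in zip(tokenizer.convert_ids_to_tokens(targets[0].tolist()), surprisal[0]):
    print(f"{tok:>12s}  {s.item():6.2f} nats")
```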
  36. Can Computational Goals Inform Theories of Vision? Barton L. Anderson - 2015 - Topics in Cognitive Science 7 (2):274-286.
    One of the most lasting contributions of Marr's posthumous book is his articulation of the different “levels of analysis” that are needed to understand vision. Although a variety of work has examined how these different levels are related, there is comparatively little examination of the assumptions on which his proposed levels rest, or the plausibility of the approach Marr articulated given those assumptions. Marr placed particular significance on computational level theory, which specifies the “goal” of a computation, its appropriateness (...)
    10 citations
  37. Could Bayesian cognitive science undermine dual-process theories of reasoning? Mike Oaksford - 2023 - Behavioral and Brain Sciences 46:e134.
    Computational-level models proposed in recent Bayesian cognitive science predict both the “biased” and correct responses on many tasks. So, rather than possessing two reasoning systems, people can generate both possible responses within a single system. Consequently, although an account of why people make one response rather than another is required, dual processes of reasoning may not be.
  38. Bayesian Frugality and the Representation of Attention. K. Dolega & J. Dewhurst - 2019 - Journal of Consciousness Studies 26 (3-4):38-63.
    This paper spells out the attention schema theory of consciousness in terms of the predictive processing framework. As it stands, the attention schema theory lacks a plausible computational formalization that could be used for developing possible mechanistic models of how it is realized in the brain. The predictive processing framework, on the other hand, fails to provide a plausible explanation of the subjective quality or the phenomenal aspect of conscious experience. The aim of this work is to apply the formal (...)
    2 citations
  39. A Computational Model of Early Argument Structure Acquisition. Afra Alishahi & Suzanne Stevenson - 2008 - Cognitive Science 32 (5):789-834.
    How children go about learning the general regularities that govern language, as well as keeping track of the exceptions to them, remains one of the challenging open questions in the cognitive science of language. Computational modeling is an important methodology in research aimed at addressing this issue. We must determine appropriate learning mechanisms that can grasp generalizations from examples of specific usages, and that exhibit patterns of behavior over the course of learning similar to those in children. Early learning of (...)
    13 citations
  40. When the (Bayesian) ideal is not ideal. Danilo Fraga Dantas - 2023 - Logos and Episteme 15 (3):271-298.
    Bayesian epistemologists support the norms of probabilism and conditionalization using Dutch book and accuracy arguments. These arguments assume that rationality requires agents to maximize practical or epistemic value in every doxastic state, which is evaluated from a subjective point of view (e.g., the agent’s expectancy of value). The accuracy arguments also presuppose that agents are opinionated. The goal of this paper is to discuss the assumptions of these arguments, including the measure of epistemic value. I have designed AI agents (...)
  41. A Bayesian model of legal syllogistic reasoning. Axel Constant - forthcoming - Artificial Intelligence and Law:1-22.
    Bayesian approaches to legal reasoning propose causal models of the relation between evidence, the credibility of evidence, and ultimate hypotheses, or verdicts. They assume that legal reasoning is the process whereby one infers the posterior probability of a verdict based on observed evidence, or facts. In practice, legal reasoning does not operate quite that way. Legal reasoning is also an attempt at inferring applicable rules derived from legal precedents or statutes based on the facts at hand. To make such (...)
  42. For Bayesian Wannabes, Are Disagreements Not About Information? Robin Hanson - 2003 - Theory and Decision 54 (2):105-123.
    Consider two agents who want to be Bayesians with a common prior, but who cannot due to computational limitations. If these agents agree that their estimates are consistent with certain easy-to-compute consistency constraints, then they can agree to disagree about any random variable only if they also agree to disagree, to a similar degree and in a stronger sense, about an average error. Yet average error is a state-independent random variable, and one agent's estimate of it is also agreed to (...)
    6 citations
  43. Imprecise Bayesian Networks as Causal Models. David Kinney - 2018 - Information 9 (9):211.
    This article considers the extent to which Bayesian networks with imprecise probabilities, which are used in statistics and computer science for predictive purposes, can be used to represent causal structure. It is argued that the adequacy conditions for causal representation in the precise context—the Causal Markov Condition and Minimality—do not readily translate into the imprecise context. Crucial to this argument is the fact that the independence relation between random variables can be understood in several different ways when the joint (...)
    1 citation
  44. Hierarchical Bayesian models as formal models of causal reasoning. York Hagmayer & Ralf Mayrhofer - 2013 - Argument and Computation 4 (1):36-45.
    (2013). Hierarchical Bayesian models as formal models of causal reasoning. Argument & Computation: Vol. 4, Formal Models of Reasoning in Cognitive Psychology, pp. 36-45. doi: 10.1080/19462166.2012.700321.
    1 citation
  45. Are Jurors Intuitive Statisticians? Bayesian Causal Reasoning in Legal Contexts. Tamara Shengelia & David Lagnado - 2021 - Frontiers in Psychology 11.
    In criminal trials, evidence often involves a degree of uncertainty and decision-making includes moving from the initial presumption of innocence to inference about guilt based on that evidence. The jurors’ ability to combine evidence and make accurate intuitive probabilistic judgments underpins this process. Previous research has shown that errors in probabilistic reasoning can be explained by a misalignment of the evidence presented with the intuitive causal models that people construct. This has been explored in abstract and context-free situations. However, less (...)
    1 citation
  46. Bayesian Natural Language Semantics and Pragmatics. Henk Zeevat & Hans-Christian Schmitz (eds.) - 2015 - Springer.
    The contributions in this volume focus on the Bayesian interpretation of natural languages, which is widely used in areas of artificial intelligence, cognitive science, and computational linguistics. This is the first volume to take up topics in Bayesian Natural Language Interpretation and make proposals based on information theory, probability theory, and related fields. The methodologies offered here extend to the target semantic and pragmatic analyses of computational natural language interpretation. Bayesian approaches to natural language semantics and pragmatics (...)
  47. A Pragmatic Bayesian Platform for Automating Scientific Induction. Kevin B. Korb - 1992 - Dissertation, Indiana University
    This work provides a conceptual foundation for a Bayesian approach to artificial inference and learning. I argue that Bayesian confirmation theory provides a general normative theory of inductive learning and therefore should have a role in any artificially intelligent system that is to learn inductively about its world. I modify the usual Bayesian theory in three ways directly pertinent to an eventual research program in artificial intelligence. First, I construe Bayesian inference rules as defeasible, allowing them (...)
    1 citation
  48. Bayesian estimation and testing of structural equation models. Richard Scheines - unknown
    The Gibbs sampler can be used to obtain samples of arbitrary size from the posterior distribution over the parameters of a structural equation model (SEM) given covariance data and a prior distribution over the parameters. Point estimates, standard deviations and interval estimates for the parameters can be computed from these samples. If the prior distribution over the parameters is uninformative, the posterior is proportional to the likelihood, and asymptotically the inferences based on the Gibbs sample are the same as those (...)
    5 citations
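    The abstract above describes Gibbs sampling from a posterior over SEM parameters and then summarizing the draws. Below is a toy sketch of that general recipe for a much simpler model (a univariate normal with unknown mean and variance, not the structural equation setting); the data and priors are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data; in the paper's setting this would be covariance data
# for a structural equation model rather than a univariate sample.
y = rng.normal(loc=2.0, scale=1.5, size=200)
n = y.size

# Weakly informative priors: mu ~ N(m0, v0), sigma^2 ~ Inv-Gamma(a0, b0).
m0, v0, a0, b0 = 0.0, 100.0, 2.0, 2.0

n_iter, burn = 5000, 1000
mu, sigma2 = 0.0, 1.0
draws = np.empty((n_iter, 2))

for t in range(n_iter):
    # Full conditional for mu given sigma^2 (normal).
    v_n = 1.0 / (1.0 / v0 + n / sigma2)
    m_n = v_n * (m0 / v0 + y.sum() / sigma2)
    mu = rng.normal(m_n, np.sqrt(v_n))

    # Full conditional for sigma^2 given mu (inverse gamma).
    a_n = a0 + n / 2.0
    b_n = b0 + 0.5 * np.sum((y - mu) ** 2)
    sigma2 = 1.0 / rng.gamma(a_n, 1.0 / b_n)

    draws[t] = mu, sigma2

post = draws[burn:]
print("posterior means:", post.mean(axis=0))              # point estimates
print("posterior sds:  ", post.std(axis=0))               # standard deviations
print("95% intervals:\n", np.percentile(post, [2.5, 97.5], axis=0))
```

    The summaries at the end correspond to the point estimates, standard deviations, and interval estimates the abstract mentions computing from the Gibbs sample.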
  49. Bayesian Test of Significance for Conditional Independence: The Multinomial Model. Julio Michael Stern, Pablo de Morais Andrade & Carlos Alberto de Braganca Pereira - 2014 - Entropy 16:1376-1395.
    Conditional independence tests have received special attention lately in machine learning and computational intelligence related literature as an important indicator of the relationship among the variables used by their models. In the field of probabilistic graphical models, which includes Bayesian network models, conditional independence tests are especially important for the task of learning the probabilistic graphical model structure from data. In this paper, we propose the full Bayesian significance test for tests of conditional independence for discrete datasets. The (...)
    1 citation
  50. Bayesian learning for cooperation in multi-agent systems. Mair Allen-Williams & Nicholas R. Jennings - 2009 - In L. Magnani (ed.), Computational Intelligence. pp. 321-360.
Showing results 1-50 of 1000+.