Results for 'Bayesian probabilistic model'

1000+ found
  1.
    Bayesian Teaching Model of image Based on Image Recognition by Deep Learning. 은은숙 - 2020 - Journal of the New Korean Philosophical Association 102:271-296.
    Synthesizing the principles of image recognition in deep learning with those of image recognition in young children, this paper proposes a new teaching-learning model for image-concept learning, the "Bayesian Structure-constructivist Teaching-learning Model" (BSTM). In other words, it aims to construct a new teaching model for young children's image-concept learning on the basis of the synergy obtained by comparing the principles of machine learning with those of human learning. In this context, the discussion proceeds on three levels. First, it examines, from a historico-critical perspective, the historically important theories of children's image learning: the "whole-object hypothesis," the "taxonomic hypothesis," the "mutual-exclusivity hypothesis," and the "basic-level category hypothesis." Second, (...)
  2.
    A Probabilistic Model of Melody Perception.David Temperley - 2008 - Cognitive Science 32 (2):418-444.
    This study presents a probabilistic model of melody perception, which infers the key of a melody and also judges the probability of the melody itself. The model uses Bayesian reasoning: For any “surface” pattern and underlying “structure,” we can infer the structure maximizing P(structure|surface) based on knowledge of P(surface, structure). The probability of the surface can then be calculated as ∑ P(surface, structure), summed over all structures. In this case, the surface is a pattern of notes; (...)
    8 citations
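The Bayesian scheme this abstract summarizes can be illustrated with a toy sketch. All names and numbers below (the two keys, the note pattern, the probability table) are invented for illustration and are not Temperley's actual model:

```python
# Toy sketch of the inference scheme described above: infer a hidden
# "structure" (here, a key) from a "surface" (a note pattern), and score
# the surface itself by its marginal probability. Numbers are made up.

structures = {"C major": 0.5, "G major": 0.5}  # prior P(structure)
likelihood = {                                  # P(surface | structure)
    ("C-E-G", "C major"): 0.20,
    ("C-E-G", "G major"): 0.05,
}

def posterior_and_marginal(surface):
    # joint P(surface, structure) = P(surface | structure) * P(structure)
    joint = {s: likelihood.get((surface, s), 0.0) * p
             for s, p in structures.items()}
    marginal = sum(joint.values())              # P(surface) = sum over structures
    posterior = {s: j / marginal for s, j in joint.items()}
    best = max(posterior, key=posterior.get)    # argmax P(structure | surface)
    return best, posterior, marginal

best, post, marg = posterior_and_marginal("C-E-G")
print(best, post, marg)
```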
  3.
    Probabilistic Model-Based Malaria Disease Recognition System.Rahila Parveen, Wei Song, Baozhi Qiu, Mairaj Nabi Bhatti, Tallal Hassan & Ziyi Liu - 2021 - Complexity 2021:1-11.
    In this paper, we present a probabilistic-based method to predict malaria disease at an early stage. Malaria is a very dangerous disease that creates a lot of health problems. Therefore, there is a need for a system that helps us to recognize this disease at early stages through the visual symptoms and from the environmental data. In this paper, we proposed a Bayesian network model to predict the occurrences of malaria disease. The proposed BN model is (...)
  4.
    Probabilistic models as theories of children's minds.Alison Gopnik - 2011 - Behavioral and Brain Sciences 34 (4):200-201.
    My research program proposes that children have representations and learning mechanisms that can be characterized as causal models of the world (...)
    3 citations
  5.
    The Hidden Markov Topic Model: A Probabilistic Model of Semantic Representation.Mark Andrews & Gabriella Vigliocco - 2010 - Topics in Cognitive Science 2 (1):101-113.
    In this paper, we describe a model that learns semantic representations from the distributional statistics of language. This model, however, goes beyond the common bag‐of‐words paradigm, and infers semantic representations by taking into account the inherent sequential nature of linguistic data. The model we describe, which we refer to as a Hidden Markov Topics model, is a natural extension of the current state of the art in Bayesian bag‐of‐words models, that is, the Topics model (...)
    3 citations
  6.
    Probabilistic modelling for software quality control.Norman Fenton, Paul Krause & Martin Neil - 2002 - Journal of Applied Non-Classical Logics 12 (2):173-188.
    As is clear to any user of software, quality control of software has not reached the same levels of sophistication as it has with traditional manufacturing. In this paper we argue that this is because insufficient thought is being given to the methods of reasoning under uncertainty that are appropriate to this domain. We then describe how we have built a large-scale Bayesian network to overcome the difficulties that have so far been met in software quality control. This exploits (...)
  7.
    The Probabilistic Mind: Prospects for Bayesian Cognitive Science.Nick Chater & Mike Oaksford (eds.) - 2008 - Oxford University Press.
    'The Probabilistic Mind' is a follow-up to the influential and highly cited 'Rational Models of Cognition'. It brings together developments in understanding how, and how far, high-level cognitive processes can be understood in rational terms, and particularly using probabilistic Bayesian methods.
    45 citations
  8.
    The punctuated equilibrium of scientific change: a Bayesian network model.Patrick Grim, Frank Seidl, Calum McNamara, Isabell N. Astor & Caroline Diaso - 2022 - Synthese 200 (4):1-25.
    Our scientific theories, like our cognitive structures in general, consist of propositions linked by evidential, explanatory, probabilistic, and logical connections. Those theoretical webs ‘impinge on the world at their edges,’ subject to a continuing barrage of incoming evidence. Our credences in the various elements of those structures change in response to that continuing barrage of evidence, as do the perceived connections between them. Here we model scientific theories as Bayesian nets, with credences at nodes and conditional links (...)
    1 citation
  9.
    Pinning down the theoretical commitments of Bayesian cognitive models.Matt Jones & Bradley C. Love - 2011 - Behavioral and Brain Sciences 34 (4):215-231.
    Mathematical developments in probabilistic inference have led to optimism over the prospects for Bayesian models of cognition. Our target article calls for better differentiation of these technical developments from theoretical contributions. It distinguishes between Bayesian Fundamentalism, which is theoretically limited because of its neglect of psychological mechanism, and Bayesian Enlightenment, which integrates rational and mechanistic considerations and is thus better positioned to advance psychological theory. The commentaries almost uniformly agree that mechanistic grounding is critical to the (...)
    2 citations
  10. Bayesian Fundamentalism or Enlightenment? On the explanatory status and theoretical contributions of Bayesian models of cognition.Matt Jones & Bradley C. Love - 2011 - Behavioral and Brain Sciences 34 (4):169-188.
    The prominence of Bayesian modeling of cognition has increased recently largely because of mathematical advances in specifying and deriving predictions from complex probabilistic models. Much of this research aims to demonstrate that cognitive behavior can be explained from rational principles alone, without recourse to psychological or neurological processes and representations. We note commonalities between this rational approach and other movements in psychology – namely, Behaviorism and evolutionary psychology – that set aside mechanistic explanations or make use of optimality (...)
    123 citations
  11. Radical probabilism and Bayesian conditioning.Richard Bradley - 2005 - Philosophy of Science 72 (2):342-364.
    Richard Jeffrey espoused an antifoundationalist variant of Bayesian thinking that he termed ‘Radical Probabilism’. Radical Probabilism denies both the existence of an ideal, unbiased starting point for our attempts to learn about the world and the dogma of classical Bayesianism that the only justified change of belief is one based on the learning of certainties. Probabilistic judgment is basic and irreducible. Bayesian conditioning is appropriate when interaction with the environment yields new certainty of belief in some proposition (...)
    39 citations
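The contrast between classical Bayesian conditioning (learning a certainty) and the non-certain belief change Radical Probabilism emphasizes can be sketched with Jeffrey's rule; the numbers below are illustrative:

```python
# Classical conditioning assumes evidence E is learned with certainty;
# Jeffrey conditioning only shifts the probability of the partition {E, not-E}.

def condition(p_a_given_e):
    # classical conditioning on E: P_new(A) = P(A | E)
    return p_a_given_e

def jeffrey(p_a_given_e, p_a_given_not_e, q_e):
    # Jeffrey conditioning: P_new(A) = P(A|E) q(E) + P(A|not-E) (1 - q(E))
    return p_a_given_e * q_e + p_a_given_not_e * (1 - q_e)

p_a_given_e, p_a_given_not_e = 0.9, 0.2
print(condition(p_a_given_e))                      # E learned with certainty
print(jeffrey(p_a_given_e, p_a_given_not_e, 0.7))  # E merely becomes 70% likely
```

When q(E) = 1, Jeffrey conditioning reduces to classical conditioning, which is the sense in which the latter is a special case.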
  12. Bayesian Epistemology.Luc Bovens & Stephan Hartmann - 2003 - Oxford: Oxford University Press. Edited by Stephan Hartmann.
    Probabilistic models have much to offer to philosophy. We continually receive information from a variety of sources: from our senses, from witnesses, from scientific instruments. When considering whether we should believe this information, we assess whether the sources are independent, how reliable they are, and how plausible and coherent the information is. Bovens and Hartmann provide a systematic Bayesian account of these features of reasoning. Simple Bayesian Networks allow us to model alternative assumptions about the nature (...)
    302 citations
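A minimal sketch of the kind of Bayesian-network reasoning described here, with toy numbers of my own (not taken from the book): two partially reliable, conditionally independent witnesses both report that hypothesis H is true, and the coherence of their reports raises the posterior:

```python
# Two partially reliable witnesses, conditionally independent given H.
# All probabilities are illustrative.

prior_h = 0.3
p_report_given_h = 0.8      # each witness's "hit" rate
p_report_given_not_h = 0.2  # each witness's false-report rate

def posterior_after_reports(n_reports):
    # conditional independence: likelihoods multiply across witnesses
    like_h = p_report_given_h ** n_reports
    like_not_h = p_report_given_not_h ** n_reports
    num = like_h * prior_h
    return num / (num + like_not_h * (1 - prior_h))

print(posterior_after_reports(1))  # one report
print(posterior_after_reports(2))  # two coherent reports raise P(H) further
```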
  13. Bayesian Test of Significance for Conditional Independence: The Multinomial Model.Julio Michael Stern, Pablo de Morais Andrade & Carlos Alberto de Braganca Pereira - 2014 - Entropy 16:1376-1395.
    Conditional independence tests have received special attention lately in machine learning and computational intelligence related literature as an important indicator of the relationship among the variables used by their models. In the field of probabilistic graphical models, which includes Bayesian network models, conditional independence tests are especially important for the task of learning the probabilistic graphical model structure from data. In this paper, we propose the full Bayesian significance test for tests of conditional independence for (...)
    1 citation
  14.
    Bayesian model learning based on predictive entropy.Jukka Corander & Pekka Marttinen - 2006 - Journal of Logic, Language and Information 15 (1-2):5-20.
    Bayesian paradigm has been widely acknowledged as a coherent approach to learning putative probability model structures from a finite class of candidate models. Bayesian learning is based on measuring the predictive ability of a model in terms of the corresponding marginal data distribution, which equals the expectation of the likelihood with respect to a prior distribution for model parameters. The main controversy related to this learning method stems from the necessity of specifying proper prior distributions (...)
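The marginal-likelihood idea described here can be sketched under an assumed Beta-Bernoulli setup (my example, not the paper's): the marginal probability of the data is the expectation of the likelihood with respect to the prior on the parameter, and candidate models can be compared on that quantity:

```python
# Marginal likelihood for a Bernoulli model with a uniform Beta(1,1) prior,
# compared against a rival model whose parameter is fixed in advance.

from math import factorial

def marginal_likelihood_uniform_prior(k, n):
    # integral over theta of theta^k (1-theta)^(n-k) dtheta = k! (n-k)! / (n+1)!
    # i.e. the prior expectation of the likelihood of a particular sequence
    # with k successes in n trials
    return factorial(k) * factorial(n - k) / factorial(n + 1)

def likelihood_fixed_theta(k, n, theta):
    # rival model with no free parameter: theta fixed beforehand
    return theta ** k * (1 - theta) ** (n - k)

k, n = 8, 10
print(marginal_likelihood_uniform_prior(k, n))
print(likelihood_fixed_theta(k, n, 0.5))
```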
  15. Adjectival vagueness in a Bayesian model of interpretation.Daniel Lassiter & Noah D. Goodman - 2017 - Synthese 194 (10):3801-3836.
    We derive a probabilistic account of the vagueness and context-sensitivity of scalar adjectives from a Bayesian approach to communication and interpretation. We describe an iterated-reasoning architecture for pragmatic interpretation and illustrate it with a simple scalar implicature example. We then show how to enrich the apparatus to handle pragmatic reasoning about the values of free variables, explore its predictions about the interpretation of scalar adjectives, and show how this model implements Edgington’s Vagueness: a reader, 1997) account of (...)
    21 citations
  16.
    The Logical Problem of Language Acquisition: A Probabilistic Perspective.Anne S. Hsu & Nick Chater - 2010 - Cognitive Science 34 (6):972-1016.
    Natural language is full of patterns that appear to fit with general linguistic rules but are ungrammatical. There has been much debate over how children acquire these “linguistic restrictions,” and whether innate language knowledge is needed. Recently, it has been shown that restrictions in language can be learned asymptotically via probabilistic inference using the minimum description length (MDL) principle. Here, we extend the MDL approach to give a simple and practical methodology for estimating how much linguistic data are required (...)
    13 citations
  17.
    Bayesian reverse-engineering considered as a research strategy for cognitive science.Carlos Zednik & Frank Jäkel - 2016 - Synthese 193 (12):3951-3985.
    Bayesian reverse-engineering is a research strategy for developing three-level explanations of behavior and cognition. Starting from a computational-level analysis of behavior and cognition as optimal probabilistic inference, Bayesian reverse-engineers apply numerous tweaks and heuristics to formulate testable hypotheses at the algorithmic and implementational levels. In so doing, they exploit recent technological advances in Bayesian artificial intelligence, machine learning, and statistics, but also consider established principles from cognitive psychology and neuroscience. Although these tweaks and heuristics are highly (...)
    21 citations
  18. A Model of Minimal Probabilistic Belief Revision.Andrés Perea - 2009 - Theory and Decision 67 (2):163-222.
    In the literature there are at least two models for probabilistic belief revision: Bayesian updating and imaging [Lewis, D. K. (1973), Counterfactuals, Blackwell, Oxford; Gärdenfors, P. (1988), Knowledge in flux: modeling the dynamics of epistemic states, MIT Press, Cambridge, MA]. In this paper we focus on imaging rules that can be described by the following procedure: (1) Identify every state with some real valued vector of characteristics, and accordingly identify every probabilistic belief with an expected vector of (...)
    1 citation
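The two revision rules named here can be contrasted in a toy sketch of my own construction, in which worlds are points on a line, conditioning renormalizes within the proposition A, and imaging moves each world's probability to its nearest A-world:

```python
# Bayesian conditioning vs. imaging on a toy space of worlds 0..3.
# Belief is a probability distribution over worlds; A is a proposition
# (a set of worlds). Numbers are illustrative.

worlds = [0, 1, 2, 3]
belief = {0: 0.4, 1: 0.3, 2: 0.2, 3: 0.1}
A = {2, 3}  # proposition: "the world is 2 or 3"

def bayes_condition(belief, A):
    # zero out non-A worlds, renormalize within A
    z = sum(p for w, p in belief.items() if w in A)
    return {w: (p / z if w in A else 0.0) for w, p in belief.items()}

def image(belief, A):
    # shift each world's probability to its nearest A-world
    new = {w: 0.0 for w in belief}
    for w, p in belief.items():
        nearest = min(A, key=lambda a: abs(a - w))
        new[nearest] += p
    return new

print(bayes_condition(belief, A))
print(image(belief, A))
```

The two rules generally disagree: conditioning preserves the odds among A-worlds, while imaging lets non-A worlds "vote" for their nearest A-world.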
  19. The Myside Bias in Argument Evaluation: A Bayesian Model.Edoardo Baccini & Stephan Hartmann - 2022 - Proceedings of the Annual Meeting of the Cognitive Science Society 44:1512-1518.
    The "myside bias'' in evaluating arguments is an empirically well-confirmed phenomenon that consists of overweighting arguments that endorse one's beliefs or attack alternative beliefs while underweighting arguments that attack one's beliefs or defend alternative beliefs. This paper makes two contributions: First, it proposes a probabilistic model that adequately captures three salient features of myside bias in argument evaluation. Second, it provides a Bayesian justification of this model, thus showing that myside bias has a rational Bayesian (...)
    1 citation
  20.
    Beyond the discussion between Learning Theory of Piagetian Propositional Logic and that of Bayesian Causational Inference. 은은숙 - 2019 - Journal of the New Korean Philosophical Association 97:247-266.
    The Bayesian model of probabilistic inference, which emerged some fifteen years ago, has become a powerful core thesis dominating research in statistics, philosophy of science, psychology, cognitive science, computer science, and neuroscience, and is now making a strong impact in education and even in logic. Owing to this diversification, there are, according to Good, 46,656 varieties of Bayesianism. Among these various models, scholars who apply the Bayesian probability model to learning theory claim that a learner's learning process always follows exactly the process of Bayesian probabilistic inference. It is clear, however, that this probabilistic model inherits Piaget's constructivist outlook, for the scholars who support it themselves describe their own scholarly movement as a "new rational constructivism" (...)
  21.
    A Bayesian‐Network Approach to Lexical Disambiguation.Leila M. R. Eizirik, Valmir C. Barbosa & Sueli B. T. Mendes - 1993 - Cognitive Science 17 (2):257-283.
    Lexical ambiguity can be syntactic if it involves more than one grammatical category for a single word, or semantic if more than one meaning can be associated with a word. In this article we discuss the application of a Bayesian‐network model in the resolution of lexical ambiguities of both types. The network we propose comprises a parsing subnetwork, which can be constructed automatically for any context‐free grammar, and a subnetwork for semantic analysis, which, in the spirit of Fillmore's (...)
  22.
    Uncertainty and Persistence: a Bayesian Update Semantics for Probabilistic Expressions.Deniz Rudin - 2018 - Journal of Philosophical Logic 47 (3):365-405.
    This paper presents a general-purpose update semantics for expressions of subjective uncertainty in natural language. First, a set of desiderata are established for how expressions of subjective uncertainty should behave in dynamic, update-based semantic systems; then extant implementations of expressions of subjective uncertainty in such models are evaluated and found wanting; finally, a new update semantics is proposed. The desiderata at the heart of this paper center around the contention that expressions of subjective uncertainty express beliefs which are not persistent, (...)
    5 citations
  23.
    Representing credal imprecision: from sets of measures to hierarchical Bayesian models.Daniel Lassiter - 2020 - Philosophical Studies 177 (6):1463-1485.
    The basic Bayesian model of credence states, where each individual’s belief state is represented by a single probability measure, has been criticized as psychologically implausible, unable to represent the intuitive distinction between precise and imprecise probabilities, and normatively unjustifiable due to a need to adopt arbitrary, unmotivated priors. These arguments are often used to motivate a model on which imprecise credal states are represented by sets of probability measures. I connect this debate with recent work in cognitive science, where probabilistic models are typically provided with explicit hierarchical structure. Hierarchical Bayesian models are immune to many classic arguments against single-measure models. They represent grades of imprecision in probability assignments automatically, have strong psychological motivation, and can be normatively justified even when certain arbitrary decisions are required. In addition, hierarchical models show much more plausible learning behavior than flat representations in terms of sets of measures, which—on standard assumptions about update—rule out simple cases of learning from a starting point of total ignorance.
    2 citations
  24. Coherentism, reliability and bayesian networks.Luc Bovens & Erik J. Olsson - 2000 - Mind 109 (436):685-719.
    The coherentist theory of justification provides a response to the sceptical challenge: even though the independent processes by which we gather information about the world may be of dubious quality, the internal coherence of the information provides the justification for our empirical beliefs. This central canon of the coherence theory of justification is tested within the framework of Bayesian networks, which is a theory of probabilistic reasoning in artificial intelligence. We interpret the independence of the information gathering processes (...)
    67 citations
  25.
    The Probabilistic Cell: Implementation of a Probabilistic Inference by the Biochemical Mechanisms of Phototransduction.Jacques Droulez - 2010 - Acta Biotheoretica 58 (2-3):103-120.
    When we perceive the external world, our brain has to deal with the incompleteness and uncertainty associated with sensory inputs, memory and prior knowledge. In theoretical neuroscience probabilistic approaches have received a growing interest recently, as they account for the ability to reason with incomplete knowledge and to efficiently describe perceptive and behavioral tasks. How can the probability distributions that need to be estimated in these models be represented and processed in the brain, in particular at the single cell (...)
  26.
    Objective Bayesian nets for integrating consistent datasets.Jürgen Landes & Jon Williamson - 2022 - Journal of Artificial Intelligence Research 74:393-458.
    This paper addresses a data integration problem: given several mutually consistent datasets each of which measures a subset of the variables of interest, how can one construct a probabilistic model that fits the data and gives reasonable answers to questions which are under-determined by the data? Here we show how to obtain a Bayesian network model which represents the unique probability function that agrees with the probability distributions measured by the datasets and otherwise has maximum entropy. (...)
    2 citations
  27.
    Bayesian argumentation and the value of logical validity.Benjamin Eva & Stephan Hartmann - 2018 - Psychological Review 125 (5):806-821.
    According to the Bayesian paradigm in the psychology of reasoning, the norms by which everyday human cognition is best evaluated are probabilistic rather than logical in character. Recently, the Bayesian paradigm has been applied to the domain of argumentation, where the fundamental norms are traditionally assumed to be logical. Here, we present a major generalisation of extant Bayesian approaches to argumentation that utilizes a new class of Bayesian learning methods that are better suited to modelling (...)
    23 citations
  28.
    A Context‐Dependent Bayesian Account for Causal‐Based Categorization.Nicolás Marchant, Tadeg Quillien & Sergio E. Chaigneau - 2023 - Cognitive Science 47 (1):e13240.
    The causal view of categories assumes that categories are represented by features and their causal relations. To study the effect of causal knowledge on categorization, researchers have used Bayesian causal models. Within that framework, categorization may be viewed as dependent on a likelihood computation (i.e., the likelihood of an exemplar with a certain combination of features, given the category's causal model) or as a posterior computation (i.e., the probability that the exemplar belongs to the category, given its features). (...)
    1 citation
  29.
    Incremental Bayesian Category Learning From Natural Language.Lea Frermann & Mirella Lapata - 2016 - Cognitive Science 40 (6):1333-1381.
    Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper, we focus on categories acquired from natural language stimuli, that is, words. We present a Bayesian model that, unlike previous work, learns both categories and their features in a single process. We model category induction as two interrelated subproblems: the acquisition of features that discriminate among categories, and the grouping of concepts into categories based (...)
    3 citations
  30. Bayesian Cognitive Science. Routledge Encyclopaedia of Philosophy.Matteo Colombo - 2023 - Routledge Encyclopaedia of Philosophy.
    Bayesian cognitive science is a research programme that relies on modelling resources from Bayesian statistics for studying and understanding mind, brain, and behaviour. Conceiving of mental capacities as computing solutions to inductive problems, Bayesian cognitive scientists develop probabilistic models of mental capacities and evaluate their adequacy based on behavioural and neural data generated by humans (or other cognitive agents) performing a pertinent task. The overarching goal is to identify the mathematical principles, algorithmic procedures, and causal mechanisms (...)
  31.
    Gambling-Specific Cognitions Are Not Associated With Either Abstract or Probabilistic Reasoning: A Dual Frequentist-Bayesian Analysis of Individuals With and Without Gambling Disorder.Ismael Muela, Juan F. Navas & José C. Perales - 2021 - Frontiers in Psychology 11.
    BackgroundDistorted gambling-related cognitions are tightly related to gambling problems, and are one of the main targets of treatment for disordered gambling, but their etiology remains uncertain. Although folk wisdom and some theoretical approaches have linked them to lower domain-general reasoning abilities, evidence regarding that relationship remains unconvincing.MethodIn the present cross-sectional study, the relationship between probabilistic/abstract reasoning, as measured by the Berlin Numeracy Test, and the Matrices Test, respectively, and the five dimensions of the Gambling-Related Cognitions Scale, was tested in (...)
  32.
    A Bayesian approach to forward and inverse abstract argumentation problems.Hiroyuki Kido & Beishui Liao - 2022 - Journal of Applied Non-Classical Logics 32 (4):273-304.
    This paper studies a fundamental mechanism by which conflicts between arguments are drawn from sentiments regarding acceptability of the arguments. Given sets of arguments, an inverse abstract argumentation problem seeks attack relations between arguments such that acceptability semantics interprets each argument in the sets of arguments as being acceptable in each of the attack relations. It is an inverse problem of the traditional problem we refer to as the forward abstract argumentation problem. Given an attack relation, the forward abstract argumentation (...)
  33. Which Models of Scientific Explanation Are (In)Compatible with Inference to the Best Explanation?Yunus Prasetya - forthcoming - British Journal for the Philosophy of Science.
    In this article, I explore the compatibility of inference to the best explanation (IBE) with several influential models and accounts of scientific explanation. First, I explore the different conceptions of IBE and limit my discussion to two: the heuristic conception and the objective Bayesian conception. Next, I discuss five models of scientific explanation with regard to each model’s compatibility with IBE. I argue that Kitcher’s unificationist account supports IBE; Railton’s deductive–nomological–probabilistic model, Salmon’s statistical-relevance model, and (...)
    3 citations
  34.
    Hierarchical Bayesian narrative-making under variable uncertainty.Alex Jinich-Diamant & Leonardo Christov-Moore - 2023 - Behavioral and Brain Sciences 46:e97.
    While Conviction Narrative Theory correctly criticizes utility-based accounts of decision-making, it unfairly reduces probabilistic models to point estimates and treats affect and narrative as mechanistically opaque yet explanatorily sufficient modules. Hierarchically nested Bayesian accounts offer a mechanistically explicit and parsimonious alternative incorporating affect into a single biologically plausible precision-weighted mechanism that tunes decision-making toward narrative versus sensory dependence under varying uncertainty levels.
  35. Probabilistic causal interaction.Charles Twardy - manuscript
    Using Bayesian network causal models, we provide a simple general account of probabilistic causal interaction. We also detail problems in the leading accounts by Ellery Eells, and any others which require valence reversals, contextual unanimity, or average effects.
     
  36. When the (Bayesian) ideal is not ideal.Danilo Fraga Dantas - 2023 - Logos and Episteme 15 (3):271-298.
    Bayesian epistemologists support the norms of probabilism and conditionalization using Dutch book and accuracy arguments. These arguments assume that rationality requires agents to maximize practical or epistemic value in every doxastic state, which is evaluated from a subjective point of view (e.g., the agent’s expectancy of value). The accuracy arguments also presuppose that agents are opinionated. The goal of this paper is to discuss the assumptions of these arguments, including the measure of epistemic value. I have designed AI agents (...)
  37.
    Purely subjective extended Bayesian models with Knightian unambiguity.Xiangyu Qu - 2015 - Theory and Decision 79 (4):547-571.
    This paper provides a model of belief representation in which ambiguity and unambiguity are endogenously distinguished in a purely subjective setting where objects of choices are, as usual, maps from states to consequences. Specifically, I first extend the maxmin expected utility theory and get a representation of beliefs such that the probabilistic beliefs over each ambiguous event are represented by a non-degenerate interval, while the ones over each unambiguous event are represented by a number. I then consider a (...)
  38. Probabilistic Alternatives to Bayesianism: The Case of Explanationism.Igor Douven & Jonah N. Schupbach - 2015 - Frontiers in Psychology 6.
    There has been a probabilistic turn in contemporary cognitive science. Far and away, most of the work in this vein is Bayesian, at least in name. Coinciding with this development, philosophers have increasingly promoted Bayesianism as the best normative account of how humans ought to reason. In this paper, we make a push for exploring the probabilistic terrain outside of Bayesianism. Non-Bayesian, but still probabilistic, theories provide plausible competitors both to descriptive and normative Bayesian (...)
    26 citations
  39.
    Probabilistic Model-Building Genetic Programming Using Bayesian Network Estimation [in Japanese].伊庭 斉志 長谷川 禎彦 - 2007 - Transactions of the Japanese Society for Artificial Intelligence 22 (1):37-47.
    Genetic Programming is a powerful optimization algorithm, which employs the crossover for genetic operation. Because the crossover operator in GP randomly selects sub-trees, the building blocks may be destroyed by the crossover. Recently, algorithms called PMBGPs based on probabilistic techniques have been proposed in order to improve the problem mentioned above. We propose a new PMBGP employing Bayesian network for generating new individuals with a special chromosome called expanded parse tree, which much reduces a number of possible symbols (...)
    1 citation
  40. Bayesian Variations: Essays on the Structure, Object, and Dynamics of Credence.Aron Vallinder - 2018 - Dissertation, London School of Economics
    According to the traditional Bayesian view of credence, its structure is that of precise probability, its objects are descriptive propositions about the empirical world, and its dynamics are given by conditionalization. Each of the three essays that make up this thesis deals with a different variation on this traditional picture. The first variation replaces precise probability with sets of probabilities. The resulting imprecise Bayesianism is sometimes motivated on the grounds that our beliefs should not be more precise than the (...)
    3 citations
  41. Probabilistic Opinion Pooling with Imprecise Probabilities.Rush T. Stewart & Ignacio Ojea Quintana - 2018 - Journal of Philosophical Logic 47 (1):17-45.
    The question of how the probabilistic opinions of different individuals should be aggregated to form a group opinion is controversial. But one assumption seems to be pretty much common ground: for a group of Bayesians, the representation of group opinion should itself be a unique probability distribution, 410–414, [45]; Bordley Management Science, 28, 1137–1148, [5]; Genest et al. The Annals of Statistics, 487–501, [21]; Genest and Zidek Statistical Science, 114–135, [23]; Mongin Journal of Economic Theory, 66, 313–351, [46]; Clemen (...)
    18 citations
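The "common ground" assumption the entry above questions is that group opinion is a single distribution, standardly obtained by linear pooling: a weighted average of the individual distributions. A minimal sketch with illustrative expert opinions and weights:

```python
# Linear opinion pooling: the group distribution is a convex combination of
# the experts' distributions over the same outcome space. Values illustrative.
def linear_pool(distributions, weights):
    assert abs(sum(weights) - 1.0) < 1e-9
    n = len(distributions[0])
    return [sum(w * d[i] for w, d in zip(weights, distributions))
            for i in range(n)]

experts = [[0.7, 0.3], [0.5, 0.5], [0.2, 0.8]]  # three opinions on a binary event
pooled = linear_pool(experts, [0.5, 0.3, 0.2])
print([round(x, 3) for x in pooled])
```

Stewart and Ojea Quintana's imprecise alternative would instead keep the whole set of admissible pools rather than committing to one weight vector.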
  42. Discovering syntactic deep structure via Bayesian statistics.Jason Eisner - 2002 - Cognitive Science 26 (3):255-268.
    In the Bayesian framework, a language learner should seek a grammar that explains observed data well and is also a priori probable. This paper proposes such a measure of prior probability. Indeed it develops a full statistical framework for lexicalized syntax. The learner's job is to discover the system of probabilistic transformations (often called lexical redundancy rules) that underlies the patterns of regular and irregular syntactic constructions listed in the lexicon. Specifically, the learner discovers what transformations apply in (...)
    6 citations
  43. Bayes and the first person: consciousness of thoughts, inner speech and probabilistic inference.Franz Knappik - 2018 - Synthese 195 (5):2113-2140.
    On a widely held view, episodes of inner speech provide at least one way in which we become conscious of our thoughts. However, it can be argued, on the one hand, that consciousness of thoughts in virtue of inner speech presupposes interpretation of the simulated speech. On the other hand, the need for such self-interpretation seems to clash with distinctive first-personal characteristics that we would normally ascribe to consciousness of one’s own thoughts: a special reliability; a lack of conscious ambiguity (...)
    3 citations
  44. A New Probabilistic Explanation of the Modus Ponens–Modus Tollens Asymmetry.Stephan Hartmann, Benjamin Eva & Henrik Singmann - 2019 - In Stephan Hartmann, Benjamin Eva & Henrik Singmann (eds.), CogSci 2019 Proceedings. Montreal, Québec, Canada: pp. 289–294.
    A consistent finding in research on conditional reasoning is that individuals are more likely to endorse the valid modus ponens (MP) inference than the equally valid modus tollens (MT) inference. This pattern holds for both abstract and probabilistic tasks. The existing explanation for this phenomenon within a Bayesian framework (e.g., Oaksford & Chater, 2008) accounts for the asymmetry by assuming separate probability distributions for MP and MT. We propose a novel explanation within a computational-level Bayesian (...)
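On the standard probabilistic reading, endorsing MP tracks P(q|p) while endorsing MT tracks P(¬p|¬q), and a single joint distribution can make these differ, which is one route to the asymmetry the entry describes. A toy joint distribution (values illustrative, not from the paper):

```python
# MP endorsement ~ P(q|p); MT endorsement ~ P(not-p|not-q).
# One illustrative joint distribution over (p, q) where MP > MT.
joint = {("p", "q"): 0.55, ("p", "~q"): 0.05,
         ("~p", "q"): 0.25, ("~p", "~q"): 0.15}

p_q_given_p = joint[("p", "q")] / (joint[("p", "q")] + joint[("p", "~q")])
p_notp_given_notq = joint[("~p", "~q")] / (joint[("p", "~q")] + joint[("~p", "~q")])

print(round(p_q_given_p, 3), round(p_notp_given_notq, 3))  # → 0.917 0.75
```

Hartmann, Eva, and Singmann's point is that one distribution can suffice, whereas the earlier Bayesian account posited separate distributions for the two inferences.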
  45. A Pragmatic Bayesian Platform for Automating Scientific Induction.Kevin B. Korb - 1992 - Dissertation, Indiana University
    This work provides a conceptual foundation for a Bayesian approach to artificial inference and learning. I argue that Bayesian confirmation theory provides a general normative theory of inductive learning and therefore should have a role in any artificially intelligent system that is to learn inductively about its world. I modify the usual Bayesian theory in three ways directly pertinent to an eventual research program in artificial intelligence. First, I construe Bayesian inference rules as defeasible, allowing them (...)
     
    1 citation
  46. Modeling Sensory Preference in Speech Motor Planning: A Bayesian Modeling Framework.Jean-François Patri, Julien Diard & Pascal Perrier - 2019 - Frontiers in Psychology 10.
    Experimental studies of speech production involving compensations for auditory and somatosensory perturbations and adaptation after training suggest that both types of sensory information are considered to plan and monitor speech production. Interestingly, individual sensory preferences have been observed in this context: subjects who compensate less for somatosensory perturbations compensate more for auditory perturbations, and vice versa. We propose to integrate this sensory preference phenomenon in a model of speech motor planning using a probabilistic model in which speech (...)
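One generic way to express a sensory preference like the one described above is standard Gaussian cue combination, in which each channel's estimate is weighted by its precision (inverse variance); this is a common Bayesian-modeling idiom, not the authors' specific framework, and all numbers are illustrative.

```python
# Precision-weighted fusion of auditory and somatosensory estimates.
# A "sensory preference" corresponds to the relative precision (1/variance)
# the planner assigns to each channel. Illustrative sketch only.
def fuse(mu_aud, var_aud, mu_som, var_som):
    w_aud = (1 / var_aud) / (1 / var_aud + 1 / var_som)
    return w_aud * mu_aud + (1 - w_aud) * mu_som

# Auditory-preferring subject: auditory variance low relative to somatosensory,
# so the fused estimate sits close to the auditory value.
fused = fuse(1.0, 0.1, 0.0, 0.4)
print(round(fused, 3))  # → 0.8
```

Raising the somatosensory precision instead would pull the fused estimate toward 0.0, mirroring the compensation trade-off the abstract reports.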
  47. Bayes and the first person: consciousness of thoughts, inner speech and probabilistic inference.Franz Knappik - 2017 - Synthese:1-28.
    On a widely held view, episodes of inner speech provide at least one way in which we become conscious of our thoughts. However, it can be argued, on the one hand, that consciousness of thoughts in virtue of inner speech presupposes interpretation of the simulated speech. On the other hand, the need for such self-interpretation seems to clash with distinctive first-personal characteristics that we would normally ascribe to consciousness of one’s own thoughts: a special reliability; a lack of conscious ambiguity (...)
    3 citations
  48. Probabilistic Reasoning in Cosmology.Yann Benétreau-Dupin - 2015 - Dissertation, The University of Western Ontario
    Cosmology raises novel philosophical questions regarding the use of probabilities in inference. This work aims at identifying and assessing lines of arguments and problematic principles in probabilistic reasoning in cosmology. -/- The first, second, and third papers deal with the intersection of two distinct problems: accounting for selection effects, and representing ignorance or indifference in probabilistic inferences. These two problems meet in the cosmology literature when anthropic considerations are used to predict cosmological parameters by conditionalizing the distribution of, (...)
    1 citation
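The anthropic conditionalization the entry mentions can be reduced to a toy calculation: weight each parameter value's prior by the probability that an observer exists given that value, then renormalize. This is only a schematic illustration of the selection-effect structure; values are invented.

```python
# Toy anthropic selection effect: P(theta | observed) ∝ P(observed | theta) P(theta).
prior = {0: 0.5, 1: 0.5}       # indifference prior over two parameter values
p_obs = {0: 0.1, 1: 0.9}       # chance an observer exists given each value

unnorm = {t: prior[t] * p_obs[t] for t in prior}
z = sum(unnorm.values())
posterior = {t: unnorm[t] / z for t in unnorm}
print(posterior)  # observer-friendly value dominates
```

The dissertation's worry is precisely whether the indifference prior in such calculations can be justified.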
  49. Are Jurors Intuitive Statisticians? Bayesian Causal Reasoning in Legal Contexts.Tamara Shengelia & David Lagnado - 2021 - Frontiers in Psychology 11.
    In criminal trials, evidence often involves a degree of uncertainty and decision-making includes moving from the initial presumption of innocence to inference about guilt based on that evidence. The jurors’ ability to combine evidence and make accurate intuitive probabilistic judgments underpins this process. Previous research has shown that errors in probabilistic reasoning can be explained by a misalignment of the evidence presented with the intuitive causal models that people construct. This has been explored in abstract and context-free situations. (...)
    1 citation
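The normative benchmark against which juror intuitions are usually compared is odds-form Bayesian updating: posterior odds equal prior odds times the likelihood ratio of each item of evidence. A toy sketch (all figures illustrative, and the independence of the evidence items is an assumption):

```python
# Combining items of evidence via likelihood ratios, assuming independence:
# posterior odds = prior odds × product of LRs. All numbers illustrative.
prior_odds = 1 / 999            # near-presumption of innocence: P(guilt) ≈ 0.001
likelihood_ratios = [1000, 50]  # e.g., hypothetical forensic match, eyewitness

posterior_odds = prior_odds
for lr in likelihood_ratios:
    posterior_odds *= lr

p_guilt = posterior_odds / (1 + posterior_odds)
print(round(p_guilt, 3))  # → 0.98
```

The causal-model point in the entry is that jurors' errors often come from a mismatch between presented evidence and the intuitive causal structure, not from inability to multiply odds.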
  50. Syncopation as Probabilistic Expectation: Conceptual, Computational, and Experimental Evidence.Noah R. Fram & Jonathan Berger - 2023 - Cognitive Science 47 (12):e13390.
    Definitions of syncopation share two characteristics: the presence of a meter or analogous hierarchical rhythmic structure and a displacement or contradiction of that structure. These attributes are translated in terms of a Bayesian theory of syncopation, where the syncopation of a rhythm is inferred based on a hierarchical structure that is, in turn, learned from the ongoing musical stimulus. Several experiments tested its simplest possible implementation, with equally weighted priors associated with different meters and independence of auditory events, which (...)
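The "simplest possible implementation" the entry describes, equally weighted priors over meters and independent auditory events, can be sketched as inferring a posterior over candidate metrical templates from a pattern of onsets. The templates, onset probabilities, and rhythm below are all invented for illustration.

```python
# Bayesian meter inference with uniform priors and independent events:
# an onset is more probable on a metrically strong slot than on a weak one.
METERS = {
    "duple":  [1, 0, 1, 0, 1, 0, 1, 0],  # strong slots in an 8-slot bar
    "triple": [1, 0, 0, 1, 0, 0, 1, 0],
}
P_ONSET = {1: 0.8, 0: 0.3}  # P(onset | strong slot), P(onset | weak slot)

def likelihood(onsets, strong):
    """P(onset pattern | meter) under slot-wise independence."""
    p = 1.0
    for onset, s in zip(onsets, strong):
        p_on = P_ONSET[s]
        p *= p_on if onset else (1 - p_on)
    return p

onsets = [1, 0, 1, 0, 1, 0, 1, 0]  # onset on every other slot
unnorm = {m: 0.5 * likelihood(onsets, s) for m, s in METERS.items()}
z = sum(unnorm.values())
posterior = {m: unnorm[m] / z for m in unnorm}
print(max(posterior, key=posterior.get))  # → duple
```

Syncopation then falls out as low probability of the observed onsets under the currently inferred meter.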
1 — 50 / 1000