IBE ('Inference to the best explanation' or abduction) is a popular and highly plausible theory of how we should judge the evidence for claims of past events based on present evidence. It has been notably developed and supported recently by Meyer following Lipton. I believe this theory is essentially correct. This paper supports IBE from a probability perspective, and argues that the retrodictive probabilities involved in such inferences should be analysed in terms of predictive probabilities and a priori probability ratios of initial events. The key point is to separate these two features. Disagreements over evidence can be traced to disagreements over either the a priori probability ratios or predictive conditional ratios. In many cases, in real science, judgements of the former are necessarily subjective. The principles of iterated evidence are also discussed. The Sceptic's position is criticised as ignoring iteration of evidence, and characteristically failing to adjust a priori probability ratios in response to empirical evidence.
In standard probability theory, probability zero is not the same as impossibility. But many have suggested that only impossible events should have probability zero. This can be arranged if we allow infinitesimal probabilities, but infinitesimals do not solve all of the problems. We will see that regular probabilities are not invariant over rigid transformations, even for simple, bounded, countable, constructive, and disjoint sets. Hence, regular chances cannot be determined by space-time invariant physical laws, and regular credences cannot satisfy seemingly reasonable symmetry principles. Moreover, the examples here are immune to the objections against Williamson's infinite coin flips.
There is a divide in epistemology between those who think that, for any hypothesis and set of total evidence, there is a unique rational credence in that hypothesis, and those who think that there can be many rational credences. Schultheis offers a novel and potentially devastating objection to Permissivism, on the grounds that Permissivism permits dominated credences. I will argue that Permissivists can plausibly block Schultheis' argument. The issue turns on getting clear about whether we should be certain whether our credences are rational.
How do we ascribe subjective probability? In decision theory, this question is often addressed by representation theorems, going back to Ramsey (1926), which tell us how to define or measure subjective probability by observable preferences. However, standard representation theorems make strong rationality assumptions, in particular expected utility maximization. How do we ascribe subjective probability to agents which do not satisfy these strong rationality assumptions? I present a representation theorem with weak rationality assumptions which can be used to define or measure subjective probability for partly irrational agents.
I argue that when we use 'probability' language in epistemic contexts—e.g., when we ask how probable some hypothesis is, given the evidence available to us—we are talking about degrees of support, rather than degrees of belief. The epistemic probability of A given B is the mind-independent degree to which B supports A, not the degree to which someone with B as their evidence believes A, or the degree to which someone would or should believe A if they had B as their evidence. My central argument is that the degree-of-support interpretation lets us better model good reasoning in certain cases involving old evidence. Degree-of-belief interpretations make the wrong predictions not only about whether old evidence confirms new hypotheses, but about the values of the probabilities that enter into Bayes' Theorem when we calculate the probability of hypotheses conditional on old evidence and new background information.
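For reference, the relevant form of Bayes' Theorem, relativized to background information, is the standard one (with H the hypothesis, E the old evidence, and K the new background information):

```latex
P(H \mid E \wedge K) \;=\; \frac{P(E \mid H \wedge K)\, P(H \mid K)}{P(E \mid K)}
```

The dispute the abstract describes concerns what values the right-hand terms should take when E is already part of the agent's total evidence: on a degree-of-belief reading they threaten to collapse to 1, while on a degree-of-support reading they need not.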
Supra-Bayesianism is the Bayesian response to learning the opinions of others. Probability pooling constitutes an alternative response. One natural question is whether there are cases where probability pooling gives the supra-Bayesian result. This has been called the problem of Bayes-compatibility for pooling functions. It is known that in a common prior setting, under standard assumptions, linear pooling cannot be nontrivially Bayes-compatible. We show by contrast that geometric pooling can be nontrivially Bayes-compatible. Indeed, we show that, under certain assumptions, geometric and Bayes-compatible pooling are equivalent. Granting supra-Bayesianism its usual normative status, one upshot of our study is thus that, in a certain class of epistemic contexts, geometric pooling enjoys a normative advantage over linear pooling as a social learning mechanism. We discuss the philosophical ramifications of this advantage, which we show to be robust to variations in our statement of the Bayes-compatibility problem.
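To make the contrast concrete, here is a minimal sketch of the two pooling rules (an illustration with made-up credences and equal weights, not the paper's formalism): linear pooling takes a weighted arithmetic mean of the agents' probabilities, while geometric pooling takes a weighted geometric mean and renormalizes.

```python
import numpy as np

def linear_pool(dists, weights):
    """Weighted arithmetic mean of the agents' probability distributions."""
    return np.average(np.asarray(dists), axis=0, weights=weights)

def geometric_pool(dists, weights):
    """Weighted geometric mean of the agents' distributions, renormalized."""
    dists = np.asarray(dists)
    pooled = np.prod(dists ** np.asarray(weights)[:, None], axis=0)
    return pooled / pooled.sum()

# Two agents' credences over three mutually exclusive hypotheses (made up).
p1 = np.array([0.7, 0.2, 0.1])
p2 = np.array([0.3, 0.4, 0.3])
w = [0.5, 0.5]

print(linear_pool([p1, p2], w))     # [0.5 0.3 0.2]
print(geometric_pool([p1, p2], w))  # ~[0.501 0.310 0.189]
```

Geometric pooling also commutes with conditioning the pooled distribution on new evidence (so-called external Bayesianity), a well-known property that is at least suggestive of why it can align with supra-Bayesian updating.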
The framework of Solomonoff prediction assigns prior probability to hypotheses inversely proportional to their Kolmogorov complexity. There are two well-known problems. First, the Solomonoff prior is relative to a choice of Universal Turing machine. Second, the Solomonoff prior is not computable. However, there are responses to both problems. Different Solomonoff priors converge with more and more data. Further, there are computable approximations to the Solomonoff prior. I argue that there is a tension between these two responses. This is because computable approximations to Solomonoff prediction do not always converge.
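In one common presentation (a sketch of the standard definition, suppressing the technicalities about semimeasures), the prior weight of a hypothesis h relative to a universal Turing machine U is

```latex
P_U(h) \;\propto\; 2^{-K_U(h)},
\qquad
K_U(h) \;=\; \min\{\, |p| \;:\; U(p) = h \,\},
```

where |p| is the length of program p in bits. The dependence on the choice of U and the non-computability of K_U are exactly the two problems the abstract mentions.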
On one view of the traditional doxastic attitudes, belief is credence 1, disbelief is credence 0 and suspension is any precise credence between 0 and 1. In ‘Rational agnosticism and degrees of belief’ (2013) Jane Friedman argues, against this view, that there are cases where a credence of 0 is required but where suspension is permitted. If this were so, belief, disbelief and suspension could not be identified with, or reduced to, the aforementioned credences. I argue that Friedman relies on two different notions of epistemic rationality and two different kinds of evidential absence. I clarify these distinctions and show that her argument is either not valid or includes implausible premisses, twice over. If this is so, the view that belief is credence 1, disbelief is credence 0 and suspension is any precise credence between 0 and 1 cannot be rejected on the grounds that Friedman proposes.
Is the fact that our universe contains fine-tuned life evidence that we live in a multiverse? Ian Hacking and Roger White influentially argue that it is not. We approach this question through a systematic framework for self-locating epistemology. As it turns out, leading approaches to self-locating evidence agree that the fact that our own universe contains fine-tuned life indeed confirms the existence of a multiverse. This convergence is no accident: we present two theorems showing that, in this setting, any updating rule that satisfies a few reasonable conditions will have the same feature. The conclusion that fine-tuned life provides evidence for a multiverse is hard to escape.
This M.A. thesis explores the intricate Problem of Induction, contrasting three seminal approaches: Hume's habit-centric view, Reichenbach's emphasis on the Principle of Uniformity of Nature, and Strawson's belief in the innate rationality of induction. While Hume's perspective lays the groundwork for Kant's a priori and Van Cleve's a posteriori validation, Reichenbach and Salmon present pragmatic justifications, underscoring the methodological and probabilistic underpinnings of inductive reasoning and identifying epistemological ignorance as the guide for their optimality criteria. Strawson, challenging prevailing notions, posits that induction, anchored by prior probabilities and evidence, is inherently rational, obviating the need for external validation. The study integrates concepts of frequentism, conditionalization, and probabilistic laws to bring out the truth-conducive character of induction. It culminates by confronting the dual challenges of quantitative and profound scepticism, championing a holistic approach. The study thereby aims to enrich the discourse on the epistemic foundations of the Problem of Induction, particularly its implications for scientific inquiry and the laws of nature.
An aspect of Peirce’s thought that may still be underappreciated is his resistance to what Levi calls _pedigree epistemology_, to the idea that a central focus in epistemology should be the justification of current beliefs. Somewhat more widely appreciated is his rejection of the subjective view of probability. We argue that Peirce’s criticisms of subjectivism, to the extent that they grant that such a conception of probability is viable at all, revert to pedigree epistemology. A thoroughgoing rejection of pedigree in the context of probabilistic epistemology, however, _does_ challenge prominent subjectivist responses to the problem of the priors.
Being a researcher is challenging, especially in the beginning. Early Career Researchers (ECRs) need achievements to secure and expand their careers. In today’s academic landscape, researchers are under many pressures: data collection costs, the expectation of novelty, analytical skill requirements, the lengthy publishing process, and the overall competitiveness of the career. Innovative thinking and the ability to turn good ideas into good papers are the keys to success.
An important line of response to scepticism appeals to the best explanation. But anti-sceptics have not engaged much with work on explanation in the philosophy of science. I plan to investigate whether plausible assumptions about best explanations really do favour anti-scepticism. I will argue that there are ways of constructing sceptical hypotheses on which the assumptions do favour anti-scepticism, but that the degree of support for anti-scepticism is small.
Our aim here is to present a result that connects some approaches to justifying countable additivity. This result allows us to better understand the force of a recent argument for countable additivity due to Easwaran. We have two main points. First, Easwaran’s argument in favour of countable additivity should have little persuasive force for those permissive probabilists who have already made their peace with violations of conglomerability. As our result shows, Easwaran’s main premiss – the comparative principle – is strictly stronger than conglomerability. Second, with the connections between the comparative principle and other probabilistic concepts clearly in view, we point out that opponents of countable additivity can still make a case that countable additivity is an arbitrary stopping point between finite and full additivity.
Many epistemological problems can be solved by the objective Bayesian view that there are rationality constraints on priors, that is, inductive probabilities. But attempts to work out these constraints have run into such serious problems that many have rejected objective Bayesianism altogether. I argue that the epistemologist should borrow the metaphysician’s concept of naturalness and assign higher priors to more natural hypotheses.
The epistemic probability of A given B is the degree to which B evidentially supports A, or makes A plausible. This paper is a first step in answering the question of what determines the values of epistemic probabilities. I break this question into two parts: the structural question and the substantive question. Just as an object’s weight is determined by its mass and gravitational acceleration, some probabilities are determined by other, more basic ones. The structural question asks what probabilities are not determined in this way—these are the basic probabilities which determine values for all other probabilities. The substantive question asks how the values of these basic probabilities are determined. I defend an answer to the structural question on which basic probabilities are the probabilities of atomic propositions conditional on potential direct explanations. I defend this against the view, implicit in orthodox mathematical treatments of probability, that basic probabilities are the unconditional probabilities of complete worlds. I then apply my answer to the structural question to clear up common confusions in expositions of Bayesianism and shed light on the “problem of the priors.”
The Sleeping Beauty problem has attracted considerable attention in the literature as a paradigmatic example of how self-locating uncertainty creates problems for the Bayesian principles of Conditionalization and Reflection. Furthermore, it is also thought to raise serious issues for diachronic Dutch Book arguments. I show that, contrary to what is commonly accepted, it is possible to represent the Sleeping Beauty problem within a standard Bayesian framework. Once the problem is correctly represented, the ‘thirder’ solution satisfies standard rationality principles, which explains why it is not vulnerable to diachronic Dutch Book arguments. Moreover, the diachronic Dutch Books against the ‘halfer’ solutions fail to undermine the standard arguments for Conditionalization. The main upshot that emerges from my discussion is that the disagreement between different solutions does not challenge the applicability of Bayesian reasoning to centered settings, nor the commitment to Conditionalization, but is instead an instance of the familiar problem of choosing the priors.
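The 'thirder' number itself is easy to recover by simulation (a sketch of the protocol, independent of the paper's Bayesian representation): heads yields one awakening, tails yields two, so among awakenings the long-run frequency of heads is one third.

```python
import random

def sleeping_beauty(trials=100_000):
    """Simulate the protocol and return the fraction of awakenings on heads."""
    heads_awakenings = total_awakenings = 0
    for _ in range(trials):
        heads = random.random() < 0.5   # fair coin toss on Sunday night
        awakenings = 1 if heads else 2  # Monday only vs. Monday and Tuesday
        total_awakenings += awakenings
        if heads:
            heads_awakenings += awakenings
    return heads_awakenings / total_awakenings

print(sleeping_beauty())  # ~0.333: the 'thirder' credence in heads on waking
```

Whether this long-run frequency is the credence Beauty ought to have on waking is, of course, precisely what the halfer–thirder dispute is about.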
Must probabilities be countably additive? On the one hand, arguably, requiring countable additivity is too restrictive. As de Finetti pointed out, there are situations in which it is reasonable to use merely finitely additive probabilities. On the other hand, countable additivity is fruitful. It can be used to prove deep mathematical theorems that do not follow from finite additivity alone. One of the most philosophically important examples of such a result is the Bayesian convergence to the truth theorem, which says that conditional probabilities converge to 1 for true hypotheses and to 0 for false hypotheses. In view of the long-standing debate about countable additivity, it is natural to ask in what circumstances finitely additive theories deliver the same results as the countably additive theory. This paper addresses that question and initiates a systematic study of convergence to the truth in a finitely additive setting. There is also some discussion of how the formal results can be applied to ongoing debates in epistemology and the philosophy of science.
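In its familiar countably additive form (a textbook statement; the paper asks when finitely additive theories recover it), the theorem is an instance of Lévy's upward martingale convergence: for any hypothesis H measurable with respect to the total evidence,

```latex
P(H \mid E_1, \dots, E_n) \;\longrightarrow\; \mathbf{1}_H
\qquad P\text{-almost surely as } n \to \infty,
```

where the indicator \(\mathbf{1}_H\) takes the value 1 at worlds where H is true and 0 where it is false.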
A non-expert who struggles to make good decisions and who turns to decision theory for help might be more than a little surprised by what they find. If they read a standard treatment of the subject, they will find that they are assumed to be logically omniscient: they know all the logical facts about the propositions whose truth they have considered. Their beliefs are also assumed to be logically closed: if they believe each of a set of propositions S, then they believe everything that can be deduced from S. Finally, they are assumed to be maximally opinionated—they have assigned precise probabilities and cardinal utilities to each possible state of the world that can be formulated via S. The normative core of standard decision theory consists of some very weak axioms for these probabilities and cardinal utilities, plus the advice to maximize their expected utility, which is a function of the probabilities and cardinal utilities for each possible state of the world given each possible choice that they can make. The non-expert might understandably react by saying that this theory is too idealized to be useful for human beings. It is this criticism that Richard Bradley addresses with patience, rigour, and ardour in this book.
I examine what the mathematical theory of random structures can teach us about the probability of Plenitude, a thesis closely related to David Lewis's modal realism. Given some natural assumptions, Plenitude is reasonably probable a priori, but in principle it can be (and plausibly it has been) empirically disconfirmed—not by any general qualitative evidence, but rather by our de re evidence.
This newly published article (19 May 2020), whose corresponding author is PhD candidate Nguyễn Minh Hoàng, a researcher at the ISR Centre, presents a Bayesian statistical approach to the study of social science data. It is a result of the research direction of the SDAG research group, set out as early as 18 May 2019.
Impermissivists hold that an agent with a given body of evidence has at most one rationally permitted attitude that she should adopt towards any particular proposition. Permissivists deny this, often motivating permissivism by describing scenarios that pump our intuitions that the agent could reasonably take one of several attitudes toward some proposition. We criticize the following impermissivist response: while it seems like any of that range of attitudes is permissible, what is actually required is the single broad attitude that encompasses all of these single attitudes. While this might seem like an easy way to win over permissivists, we argue that this impermissivist response leads to an indefensible epistemology; permissive intuitions are not so easily co-opted.
If the laws of nature are as the Humean believes, it is an unexplained cosmic coincidence that the actual Humean mosaic is as extremely regular as it is. This is a strong and well-known objection to the Humean account of laws. Yet, as reasonable as this objection may seem, it is nowadays sometimes dismissed. The reason: its unjustified implicit assignment of equiprobability to each possible Humean mosaic; that is, its assumption of the principle of indifference, which has been attacked on many grounds ever since it was first proposed. In place of equiprobability, recent formal models represent the doxastic state of total ignorance as suspension of judgment. In this paper I revisit the cosmic coincidence objection to Humean laws by assessing which doxastic state we should endorse. By focusing on specific features of our scenario I conclude that suspending judgment results in an unnecessarily weak doxastic state. First, I point out that recent literature in epistemology has provided independent justifications of the principle of indifference. Second, given that the argument is framed within a Humean metaphysics, it turns out that we are warranted to appeal to these justifications and assign a uniform and additive credence distribution among Humean mosaics. This leads us to conclude that, contrary to widespread opinion, we should not dismiss the cosmic coincidence objection to the Humean account of laws.
Putnam construed the aim of Carnap’s program of inductive logic as the specification of a “universal learning machine,” and presented a diagonal proof against the very possibility of such a thing. Yet the ideas of Solomonoff and Levin lead to a mathematical foundation of precisely those aspects of Carnap’s program that Putnam took issue with, and in particular, resurrect the notion of a universal mechanical rule for induction. In this paper, I take up the question whether the Solomonoff–Levin proposal is successful in this respect. I set out the general strategy for evading Putnam’s argument, leading to a broader discussion of the outer limits of mechanized induction. I argue that this strategy ultimately still succumbs to diagonalization, reinforcing Putnam’s impossibility claim.
Many who take a dismissive attitude towards metaphysics trace their view back to Carnap’s ‘Empiricism, Semantics and Ontology’. But the reason Carnap takes a dismissive attitude to metaphysics is a matter of controversy. I will argue that no reason is given in ‘Empiricism, Semantics and Ontology’, and this is because his reason for rejecting metaphysical debates was given in ‘Pseudo-Problems in Philosophy’. The argument there assumes verificationism, but I will argue that his argument survives the rejection of verificationism. The root of his argument is the claim that metaphysical statements cannot be justified; the point is epistemic, not semantic. I will argue that this remains a powerful challenge to metaphysics that has yet to be adequately answered.
The paper will compare two methods used in the design of diagnostic strategies. The first is a method based on the predictive value of diagnostic tests. The second is based on the use of Bayes’ theorem. The main aim of this article is to identify the epistemological assumptions underlying both of these methods. To this end, example designs of one-stage and multi-stage diagnostic strategies developed using both methods will be considered.
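The connection between the two methods can be made explicit (a standard identity, not drawn from the paper): the positive predictive value of a test is itself an instance of Bayes' theorem, computed from the test's sensitivity and specificity together with the prevalence P(D) of the disease:

```latex
\mathrm{PPV} \;=\; P(D \mid +) \;=\;
\frac{\mathrm{sens} \cdot P(D)}
     {\mathrm{sens} \cdot P(D) \;+\; (1 - \mathrm{spec}) \cdot \bigl(1 - P(D)\bigr)}
```

Any epistemological differences between the two methods therefore presumably concern where the input probabilities come from, rather than the arithmetic itself.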
In this thesis I investigate the theoretical possibility of a universal method of prediction. A prediction method is universal if it is always able to learn from data: if it is always able to extrapolate given data about past observations to maximally successful predictions about future observations. The context of this investigation is the broader philosophical question of the possibility of a formal specification of inductive or scientific reasoning, a question that also relates to modern-day speculation about a fully automated data-driven science. I investigate, in particular, a proposed definition of a universal prediction method that goes back to Solomonoff and Levin. This definition marks the birth of the theory of Kolmogorov complexity, and has a direct line to the information-theoretic approach in modern machine learning. Solomonoff's work was inspired by Carnap's program of inductive logic, and the more precise definition due to Levin can be seen as an explicit attempt to escape the diagonal argument that Putnam famously launched against the feasibility of Carnap's program. The Solomonoff–Levin definition essentially aims at a mixture of all possible prediction algorithms. An alternative interpretation is that the definition formalizes the idea that learning from data is equivalent to compressing data. In this guise, the definition is often presented as an implementation and even as a justification of Occam's razor, the principle that we should look for simple explanations. The conclusions of my investigation are negative. I show that the Solomonoff–Levin definition fails to unite two necessary conditions to count as a universal prediction method, as turns out to be entailed by Putnam's original argument after all; and I argue that this indeed shows that no definition can. Moreover, I show that the suggested justification of Occam's razor does not work, and I argue that the relevant notion of simplicity as compressibility is already problematic itself.
We develop a Bayesian framework for thinking about the way evidence about the here and now can bear on hypotheses about the qualitative character of the world as a whole, including hypotheses according to which the total population of the world is infinite. We show how this framework makes sense of the practice cosmologists have recently adopted in their reasoning about such hypotheses.
In this article, applied and theoretical epistemologies benefit each other in a study of the British legal case of R. vs. Clark. Clark's first infant died at 11 weeks of age, in December 1996. About a year later, Clark had a second child. After that child died at eight weeks of age, Clark was tried for murdering both infants. Statisticians and philosophers have disputed how to apply Bayesian analyses to this case, and thereby arrived at different judgments about it. By dwelling on this applied case, I make theoretical gains: clarifying and defending pragmatic principles of inference that are important for estimating key probabilities in a range of cases. Then, partly by drawing on such principles, and uncovering overlooked data on post-partum psychosis, I make applied gains: improving the rationality of judgments about the Sally Clark case in particular, judgments important to future similar cases.
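The structural point about estimating key probabilities can be illustrated with Bayes' theorem (the numbers below are purely hypothetical placeholders, not estimates from the case): a very small probability of the evidence given innocence does not by itself yield a small probability of innocence given the evidence, because the prior matters — the core of the so-called prosecutor's fallacy.

```python
# All numbers are hypothetical placeholders, not estimates from R v. Clark.
p_e_given_innocent = 1e-7  # two infant deaths from natural causes (assumed)
p_e_given_guilty = 1.0     # two deaths, given double murder (assumed)
prior_guilty = 1e-8        # prior probability of double murder (assumed)
prior_innocent = 1 - prior_guilty

posterior_guilty = (p_e_given_guilty * prior_guilty) / (
    p_e_given_guilty * prior_guilty + p_e_given_innocent * prior_innocent
)
print(round(posterior_guilty, 3))  # ~0.091: far from certain, despite the tiny likelihood
```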
Defenders of Inference to the Best Explanation claim that explanatory factors should play an important role in empirical inference. They disagree, however, about how exactly to formulate this role. In particular, they disagree about whether to formulate IBE as an inference rule for full beliefs or for degrees of belief, as well as how a rule for degrees of belief should relate to Bayesianism. In this essay I advance a new argument against non-Bayesian versions of IBE. My argument focuses on cases in which we are concerned with multiple levels of explanation of some phenomenon. I show that in many such cases, following IBE as an inference rule for full beliefs leads to deductively inconsistent beliefs, and following IBE as a non-Bayesian updating rule for degrees of belief leads to probabilistically incoherent degrees of belief.
When a study shows a statistically significant correlation between an exposure and an outcome, our credence in a real connection between the two increases. Should that credence remain the same when it is discovered that further independent studies, testing the same exposure against other independent outcomes, were conducted? Matthew Kotzen argues that it should remain the same, even if the results of those further studies are discovered. However, we argue that it can differ, depending on the results of those studies.
Many theorists have proposed that we can use the principle of indifference to defeat the inductive sceptic. But any such theorist must confront the objection that different ways of applying the principle of indifference lead to incompatible probability assignments. Huemer offers the explanatory priority proviso as a strategy for overcoming this objection. With this proposal, Huemer claims that we can defend induction in a way that is not question-begging against the sceptic. But in this article, I argue that the opposite is true: if anything, Huemer’s use of the principle of indifference supports the rationality of inductive scepticism.
Say that an agent is "epistemically humble" if she is less than certain that her opinions will converge to the truth, given an appropriate stream of evidence. Is such humility rationally permissible? According to the orgulity argument, the answer is "yes", but long-run convergence-to-the-truth theorems force Bayesians to answer "no." That argument has no force against Bayesians who reject countable additivity as a requirement of rationality. Such Bayesians are free to count even extreme humility as rationally permissible.
Conditionalization is a widely endorsed rule for updating one’s beliefs. But a sea of complaints has been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule — one that appeals to what I’ll call “ur-priors”. But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I’ll call “loaded evidential standards”, are especially promising.
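One generic way to state such a rule (a schematic formulation; the paper distinguishes several inequivalent versions): instead of conditioning yesterday's credences on today's new evidence, the agent's credences at any time t come from conditioning a fixed ur-prior P_u on her total evidence E_t at t,

```latex
P_t(\cdot) \;=\; P_u(\cdot \mid E_t).
```

Since total evidence need not grow monotonically over time, a rule of this shape is at least positioned to handle the evidence-loss and error-correction worries listed above.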
In Bayesian epistemology, the problem of the priors is this: How should we set our credences (or degrees of belief) in the absence of evidence? That is, how should we set our prior or initial credences, the credences with which we begin our credal life? David Lewis liked to call an agent at the beginning of her credal journey a superbaby. The problem of the priors asks for the norms that govern these superbabies. The Principle of Indifference gives a very restrictive answer. It demands that such an agent divide her credences equally over all possibilities. That is, according to the Principle of Indifference, only one initial credence function is permissible, namely, the uniform distribution. In this paper, I offer a novel argument for the Principle of Indifference. I call it the Argument from Accuracy.
Algorithmic information theory gives an idealized notion of compressibility that is often presented as an objective measure of simplicity. It is suggested at times that Solomonoff prediction, or algorithmic information theory in a predictive setting, can deliver an argument to justify Occam’s razor. This article explicates the relevant argument and, by converting it into a Bayesian framework, reveals why it has no such justificatory force. The supposed simplicity concept is better perceived as a specific inductive assumption, the assumption of effectiveness. It is this assumption that is the characterizing element of Solomonoff prediction and wherein its philosophical interest lies.
Formal methods are changing how epistemology is being studied and understood. A Critical Introduction to Formal Epistemology introduces the types of formal theories being used and explains how they are shaping the subject. Beginning with the basics of probability and Bayesianism, it shows how representing degrees of belief using probabilities informs central debates in epistemology. As well as discussing induction, the paradox of confirmation and the main challenges to Bayesianism, this comprehensive overview covers objective chance, peer disagreement, the concept of full belief, and the traditional problems of justification and knowledge. Subjecting each position to a critical analysis, it explains the main issues in formal epistemology, and the motivations and drawbacks of each position. Written in accessible language and supported by study questions, guides to further reading and a glossary, positions are placed in historical context to give a sense of the development of the field. As the first introductory textbook on formal epistemology, A Critical Introduction to Formal Epistemology is an invaluable resource for students and scholars of contemporary epistemology.
The classical interpretation of probability together with the principle of indifference is formulated in terms of probability measure spaces in which the probability is given by the Haar measure. A notion called labelling invariance is defined in the category of Haar probability spaces; it is shown that labelling invariance is violated, and Bertrand’s paradox is interpreted as the proof of violation of labelling invariance. It is shown that Bangu’s attempt to block the emergence of Bertrand’s paradox by requiring the re-labelling of random events to preserve randomness cannot succeed non-trivially. A non-trivial strategy to preserve labelling invariance is identified, and it is argued that, under the interpretation of Bertrand’s paradox suggested in the paper, the paradox does not undermine either the principle of indifference or the classical interpretation and is in complete harmony with how mathematical probability theory is used in the sciences to model phenomena. It is shown in particular that violation of labelling invariance does not entail that labelling of random events affects the probabilities of random events. It also is argued, however, that the content of the principle of indifference cannot be specified in such a way that it can establish the classical interpretation of probability as descriptively accurate or predictively successful.
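For readers who want the failure of a unique 'indifferent' probability in concrete form, here is the standard Bertrand chord experiment as a simulation (a textbook illustration, not the paper's Haar-measure formalism): three natural uniform ways of drawing a random chord of the unit circle assign three different probabilities to the chord being longer than the side of the inscribed equilateral triangle.

```python
import math
import random

N = 100_000
side = math.sqrt(3)  # side of the equilateral triangle inscribed in the unit circle

def chord_endpoints():
    # Method 1: two endpoints chosen uniformly on the circle.
    a = random.uniform(0, 2 * math.pi)
    b = random.uniform(0, 2 * math.pi)
    return 2 * math.sin(abs(a - b) / 2)

def chord_radial():
    # Method 2: midpoint at a uniform distance along a random radius.
    d = random.uniform(0, 1)
    return 2 * math.sqrt(1 - d * d)

def chord_midpoint():
    # Method 3: midpoint chosen uniformly in the disc (rejection sampling).
    while True:
        x, y = random.uniform(-1, 1), random.uniform(-1, 1)
        if x * x + y * y <= 1:
            return 2 * math.sqrt(1 - (x * x + y * y))

for method in (chord_endpoints, chord_radial, chord_midpoint):
    p = sum(method() > side for _ in range(N)) / N
    print(method.__name__, round(p, 2))  # ~0.33, ~0.50, ~0.25 respectively
```

Each method is 'uniform' under a different description of the chord, which is just the labelling-dependence the paper analyses.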
Belief-revision models of knowledge describe how to update one’s degrees of belief associated with hypotheses as one considers new evidence, but they typically do not say how probabilities become associated with meaningful hypotheses in the first place. Here we consider a variant of the Skyrms–Lewis signaling game (Lewis, Convention, Harvard University Press, 1969; Skyrms, Signals: Evolution, Learning, and Information, Oxford University Press, 2010) where simple descriptive language and predictive practice and associated basic expectations coevolve. Rather than assigning prior probabilities to hypotheses in a fixed language then conditioning on new evidence, the agents begin with no meaningful language or expectations, then evolve to have expectations conditional on their descriptions as they evolve to have meaningful descriptions for the purpose of successful prediction. The model, then, provides a simple but concrete example of how the process of evolving a descriptive language suitable for inquiry might also provide agents with conditional expectations that reflect the type and degree of predictive success in fact afforded by their evolved predictive practice. This illustrates one way in which the traditional problem of priors may simply fail to apply to one’s model of evolving inquiry.
The paper starts by describing and clarifying what Williamson calls the consequence fallacy. I show two ways in which one might commit the fallacy. The first, which is rather trivial, involves overlooking background information; the second way, which is the more philosophically interesting, involves overlooking prior probabilities. In the following section, I describe a powerful form of sceptical argument, which is the main topic of the paper, elaborating on previous work by Huemer. The argument attempts to show the impossibility of defeasible justification, justification based on evidence which does not entail the (allegedly) justified proposition or belief. I then discuss the relation between the consequence fallacy, or some similar enough reasoning, and that form of argument. I argue that one can resist that form of sceptical argument if one gives up the idea that a belief cannot be justified unless it is supported by the totality of the evidence available to the subject—a principle entailed by many prominent epistemological views, most clearly by epistemological evidentialism. The justification, in the relevant cases, should instead derive solely from the prior probability of the proposition. A justification of this sort, that does not rely on evidence, would amount to a form of entitlement, in (something like) Crispin Wright’s sense. I conclude with some discussion of how to understand prior probabilities, and how to develop the notion of entitlement in an externalist epistemological framework.
The Preface Paradox, first introduced by David Makinson (1965), presents a plausible scenario where an agent is evidentially certain of each of a set of propositions without being evidentially certain of the conjunction of the set of propositions. Given reasonable assumptions about the nature of evidential certainty, this appears to be a straightforward contradiction. We solve the paradox by appeal to stake size sensitivity, which is the claim that evidential probability is sensitive to stake size. The argument is that because the informational content in the conjunction is greater than the sum of the informational content of the conjuncts, the stake size in the conjunction is higher than the sum of the stake sizes in the conjuncts. We present a theory of evidential probability that identifies knowledge with value and allows for coherent stake-sensitive beliefs. An agent’s beliefs are represented two-dimensionally as a bid-ask spread, which gives a bid price and an ask price for bets at each stake size. The bid-ask spread gets wider when there is less valuable evidence relative to the stake size, and narrower when there is more valuable evidence, according to a simple formula. The bid-ask spread can represent the uncertainty in the first-order probabilistic judgement. According to the theory, it can be coherent to be evidentially certain at low stakes but less than certain at high stakes, and therefore there is no contradiction in the Preface. The theory not only solves the paradox, but also gives a good model of decisions under risk that overcomes many of the problems associated with classic expected utility theory.
An unknown process is generating a sequence of symbols, drawn from an alphabet, A. Given an initial segment of the sequence, how can one predict the next symbol? Ray Solomonoff’s theory of inductive reasoning rests on the idea that a useful estimate of a sequence’s true probability of being outputted by the unknown process is provided by its algorithmic probability (its probability of being outputted by a species of probabilistic Turing machine). However, algorithmic probability is a “semimeasure”: i.e., the sum, over all x∈A, of the conditional algorithmic probabilities of the next symbol being x, may be less than 1. Prevailing wisdom has it that algorithmic probability must be normalized, to eradicate this semimeasure property, before it can yield acceptable probability estimates. This paper argues, to the contrary, that the semimeasure property contributes substantially to the power and scope of an algorithmic-probability-based theory of induction, and that normalization is unnecessary.
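In symbols (a standard way of writing the property at issue, with M(s) the algorithmic probability of the finite sequence s and sx its extension by symbol x), the semimeasure property and the normalization the paper resists are:

```latex
\sum_{x \in A} M(sx) \;\le\; M(s),
\qquad
M_{\mathrm{norm}}(x \mid s) \;=\; \frac{M(sx)}{\sum_{y \in A} M(sy)}.
```

The inequality reflects the probability mass 'lost' to the possibility that the machine outputs nothing further; normalization redistributes that mass across the possible continuations.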
Bayesian epistemology tells us with great precision how we should move from prior to posterior beliefs in light of new evidence or information, but says little about where our prior beliefs come from. It offers few resources to describe some prior beliefs as rational or well-justified, and others as irrational or unreasonable. A different strand of epistemology takes the central epistemological question to be not how to change one’s beliefs in light of new evidence, but what reasons justify a given set of beliefs in the first place. We offer an account of rational belief formation that closes some of the gap between Bayesianism and its reason-based alternative, formalizing the idea that an agent can have reasons for his or her (prior) beliefs, in addition to evidence or information in the ordinary Bayesian sense. Our analysis of reasons for belief is part of a larger programme of research on the role of reasons in rational agency (Dietrich and List, Noûs, 2012a, in press; Dietrich and List, International Journal of Game Theory, 2012b, in press).
This paper examines the debate between permissive and impermissive forms of Bayesianism. It briefly discusses some considerations that might be offered by both sides of the debate, and then replies to some new arguments in favor of impermissivism offered by Roger White. First, it argues that White’s (Oxford Studies in Epistemology, vol. 3, Oxford University Press, pp. 161–186, 2010) defense of Indifference Principles is unsuccessful. Second, it contends that White’s (Philosophical Perspectives 19: 445–459, 2005) arguments against permissive views do not succeed.
We argue that in spite of their apparent dissimilarity, the methodologies employed in the a priori and a posteriori assessment of probabilities can both be justified by appeal to a single principle of inductive reasoning, viz., the principle of symmetry. The difference between these two methodologies consists in the way in which information about the single-trial probabilities in a repeatable chance process is extracted from the constraints imposed by this principle. In the case of a posteriori reasoning, these constraints inform the analysis by fixing an a posteriori determinant of the probabilities, whereas, in the case of a priori reasoning, they imply certain claims which then serve as the basis for subsequent probabilistic deductions. In a given context of inquiry, the particular form which a priori or a posteriori reasoning may take depends, in large part, on the strength of the underlying symmetry assumed: the stronger the symmetry, the more information can be acquired a priori and the less information about the long-run behavior of the process is needed for an a posteriori assessment of the probabilities. In the context of this framework, frequency-based reasoning emerges as a limiting case of a posteriori reasoning, and reasoning about simple games of chance, as a limiting case of a priori reasoning. Between these two extremes, both a priori and a posteriori reasoning can take a variety of intermediate forms.
The technique of minimizing information (infomin) has been commonly employed as a general method for both choosing and updating a subjective probability function. We argue that, in a wide class of cases, the use of infomin methods fails to cohere with our standard conception of rational degrees of belief. We introduce the notion of a deceptive updating method and argue that non-deceptiveness is a necessary condition for rational coherence. Infomin has been criticized on the grounds that there are no higher order probabilities that ‘support’ it, but the appeal to higher order probabilities is a substantial assumption that some might reject. Our elementary arguments from deceptiveness do not rely on this assumption. While deceptiveness implies lack of higher order support, the converse does not, in general, hold, which indicates that deceptiveness is a more objectionable property. We offer a new proof of the claim that infomin updating of any strictly positive prior with respect to conditional-probability constraints is deceptive. In the case of expected-value constraints, infomin updating of the uniform prior is deceptive for some random variables but not for others. We establish both a necessary condition and a sufficient condition (which extends the scope of the phenomenon beyond cases previously considered) for deceptiveness in this setting. Along the way, we clarify the relation which obtains between the strong notion of higher order support, in which the higher order probability is defined over the full space of first order probabilities, and the apparently weaker notion, in which it is defined over some smaller parameter space. We show that under certain natural assumptions, the two are equivalent. Finally, we offer an interpretation of Jaynes, according to which his own appeal to infomin methods avoids the incoherencies discussed in this paper.
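To fix ideas about what infomin updating does in the expected-value-constraint setting, consider Jaynes' classic Brandeis dice illustration (a standard example, not one from the paper): starting from the uniform prior over a die's six faces and imposing only a mean of 4.5, maximum entropy selects the exponential-family distribution matching that mean.

```python
import numpy as np
from scipy.optimize import brentq

faces = np.arange(1, 7)

def maxent_mean(target):
    """Max-entropy distribution over die faces subject to a mean constraint.

    Relative to the uniform prior, the infomin solution is exponential in
    the constraint function: p_i proportional to exp(lam * i), with lam
    chosen so that the mean equals the target.
    """
    def mean_gap(lam):
        w = np.exp(lam * faces)
        return (w * faces).sum() / w.sum() - target
    lam = brentq(mean_gap, -10.0, 10.0)
    p = np.exp(lam * faces)
    return p / p.sum()

print(maxent_mean(4.5).round(3))
# ~[0.054 0.079 0.114 0.165 0.240 0.348], Jaynes' Brandeis dice values
```

Whether an update of this kind is 'deceptive' in the authors' sense depends, as the abstract notes, on the random variable whose expectation is constrained.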
One can have no prior credence whatsoever (not even zero) in a temporally indexical claim. This fact saves the principle of conditionalization from potential counterexample and undermines the Elga and Arntzenius/Dorr arguments for the thirder position and Lewis' argument for the halfer position on the Sleeping Beauty Problem, thereby supporting the double-halfer position.
Probabilistic belief contraction has been a much neglected topic in the field of probabilistic reasoning. This is due to the difficulty in establishing a reasonable reversal of the effect of Bayesian conditionalization on a probability distribution. We show that indifferent contraction, a solution proposed by Ramer to this problem through a judicious use of the principle of maximum entropy, is a probabilistic version of a full meet contraction. We then propose variations of indifferent contraction, using both the Shannon entropy measure as well as the Hartley entropy measure, to avoid the excessive loss of beliefs that full meet contraction entails.