Taking Joyce’s (1998; 2009) recent argument(s) for probabilism as our point of departure, we propose a new way of grounding formal, synchronic, epistemic coherence requirements for (opinionated) full belief. Our approach yields principled alternatives to deductive consistency, sheds new light on the preface and lottery paradoxes, and reveals novel conceptual connections between alethic and evidential epistemic norms.
Contemporary Bayesian confirmation theorists measure degree of (incremental) confirmation using a variety of non-equivalent relevance measures. As a result, a great many of the arguments surrounding quantitative Bayesian confirmation theory are implicitly sensitive to choice of measure of confirmation. Such arguments are enthymematic, since they tacitly presuppose that certain relevance measures should be used (for various purposes) rather than other relevance measures that have been proposed and defended in the philosophical literature. I present a survey of this pervasive class of Bayesian confirmation-theoretic enthymemes, and a brief analysis of some recent attempts to resolve the problem of measure sensitivity.
According to Bayesian confirmation theory, evidence E (incrementally) confirms (or supports) a hypothesis H (roughly) just in case E and H are positively probabilistically correlated (under an appropriate probability function Pr). There are many logically equivalent ways of saying that E and H are correlated under Pr. Surprisingly, this leads to a plethora of non-equivalent quantitative measures of the degree to which E confirms H (under Pr). In fact, many non-equivalent Bayesian measures of the degree to which E confirms (or supports) H have been proposed and defended in the literature on inductive logic. I provide a thorough historical survey of the various proposals, and a detailed discussion of the philosophical ramifications of the differences between them. I argue that the set of candidate measures can be narrowed drastically by just a few intuitive and simple desiderata. In the end, I provide some novel and compelling reasons to think that the correct measure of degree of evidential support (within a Bayesian framework) is the (log) likelihood ratio. The central analyses of this research have had some useful and interesting byproducts, including: (i) a new Bayesian account of (confirmationally) independent evidence, which has applications to several important problems in confirmation theory, including the problem of the (confirmational) value of evidential diversity, and (ii) novel resolutions of several problems in Bayesian confirmation theory, motivated by the use of the (log) likelihood ratio measure, including a reply to the Popper-Miller critique of probabilistic induction, and a new analysis and resolution of the problem of irrelevant conjunction (a.k.a., the tacking problem).
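For reference, the measure singled out at the end of this abstract can be written as follows (my notation, intended only as a gloss on the standard definition):

\[
l(H,E) \;=\; \log\!\left[\frac{\Pr(E \mid H)}{\Pr(E \mid \lnot H)}\right],
\]

so that l(H,E) > 0 exactly when Pr(E | H) > Pr(E | ~H), which (when everything is well defined) is equivalent to the positive-correlation condition Pr(H | E) > Pr(H) invoked at the start of the abstract.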
In this paper, we compare and contrast two methods for the revision of qualitative beliefs. The first method is generated by a simplistic diachronic Lockean thesis requiring coherence with the agent’s posterior credences after conditionalization. The second method is the orthodox AGM approach to belief revision. Our primary aim is to determine when the two methods may disagree in their recommendations and when they must agree. We establish a number of novel results about their relative behavior. Our most notable finding is that the inverse of the golden ratio emerges as a non-arbitrary bound on the Bayesian method’s free-parameter—the Lockean threshold. This “golden threshold” surfaces in two of our results and turns out to be crucial for understanding the relation between the two methods.
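For reference (my gloss; the precise role of the bound is given by the paper's results), the numerical value at issue is the inverse of the golden ratio,

\[
\varphi^{-1} \;=\; \frac{\sqrt{5}-1}{2} \;\approx\; 0.618,
\]

where the Lockean threshold t is the free parameter in the rule: believe p just in case one's credence in p is at least t.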
There are many things—call them ‘experts’—that you should defer to in forming your opinions. The trouble is, many experts are modest: they’re less than certain that they are worthy of deference. When this happens, the standard theories of deference break down: the most popular (“Reflection”-style) principles collapse to inconsistency, while their most popular (“New-Reflection”-style) variants allow you to defer to someone while regarding them as an anti-expert. We propose a middle way: deferring to someone involves preferring to make any decision using their opinions instead of your own. In a slogan, deferring opinions is deferring decisions. Generalizing the proposal of Dorst (2020a), we first formulate a new principle that shows exactly how your opinions must relate to an expert’s for this to be so. We then build off the results of Levinstein (2019) and Campbell-Moore (2020) to show that this principle is also equivalent to the constraint that you must always expect the expert’s estimates to be more accurate than your own. Finally, we characterize the conditions an expert’s opinions must meet to be worthy of deference in this sense, showing how they sit naturally between the too-strong constraints of Reflection and the too-weak constraints of New Reflection.
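For orientation, the “Reflection”-style principles mentioned here are standardly formulated along the following lines (a schematic statement, not a quotation from the paper):

\[
\Pr\big(A \mid \text{the expert's credence in } A \text{ is } x\big) \;=\; x,
\]

which, as the abstract notes, runs into trouble when the expert is modest, i.e., assigns less than full credence to the claim that her own opinions are worthy of deference.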
Let E be a set of n propositions E1, ..., En. We seek a probabilistic measure C(E) of the ‘degree of coherence’ of E. Intuitively, we want C to be a quantitative, probabilistic generalization of the (deductive) logical coherence of E. So, in particular, we require C to satisfy the following…
To the extent that we have reasons to avoid these “bad B-properties”, these arguments provide reasons not to have an incoherent credence function b — and perhaps even reasons to have a coherent one. But, note that these two traditional arguments for probabilism involve what might be called “pragmatic” reasons (not) to be (in)coherent. In the case of the Dutch Book argument, the “bad” property is pragmatically bad (to the extent that one values money). But, it is not clear whether the DBA pinpoints any epistemic defect of incoherent agents. The same can be said for Representation Theorem arguments, since they involve the structure of an agent’s preferences.
In this note, I consider various precisifications of the slogan ‘evidence of evidence is evidence’. I provide counter-examples to each of these precisifications (assuming an epistemic probabilistic relevance notion of ‘evidential support’).
Several forms of symmetry in degrees of evidential support are considered. Some of these symmetries are shown not to hold in general. This has implications for the adequacy of many measures of degree of evidential support that have been proposed and defended in the philosophical literature.
In this paper, we investigate various possible (Bayesian) precisifications of the (somewhat vague) statements of “the equal weight view” (EWV) that have appeared in the recent literature on disagreement. We will show that the renditions of (EWV) that immediately suggest themselves are untenable from a Bayesian point of view. In the end, we will propose some tenable (but not necessarily desirable) interpretations of (EWV). Our aim here will not be to defend any particular Bayesian precisification of (EWV), but rather to raise awareness about some of the difficulties inherent in formulating such precisifications.
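One rendition that immediately suggests itself is simple straight averaging of the peers' credences (an illustrative formulation of my own, not necessarily one of the precisifications examined in the paper):

\[
\mathrm{cr}_{\text{new}}(p) \;=\; \tfrac{1}{2}\big(\mathrm{cr}_{\text{you}}(p) + \mathrm{cr}_{\text{peer}}(p)\big).
\]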
Likelihoodists and Bayesians seem to have a fundamental disagreement about the proper probabilistic explication of relational (or contrastive) conceptions of evidential support (or confirmation). In this paper, I will survey some recent arguments and results in this area, with an eye toward pinpointing the nexus of the dispute. This will lead, first, to an important shift in the way the debate has been couched, and, second, to an alternative explication of relational support, which is in some sense a "middle way" between Likelihoodism and Bayesianism. In the process, I will propose some new work for an old probability puzzle: the "Monty Hall" problem.
According to orthodox (Kolmogorovian) probability theory, conditional probabilities are by definition certain ratios of unconditional probabilities. As a result, orthodox conditional probabilities are undefined whenever their antecedents have zero unconditional probability. This has important ramifications for the notion of probabilistic independence. Traditionally, independence is defined in terms of unconditional probabilities (the factorization of the relevant joint unconditional probabilities). Various “equivalent” formulations of independence can be given using conditional probabilities. But these “equivalences” break down if conditional probabilities are permitted to have conditions with zero unconditional probability. We reconsider probabilistic independence in this more general setting. We argue that a less orthodox but more general (Popperian) theory of conditional probability should be used, and that much of the conventional wisdom about probabilistic independence needs to be rethought.
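The following sketch (illustrative only, not code from the paper) shows the basic asymmetry at issue: when Pr(B) = 0, the orthodox ratio definition of Pr(A | B) is undefined, while the factorization criterion for independence is satisfied trivially.

```python
# Toy finite probability space: two fair coin tosses, plus a zero-probability outcome.
outcomes = {('H', 'H'): 0.25, ('H', 'T'): 0.25,
            ('T', 'H'): 0.25, ('T', 'T'): 0.25,
            ('X', 'X'): 0.0}              # zero-probability outcome

def pr(event):
    """Unconditional probability of a set of outcomes."""
    return sum(p for w, p in outcomes.items() if w in event)

A = {('H', 'H'), ('H', 'T')}              # first toss lands heads
B = {('X', 'X')}                          # a zero-probability event

# Factorization criterion holds trivially: Pr(A & B) = 0 = Pr(A) * Pr(B).
print(pr(A & B) == pr(A) * pr(B))         # True

def pr_cond(A, B):
    """Orthodox (ratio) conditional probability; undefined when Pr(B) = 0."""
    if pr(B) == 0:
        raise ZeroDivisionError("Pr(B) = 0: ratio conditional probability undefined")
    return pr(A & B) / pr(B)

try:
    pr_cond(A, B)
except ZeroDivisionError as err:
    print(err)                            # the conditional formulation breaks down here
```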
The conjunction fallacy has been a key topic in debates on the rationality of human reasoning and its limitations. Despite extensive inquiry, however, the attempt to provide a satisfactory account of the phenomenon has proved challenging. Here we elaborate the suggestion (first discussed by Sides, Osherson, Bonini, & Viale, 2002) that in standard conjunction problems the fallacious probability judgements observed experimentally are typically guided by sound assessments of confirmation relations, meant in terms of contemporary Bayesian confirmation theory. Our main formal result is a confirmation-theoretic account of the conjunction fallacy, which is proven robust (i.e., not depending on various alternative ways of measuring degrees of confirmation). The proposed analysis is shown to be distinct from contentions that the conjunction effect is in fact not a fallacy, and is compared with major competing explanations of the phenomenon, including earlier references to a confirmation-theoretic account.
A Bayesian account of independent evidential support is outlined. This account is partly inspired by the work of C. S. Peirce. I show that a large class of quantitative Bayesian measures of confirmation satisfy some basic desiderata suggested by Peirce for adequate accounts of independent evidence. I argue that, by considering further natural constraints on a probabilistic account of independent evidence, all but a very small class of Bayesian measures of confirmation can be ruled out. In closing, another application of my account to the problem of evidential diversity is also discussed.
The Paradox of the Ravens (a.k.a., The Paradox of Confirmation) is indeed an old chestnut. A great many things have been written and said about this paradox and its implications for the logic of evidential support. The first part of this paper will provide a brief survey of the early history of the paradox. This will include the original formulation of the paradox and the early responses of Hempel, Goodman, and Quine. The second part of the paper will describe attempts to resolve the paradox within a Bayesian framework, and show how to improve upon them. This part begins with a discussion of how probabilistic methods can help to clarify the statement of the paradox itself. And it describes some of the early responses to probabilistic explications. We then inspect the assumptions employed by traditional (canonical) Bayesian approaches to the paradox. These assumptions may appear to be overly strong. So, drawing on weaker assumptions, we formulate a new-and-improved Bayesian confirmation-theoretic resolution of the Paradox of the Ravens.
First, a brief historical trace of the developments in confirmation theory leading up to Goodman's infamous "grue" paradox is presented. Then, Goodman's argument is analyzed from both Hempelian and Bayesian perspectives. A guiding analogy is drawn between certain arguments against classical deductive logic, and Goodman's "grue" argument against classical inductive logic. The upshot of this analogy is that the "New Riddle" is not as vexing as many commentators have claimed. Specifically, the analogy reveals an intimate connection between Goodman's problem, and the "problem of old evidence". Several other novel aspects of Goodman's argument are also discussed.
In this discussion note, we explain how to relax some of the standard assumptions made in Garber-style solutions to the Problem of Old Evidence. The result is a more general and explanatory Bayesian approach.
Suppositions can be introduced in either the indicative or subjunctive mood. The introduction of either type of supposition initiates judgments that may be either qualitative, binary judgments about whether a given proposition is acceptable or quantitative, numerical ones about how acceptable it is. As such, accounts of qualitative/quantitative judgment under indicative/subjunctive supposition have been developed in the literature. We explore these four different types of theories by systematically explicating the relationships between canonical representatives of each. Our representative qualitative accounts of indicative and subjunctive supposition are based on the belief change operations provided by AGM revision and KM update respectively; our representative quantitative ones are offered by conditionalization and imaging. This choice is motivated by the familiar approach of understanding supposition as `provisional belief revision' wherein one temporarily treats the supposition as true and forms judgments by making appropriate changes to their other opinions. To compare the numerical judgments recommended by the quantitative theories with the binary ones recommended by the qualitative accounts, we rely on a suitably adapted version of the Lockean thesis. Ultimately, we establish a number of new results that we interpret as vindicating the often-repeated claim that conditionalization is a probabilistic version of revision, while imaging is a probabilistic version of update.
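As a reference point for the quantitative side (my notation, following the standard definitions rather than quoting the paper): conditionalization on an indicative supposition E sets

\[
\Pr_E(X) \;=\; \Pr(X \mid E) \;=\; \frac{\Pr(X \wedge E)}{\Pr(E)} \qquad (\Pr(E) > 0),
\]

while imaging on a subjunctive supposition E shifts the probability of each not-E world to its “closest” E-world(s). The adapted Lockean thesis then says, roughly, that X is acceptable under the supposition E just in case the relevant suppositional probability of X is at least some threshold t greater than 1/2.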
In Chapter 12 of Warrant and Proper Function, Alvin Plantinga constructs two arguments against evolutionary naturalism, which he construes as a conjunction E&N. The hypothesis E says that “human cognitive faculties arose by way of the mechanisms to which contemporary evolutionary thought directs our attention” (p. 220). With respect to proposition N, Plantinga (p. 270) says “it isn’t easy to say precisely what naturalism is,” but then adds that “crucial to metaphysical naturalism, of course, is the view that there is no such person as the God of traditional theism.” Plantinga tries to cast doubt on the conjunction E&N in two ways. His “preliminary argument” aims to show that the conjunction is probably false, given the fact (R) that our psychological mechanisms for forming beliefs about the world are generally reliable. His “main argument” aims to show that the conjunction E&N is self-defeating — if you believe E&N, then you should stop believing that conjunction. Plantinga further develops the main argument in his unpublished paper “Naturalism Defeated” (Plantinga 1994). We will try to show that both arguments contain serious errors.
Note: This is not an ad hoc change at all. It’s simply the natural thing to say here – if one thinks of F as a generalization of classical logical entailment. The extra complexity I had in my original (incorrect) definition of F was there because I was foolishly trying to encode some non-classical, or “relevant”, logical structure in F. I now think this is a mistake, and that I should go with the above, classical account of F. Arguments about relevance logic need to be handled in a different way (and a different context!). And, besides, as Luca Moretti has shown (see below), the original definition of F cannot be the right basis for C! OK, now on to C.
Naive deductive accounts of confirmation have the undesirable consequence that if E confirms H, then E also confirms the conjunction H & X, for any X—even if X is utterly irrelevant to H (and E). Bayesian accounts of confirmation also have this property (in the case of deductive evidence). Several Bayesians have attempted to soften the impact of this fact by arguing that—according to Bayesian accounts of confirmation—E will confirm the conjunction H & X less strongly than E confirms H (again, in the case of deductive evidence). I argue that existing Bayesian “resolutions” of this problem are inadequate in several important respects. In the end, I suggest a new‐and‐improved Bayesian account (and understanding) of the problem of irrelevant conjunction.
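For concreteness, the fact that Bayesian confirmation inherits the problem in the case of deductive evidence can be seen as follows (a standard derivation, stated in my notation): suppose H entails E, Pr(H ∧ X) > 0, and Pr(E) < 1. Since H ∧ X also entails E, Pr(E | H ∧ X) = 1, so

\[
\Pr(H \wedge X \mid E) \;=\; \frac{\Pr(E \mid H \wedge X)\,\Pr(H \wedge X)}{\Pr(E)} \;=\; \frac{\Pr(H \wedge X)}{\Pr(E)} \;>\; \Pr(H \wedge X),
\]

i.e., E is positively relevant to H ∧ X, and hence confirms it on the Bayesian account.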
Hempel first introduced the paradox of confirmation in (Hempel 1937). Since then, a very extensive literature on the paradox has evolved (Vranas 2004). Much of this literature can be seen as responding to Hempel’s subsequent discussions and analyses of the paradox in (Hempel 1945). Recently, it was noted that Hempel’s intuitive (and plausible) resolution of the paradox was inconsistent with his official theory of confirmation (Fitelson & Hawthorne 2006). In this article, we will try to explain how this inconsistency affects the historical dialectic about the paradox and how it illuminates the nature of confirmation. In the end, we will argue that Hempel’s intuitions about the paradox of confirmation were (basically) correct, and that it is his theory that should be rejected, in favor of a (broadly) Bayesian account of confirmation.
We give an analysis of the Monty Hall problem purely in terms of confirmation, without making any lottery assumptions about priors. Along the way, we show the Monty Hall problem is structurally identical to the Doomsday Argument.
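One way to see how a purely confirmational, prior-free treatment can go (my gloss, not the paper's own derivation): suppose you pick door 1 and, under the usual rules, Monty then opens door 3, revealing a goat; call this datum D, and let Hi say the car is behind door i. Then

\[
\Pr(D \mid H_2) = 1, \qquad \Pr(D \mid H_1) = \tfrac{1}{2}, \qquad \Pr(D \mid H_3) = 0,
\]

so D favors H2 over H1 by a likelihood ratio of 2, regardless of what prior one assigns to the three doors.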
As every philosopher knows, “the design argument” concludes that God exists from premisses that cite the adaptive complexity of organisms or the lawfulness and orderliness of the whole universe. Since 1859, it has formed the intellectual heart of creationist opposition to the Darwinian hypothesis that organisms evolved their adaptive features by the mindless process of natural selection. Although the design argument developed as a defense of theism, the logic of the argument in fact encompasses a larger set of issues. William Paley saw clearly that we sometimes have an excellent reason to postulate the existence of an intelligent designer. If we find a watch on the heath, we reasonably infer that it was produced by an intelligent watchmaker. This design argument makes perfect sense. Why is it any different to claim that the eye was produced by an intelligent designer? Both critics and defenders of the design argument need to understand what the ground rules are for inferring that an intelligent designer is the unseen cause of an observed effect.
In this paper, the authors describe their initial investigations in computational metaphysics. Our method is to implement axiomatic metaphysics in an automated reasoning system. In this paper, we describe what we have discovered when the theory of abstract objects is implemented in PROVER9 (a first-order automated reasoning system which is the successor to OTTER). After reviewing the second-order, axiomatic theory of abstract objects, we show (1) how to represent a fragment of that theory in PROVER9's first-order syntax, and (2) how PROVER9 then finds proofs of interesting theorems of metaphysics, such as that every possible world is maximal. We conclude the paper by discussing some issues for further research.
Strevens has proposed an interesting and novel Bayesian analysis of the Quine-Duhem (Q–D) problem (i.e., the problem of auxiliary hypotheses). Strevens's analysis involves the use of a simplifying idealization concerning the original Q–D problem. We will show that this idealization is far stronger than it might appear. Indeed, we argue that Strevens's idealization oversimplifies the Q–D problem, and we propose a diagnosis of the source(s) of the oversimplification. The paper covers: some background on Quine–Duhem; Strevens's simplifying idealization; indications that (I) oversimplifies Q–D; and Strevens's argument for the legitimacy of (I).
Bayesian epistemology suggests various ways of measuring the support that a piece of evidence provides a hypothesis. Such measures are defined in terms of a subjective probability assignment, pr, over propositions entertained by an agent. The most standard measure (where “H” stands for “hypothesis” and “E” stands for “evidence”) is the difference measure: d(H,E) = pr(H/E) - pr(H). This may be called a “positive (probabilistic) relevance measure” of confirmation, since, according to it, a piece of evidence E qualitatively confirms a hypothesis H if and only if pr(H/E) > pr(H), where qualitative disconfirmation is characterized by replacing “>” with “<”, and qualitative confirmational neutrality by replacing “>” with “=”. Other more or less standard positive relevance measures that have been proposed are the log-ratio measure: r(H,E) = log[pr(H/E)/pr(H)], and the log-likelihood-ratio measure: l(H,E) = log[pr(E/H)/pr(E/~H)].
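A minimal computational sketch of these three measures (illustrative toy numbers of my own, not from the paper):

```python
from math import log

# Hypothetical joint probabilities over H and E (toy values).
joint = {('H', 'E'): 0.30, ('H', '~E'): 0.10,
         ('~H', 'E'): 0.20, ('~H', '~E'): 0.40}

pr_H = joint[('H', 'E')] + joint[('H', '~E')]            # 0.4
pr_E = joint[('H', 'E')] + joint[('~H', 'E')]            # 0.5
pr_H_given_E    = joint[('H', 'E')] / pr_E               # 0.6
pr_E_given_H    = joint[('H', 'E')] / pr_H               # 0.75
pr_E_given_notH = joint[('~H', 'E')] / (1 - pr_H)        # 0.333...

d = pr_H_given_E - pr_H                                   # difference measure d(H,E)
r = log(pr_H_given_E / pr_H)                              # log-ratio measure r(H,E)
l = log(pr_E_given_H / pr_E_given_notH)                   # log-likelihood-ratio measure l(H,E)

print(d, r, l)   # all three are positive exactly when pr(H/E) > pr(H)
```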
The (recent, Bayesian) cognitive science literature on the Wason Task (WT) has been modeled largely after the (not-so-recent, Bayesian) philosophy of science literature on the Paradox of Confirmation (POC). In this paper, we apply some insights from more recent Bayesian approaches to the (POC) to analogous models of (WT). This involves, first, retracing the history of the (POC), and, then, re-examining the (WT) with these historico-philosophical insights in mind.
Naive deductivist accounts of confirmation have the undesirable consequence that if E confirms H, then E also confirms the conjunction H·X, for any X—even if X is completely irrelevant to E and H. Bayesian accounts of confirmation may appear to have the same problem. In a recent article in this journal Fitelson (2002) argued that existing Bayesian attempts to resolve this problem are inadequate in several important respects. Fitelson then proposes a new‐and‐improved Bayesian account that overcomes the problem of irrelevant conjunction, and does so in a more general setting than past attempts. We will show how to simplify and improve upon Fitelson's solution.
In ‘Corroborating Testimony, Probability and Surprise’, Erik J. Olsson ascribes to L. Jonathan Cohen the claims that if two witnesses provide us with the same information, then the less probable the information is, the more confident we may be that the information is true (C), and the stronger the information is corroborated (C*). We question whether Cohen intends anything like claims (C) and (C*). Furthermore, he discusses the concurrence of witness reports within a context of independent witnesses, whereas the witnesses in Olsson's model are not independent in the standard sense. We argue that there is much more than, in Olsson's words, ‘a grain of truth’ to claim (C), both on his own characterization as well as on Cohen's characterization of the witnesses. We present an analysis for independent witnesses in the contexts of decision-making under risk and decision-making under uncertainty and generalize the model for n witnesses. As to claim (C*), Olsson's argument is contingent on the choice of a particular measure of corroboration and is not robust in the face of alternative measures. Finally, we delimit the set of cases to which Olsson's model is applicable. The paper is organized as follows: (1) claim (C) examined for Olsson's characterization of the relationship between the witnesses; (2) claim (C) examined for two or more independent witnesses; (3) robustness and multiple measures of corroboration; (4) discussion.
There are two central questions concerning probability. First, what are its formal features? That is a mathematical question, to which there is a standard, widely (though not universally) agreed upon answer. This answer is reviewed in the next section. Second, what sorts of things are probabilities---what, that is, is the subject matter of probability theory? This is a philosophical question, and while the mathematical theory of probability certainly bears on it, the answer must come from elsewhere. To see why, observe that there are many things in the world that have the mathematical structure of probabilities---the set of measurable regions on the surface of a table, for example---but that one would never mistake for being probabilities. So probability is distinguished by more than just its formal characteristics. The bulk of this essay will be taken up with the central question of what this “more” might be.
In Thinking and Acting John Pollock offers some criticisms of Bayesian epistemology, and he defends an alternative understanding of the role of probability in epistemology. Here, I defend the Bayesian against some of Pollock's criticisms, and I discuss a potential problem for Pollock's alternative account.
Wayne (1995) critiques the Bayesian explication of the confirmational significance of evidential diversity (CSED) offered by Horwich (1982). Presently, I argue that Wayne’s reconstruction of Horwich’s account of CSED is uncharitable. As a result, Wayne’s criticisms ultimately present no real problem for Horwich. I try to provide a more faithful and charitable rendition of Horwich’s account of CSED. Unfortunately, even when Horwich’s approach is charitably reconstructed, it is still not completely satisfying.
By and large, we think Strevens offers a useful reply to our original critique of his article on the Quine–Duhem problem. But, we remain unsatisfied with several aspects of his reply. Ultimately, we do not think he properly addresses our most important worries. In this brief rejoinder, we explain our remaining worries, and we issue a revised challenge for Strevens's approach to QD.
In this article, I explain how a variant of David Miller's argument concerning the language dependence of the accuracy of predictions can be applied to Joyce's notion of the accuracy of “estimates of numerical truth-values”. This leads to a potential problem for Joyce's accuracy-dominance-based argument for the conclusion that credences should obey the probability calculus.
This (brief) note is about the (evidential) “favoring” relation. Pre-theoretically, favoring is a three-place (epistemic) relation, between an evidential proposition E and two hypotheses H1 and H2. Favoring relations are expressed via locutions of the form: E favors H1 over H2. Strictly speaking, favoring should really be thought of as a four-place relation, between E, H1, H2, and a corpus of background evidence K. But, for present purposes (which won't address issues involving K), I will suppress the background corpus, so as to simplify our discussion. Moreover, the favoring relation is meant to be a propositional epistemic relation, as opposed to a doxastic epistemic relation. That is, the favoring relation is not meant to be restricted to bodies of evidence that are possessed (as evidence) by some actual agent(s), or to hypotheses that are (in fact) entertained by some actual agent(s). In this sense, favoring is analogous to the relation of propositional justification — as opposed to doxastic justification (Conee 1980). In order to facilitate a comparison of Likelihoodist vs Bayesian explications of favoring, I will presuppose the following bridge principle, linking favoring and evidential support: • E favors H1 over H2 iff E supports H1 more strongly than E supports H2. Finally, I will only be discussing instances of the favoring relation involving contingent, empirical claims. So, it is to be understood that “favoring” will not apply if any of E, H1, or H2 are non-contingent (and/or non-empirical). With this background in place, we're ready to begin.
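For comparison, the standard Likelihoodist explication of favoring (as usually stated in this literature; again, my notation) is:

\[
E \text{ favors } H_1 \text{ over } H_2 \;\iff\; \Pr(E \mid H_1) > \Pr(E \mid H_2),
\]

whereas a Bayesian who accepts the bridge principle above, and who measures support with some confirmation measure c, will say that E favors H1 over H2 iff c(H1, E) > c(H2, E).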
In applying Bayes’s theorem to the history of science, Bayesians sometimes assume – often without argument – that they can safely ignore very implausible theories. This assumption is false, in that it can seriously distort the history of science, as well as the mathematics and the applicability of Bayes’s theorem. There are intuitively very plausible counter-examples. In fact, one can ignore very implausible or unknown theories only if at least one of two conditions is satisfied: one is certain that there are no unknown theories which explain the phenomenon in question, or the likelihood of at least one of the known theories used in the calculation of the posterior is reasonably large. Often in the history of science, a very surprising phenomenon is observed, and neither of these criteria is satisfied.
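The point can be made vivid by writing Bayes's theorem with an explicit “catch-all” hypothesis H_C covering all theories not among the known H_1, ..., H_n (my formulation of a standard observation, not a quotation from the paper):

\[
\Pr(H_i \mid E) \;=\; \frac{\Pr(E \mid H_i)\,\Pr(H_i)}{\sum_{j=1}^{n} \Pr(E \mid H_j)\,\Pr(H_j) \;+\; \Pr(E \mid H_C)\,\Pr(H_C)}.
\]

Ignoring implausible or unknown theories amounts to dropping the final term in the denominator, which is harmless only when that term is negligible, which is roughly what the two conditions stated above are meant to ensure.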