We argue that David Lewis’s principal principle implies a version of the principle of indifference. The same is true for similar principles that need to appeal to the concept of admissibility. Such principles are thus in accord with objective Bayesianism, but in tension with subjective Bayesianism. Sections: 1. The Argument; 2. Some Objections Met.
John Locke proposed a straightforward relationship between qualitative and quantitative doxastic notions: belief corresponds to a sufficiently high degree of confidence. Richard Foley has further developed this Lockean thesis and applied it to an analysis of the preface and lottery paradoxes. Following Foley's lead, we exploit various versions of these paradoxes to chart a precise relationship between belief and probabilistic degrees of confidence. The resolutions of these paradoxes emphasize distinct but complementary features of coherent belief. These features suggest principles that tie together qualitative and quantitative doxastic notions. We show how these principles may be employed to construct a quantitative model, in terms of degrees of confidence, of an agent's qualitative doxastic state. This analysis fleshes out the Lockean thesis and provides the foundation for a logic of belief that is responsive to the logic of degrees of confidence.
We chart the ways in which closure properties of consequence relations for uncertain inference take on different forms according to whether the relations are generated in a quantitative or a qualitative manner. Among the main themes are: the identification of watershed conditions between probabilistically and qualitatively sound rules; failsafe and classicality transforms of qualitatively sound rules; non-Horn conditions satisfied by probabilistic consequence; representation and completeness problems; and threshold-sensitive conditions such as `preface' and `lottery' rules.
Direct inferences identify certain probabilistic credences or confirmation-function-likelihoods with values of objective chances or relative frequencies. The best known version of a direct inference principle is David Lewis’s Principal Principle. Certain kinds of statements undermine direct inferences. Lewis calls such statements inadmissible. We show that on any Bayesian account of direct inference several kinds of intuitively innocent statements turn out to be inadmissible. This may pose a significant challenge to Bayesian accounts of direct inference. We suggest some ways in which these challenges may be addressed.
In a penetrating investigation of the relationship between belief and quantitative degrees of confidence (or degrees of belief) Richard Foley (1992) suggests the following thesis: ... it is epistemically rational for us to believe a proposition just in case it is epistemically rational for us to have a sufficiently high degree of confidence in it, sufficiently high to make our attitude towards it one of belief. Foley goes on to suggest that rational belief may be just rational degree of confidence above some threshold level that the agent deems sufficient for belief. He finds hints of this view in Locke’s discussion of probability and degrees of assent, so he calls it the Lockean Thesis. The Lockean Thesis has important implications for the logic of belief. Most prominently, it implies that even a logically ideal agent whose degrees of confidence satisfy the axioms of probability theory may quite rationally believe each of a large body of propositions that are jointly inconsistent. For example, an agent may legitimately believe that on each given occasion her well-maintained car will start, but nevertheless believe that she will eventually encounter a...
The Paradox of the Ravens (a.k.a., The Paradox of Confirmation) is indeed an old chestnut. A great many things have been written and said about this paradox and its implications for the logic of evidential support. The first part of this paper will provide a brief survey of the early history of the paradox. This will include the original formulation of the paradox and the early responses of Hempel, Goodman, and Quine. The second part of the paper will describe attempts to resolve the paradox within a Bayesian framework, and show how to improve upon them. This part begins with a discussion of how probabilistic methods can help to clarify the statement of the paradox itself. And it describes some of the early responses to probabilistic explications. We then inspect the assumptions employed by traditional (canonical) Bayesian approaches to the paradox. These assumptions may appear to be overly strong. So, drawing on weaker assumptions, we formulate a new-and-improved Bayesian confirmation-theoretic resolution of the Paradox of the Ravens.
Sections 1 through 3 present all of the main ideas behind the probabilistic logic of evidential support. For most readers these three sections will suffice to provide an adequate understanding of the subject. Those readers who want to know more about how the logic applies when the implications of hypotheses about evidence claims (called likelihoods) are vague or imprecise may, after reading sections 1-3, skip to section 6. Sections 4 and 5 are for the more advanced reader who wants a detailed understanding of some telling results about how this logic may bring about convergence to the truth.
I will describe the logics of a range of conditionals that behave like conditional probabilities at various levels of probabilistic support. Families of these conditionals will be characterized in terms of the rules that their members obey. I will show that for each conditional, →, in a given family, there is a probabilistic support level r and a conditional probability function P such that, for all sentences C and B, 'C → B' holds just in case P[B | C] ≥ r. Thus, each conditional in a given family behaves like conditional probability above some specific support level.
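The threshold behavior this abstract describes can be sketched in a toy finite probability model. Everything below (the world weights, the names `prob`, `cond_prob`, `arrow`, and the convention for probability-zero antecedents) is illustrative, not from the paper:

```python
# Four possible worlds over atoms C and B, with an illustrative probability model.
worlds = [
    ({"C": True,  "B": True},  0.4),
    ({"C": True,  "B": False}, 0.1),
    ({"C": False, "B": True},  0.3),
    ({"C": False, "B": False}, 0.2),
]

def prob(pred):
    """Probability of the set of worlds satisfying pred."""
    return sum(p for w, p in worlds if pred(w))

def cond_prob(b, c):
    pc = prob(c)
    # Convention: conditionals with probability-zero antecedents count as holding.
    return prob(lambda w: b(w) and c(w)) / pc if pc > 0 else 1.0

def arrow(b, c, r):
    """'C -> B' holds in the family with support threshold r iff P[B | C] >= r."""
    return cond_prob(b, c) >= r

C = lambda w: w["C"]
B = lambda w: w["B"]
print(cond_prob(B, C))    # P[B | C] = 0.4 / 0.5 = 0.8
print(arrow(B, C, 0.75))  # holds at support level 0.75
print(arrow(B, C, 0.9))   # fails at support level 0.9
```

Raising r weeds out conditionals, so a single probability model generates a whole family of progressively stricter consequence relations.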
I argue that Bayesians need two distinct notions of probability. We need the usual degree-of-belief notion that is central to the Bayesian account of rational decision. But Bayesians also need a separate notion of probability that represents the degree to which evidence supports hypotheses. Although degree-of-belief is well suited to the theory of rational decision, Bayesians have tried to apply it to the realm of hypothesis confirmation as well. This double duty leads to the problem of old evidence, a problem that, we will see, is much more extensive than usually recognized. I will argue that degree-of-support is distinct from degree-of-belief, that it is not just a kind of counterfactual degree-of-belief, and that it supplements degree-of-belief in a way that resolves the problems of old evidence and provides a richer account of the logic of scientific inference and belief.
Eliminative induction is a method for finding the truth by using evidence to eliminate false competitors. It is often characterized as "induction by means of deduction"; the accumulating evidence eliminates false hypotheses by logically contradicting them, while the true hypothesis logically entails the evidence, or at least remains logically consistent with it. If enough evidence is available to eliminate all but the most implausible competitors of a hypothesis, then (and only then) will the hypothesis become highly confirmed. I will argue that, with regard to the evaluation of hypotheses, Bayesian inductive inference is essentially a probabilistic form of induction by elimination. Bayesian induction is an extension of eliminativism to cases where, rather than contradict the evidence, false hypotheses imply that the evidence is very unlikely, much less likely than the evidence would be if some competing hypothesis were true. This is not, I think, how Bayesian induction is usually understood. The recent book by Howson and Urbach, for example, provides an excellent, comprehensive explanation and defense of the Bayesian approach; but this book scarcely remarks on Bayesian induction's eliminative nature. Nevertheless, the very essence of Bayesian induction is the refutation of false competitors of a true hypothesis, or so I will argue.
The (recent, Bayesian) cognitive science literature on The Wason Task (WT) has been modeled largely after the (not-so-recent, Bayesian) philosophy of science literature on The Paradox of Confirmation (POC). In this paper, we apply some insights from more recent Bayesian approaches to the (POC) to analogous models of (WT). This involves, first, retracing the history of the (POC), and, then, reexamining the (WT) with these historico-philosophical insights in mind.
Naive deductivist accounts of confirmation have the undesirable consequence that if E confirms H, then E also confirms the conjunction H·X, for any X—even if X is completely irrelevant to E and H. Bayesian accounts of confirmation may appear to have the same problem. In a recent article in this journal Fitelson (2002) argued that existing Bayesian attempts to resolve this problem are inadequate in several important respects. Fitelson then proposed a new‐and‐improved Bayesian account that overcomes the problem of irrelevant conjunction, and does so in a more general setting than past attempts. We will show how to simplify and improve upon Fitelson's solution.
Confirmation theory is the study of the logic by which scientific hypotheses may be confirmed or disconfirmed, or even refuted by evidence. A specific theory of confirmation is a proposal for such a logic. Presumably the epistemic evaluation of scientific hypotheses should largely depend on their empirical content – on what they say the evidentially accessible parts of the world are like, and on the extent to which they turn out to be right about that. Thus, all theories of confirmation rely on measures of how well various alternative hypotheses account for the evidence. Most contemporary confirmation theories employ probability functions to provide such a measure. They measure how well the evidence fits what the hypothesis says about the world in terms of how likely it is that the evidence should occur were the hypothesis true. Such hypothesis-based probabilities of evidence claims are called likelihoods. Clearly, when the evidence is more likely according to one hypothesis than according to an alternative, that should redound to the credit of the former hypothesis and the discredit of the latter. But various theories of confirmation diverge on precisely how this credit is to be measured.
I’ll describe a range of systems for nonmonotonic conditionals that behave like conditional probabilities above a threshold. The rules that govern each system are probabilistically sound in that each rule holds when the conditionals are interpreted as conditional probabilities above a threshold level specific to that system. The well-known preferential and rational consequence relations turn out to be special cases in which the threshold level is 1. I’ll describe systems that employ weaker rules appropriate to thresholds lower than 1, and compare them to these two standard systems.
The objectivity of Bayesian induction relies on the ability of evidence to produce a convergence to agreement among agents who initially disagree about the plausibilities of hypotheses. I will describe three sorts of Bayesian convergence. The first reduces the objectivity of inductions about simple "occurrent events" to the objectivity of posterior probabilities for theoretical hypotheses. The second reveals that evidence will generally induce convergence to agreement among agents on the posterior probabilities of theories only if those probabilities converge to 0 or 1. The third establishes conditions under which evidence will very probably compel posterior probabilities of theories to converge to 0 or 1.
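The third sort of convergence can be illustrated with a toy simulation. The hypotheses, biases, priors, and sample size below are all illustrative assumptions, not taken from the paper:

```python
import random

random.seed(0)  # reproducible illustration

# Two rival hypotheses about a coin's bias toward heads; H1 is true in this simulation.
bias = {"H1": 0.7, "H2": 0.5}
# Two agents who initially disagree sharply about H1.
posteriors = [0.05, 0.9]

def update(prior_h1, heads):
    """Bayesian update of the probability of H1 on a single toss."""
    l1 = bias["H1"] if heads else 1 - bias["H1"]
    l2 = bias["H2"] if heads else 1 - bias["H2"]
    return prior_h1 * l1 / (prior_h1 * l1 + (1 - prior_h1) * l2)

for _ in range(500):
    heads = random.random() < bias["H1"]
    posteriors = [update(p, heads) for p in posteriors]

# Accumulating evidence very probably drives both posteriors toward 1,
# producing agreement despite the sharply divergent priors.
print(posteriors)
```

The qualifier "very probably" matters: convergence is not guaranteed on any particular evidence stream, only overwhelmingly likely as the stream grows.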
I argue for an epistemic conception of voting, a conception on which the purpose of the ballot is at least in some cases to identify which of several policy proposals will best promote the public good. To support this view I first briefly investigate several notions of the kind of public good that public policy should promote. Then I examine the probability logic of voting as embodied in two very robust versions of the Condorcet Jury Theorem and some related results. These theorems show that if the number of voters or legislators is sufficiently large and the average of their individual propensities to select the better of two policy proposals is a little above random chance, and if each person votes his or her own best judgment (rather than in alliance with a bloc or faction), then the majority is extremely likely to select the better alternative. Here ‘better alternative’ means the policy or law that will best promote the public good. I also explicate a Convincing Majorities Theorem, which shows the extent to which the majority vote should provide evidence that the better policy has been selected. Finally, I show how to extend all of these results to judgments among multiple alternatives through the kind of sequential balloting typical of the legislative amendment process.
Rational consequence relations and Popper functions provide logics for reasoning under uncertainty, the former purely qualitative, the latter probabilistic. But few researchers seem to be aware of the close connection between these two logics. I’ll show that Popper functions are probabilistic versions of rational consequence relations. I’ll not assume that the reader is familiar with either logic. I present them, and explicate the relationship between them, from the ground up. I’ll also present alternative axiomatizations for each logic, showing them to depend on weaker axioms than usually recognized.
Scientific theories and hypotheses make claims that go well beyond what we can immediately observe. How can we come to know whether such claims are true? The obvious approach is to see what a hypothesis says about the observationally accessible parts of the world. If it gets that wrong, then it must be false; if it gets that right, then it may have some claim to being true. Any sensible attempt to construct a logic that captures how we may come to reasonably believe the falsehood or truth of scientific hypotheses must be built on this idea. Philosophers refer to such logics as logics of confirmation or as confirmation theories.
Jeffrey updating is a natural extension of Bayesian updating to cases where the evidence is uncertain. But, the resulting degrees of belief appear to be sensitive to the order in which the uncertain evidence is acquired, a rather un-Bayesian looking effect. This order dependence results from the way in which basic Jeffrey updating is usually extended to sequences of updates. The usual extension seems very natural, but there are other plausible ways to extend Bayesian updating that maintain order-independence. I will explore three models of sequential updating, the usual extension and two alternatives. I will show that the alternative updating schemes derive from extensions of the usual rigidity requirement, which is at the heart of Jeffrey updating. Finally, I will establish necessary and sufficient conditions for order-independent updating, and show that extended rigidity is closely related to these conditions.
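The order dependence of the usual sequential extension of Jeffrey updating is easy to exhibit in a small model. The rule itself (rigidity-preserving reweighting) is standard; the four-world distribution and the target probabilities 0.8 and 0.7 are illustrative assumptions:

```python
# A hypothetical joint distribution over four worlds (E, F); numbers are illustrative.
P0 = {(True, True): 0.2, (True, False): 0.3, (False, True): 0.1, (False, False): 0.4}

def jeffrey(P, event, q):
    """Jeffrey's rule: set the probability of `event` to q while preserving the
    conditional probabilities given the event and its complement (rigidity)."""
    pe = sum(p for w, p in P.items() if event(w))
    return {w: p * (q / pe if event(w) else (1 - q) / (1 - pe)) for w, p in P.items()}

E = lambda w: w[0]
F = lambda w: w[1]

# Update P(E) to 0.8 then P(F) to 0.7, and in the reverse order.
P1 = jeffrey(jeffrey(P0, E, 0.8), F, 0.7)
P2 = jeffrey(jeffrey(P0, F, 0.7), E, 0.8)
print(P1[(True, True)], P2[(True, True)])  # the two orders generally disagree
```

Both results are proper probability distributions, yet they differ; ordinary Bayesian conditionalization (the special case q = 1) never shows this sensitivity.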
We will formulate two Bell arguments. Together they show that if the probabilities given by quantum mechanics are approximately correct, then the properties exhibited by certain physical systems must be nontrivially dependent on the types of measurements performed and either nonlocally connected or holistically related to distant events. Although a number of related arguments have appeared since John Bell's original paper (1964), they tend to be either highly technical or to lack full generality. The following arguments depend on the weakest of premises, and the structure of the arguments is simpler than most (without any loss of rigor or generality). The technical simplicity is due in part to a novel version of the generalized Bell inequality. The arguments are self-contained and presuppose no knowledge of quantum mechanics. We will also offer a Dutch Book argument for measurement type dependence.
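The gap between quantum predictions and any deterministic local model can be checked numerically with the familiar CHSH form of the Bell inequality (this is the textbook inequality, not the paper's novel generalized version; the measurement angles are the standard maximally violating choice):

```python
import math
from itertools import product

def E_singlet(x, y):
    """Quantum correlation for spin measurements at angles x, y on a singlet pair."""
    return -math.cos(x - y)

# CHSH combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b')
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, -math.pi / 4
S_quantum = E_singlet(a, b) + E_singlet(a, b2) + E_singlet(a2, b) - E_singlet(a2, b2)

# A deterministic local model assigns a fixed outcome +/-1 to each setting;
# brute force over all assignments shows such models can never push |S| past 2.
classical_max = max(
    abs(Aa * Bb + Aa * Bb2 + Aa2 * Bb - Aa2 * Bb2)
    for Aa, Aa2, Bb, Bb2 in product([1, -1], repeat=4)
)

print(abs(S_quantum), classical_max)  # 2*sqrt(2) versus the local bound 2
```

The quantum value 2√2 ≈ 2.83 exceeds the local bound 2, which is the quantitative fact any Bell argument exploits.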
In a previous paper I described a range of nonmonotonic conditionals that behave like conditional probability functions at various levels of probabilistic support. These conditionals were defined as semantic relations on an object language for sentential logic. In this paper I extend the most prominent family of these conditionals to a language for predicate logic. My approach to quantifiers is closely related to Hartry Field's probabilistic semantics. Along the way I will show how Field's semantics differs from a substitutional interpretation of quantifiers in crucial ways, and show that Field's approach is closely related to the usual objectual semantics. One of Field's quantifier rules, however, must be significantly modified to be adapted to nonmonotonic conditional semantics. And this modification suggests, in turn, an alternative quantifier rule for probabilistic semantics.
Although the use of possible worlds in semantics has been very fruitful and is now widely accepted, there is a puzzle about the standard definition of validity in possible-worlds semantics that has received little notice and virtually no comment. A sentence of an intensional language is typically said to be valid just in case it is true at every world under every model on every model structure of the language. Each model structure contains a set of possible worlds, and models are defined relative to model structures, assigning truth-values to sentences at each world countenanced by the model structure. The puzzle is why more than one model structure is used in the definition of validity. There is presumably just one class of all possible worlds and just one model structure defined on this class that does correctly the things that model structures are supposed to do. (These include, but need not be limited to, specifying the set of individuals in each world as well as various accessibility relations between worlds.) Why not define validity simply as truth at every world under every model on this one model structure? What is the point of bringing in more model structures than just this one?
We investigate these questions in some detail and conclude that for many intensional languages the puzzle points to a genuine difficulty: the standard definition of validity is insufficiently motivated. We begin (Section 1) by showing that a plausible and natural account of validity for intensional languages can be based on a single model structure, and that validity so defined is analogous in important respects to the standard account of validity for extensional languages. We call this notion of validity "validity1", and in Section 2 we contrast it with the standard notion, which we call "validity2". Several attempts are made to discover a rationale for the almost universal acceptance of validity2, but in most of these attempts we come up empty-handed. So in Section 3 we investigate validity1 for some intensional languages. Our investigation includes providing axiomatizations for several propositional and predicate logics, most of which are provably complete. The completeness proofs are given in the Appendix, which also contains a sketch of a compactness proof for one of the predicate logics.
Think of confirmation in the context of the Ravens Paradox this way. The likelihood ratio measure of incremental confirmation gives us, for an observed Black Raven and for an observed non-Black non-Raven, respectively, the following “full” likelihood ratios.
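The abstract breaks off before stating the ratios themselves. As general background (this is the standard definition, not the paper's specific "full" likelihood ratios), the likelihood-ratio measure of incremental confirmation compares how likely the evidence E is under a hypothesis H and under its negation:

```latex
\mathrm{LR}(H, E) \;=\; \frac{P[E \mid H]}{P[E \mid \lnot H]}
```

On this measure E incrementally confirms H just in case LR(H, E) > 1, and in the Ravens setting the point of comparing the ratio for a black raven with the ratio for a non-black non-raven is to show how much more strongly the former confirms "All ravens are black."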
This essay is an attempt to gain better insight into Russell's positive account of inductive inference. I contend that Russell's postulates play only a supporting role in his overall account. At the center of Russell's positive view is a probabilistic, Bayesian model of inductive inference. Indeed, Russell and Maxwell actually held very similar Bayesian views. But the Bayesian component of Russell's view in Human Knowledge is sparse and easily overlooked. Maxwell was not aware of it when he developed his own view, and I believe he was never fully aware of the extent to which Russell's account anticipates his own. The primary focus of this paper will be the explication of the Bayesian component of the Russell-Maxwell view, and the way in which it undermines judgment empiricism.
Any inferential system in which the addition of new premises can lead to the retraction of previous conclusions is a non-monotonic logic. Classical conditional probability provides the oldest and most widely respected example of non-monotonic inference. This paper presents a semantic theory for a unified approach to qualitative and quantitative non-monotonic logic. The qualitative logic is unlike most other non-monotonic logics developed for AI systems. It is closely related to classical (i.e., Bayesian) probability theory. The semantic theory for qualitative non-monotonic entailments extends in a straightforward way to a semantic theory for quantitative partial entailment relations, and these relations turn out to be the classical probability functions.