This book is a major contribution to decision theory, focusing on the question of when it is rational to accept scientific theories. The author examines both Bayesian decision theory and confirmation theory, refining and elaborating the views of Ramsey and Savage. He argues that the most solid foundation for confirmation theory is to be found in decision theory, and he provides a decision-theoretic derivation of principles for how probabilities should be revised over time. Professor Maher defines a notion of accepting a hypothesis, and then shows that it is not reducible to probability and that it is needed to deal with some important questions in the philosophy of science. A Bayesian decision-theoretic account of rational acceptance is provided together with a proof of the foundations for this theory. A final chapter shows how this account can be used to cast light on such vexed issues as verisimilitude and scientific realism.
Confirmation is commonly identified with positive relevance, E being said to confirm H if and only if E increases the probability of H. Today, analyses of this general kind are usually Bayesian ones that take the relevant probabilities to be subjective. I argue that these subjective Bayesian analyses are irremediably flawed. In their place I propose a relevance analysis that makes confirmation objective and which, I show, avoids the flaws of the subjective analyses. What I am proposing is in some ways a return to Carnap's conception of confirmation, though there are also important differences between my analysis and his. My analysis includes new accounts of what evidence is and of the indexicality of confirmation claims. Finally, I defend my analysis against Achinstein's criticisms of the relevance concept of confirmation.
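In standard notation (the abstract gives none, so this formulation is supplied for illustration), the positive-relevance analysis just mentioned says that
\[ E \text{ confirms } H \iff P(H \mid E) > P(H), \]
with disconfirmation and evidential irrelevance given by the reverse inequality and by equality, respectively; the dispute described above concerns whether the probability function P here should be subjective or objective.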
How can formal methods be applied to philosophical problems that involve informal concepts of ordinary language? Carnap answered this question by describing a methodology that he called “explication.” Strawson objected that explication changes the subject and does not address the original philosophical problem; this paper shows that Carnap’s response to that objection was inadequate and offers a better response. More recent criticisms of explication by Boniolo and Eagle are shown to rest on misunderstandings of the nature of explication. It is concluded that explication is an appropriate methodology for formal philosophy.
A widely endorsed thesis in the philosophy of science holds that if evidence for a hypothesis was not known when the hypothesis was proposed, then that evidence confirms the hypothesis more strongly than would otherwise be the case. The thesis has been thought to be inconsistent with Bayesian confirmation theory, but the arguments offered for that view are fallacious. This paper shows how the special value of prediction can in fact be given a Bayesian explanation. The explanation involves consideration of the reliability of the method by which the hypothesis was discovered, and thus reveals an intimate connection between the 'logic of discovery' and confirmation theory.
This is an essay in the Bayesian theory of how opinions should be revised over time. It begins with a discussion of the principle that van Fraassen has dubbed "Reflection". This principle is not a requirement of rationality; a diachronic Dutch book argument, which purports to show the contrary, is fallacious. But under suitable conditions, it is irrational to actually implement shifts in probability that violate Reflection. Conditionalization and probability kinematics are special cases of the principle not to implement shifts that violate Reflection; hence these principles are also requirements of rationality under suitable conditions, though not universal requirements of rationality.
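Reflection, in its usual formulation (the symbols here are standard, not quoted from the paper), requires of an agent's probability function at time t that, for any later time t' and any value r for which the condition has positive probability,
\[ P_t\big(A \mid P_{t'}(A) = r\big) = r, \]
that is, one's current probability for A, given that one's later probability for A will be r, should itself be r. Conditionalization is then the rule that P_{t'}(A) = P_t(A \mid E), where E is the total evidence acquired between t and t'.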
The word ‘probability’ in ordinary language has two different senses, here called inductive and physical probability. This paper examines the concept of inductive probability. Attempts to express this concept in other words are shown to be either incorrect or else trivial. In particular, inductive probability is not the same as degree of belief. It is argued that inductive probabilities exist; subjectivist arguments to the contrary are rebutted. Finally, it is argued that inductive probability is an important concept and that it is a mistake to try to replace it with the concept of degree of belief, as is usual today.
Hempel's paradox of the ravens arises from the inconsistency of three prima facie plausible principles of confirmation. This paper uses Carnapian inductive logic to (a) identify which of the principles is false, (b) give insight into why this principle is false, and (c) identify a true principle that is sufficiently similar to the false one that failure to distinguish the two might explain why the false principle is prima facie plausible. This solution to the paradox is compared with a variety of other responses and is shown to differ from all of them.
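For reference, the three principles in their usual (not verbatim) formulation are: (i) Nicod's condition, that an object which is both a raven and black, $Ra \wedge Ba$, confirms $\forall x(Rx \rightarrow Bx)$; (ii) the equivalence condition, that whatever confirms a hypothesis confirms every logically equivalent hypothesis; and (iii) the intuition that a non-black non-raven, $\neg Ba \wedge \neg Ra$ (a white shoe, say), does not confirm $\forall x(Rx \rightarrow Bx)$. Since $\forall x(Rx \rightarrow Bx)$ is logically equivalent to $\forall x(\neg Bx \rightarrow \neg Rx)$, applying (i) to the latter and then (ii) contradicts (iii).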
James Joyce's 'Nonpragmatic Vindication of Probabilism' gives a new argument for the conclusion that a person's credences ought to satisfy the laws of probability. The premises of Joyce's argument include six axioms about what counts as an adequate measure of the distance of a credence function from the truth. This paper shows that (a) Joyce's argument for one of these axioms is invalid, (b) his argument for another axiom has a false premise, (c) neither axiom is plausible, and (d) without these implausible axioms Joyce's vindication of probabilism fails.
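By way of illustration only (Joyce's axioms, not this particular measure, are what the paper discusses), the best-known measure of the kind at issue is the Brier score: for a credence function c defined on propositions $A_1, \ldots, A_n$ and a possible world w, with $w(A_i) = 1$ if $A_i$ is true at w and $0$ otherwise,
\[ B(c, w) = \sum_{i=1}^{n} \big(c(A_i) - w(A_i)\big)^2 . \]
Joyce's theorem is that any credence function violating the probability axioms is dominated in such accuracy terms, at every world, by one that satisfies them.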
In 1959 Carnap published a probability model that was meant to allow for reasoning by analogy involving two independent properties. Maher (2000) derived a generalized version of this model axiomatically and defended the model's adequacy. It is thus natural to now consider how the model might be extended to the case of more than two properties. A simple extension was published by Hess (1964); this paper argues that it is inadequate. A more sophisticated one was developed jointly by Carnap and Kemeny in the early 1950s but never published; this paper gives the first published description of Carnap and Kemeny's model and argues that it too is inadequate. Since no other way of extending the two-property model is currently known, the conclusion of this paper is that a satisfactory extension to multiple properties requires some new approach.
Recently a number of authors have tried to avoid the failures of traditional Dutch book arguments by separating them from pragmatic concerns of avoiding a sure loss. In this paper I examine defenses of this kind by Howson and Urbach, Hellman, and Christensen. I construct rigorous explications of their arguments and show that they are not cogent. I advocate abandoning Dutch book arguments in favor of a representation theorem.
Let R(X, B) denote the class of probability functions that are defined on algebra X and that represent rationally permissible degrees of certainty for a person whose total relevant background evidence is B. This paper is concerned with characterizing R(X, B) for the case in which X is an algebra of propositions involving two properties and B is empty. It proposes necessary conditions for a probability function to be in R(X, B), some of which involve the notion of statistical dependence. The class of probability functions that satisfy these conditions, here denoted PI, includes a class that Carnap once proposed for the same situation. Probability functions in PI violate Carnap's axiom of analogy but, it is argued, that axiom should be rejected. A derivation of Carnap's model by Hesse has limitations that are not present in the derivation of PI given here. Various alternative probability models are considered and rejected.
Inductive probability is the logical concept of probability in ordinary language. It is vague but it can be explicated by defining a clear and precise concept that can serve some of the same purposes. This paper presents a general method for doing such an explication and then a particular explication due to Carnap. Common criticisms of Carnap's inductive logic are examined; it is shown that most of them are spurious and the others are not fundamental.
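For illustration (the abstract does not say which of Carnap's systems it presents), the best-known Carnapian explicatum is the λ-continuum of inductive methods: with k basic properties, the probability that the next individual sampled has property F, given that $n_F$ of the n individuals observed so far have F, is
\[ \frac{n_F + \lambda/k}{n + \lambda}, \qquad 0 < \lambda < \infty , \]
which weighs the observed relative frequency $n_F/n$ against the a priori value $1/k$.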
Evidence for a hypothesis typically confirms the hypothesis more if the evidence was predicted than if it was accommodated. Or so I argued in previous papers, where I also developed an analysis of why this should be so. But this was all a mistake if Howson and Franklin (1991) are to be believed. In this paper, I show why they are not to be believed. I also identify a grain of truth that may have been dimly grasped by those Bayesians who deny the confirmatory value of prediction.
Van Fraassen has maintained that acceptance of a scientific theory does not involve the belief that the theory is true. Blackburn, Mitchell and Horwich have claimed that acceptance, as understood by van Fraassen, is the same as belief; in which case, van Fraassen's position is incoherent. Van Fraassen identifies belief with subjective probability, so the question at issue is really whether acceptance of a theory involves a high subjective probability for the theory. Van Fraassen is not committed to this, and hence the charge of incoherence is misplaced. Indeed, van Fraassen is correct on this point. However, he is wrong to think that acceptance requires a high subjective probability that the theory is empirically adequate; and his reason for thinking that science aims at empirical adequacy rather than truth rests on an overly crude theory of rational choice.
I conceive of inductive logic as a project of explication. The explicandum is one of the meanings of the word ‘probability’ in ordinary language; I call it inductive probability and argue that it is logical, in a certain sense. The explicatum is a conditional probability function that is specified by stipulative definition. This conception of inductive logic is close to Carnap's, but common objections to Carnapian inductive logic (the probabilities don't exist, are arbitrary, etc.) do not apply to this conception.
Bayesian decision theory is here construed as explicating a particular concept of rational choice and Bayesian probability is taken to be the concept of probability used in that theory. Bayesian probability is usually identified with the agent’s degrees of belief but that interpretation makes Bayesian decision theory a poor explication of the relevant concept of rational choice. A satisfactory conception of Bayesian decision theory is obtained by taking Bayesian probability to be an explicatum for inductive probability given the agent’s evidence.
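Concretely, the explicatum for rational choice referred to here is expected-utility maximization: with p the Bayesian probability function and u the utility function, an act a is a rational choice just in case it maximizes
\[ EU(a) = \sum_{s} p(s)\, u(a, s), \]
the sum being over the possible states s (the notation is the standard one, not taken from the paper).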
Bayesian confirmation theory offers an explicatum for a pretheoretic concept of confirmation. The “problem of irrelevant conjunction” for this theory is that, according to some people's intuitions, the pretheoretic concept differs from the explicatum with regard to conjunctions involving irrelevant propositions. Previous Bayesian solutions to this problem consist in showing that irrelevant conjuncts reduce the degree of confirmation; they have the drawbacks that (i) they don't hold for all ways of measuring degree of confirmation and (ii) they don't remove the conflict with intuition but merely “soften the impact” (as Fitelson has written). A better solution, which avoids both these drawbacks, is to show that the intuition is wrong.
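The problem can be stated with the positive-relevance explicatum (this is the standard way of setting it up; the abstract does not give the formal details). Say X is irrelevant in the sense that $P(E \mid H \wedge X) = P(E \mid H)$, with $P(H \wedge X) > 0$. Then
\[ \frac{P(H \wedge X \mid E)}{P(H \wedge X)} \;=\; \frac{P(E \mid H \wedge X)}{P(E)} \;=\; \frac{P(E \mid H)}{P(E)} \;=\; \frac{P(H \mid E)}{P(H)}, \]
so E confirms the conjunction H ∧ X whenever it confirms H, which is what the intuition in question denies.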
In October 2009 I decided to stop doing philosophy. This meant, in particular, stopping work on the book that I was writing on the nature of probability. At that time, I had no intention of making my unfinished draft available to others. However, I recently noticed how many people are reading the lecture notes and articles on my web site. Since this draft book contains some important improvements on those materials, I decided to make it available to anyone who wants to read it. That is what you have in front of you. The account of Laplace’s theory of probability in Chapter 4 is very different to what I said in my seminar lectures, and also very different to any other account I have seen; it is based on a reading of important texts by Laplace that appear not to have been read by other commentators. The discussion of von Mises’ theory in Chapter 7 is also new, though perhaps less revolutionary. And the final chapter is a new attempt to come to grips with the popular, but amorphous, subjective theory of probability. The material in the other chapters has mostly appeared in previous articles of mine but things are sometimes expressed differently here. I would like to say again that this is an incomplete draft of a book, not the book I would have written if I had decided to finish it. It no doubt contains poor expressions, it may contain some errors or inconsistencies, and it doesn’t cover all the theories that I originally intended to discuss. Apart from this preface, I have done no work on the book since October 2009.
Contrary to what has been widely supposed, Bayesian theory deals successfully with the introduction of new theories that have never previously been entertained. The theory enables us to say what sorts of method should be used to assign probabilities to these new theories, and it allows that the probabilities of existing theories may be modified as a result.
A "symptomatic act" is an act that is evidence for a state that it has no tendency to cause. In this paper I show that when the evidential value of a symptomatic act might influence subsequent choices, causal decision theory may initially recommend against its own use for those subsequent choices. And if one knows that one will nevertheless use causal decision theory to make those subsequent choices, causal decision theory may favor the one-box solution in Newcomb's problem, and may (...) recommend against making cost-free observations. But if one can control one's future choices, then causal decision theory never recommends against cost-free observation. (shrink)
This paper presents an account of the problem of contingency that is consistent with Leibniz's philosophy and that, above all, satisfies the following conditions: (1) some properties of substances are contingent; (2) some properties of substances are necessary; (3) the opposite of a contingent truth implies no contradiction; (4) it is a contingent fact that God always chooses the best. An account of Leibniz's theory of contingency must satisfy these conditions, for they are principles to which Leibniz, at least in his later writings, remained committed. Studies of Leibniz's theory of contingency tend to concentrate above all on satisfying condition (1) and to leave the others out of account. Section 1 of the paper shows that attempts to ground contingency in the possibility of non-existence fail to satisfy either conditions (2) and (3) or condition (4). Section 2 shows that the same holds for attempts to ground contingency in the infinite complexity of individual concepts. Section 3 presents the paper's own interpretation of contingency, which satisfies the stated conditions.
By “physical probability” I mean the empirical concept of probability in ordinary language. It can be represented as a function of an experiment type and an outcome type, which explains how non-extreme physical probabilities are compatible with determinism. Two principles, called specification and independence, put restrictions on the existence of physical probabilities, while a principle of direct inference connects physical probability with inductive probability. This account avoids a variety of weaknesses in the theories of Levi and Lewis.
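In schematic notation (mine, not necessarily the paper's): writing pp(O, X) for the physical probability of an outcome of type O on an experiment of type X, and ip for inductive probability, the principle of direct inference mentioned above says that for a particular trial a,
\[ ip\big(Oa \mid Xa \wedge pp(O, X) = r\big) = r, \]
provided the background evidence contains nothing else relevant to Oa.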
In From Instrumentalism to Constructive Realism Theo Kuipers presents a theory of qualitative confirmation that is supposed not to assume the existence of quantitative probabilities. He claims that this theory is able to resolve some paradoxes in confirmation theory, including the ravens paradox. This paper shows that there are flaws in Kuipers' qualitative confirmation theory and in his application of it to the ravens paradox.
In 1756 Joseph Black published a new theory of the nature of lime, one that is now viewed as essentially correct. Black's theory was not immediately accepted, and a competing theory, published in 1764 by Johann Meyer, was widely preferred to Black's for some years. In this paper, probability theory is used to show that, and why, some of Black's evidence made his theory more probable than Meyer's.
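The probabilistic machinery such a historical comparison needs is essentially Bayes' theorem in odds form (the particular probability assignments defended in the paper are not reproduced here): writing B for Black's theory, M for Meyer's, and E for a piece of evidence,
\[ \frac{P(B \mid E)}{P(M \mid E)} = \frac{P(B)}{P(M)} \cdot \frac{P(E \mid B)}{P(E \mid M)}, \]
so E shifts the odds toward Black's theory exactly when the likelihood ratio $P(E \mid B)/P(E \mid M)$ exceeds 1.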
Predictions about the future and unrestricted universal generalizations are never logically implied by our observational evidence, which is limited to particular facts in the present and past. Nevertheless, propositions of these and other kinds are often said to be confirmed by observational evidence. A natural place to begin the study of confirmation theory is to consider what it means to say that some evidence E confirms a hypothesis H.
Bayesian decision theory, in its classical or strict form, requires agents to have a determinate probability function. In recent years many decision theorists have come to think that this requirement should be weakened to allow for cases in which the agent makes indeterminate probability judgments. It has been claimed that this weakening makes the theory more realistic, and that it makes the theory more tenable as a normative ideal. This paper shows that the usual technique for weakening strict Bayesianism has neither of these claimed advantages.
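A common way of representing indeterminate probability judgments (offered here only as illustration of the kind of weakening at issue) is to replace the single probability function with a nonempty set $\mathcal{P}$ of them, a judgment counting as determinate only when all members of $\mathcal{P}$ agree; the agent's view about a hypothesis H is then given by the interval
\[ \Big[ \inf_{p \in \mathcal{P}} p(H),\; \sup_{p \in \mathcal{P}} p(H) \Big] \]
rather than by a point value.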
Can Bayesians make sense of the notion of acceptance? And should they want to? This paper argues that the answer to both questions is yes. While these answers have been defended before, the way of making sense of acceptance offered here differs from what others have proposed, and the reasons given for why Bayesians should want to make sense of acceptance are also different.