The problem addressed in this paper is “the main epistemic problem concerning science”, viz. “the explication of how we compare and evaluate theories [...] in the light of the available evidence” (van Fraassen, B. C., 1983, Theory comparison and relevant evidence. In J. Earman (Ed.), Testing scientific theories (pp. 27–42). Minneapolis: University of Minnesota Press). Sections 1–3 contain the general plausibility-informativeness theory of theory assessment. In a nutshell, the message is (1) that there are two values a theory should exhibit: truth and informativeness—measured respectively by a truth indicator and a strength indicator; (2) that these two values are conflicting in the sense that the former is a decreasing and the latter an increasing function of the logical strength of the theory to be assessed; and (3) that in assessing a given theory by the available data one should weigh these two conflicting aspects against each other in such a way that any surplus in informativeness succeeds if the shortfall in plausibility is small enough. Particular accounts of this general theory arise by inserting particular strength indicators and truth indicators. In Section 4 the theory is spelt out for the Bayesian paradigm of subjective probabilities. It is then compared to incremental Bayesian confirmation theory. Section 4 closes by asking whether it is likely to be lovely. Section 5 discusses a few problems of confirmation theory in the light of the present approach. In particular, it is briefly indicated how the present account gives rise to a new analysis of Hempel’s conditions of adequacy for any relation of confirmation (Hempel, C. G., 1945, Studies in the logic of confirmation. Mind, 54, 1–26, 97–121), differing from the one Carnap gave in §87 of his Logical foundations of probability (1962, Chicago: University of Chicago Press).
Section 6 addresses the question of justification any theory of theory assessment has to face: why should one stick to theories given high assessment values rather than to any other theories? The answer given by the Bayesian version of the account presented in Section 4 is that one should accept theories given high assessment values because, in the medium run, theory assessment almost surely takes one to the most informative among all true theories when presented separating data. The concluding Section 7 continues the comparison between the present account and incremental Bayesian confirmation theory.
Degrees of belief are familiar to all of us. Our confidence in the truth of some propositions is higher than our confidence in the truth of other propositions. We are pretty confident that our computers will boot when we push their power button, but we are much more confident that the sun will rise tomorrow. Degrees of belief formally represent the strength with which we believe the truth of various propositions. The higher an agent’s degree of belief for a particular proposition, the higher her confidence in the truth of that proposition. For instance, Sophia’s degree of belief that it will be sunny in Vienna tomorrow might be .52, whereas her degree of belief that the train will leave on time might be .23. The precise meaning of these statements depends, of course, on the underlying theory of degrees of belief. These theories offer a formal tool to measure degrees of belief, to investigate the relations between various degrees of belief in different propositions, and to normatively evaluate degrees of belief.
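The quantitative picture sketched above can be illustrated with a small probabilistic model. This is an illustrative sketch only: the joint distribution below is invented, and only the two marginal values (.52 for sun, .23 for the train) come from the abstract's example.

```python
# Degrees of belief represented as a probability function over a small
# set of possible worlds. The joint distribution is hypothetical.
from itertools import product

# Possible worlds: (sunny in Vienna tomorrow?, train leaves on time?)
worlds = list(product([True, False], repeat=2))
pr = dict(zip(worlds, [0.12, 0.40, 0.11, 0.37]))

def degree(proposition):
    """Degree of belief in a proposition = probability mass of its worlds."""
    return sum(p for w, p in pr.items() if proposition(w))

sunny = degree(lambda w: w[0])    # Sophia's confidence in sun, ~.52
on_time = degree(lambda w: w[1])  # her confidence in the train, ~.23
total = degree(lambda w: True)    # normalization: all the mass sums to 1
```

The point of the sketch is the relational structure the abstract mentions: once degrees of belief are probabilities, the degrees assigned to different propositions constrain one another (here, the two marginals are both fixed by the same joint distribution).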
We argue that a semantics for counterfactual conditionals in terms of comparative overall similarity faces a formal limitation due to Arrow’s impossibility theorem from social choice theory. According to Lewis’s account, the truth-conditions for counterfactual conditionals are given in terms of the comparative overall similarity between possible worlds, which is in turn determined by various aspects of similarity between possible worlds. We argue that a function from aspects of similarity to overall similarity should satisfy certain plausible constraints while Arrow’s impossibility theorem rules out that such a function satisfies all the constraints simultaneously. We argue that a way out of this impasse is to represent aspectual similarity in terms of ranking functions instead of representing it in a purely ordinal fashion. Further, we argue against the claim that the determination of overall similarity by aspects of similarity faces a difficulty in addition to the Arrovian limitation, namely the incommensurability of different aspects of similarity. The phenomena that have been cited as evidence for such incommensurability are best explained by ordinary vagueness.
Recent accounts of actual causation are stated in terms of extended causal models. These extended causal models contain two elements representing two seemingly distinct modalities. The first element consists of structural equations, which represent the mechanisms of the model, just as ordinary causal models do. The second element consists of ranking functions, which represent normality or typicality. The aim of this paper is to show that these two modalities can be unified. I do so by formulating two constraints under which extended causal models with their two modalities can be subsumed under models that contain just one modality. These two constraints will be formally precise versions of Lewis’s “system of weights or priorities” governing overall similarity between possible worlds.
The paper provides an argument for the thesis that an agent’s degrees of disbelief should obey the ranking calculus. This Consistency Argument is based on the Consistency Theorem. The latter says that an agent’s belief set is and will always be consistent and deductively closed iff her degrees of entrenchment satisfy the ranking axioms and are updated according to the rank-theoretic update rules.
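The connection the Consistency Theorem describes can be seen in a minimal sketch (the toy ranking values are mine; this is an illustration, not the paper's proof): grades of disbelief that satisfy the ranking axioms induce a belief set that is consistent and deductively closed.

```python
# Toy model: pointwise grades of disbelief over three possibilities induce
# ranks on propositions; believing A iff its complement is positively
# disbelieved yields a consistent, deductively closed belief set.
from itertools import chain, combinations

W = frozenset({1, 2, 3})      # a toy space of possibilities
kappa = {1: 0, 2: 1, 3: 2}    # pointwise grades of disbelief; minimum is 0

def rank(A):
    """Grade of disbelief in a proposition: min over its worlds (inf if empty)."""
    return min((kappa[w] for w in A), default=float("inf"))

def propositions(space):
    return [frozenset(s) for s in chain.from_iterable(
        combinations(space, r) for r in range(len(space) + 1))]

# Believe A iff the complement of A is disbelieved to a positive degree.
beliefs = [A for A in propositions(W) if rank(W - A) > 0]

# Consistency: the intersection of everything believed is non-empty.
core = W
for A in beliefs:
    core = core & A

# Deductive closure: every proposition entailed by the beliefs
# (every superset of the core) is itself believed.
closed = all(A in beliefs for A in propositions(W) if core <= A)
```

Here the believed propositions are exactly those containing the minimally disbelieved world, so their intersection is non-empty and every logical consequence of them is believed.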
Philosophically, one of the most important questions in the enterprise termed confirmation theory is this: Why should one stick to well confirmed theories rather than to any other theories? This paper discusses the answers to this question one gets from absolute and incremental Bayesian confirmation theory. According to absolute confirmation, one should accept “absolutely well confirmed” theories, because absolute confirmation takes one to true theories. An examination of two popular measures of incremental confirmation suggests the view that one should stick to incrementally well confirmed theories, because incremental confirmation takes one to (the most) informative (among all) true theories. However, incremental confirmation does not further this goal in general. I close by presenting a necessary and sufficient condition for revealing the confirmational structure in almost every world when presented separating data.
Any theory of confirmation must answer the following question: what is the purpose of its conception of confirmation for scientific inquiry? In this article, we argue that no Bayesian conception of confirmation can be used for its primary intended purpose, which we take to be making a claim about how worthy of belief various hypotheses are. Then we consider a different use to which Bayesian confirmation might be put, namely, determining the epistemic value of experimental outcomes, and thus to decide which experiments to carry out. Interestingly, Bayesian confirmation theorists rule out that confirmation be used for this purpose. We conclude that Bayesian confirmation is a means with no end. 1 Introduction; 2 Bayesian Confirmation Theory; 3 Bayesian Confirmation and Belief; 4 Confirmation and the Value of Experiments; 5 Conclusion.
A Logical Introduction to Probability and Induction starts with elementary logic and uses it as the basis for a philosophical discussion of probability and induction. Throughout the book results are carefully proved using the inference rules introduced at the beginning. The textbook is suitable for undergraduate courses in philosophy and logic.
Philosophers typically rely on intuitions when providing a semantics for counterfactual conditionals. However, intuitions regarding counterfactual conditionals are notoriously shaky. The aim of this paper is to provide a principled account of the semantics of counterfactual conditionals. This principled account is provided by what I dub the Royal Rule, a deterministic analogue of the Principal Principle relating chance and credence. The Royal Rule says that an ideal doxastic agent’s initial grade of disbelief in a proposition A, given that the counterfactual distance in a given context to the closest A-worlds equals n, and no further information that is not admissible in this context, should equal n. Under the two assumptions that the presuppositions of a given context are admissible in this context, and that the theory of deterministic alethic or metaphysical modality is admissible in any context, it follows that the counterfactual distance distribution in a given context has the structure of a ranking function. The basic conditional logic V is shown to be sound and complete with respect to the resulting rank-theoretic semantics of counterfactuals.
Belief revision theory studies how an ideal doxastic agent should revise her beliefs when she receives new information. In part I, I have first presented the AGM theory of belief revision. Then I have focused on the problem of iterated belief revisions. In part II, I will first present ranking theory (Spohn 1988). Then I will show how it solves the problem of iterated belief revisions. I will conclude by sketching two areas of future research.
The Spohnian paradigm of ranking functions is in many respects like an order-of-magnitude reverse of subjective probability theory. Unlike probabilities, however, ranking functions are only indirectly—via a pointwise ranking function on the underlying set of possibilities W—defined on a field of propositions A over W. This research note shows under which conditions ranking functions on a field of propositions A over W and rankings on a language L are induced by pointwise ranking functions on W and the set of models for L, ModL, respectively.
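The induction the note studies can be illustrated in miniature (the example values are mine): a pointwise ranking function on W induces a ranking function on the field of all propositions over W via minimization, and the induced function satisfies the usual ranking axioms.

```python
# A pointwise ranking function induces ranks on propositions by taking the
# minimum pointwise rank of the proposition's members; the result satisfies
# rank(W) = 0, rank(empty set) = inf, and the minimitivity axiom.
from itertools import chain, combinations

W = ["w1", "w2", "w3", "w4"]
pointwise = {"w1": 0, "w2": 2, "w3": 1, "w4": 3}  # at least one world gets 0

def induced_rank(A):
    """Rank of a proposition = minimum pointwise rank of its members."""
    return min((pointwise[w] for w in A), default=float("inf"))

props = [frozenset(s) for s in chain.from_iterable(
    combinations(W, r) for r in range(len(W) + 1))]

axiom_top = induced_rank(frozenset(W)) == 0               # rank(W) = 0
axiom_bottom = induced_rank(frozenset()) == float("inf")  # rank(empty) = inf
axiom_min = all(
    induced_rank(A | B) == min(induced_rank(A), induced_rank(B))
    for A in props for B in props)
```

The minimitivity check is the order-of-magnitude analogue of the additivity check one would run for a probability function, which is the "reverse" flavour the abstract alludes to.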
Belief revision theory studies how an ideal doxastic agent should revise her beliefs when she receives new information. In part I I will first present the AGM theory of belief revision (Alchourrón & Gärdenfors & Makinson 1985). Then I will focus on the problem of iterated belief revisions.
Epistemology is the study of knowledge and justified belief. Belief is thus central to epistemology. It comes in a qualitative form, as when Sophia believes that Vienna is the capital of Austria, and a quantitative form, as when Sophia's degree of belief that Vienna is the capital of Austria is at least twice her degree of belief that tomorrow it will be sunny in Vienna. Formal epistemology, as opposed to mainstream epistemology (Hendricks 2006), is epistemology done in a formal way, that is, by employing tools from logic and mathematics. The goal of this entry is to give the reader an overview of the formal tools available to epistemologists for the representation of belief. A particular focus will be the relation between formal representations of qualitative belief and formal representations of quantitative degrees of belief.
This article shows that a slight variation of the argument in Milne 1996 yields the log-likelihood ratio l rather than the log-ratio measure r as “the one true measure of confirmation.” *Received December 2006; revised December 2007.
Bayesianism is the position that scientific reasoning is probabilistic and that probabilities are adequately interpreted as an agent's actual subjective degrees of belief, measured by her betting behaviour. Confirmation is one important aspect of scientific reasoning. The thesis of this paper is the following: if scientific reasoning is at all probabilistic, the subjective interpretation has to be given up in order to get confirmation—and thus scientific reasoning in general—right. Sections: The Bayesian approach to scientific reasoning; Bayesian confirmation theory; The example; The less reliable the source of information, the higher the degree of Bayesian confirmation; Measure sensitivity; A more general version of the problem of old evidence; Conditioning on the entailment relation; The counterfactual strategy; Generalizing the counterfactual strategy; The desired result, and a necessary and sufficient condition for it; Actual degrees of belief; The common knock-down feature, or ‘anything goes’; The problem of prior probabilities.
This paper presents a new analysis of C. G. Hempel’s conditions of adequacy for any relation of confirmation [Hempel C. G. (1945). Aspects of scientific explanation and other essays in the philosophy of science. New York: The Free Press, pp. 3–51.], differing from the one Carnap gave in §87 of his [1962. Logical foundations of probability (2nd ed.). Chicago: University of Chicago Press.]. Hempel, it is argued, felt the need for two concepts of confirmation: one aiming at true hypotheses and another aiming at informative hypotheses. However, he also realized that these two concepts are conflicting, and he gave up the concept of confirmation aiming at informative hypotheses. I then show that one can have Hempel’s cake and eat it too. There is a logic that takes into account both of these two conflicting aspects. According to this logic, a sentence H is an acceptable hypothesis for evidence E if and only if H is both sufficiently plausible given E and sufficiently informative about E. Finally, the logic sheds new light on Carnap’s analysis.
The question I am addressing in this paper is the following: how is it possible to empirically test, or confirm, counterfactuals? After motivating this question in Section 1, I will look at two approaches to counterfactuals, and at how counterfactuals can be empirically tested, or confirmed, if at all, on these accounts in Section 2. I will then digress into the philosophy of probability in Section 3. The reason for this digression is that I want to use the way observable absolute and relative frequencies, two empirical notions, are used to empirically test, or confirm, hypotheses about objective chances, a metaphysical notion, as a role-model. Specifically, I want to use this probabilistic account of the testing of chance hypotheses as a role-model for the account of the testing of counterfactuals, another metaphysical notion, that I will present in Sections 4 to 8. I will conclude by comparing my proposal to one non-probabilistic and one probabilistic alternative in Section 9.
This paper starts by indicating the analysis of Hempel's conditions of adequacy for any relation of confirmation (Hempel, 1945) as presented in Huber (submitted). There I argue contra Carnap (1962, Section 87) that Hempel felt the need for two concepts of confirmation: one aiming at plausible theories and another aiming at informative theories. However, he also realized that these two concepts are conflicting, and he gave up the concept of confirmation aiming at informative theories. The main part of the paper consists in working out the claim that one can have Hempel's cake and eat it too - in the sense that there is a logic of theory assessment that takes into account both of the two conflicting aspects of plausibility and informativeness. According to the semantics of this logic, α is an acceptable theory for evidence β if and only if α is both sufficiently plausible given β and sufficiently informative about β. This is spelt out in terms of ranking functions (Spohn, 1988) and shown to represent the syntactically specified notion of an assessment relation. The paper then compares these acceptability relations to explanatory and confirmatory consequence relations (Flach, 2000) as well as to nonmonotonic consequence relations (Kraus et al., 1990). It concludes by relating the plausibility-informativeness approach to Carnap's positive relevance account, thereby shedding new light on Carnap's analysis as well as solving another problem of confirmation theory.
This note is a sequel to Huber. It is shown that obeying a normative principle relating counterfactual conditionals and conditional beliefs, viz. the royal rule, is a necessary and sufficient means to attaining a cognitive end that relates true beliefs in purely factual, non-modal propositions and true beliefs in purely modal propositions. Along the way I will sketch my idealism about alethic or metaphysical modality.
In this brief note I show how to model conceptual change, logical learning, and revision of one's beliefs in response to conditional information such as indicative conditionals that do not express propositions.
The thesis of this paper is that we can justify induction deductively relative to one end, and deduction inductively relative to a different end. I will begin by presenting a contemporary variant of Hume’s argument for the thesis that we cannot justify the principle of induction. Then I will criticize the responses the resulting problem of induction has received by Carnap and Goodman, as well as praise Reichenbach’s approach. Some of these authors compare induction to deduction. Haack compares deduction to induction, and I will critically discuss her argument for the thesis that we cannot justify the principles of deduction next. In concluding I will defend the thesis that we can justify induction deductively relative to one end, and deduction inductively relative to a different end, and that we can do so in a non-circular way. Along the way I will show how we can understand deductive and inductive logic as normative theories, and I will briefly sketch an argument to the effect that there are only hypothetical, but no categorical imperatives.
Weisberg introduces a phenomenon he terms perceptual undermining. He argues that it poses a problem for Jeffrey conditionalization, and Bayesian epistemology in general. This is Weisberg’s paradox. Weisberg argues that perceptual undermining also poses a problem for ranking theory and for Dempster-Shafer theory. In this note I argue that perceptual undermining does not pose a problem for any of these theories: for true conditionalizers Weisberg’s paradox is a false alarm.
Kroedel has proposed a new solution, the permissibility solution, to the lottery paradox. The lottery paradox results from the Lockean thesis according to which one ought to believe a proposition just in case one’s degree of belief in it is sufficiently high. The permissibility solution replaces the Lockean thesis by the permissibility thesis according to which one is permitted to believe a proposition if one’s degree of belief in it is sufficiently high. This note shows that the epistemology of belief that results from the permissibility thesis and the epistemology of degrees of belief is empty in the sense that one need not believe anything, even if one’s degrees of belief are maximally bold. Since this result can also be achieved by simply dropping the Lockean thesis, or by replacing it with principles that are logically stronger than the permissibility thesis, the question arises what the permissibility solution is a solution of.
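For context, the lottery paradox that the permissibility solution targets can be stated numerically (the figures below are mine, chosen only for illustration): under the Lockean thesis, each individual "ticket i loses" proposition must be believed, yet believing all of them together is inconsistent with the certainty that some ticket wins.

```python
# The lottery paradox in numbers: in a fair n-ticket lottery, each
# proposition "ticket i loses" is highly probable, so the Lockean thesis
# mandates believing each one; yet the conjunction "no ticket wins" has
# probability 0, so the resulting belief set is inconsistent.
n = 100
threshold = 0.95  # a sample Lockean threshold

pr_ticket_i_loses = (n - 1) / n                # 0.99 for every ticket i
believe_each = pr_ticket_i_loses >= threshold  # each conjunct crosses the bar

pr_no_ticket_wins = 0.0                        # some ticket is sure to win
believe_conjunction = pr_no_ticket_wins >= threshold
```

Raising the threshold does not help: for any threshold below 1, a large enough lottery reproduces the conflict, which is why the note's question about what the permissibility thesis actually solves has bite.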
The problem addressed in this paper is “the main epistemic problem concerning science”, viz. “the explication of how we compare and evaluate theories [...] in the light of the available evidence” (van Fraassen 1983, 27).
Ranking functions have been introduced under the name of ordinal conditional functions in Spohn (1988; 1990). They are representations of epistemic states and their dynamics. The most comprehensive and up-to-date presentation is Spohn (manuscript).
Crupi et al. propose a generalization of Bayesian confirmation theory that they claim to adequately deal with confirmation by uncertain evidence. Consider a series of points of time t0, . . . , ti, . . . , tn such that the agent’s subjective probability for an atomic proposition E changes from Pr0 at t0 to . . . to Pri at ti to . . . to Prn at tn. It is understood that the agent’s subjective probabilities change for E and no logically stronger proposition, and that the agent updates her subjective probabilities by Jeffrey conditionalization. For this specific scenario the authors propose to take the difference between Pr0 and Pri as the degree to which E confirms H for the agent at time ti, C0,i. This proposal is claimed to be adequate, because …
Logic is the study of the quality of arguments. An argument consists of a set of premises and a conclusion. The quality of an argument depends on at least two factors: the truth of the premises, and the strength with which the premises confirm the conclusion. The truth of the premises is a contingent factor that depends on the state of the world. The strength with which the premises confirm the conclusion is supposed to be independent of the state of the world. Logic is only concerned with this second, logical factor of the quality of arguments.
This paper discusses an almost sixty year old problem in the philosophy of science -- that of a logic of confirmation. We present a new analysis of Carl G. Hempel's conditions of adequacy (Hempel 1945), differing from the one Carnap gave in §87 of his Logical Foundations of Probability (1962). Hempel, it is argued, felt the need for two concepts of confirmation: one aiming at true theories and another aiming at informative theories. However, he also realized that these two concepts are conflicting, and he gave up the concept of confirmation aiming at informative theories. We then show that one can have Hempel's cake and eat it, too: There is a (rank-theoretic and genuinely nonmonotonic) logic of confirmation -- or rather, theory assessment -- that takes into account both of these two conflicting aspects. According to this logic, a statement H is an acceptable theory for the data E if and only if H is both sufficiently plausible given E and sufficiently informative about E. Finally, the logic sheds new light on Carnap's analysis (and solves another problem of confirmation theory).
The paper presents a new analysis of Hempel’s conditions of adequacy, differing from the one in Carnap. Hempel, so it is argued, felt the need for two concepts of confirmation: one aiming at true theories, and another aiming at informative theories. However, so the analysis continues, he also realized that these two concepts were conflicting, and so he gave up the concept of confirmation aiming at informative theories. It is then shown that one can have the cake and eat it: There is a logic of confirmation that accounts for both of these two conflicting aspects.
In his (1996) Peter Milne shows that r(H, E, B) = log [Pr(H | E ∩ B) / Pr(H | B)] is the one true measure of confirmation in the sense that r is the only function satisfying the following five constraints on measures of confirmation C.
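As an illustration with invented numbers, the log-ratio measure r above and the log-likelihood ratio measure l (the measure favoured in the article summarized earlier) can both be computed from a toy probability assignment, taking the background B to be the whole space:

```python
# Computing r(H, E, B) = log[Pr(H | E ∩ B) / Pr(H | B)] and
# l(H, E, B) = log[Pr(E | H ∩ B) / Pr(E | ¬H ∩ B)] from hypothetical
# joint probabilities over the four H/E cells, with B the whole space.
import math

pr = {("H", "E"): 0.30, ("H", "~E"): 0.10,
      ("~H", "E"): 0.15, ("~H", "~E"): 0.45}

pr_H = pr[("H", "E")] + pr[("H", "~E")]     # 0.40
pr_E = pr[("H", "E")] + pr[("~H", "E")]     # 0.45
pr_H_given_E = pr[("H", "E")] / pr_E
pr_E_given_H = pr[("H", "E")] / pr_H
pr_E_given_notH = pr[("~H", "E")] / (1 - pr_H)

r = math.log(pr_H_given_E / pr_H)               # log-ratio measure
l = math.log(pr_E_given_H / pr_E_given_notH)    # log-likelihood ratio measure
```

On this toy assignment both measures come out positive, so E confirms H on either; the two measures nevertheless order hypotheses differently in general, which is what makes the choice between them substantive.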