Scientific reasoning is—and ought to be—conducted in accordance with the axioms of probability. This Bayesian view—so called because of the central role it accords to a theorem first proved by Thomas Bayes in the late eighteenth ...
In the mid-eighteenth century David Hume argued that successful prediction tells us nothing about the truth of the predicting theory. But physical theory routinely predicts the values of observable magnitudes within very small ranges of error. The apparently minute chance of this sort of predictive success without a true theory suggests that Hume's argument must be flawed. However, Colin Howson argues that there is no flaw and examines the implications of this disturbing conclusion; he also offers a solution to one of the central problems of Western philosophy, the problem of induction.
In this paper I argue that de Finetti provided compelling reasons for rejecting countable additivity. It is ironical therefore that the main argument advanced by Bayesians against following his recommendation is based on the consistency criterion, coherence, he himself developed. I will show that this argument is mistaken. Nevertheless, there remain some counter-intuitive consequences of rejecting countable additivity, and one in particular has all the appearances of a full-blown paradox. I will end by arguing that in fact it is no paradox, and that what it shows is that conditionalisation, often claimed to be integral to the Bayesian canon, has to be rejected as a general rule in a finitely additive environment.
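For orientation, the principle at stake and de Finetti's stock illustration can be stated briefly (the notation here is generic, not the paper's own). Countable additivity requires that for pairwise disjoint events \(A_1, A_2, \ldots\),

\[
P\Big(\bigcup_{i=1}^{\infty} A_i\Big) \;=\; \sum_{i=1}^{\infty} P(A_i).
\]

A uniform distribution over a countably infinite fair lottery then becomes impossible: if every ticket \(T_i\) receives the same probability \(p\), then \(p = 0\) makes the sum 0 while \(p > 0\) makes it diverge, so in neither case does \(\sum_i P(T_i)\) equal 1, the probability that some ticket wins. With merely finite additivity, which de Finetti endorsed, assigning every ticket probability 0 involves no inconsistency.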
Timothy Williamson has claimed to prove that regularity must fail even in a nonstandard setting, with a counterexample based on tossing a fair coin infinitely many times. I argue that Williamson’s argument is mistaken, and that a corrected version shows that it is not regularity which fails in the nonstandard setting but a fundamental property of shifts in Bernoulli processes.
Many people believe that there is a Dutch Book argument establishing that the principle of countable additivity is a condition of coherence. De Finetti himself did not, but for reasons that are at first sight perplexing. I show that he rejected countable additivity, and hence the Dutch Book argument for it, because countable additivity conflicted with intuitive principles about the scope of authentic consistency constraints. These he often claimed were logical in nature, but he never attempted to relate this idea to deductive logic and its own concept of consistency. This I do, showing that at one level the definitions of deductive and probabilistic consistency are identical, differing only in the nature of the constraints imposed. In the probabilistic case I believe that R.T. Cox's scale-free axioms for subjective probability are the most suitable candidates. Contents: 1 Introduction; 2 Coherence and Consistency; 3 The Infinite Fair Lottery; 4 The Puzzle Resolved—But Replaced by Another; 5 Countable Additivity, Conglomerability and Dutch Books; 6 The Probability Axioms and Cox's Theorem; 7 Truth and Probability; 8 Conclusion: Logical Omniscience.
In a well-known paper, Timothy Williamson claimed to prove with a coin-flipping example that infinitesimal-valued probabilities cannot save the principle of Regularity, because on pain of inconsistency the event ‘all tosses land heads’ must be assigned probability 0, whether the probability function is hyperreal-valued or not. A premise of Williamson’s argument is that two infinitary events in that example must be assigned the same probability because they are isomorphic. It was argued by Howson that the claim of isomorphism fails, but a more radical objection to Williamson’s argument is that it had been, in effect, refuted long before it was published.
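A schematic reconstruction of the coin-flipping argument may help (the symbols are mine; the papers should be consulted for the exact formulations). Let \(H_{\geq 1}\) be the event that every toss of the fair coin from the first onward lands heads, and \(H_{\geq 2}\) the corresponding event for tosses from the second onward. Independence and fairness give

\[
P(H_{\geq 1}) \;=\; P(\text{first toss lands heads}) \cdot P(H_{\geq 2}) \;=\; \tfrac{1}{2}\, P(H_{\geq 2}),
\]

so if, as Williamson's isomorphism premise requires, \(P(H_{\geq 1}) = P(H_{\geq 2})\), it follows that \(P(H_{\geq 1}) = 0\), hyperreal values notwithstanding. The disputes reported above turn on whether the two events may legitimately be assigned the same probability.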
The No-Miracles Argument has a natural representation as a probabilistic argument. As such, it commits the base-rate fallacy. In this article, I argue that a recent attempt to show that there is still a serviceable version that avoids the base-rate fallacy fails, and with it all realistic hope of resuscitating the argument.
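In outline, the probabilistic representation alluded to is the following (with \(T\) for 'the theory is approximately true' and \(S\) for 'the theory is predictively successful'; the notation is mine, not the article's):

\[
P(T \mid S) \;=\; \frac{P(S \mid T)\,P(T)}{P(S \mid T)\,P(T) + P(S \mid \neg T)\,P(\neg T)}.
\]

However small \(P(S \mid \neg T)\) may be, \(P(T \mid S)\) can still be small if the prior, or 'base rate', \(P(T)\) is small enough; inferring a high \(P(T \mid S)\) from a low \(P(S \mid \neg T)\) alone is the base-rate fallacy at issue.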
My title is intended to recall Terence Fine's excellent survey, Theories of Probability [1973]. I shall consider some developments that have occurred in the intervening years, and try to place some of the theories he discussed in what is now a slightly longer perspective. Completeness is not something one can reasonably hope to achieve in a journal article, and any selection is bound to reflect a view of what is salient. In a subject as prone to dispute as this, there will inevitably be many who will disagree with any author's views, and I take the opportunity to apologize in advance to all such people for what they will see as the narrowness and distortion of mine.
In this paper, I present a simple and straightforward logic of induction: a consequence relation characterized by a proof theory and a semantics. This system will be called LI. The premises will be restricted to, on the one hand, a set of empirical data and, on the other hand, a set of background generalizations. Among the consequences will be generalizations as well as singular statements, some of which may serve as predictions and explanations.
This paper offers an answer to Glymour's ‘old evidence’ problem for Bayesian confirmation theory, and assesses some of the objections, in particular those recently aired by Chihara, that have been brought against that answer. The paper argues that these objections are easily dissolved, and goes on to show how the answer it proposes yields an intuitively satisfactory analysis of a problem recently discussed by Maher. Garber's, Niiniluoto's and others’ quite different answer to Glymour's problem is considered and rejected, and the paper concludes with some brief reflections on the prediction/accommodation issue.
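For readers new to the issue, the old-evidence problem admits a one-line statement (this is the standard textbook formulation, not a quotation from the paper): if evidence \(e\) is already known when hypothesis \(h\) is assessed, the agent's probability for \(e\) is 1, and incremental confirmation apparently vanishes, since

\[
P(e) = 1 \;\;\Rightarrow\;\; P(h \mid e) \;=\; \frac{P(e \mid h)\,P(h)}{P(e)} \;=\; P(h),
\]

so that \(P(h \mid e) - P(h) = 0\) even for evidence, such as the anomalous perihelion advance of Mercury in relation to general relativity, that intuitively confirms the hypothesis strongly.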
Maher (1988, 1990) has recently argued that the way a hypothesis is generated can affect its confirmation by the available evidence, and that Bayesian confirmation theory can explain this. In particular, he argues that evidence known at the time a theory was proposed does not confirm the theory as much as it would had that evidence been discovered after the theory was proposed. We examine Maher's arguments for this "predictivist" position and conclude that they do not, in fact, support his view. We also cast doubt on the assumptions of Maher's alleged Bayesian proofs.
This paper discusses the Bayesian updating rules of ordinary and Jeffrey conditionalisation. Their justification has been a topic of interest for the last quarter century, and several strategies have been proposed. None has been accepted as conclusive, and it is argued here that this is for a good reason; for by extending the domain of the probability function to include propositions describing the agent's present and future degrees of belief one can systematically generate a class of counterexamples to the rules. Dynamic Dutch Book and other arguments for them are examined critically. A concluding discussion attempts to put these results in perspective within the Bayesian approach.
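For reference, the two rules under discussion are the standard ones, stated here in generic notation rather than the paper's own. On learning \(E\) with certainty, ordinary conditionalisation prescribes

\[
P_{\text{new}}(A) \;=\; P_{\text{old}}(A \mid E),
\]

while Jeffrey conditionalisation covers the case in which experience merely redistributes probability over a partition \(\{E_i\}\) without making any cell certain:

\[
P_{\text{new}}(A) \;=\; \sum_i P_{\text{old}}(A \mid E_i)\, P_{\text{new}}(E_i).
\]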
Contents: Preface; 1. The trouble with God; 2. God unlimited; 3. How to reason if you must; 4. The well-tempered universe; 5. What does it all mean?; 6. Moral equilibrium; 7. What is life without thee?; 8. It necessarily ain't so.
This paper examines the famous doctrine that independent prediction garners more support than accommodation. The standard arguments for the doctrine are found to be invalid, and a more realistic position is put forward: whether or not evidence supports a hypothesis depends on the prior probability of the hypothesis, and is independent of whether the hypothesis was proposed before or after the evidence. This position is implicit in the subjective Bayesian theory of confirmation, and the paper ends with a brief account of this theory and answers to the principal objections to it.
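The Bayesian point can be read off Bayes's theorem itself (a generic statement, not the paper's own notation):

\[
P(h \mid e) \;=\; \frac{P(e \mid h)\,P(h)}{P(e)}.
\]

The right-hand side involves only the prior \(P(h)\) and the likelihoods; nothing in it registers whether \(h\) was formulated before or after \(e\) was obtained, which is the sense in which support is here claimed to be independent of the order of proposal and discovery.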
Pruss uses an example of Lester Dubins to argue against the claim that appealing to hyperreal-valued probabilities saves probabilistic regularity from the objection that in continuum outcome-spaces and with standard probability functions all save countably many possibilities must be assigned probability 0. Dubins’s example seems to show that merely finitely additive standard probability functions allow reasoning to a foregone conclusion, and Pruss argues that hyperreal-valued probability functions are vulnerable to the same charge. However, Pruss’s argument relies on the rule of conditionalisation, but I show that in examples like Dubins’s involving nonconglomerable probabilities, conditionalisation is self-defeating.
The Bayesian theory is outlined and its status as a logic defended. In this it is contrasted with the development and extension of Neyman-Pearson methodology by Mayo in her recently published book (1996). It is shown by means of a simple counterexample that the rule of inference advocated by Mayo is actually unsound. An explanation of why error-probabilities lead us to believe that they supply a sound rule is offered, followed by a discussion of two apparently powerful objections to the Bayesian theory, one concerning old evidence and the other optional stopping.
In a recent article in this journal, Daniel Steel charges me with committing a fallacy in my discussion of inductive rules. I show that the charge is false, and that Steel's own attempt to validate enumerative induction in terms of formal learning theory is itself fallacious. I go on to argue that, contra Steel, formal learning theory is in principle incapable of answering Hume's famous claim that any attempt to justify induction will beg the question.
This paper argues that Ramsey's view of the calculus of subjective probabilities as, in effect, logical axioms is the correct view, with powerful heuristic value. This heuristic value is seen particularly in the analysis of the role of conditionalization in the Bayesian theory, where a semantic criterion of synchronic coherence is employed as the test of soundness, which the traditional formulation of conditionalization fails. On the other hand, there is a generally sound rule which supports conditionalization in appropriate contexts, though these contexts are not universal. This sound Bayesian rule is seen to be analogous in certain respects to the deductive rule of modus ponens.
Contents: Lakatos, I., History of science and its rational reconstructions; Clark, P., Atomism vs. thermodynamics; Worrall, J., Thomas Young and the "refutation" of Newtonian optics; Musgrave, A., Why did oxygen supplant phlogiston?; Zahar, E., Why did Einstein's programme supersede Lorentz's?; Frické, M., The rejection of Avogadro's hypotheses; Feyerabend, P., On the critique of scientific reason.
On the basis of an analysis of a single paper on plate tectonics, a paper whose actual content is nowhere in evidence, Frederick Suppe concludes that no standard model of confirmation—hypothetico-deductive, Bayesian-inductive, or inference to the best explanation—can account for the structure of a scientific paper that reports an experimental result. He further argues on the basis of a survey of scientific papers, a survey whose data and results are also absent, that papers which have a rather stringent length limit, such as the one on plate tectonics, are typical of science. Thus, he concludes that no standard confirmation scheme is capable of dealing with scientific practice. Suppe also requires that an adequate model of philosophical testing should be able to account for everything in such scientific papers, in which space is at a premium.
Hume’s Theorem. Colin Howson (2013). Studies in History and Philosophy of Science Part A 44(3): 339–346.
A common criticism of Hume’s famous anti-induction argument is that it is vitiated because it fails to foreclose the possibility of an authentically probabilistic justification of induction. I argue that this claim is false, and that on the contrary, the probability calculus itself, in the form of an elementary consequence that I call Hume’s Theorem, fully endorses Hume’s argument. Various objections, including the often-made claim that Hume is defeated by de Finetti’s exchangeability results, are considered and rejected.
I consider Dutch Book arguments for three principles of classical Bayesianism: (i) that agents' belief-probabilities are consistent only if they obey the probability axioms; (ii) that beliefs are updated by Bayesian conditionalisation; and (iii) that the so-called Principal Principle connects statistical and belief probabilities. I argue that while there is a sound Dutch Book argument for (i), the standard ones for (ii) based on the Lewis-Teller strategy are unsound, for reasons pointed out by Christensen. I consider a type of Dutch Book argument for (iii), where the statistical probability is a von Mises one.
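A minimal numerical illustration of the sound argument for (i), my own example rather than the paper's: suppose an agent's betting quotients on a proposition \(A\) and on its negation are both 0.6, in violation of additivity. A bookie who sells the agent a bet paying 1 unit if \(A\), priced at 0.6, and a bet paying 1 unit if \(\neg A\), also priced at 0.6, collects

\[
0.6 + 0.6 = 1.2 \text{ units}
\]

and pays out exactly 1 unit however \(A\) turns out, so the agent loses 0.2 units come what may. Betting quotients immune to such sure losses must satisfy the (finitely additive) probability axioms.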
Kyburg’s opposition to the subjective Bayesian theory, and in particular to its advocates’ indiscriminate and often questionable use of Dutch Book arguments, is documented and much of it strongly endorsed. However, it is argued that an alternative version, proposed both by de Finetti, at various times during his long career, and by Ramsey, is less vulnerable to Kyburg’s misgivings. This is a logical interpretation of the formalism, one which, it is argued, is both more natural and also avoids other widely made objections to Bayesianism.
Hume's essay ‘Of Miracles’ has been a focus of controversy ever since its publication. The challenge to Christian orthodoxy was only too evident, but the balance-of-probabilities criterion advanced by Hume for determining when testimony justifies belief in miracles has also been a subject of contention among philosophers. The temptation for those familiar with Bayesian methodology to show that Hume's criterion determines a corresponding balance of posterior probabilities in favour of miracles is understandable, but I will argue that their attempts fail. However, I show that his criterion generates a valid form of the so-called No-Miracles Argument appealed to by modern realist philosophers, whose own presentation of it, despite their possession of the probabilistic machinery Hume himself lacked, is invalid.
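For orientation, the balance-of-posterior-probabilities reading mentioned above, which the article argues cannot be extracted from Hume's criterion, amounts to the following requirement (standard notation, with \(M\) the occurrence of the miracle and \(t\) the testimony):

\[
P(t \mid \neg M)\,P(\neg M) \;<\; P(t \mid M)\,P(M),
\]

which by Bayes's theorem is equivalent to \(P(M \mid t) > 1/2\), a posterior balance in favour of the miracle.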
This article argues that not only are there serious internal difficulties with both Garber’s and later ‘Garber-style’ solutions of the old-evidence problem, including a recent proposal of Hartmann and Fitelson, but Garber-style approaches in general cannot solve the problem. It also follows the earlier lead of Rosenkrantz in pointing out that, despite the appearance to the contrary which inspired Garber’s nonclassical development of the Bayesian theory, there is a straightforward, classically Bayesian, solution.
An argument has recently been proposed by Watkins, whose objective is to show the impossibility of a statistical explanation of single events. The present paper is an attempt to show that Watkins's argument is unsuccessful, and goes on to argue for an account of statistical explanation which has much in common with Hempel's classic treatment.
Dunn and Hellman's objection to Popper and Miller's alleged disproof of inductive probability is considered and rejected. Dunn and Hellman base their objection on a decomposition of the incremental support P(h|e) − P(h) of h by e dual to that of Popper and Miller, and argue, dually, to a conclusion contrary to Popper and Miller's claim that all support is deductive in character. I contend that Dunn and Hellman's dualizing argument fails because the elements of their decomposition are not supports of parts of h. I conclude by reinforcing a different line of criticism of Popper and Miller due to Redhead.
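For context, the decomposition at the heart of the original Popper–Miller argument runs roughly as follows (a standard reconstruction, not a quotation). Writing \(s(x, e) = P(x \mid e) - P(x)\), the hypothesis \(h\) is factored as the conjunction of \(h \vee e\) and \(h \vee \neg e\), and the support of \(h\) by \(e\) splits accordingly:

\[
s(h, e) \;=\; s(h \vee e,\, e) \;+\; s(h \vee \neg e,\, e), \qquad s(h \vee \neg e,\, e) \;=\; -(1 - P(e))\,(1 - P(h \mid e)) \;\leq\; 0.
\]

Since Popper and Miller identify \(h \vee \neg e\) with the part of \(h\) that goes beyond \(e\), they conclude that whatever positive support \(e\) gives \(h\) is purely deductive; Dunn and Hellman's decomposition, criticised above, dualises this construction.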
Logic With Trees is a new and original introduction to modern formal logic. It contains discussions of philosophical issues such as truth, conditionals and modal logic, presenting the formal material with clarity, and preferring informal explanations and arguments to intimidatingly rigorous development. Worked examples and exercises guide beginners through the book, with answers to selected exercises enabling readers to check their progress. Logic With Trees equips students with: a complete and clear account of the truth-tree system for first order logic; an appreciation of the importance of logic and its relevance to many different disciplines; the skills to grasp sophisticated formal reasoning techniques necessary to explore complex metalogic; and the ability to contest claims that "ordinary" reasoning is well represented by formal first order logic.
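For readers who have not met the method, here is the smallest possible illustration of a closing truth tree, my example rather than one from the book. To test the argument from \(p \rightarrow q\) and \(p\) to \(q\), list the premises together with the negation of the conclusion and apply the rule for the conditional:

1. \(p \rightarrow q\) (premise)
2. \(p\) (premise)
3. \(\neg q\) (negation of the conclusion)
4. from 1, the tree branches into \(\neg p\) and \(q\): the \(\neg p\) branch closes against 2, and the \(q\) branch closes against 3.

Every branch closes, so no valuation makes the premises true and the conclusion false: the argument is valid.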
In a recent survey of the literature on the relation between information and confirmation, Crupi and Tentori claim that the former is a fruitful source of insight into the latter, with two well-known measures of confirmation being definable purely information-theoretically. I argue that of the two explicata of semantic information which are considered by the authors, the one generating a popular Bayesian confirmation measure is a defective measure of information, while the other, although an admissible measure of information, generates a defective measure of confirmation. Some results are proved about the representation of measures on consequence-classes.
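Two classical explicata of semantic information, presumably the pair at issue (the article itself should be consulted to confirm which is which), are \(\mathrm{cont}(h) = 1 - P(h)\) and \(\mathrm{inf}(h) = -\log P(h)\). The second generates the familiar log-ratio confirmation measure as a reduction in information:

\[
r(h, e) \;=\; \log \frac{P(h \mid e)}{P(h)} \;=\; \mathrm{inf}(h) - \mathrm{inf}(h \mid e), \qquad \text{where } \mathrm{inf}(h \mid e) = -\log P(h \mid e).
\]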
A recent article by Jeff Kochan contains a discussion of modus ponens that, among other things, alleges that the paradox of the heap is a counterexample to it. In this note I show that it is the conditional major premise of a modus ponens inference, rather than the rule itself, that is impugned. This premise is the contrapositive of the inductive step in the principle of mathematical induction, confirming the widely accepted view that it is the vagueness of natural language predicates, not modus ponens, that is challenged by Sorites.
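To see where the blame falls, consider a bare-bones sorites in modus ponens form (a standard schematic example, not the note's own). Write \(H(n)\) for '\(n\) grains make a heap'. From \(H(10^{6})\) and the instances of the conditional premise \(H(n) \rightarrow H(n-1)\), repeated applications of modus ponens,

\[
\frac{H(k) \qquad H(k) \rightarrow H(k-1)}{H(k-1)} \qquad (k = 10^{6}, 10^{6}-1, \ldots, 1),
\]

yield the absurd conclusion \(H(0)\). Each step is an unimpeachable use of modus ponens; what is not credible is the universally quantified conditional itself, which is exactly where the vagueness of 'heap' enters.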