Bayes's theorem is a tool for assessing how probable evidence makes some hypothesis. The papers in this volume consider the worth and applicability of the theorem. Richard Swinburne sets out the philosophical issues. Elliott Sober argues that there are other criteria for assessing hypotheses. Colin Howson, Philip Dawid and John Earman consider how the theorem can be used in statistical science, in weighing evidence in criminal trials, and in assessing evidence for the occurrence of miracles. David Miller argues for the worth of the probability calculus as a tool for measuring propensities in nature rather than the strength of evidence. The volume ends with the original paper containing the theorem, presented to the Royal Society in 1763.
In introducing the papers of the symposiasts, I distinguish between statistical, physical, and evidential probability. The axioms of the probability calculus, and so Bayes’s theorem, can be expressed in terms of any of these kinds of probability. Sober questions the general utility of the theorem. Howson, Dawid, and Earman agree that it applies to the fields they discuss--statistics, assessment of guilt by juries, and miracles. Dawid and Earman consider that prior probabilities need to be supplied by empirical evidence, while Howson considers that there are no objective constraints on prior probabilities. I argue that simplicity is a crucial determinant of prior probability. Miller discusses how Bayes’s theorem can be interpreted so as to apply to physical probability.
This is a high quality, concise collection of articles on the foundations of probability and statistics. Its editor, Richard Swinburne, has collected five papers by contemporary leaders in the field, written a pretty thorough and even-handed introductory essay, and placed a very clean and accessible version of Reverend Thomas Bayes’s famous essay (“An Essay Towards Solving a Problem in the Doctrine of Chances”) at the end, as an Appendix (with a brief historical introduction by the noted statistician G. A. Barnard). I will briefly discuss each of the five papers in the volume, with an emphasis on certain issues arising from the use of probability as a tool for thinking about evidence.
This is an introduction to a collected volume. It distinguishes between evidential, statistical, and physical probability, and between objective and subjective understandings of evidential probability, in the use of Bayes’s theorem. If Bayes’s theorem is to be used to assess an objective evidential probability, a priori criteria--mainly the criterion of simplicity--are required to determine prior probability. The five main contributors to the volume discuss the use of Bayes’s theorem to assess the evidential probability of scientific theories, statistical hypotheses, criminal guilt, and miracles; and also its value for assessing physical probability.
In this paper I identify a fallacy. The fallacy is worth noting for practical and theoretical reasons. First, the rampant occurrences of this fallacy--especially at moments calling for careful thought--indicate that it is more pernicious to clear thinking than many of those found in standard logic texts. Second, the fallacy stands apart from most others in that it contains multiple kinds of logical error (i.e., fallacious and non-fallacious logical errors) that are themselves committed in abnormal ways, and thus it presents a two-tiered challenge to oversimplified accounts of how an argument can go bad.
Carter and Leslie (1996) have argued, using Bayes's theorem, that our being alive now supports the hypothesis of an early 'Doomsday'. Unlike some critics (Eckhardt 1997), we accept their argument in part: given that we exist, our existence now indeed favors 'Doom sooner' over 'Doom later'. The very fact of our existence, however, favors 'Doom later'. In simple cases, a hypothetical approach to the problem of 'old evidence' shows that these two effects cancel out: our existence now yields no information about the coming of Doom. More complex cases suggest a move from countably additive to non-standard probability measures.
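The cancellation described above can be made concrete in a small numerical sketch. The numbers and the self-indication-style existence likelihood are illustrative assumptions, not the paper's own model: given existence, birth rank is taken uniform on 1..N, while the probability of existing at all is taken proportional to N.

```python
# Illustrative sketch (not the paper's own model) of the cancellation:
# given existence, birth rank r is uniform on 1..N (favoring small N),
# while the probability of existing at all is assumed proportional to N
# (favoring large N).
def posterior_odds(prior_soon, prior_later, n_soon, n_later, rank):
    assert rank <= n_soon <= n_later  # the rank must be possible on both hypotheses
    like_rank_soon, like_rank_later = 1 / n_soon, 1 / n_later    # Doomsday shift
    like_exist_soon, like_exist_later = n_soon, n_later          # existence shift
    num = prior_soon * like_exist_soon * like_rank_soon
    den = prior_later * like_exist_later * like_rank_later
    return num / den

# Both shifts together leave the prior odds unchanged.
odds = posterior_odds(0.5, 0.5, n_soon=200, n_later=2000, rank=100)
```

With the rank term alone the odds would favor 'Doom sooner' by a factor of n_later/n_soon; the existence term supplies exactly the inverse factor, so the two effects cancel.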
Is the restricted, consistent, version of the T-scheme sufficient for an ‘implicit definition’ of truth? In a sense, the answer is yes (Haack 1978, Quine 1953). Section 4 of Ketland 1999 mentions this but gives a result saying that the T-scheme does not implicitly define truth in the stronger sense relevant for Beth’s Definability Theorem. This insinuates that the T-scheme fares worse than the compositional truth theory as an implicit definition. However, the insinuation is mistaken. For, as Bays rightly points out, the result given extends to the compositional truth theory also. So, as regards implicit definability, both kinds of truth theory are equivalent. Some further discussion of this topic is mentioned (Gupta 2008, Ketland 2003, McGee 1991), all in agreement with Bays’s analysis.
Bell's theorem is expounded as an analysis in Bayesian probabilistic inference. Assume that the result of a spin measurement on a spin-1/2 particle is governed by a variable internal to the particle (local, “hidden”), and examine pairs of particles having zero combined angular momentum so that their internal variables are correlated: knowing something about the internal variable of one tells us something about that of the other. By measuring the spin of one particle, we infer something about its internal variable; through the correlation, about the internal variable of the second particle, which may be arbitrarily distant and is by hypothesis unchanged by this measurement (locality); and make (probabilistic) prediction of spin observations on the second particle. Each link in this chain has a counterpart in the Bayesian analysis of the situation. Irrespective of the details of the internal variable description, such prediction is violated by measurements on many particle pairs, so that locality—effectively the only physics invoked—fails. The time ordering of the two measurements is not Lorentz-invariant, implying acausality. Quantum mechanics is irrelevant to this reasoning, although its correct predictions of the statistics of the results imply it has a nonlocal—acausal interpretation; one such, the “transactional” interpretation, is presented to demonstrable advantage, and some misconceptions about quantum theory are pursued. The “unobservability” loophole in photonic Bell experiments is proven to be closed. It is shown that this mechanism cannot be used for signalling; signalling would become possible only if the hidden variables, which we insist must underlie the statistical character of the observations (the alternative is to give up), are uncovered in deviations from quantum predictions. Their reticence is understood as a consequence of their nonlocality: it is not easy to isolate and measure something nonlocal.
Once the hidden variables are found, all the problems of quantum field theory and of quantum gravity might melt away.
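The locality reasoning in the abstract above can be checked mechanically. The sketch below is a standard illustration (not this paper's own derivation): it enumerates every deterministic local assignment of outcomes and confirms the CHSH form of the Bell inequality, |S| ≤ 2, whereas quantum mechanics predicts values up to 2√2.

```python
import itertools
import math

# Each value of a local hidden variable fixes outcomes (+1/-1) for both
# settings on each wing: (a, a2) on one side, (b, b2) on the other.
# CHSH combination S = E(a,b) + E(a,b') + E(a',b) - E(a',b'):
def chsh(a, a2, b, b2):
    return a * b + a * b2 + a2 * b - a2 * b2

# Every mixture of deterministic assignments obeys |S| <= 2, since a
# mixture's S is an average of the per-assignment values computed here.
local_bound = max(abs(chsh(*s)) for s in itertools.product([-1, 1], repeat=4))

quantum_max = 2 * math.sqrt(2)  # Tsirelson's bound, attained by the singlet state
```

Since the quantum prediction exceeds the local bound, no local hidden-variable assignment can reproduce the observed statistics, which is the failure of locality the abstract describes.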
In 1999, Jeffrey Ketland published a paper which posed a series of technical problems for deflationary theories of truth. Ketland argued that deflationism is incompatible with standard mathematical formalizations of truth, and he claimed that alternate deflationary formalizations are unable to explain some central uses of the truth predicate in mathematics. He also used Beth’s definability theorem to argue that, contrary to deflationists’ claims, the T-schema cannot provide an ‘implicit definition’ of truth. In this article, I want to challenge this final argument. Whatever other faults deflationism may have, the T-schema does provide an implicit definition of the truth predicate. Or so, at any rate, I shall argue.
Richard Swinburne: Introduction Elliott Sober: Bayesianism - its scope and limits Colin Howson: Bayesianism in Statistics A P Dawid: Bayes's Theorem and Weighing Evidence by Juries John Earman: Bayes, Hume, Price, and Miracles David Miller: Propensities May Satisfy Bayes's Theorem 'An Essay Towards Solving a Problem in the Doctrine of Chances' by Thomas Bayes, presented to the Royal Society by Richard Price. Preceded by a historical introduction by G A Barnard.
Bayes' Theorem is a simple mathematical formula used for calculating conditional probabilities. It figures prominently in subjectivist or Bayesian approaches to epistemology, statistics, and inductive logic. Subjectivists, who maintain that rational belief is governed by the laws of probability, lean heavily on conditional probabilities in their theories of evidence and their models of empirical learning. Bayes' Theorem is central to these enterprises both because it simplifies the calculation of conditional probabilities and because it clarifies significant features of the subjectivist position. Indeed, the Theorem's central insight — that a hypothesis is confirmed by any body of data that its truth renders probable — is the cornerstone of all subjectivist methodology.
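A minimal sketch of the formula as the entry describes it, with made-up numbers: a hypothesis whose truth renders the data probable is confirmed, i.e. its posterior exceeds its prior.

```python
# Bayes's theorem: P(H|E) = P(E|H) * P(H) / P(E),
# with P(E) expanded by the law of total probability over H and not-H.
def posterior(prior, likelihood, likelihood_alt):
    """Posterior probability of H given evidence E.

    prior          -- P(H)
    likelihood     -- P(E|H)
    likelihood_alt -- P(E|not-H)
    """
    evidence = likelihood * prior + likelihood_alt * (1 - prior)
    return likelihood * prior / evidence

# Illustrative (made-up) numbers: a hypothesis with prior 0.01 whose
# truth renders the data probable (0.9) versus improbable otherwise (0.05).
p = posterior(0.01, 0.9, 0.05)
```

Because P(E|H) exceeds P(E|not-H), the posterior comes out larger than the prior, which is the confirmation relation the entry calls the Theorem's central insight.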
A fundamental problem in science is how to make logical inferences from scientific data. Mere data does not suffice, since additional information is necessary to select a domain of models or hypotheses and thus determine the likelihood of each model or hypothesis. Thomas Bayes' Theorem relates the data and prior information to posterior probabilities associated with differing models or hypotheses and thus is useful in identifying the roles played by the known data and the assumed prior information when making inferences. Scientists, philosophers, and theologians accumulate knowledge when analyzing different aspects of reality and search for particular hypotheses or models to fit their respective subject matters. Of course, a main goal is then to integrate all kinds of knowledge into an all-encompassing worldview that would describe the whole of reality. A generous description of the whole of reality would span, in order of complexity, from the purely physical to the supernatural. These two extreme aspects of reality are bridged by a nonphysical realm, which would include elements of life, man, consciousness, rationality, mental and mathematical abstractions, etc. An urgent problem in the theory of knowledge is what science is and what it is not. Albert Einstein's notion of science in terms of sense perception is refined by defining operationally the data that makes up the subject matter of science. It is shown, for instance, that the theological considerations included in the prior information assumed by Isaac Newton are irrelevant in relating the data logically to the model or hypothesis. In addition, the concepts of naturalism, intelligent design, and evolutionary theory are critically analyzed. Finally, Eugene P. Wigner's suggestions concerning the nature of human consciousness, life, and the success of mathematics in the natural sciences are considered in the context of the creative power endowed in humans by God.
The central problem with Bayesian philosophy of science is that it cannot take account of the relevance of simplicity and unification to confirmation, induction, and scientific inference. The standard Bayesian folklore about factoring simplicity into the priors, and convergence theorems as a way of grounding their objectivity, are some of the myths that Earman's book does not address adequately. Review of John Earman: Bayes or Bust?, Cambridge, MA: MIT Press, 1992, £33.75 cloth.
Skolem's Paradox involves a seeming conflict between two theorems from classical logic. The Löwenheim-Skolem theorem says that if a first-order theory has infinite models, then it has models whose domains are only countable. Cantor's theorem says that some sets are uncountable. Skolem's Paradox arises when we notice that the basic principles of Cantorian set theory—i.e., the very principles used to prove Cantor's theorem on the existence of uncountable sets—can themselves be formulated as a collection of first-order sentences. How can the very principles which prove the existence of uncountable sets be satisfied by a model which is itself only countable? How can a countable model satisfy the first-order sentence which says that there are uncountably many mathematical objects—e.g., uncountably many real numbers?
The Löwenheim-Skolem theorems say that if a first-order theory has infinite models, then it has models which are only countably infinite. Cantor's theorem says that some sets are uncountable. Together, these theorems induce a puzzle known as Skolem's Paradox: the very axioms of set theory which prove the existence of uncountable sets can be satisfied by a merely countable model. This dissertation examines Skolem's Paradox from three perspectives. After a brief introduction, chapters two and three examine several formulations of Skolem's Paradox in order to disentangle the roles which set theory, model theory, and philosophy play in these formulations. In these chapters, I accomplish three things. First, I clear up some of the mathematical ambiguities which have all too often infected discussions of Skolem's Paradox. Second, I isolate a key assumption upon which Skolem's Paradox rests, and I show why this assumption has to be false. Finally, I argue that there is no single explanation as to how a countable model can satisfy the axioms of set theory. In chapter four, I turn to a second puzzle. Why, even though philosophers have known since the early 1920's that Skolem's Paradox has a relatively simple technical solution, have they continued to find this paradox so troubling? I argue that philosophers' attitudes towards Skolem's Paradox have been shaped by the acceptance of certain, fairly specific, claims in the philosophy of language. I then tackle these philosophical claims head on. In some cases, I argue that the claims depend on an incoherent account of mathematical language. In other cases, I argue that the claims are so powerful that they render Skolem's Paradox trivial. In either case, though, examination of the philosophical underpinnings of Skolem's Paradox renders that paradox decidedly unparadoxical.
Finally, in chapter five, I turn away from "generic" formulations of Skolem's Paradox to examine Hilary Putnam's "model-theoretic argument against realism." I show that Putnam's argument involves mistakes of both the mathematical and the philosophical variety, and that these two types of mistake are closely related. Along the way, I clear up some of the mutual charges of question begging which have characterized discussions between Putnam and his critics.
Legal reasoning on the requirements and application of law has been studied for centuries, but in this subject area the legal profession maintains predominantly the same stance it did in the time of the Ancient Greeks. There is a gap between the standards of proof, which have always been expressed in percentages, and the evaluation of those standards by mathematical or statistical methods. One method to fill the gap is Bayes's theorem, which describes the probability of an event based on conditions that might be related to it. Bayes's theorem can help to establish or confirm a relation between facts and rules if there is sufficient other evidence connecting a party in a procedure with the legal action under consideration.
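The kind of calculation the abstract gestures at is usually presented in odds form; the sketch below illustrates it with figures that are entirely hypothetical.

```python
# Hypothetical numbers only: prior that the party committed the act,
# probability of the evidence (e.g. a forensic match) if they did,
# and probability of the evidence if they did not.
def posterior_prob(prior, p_e_given_guilt, p_e_given_innocence):
    prior_odds = prior / (1 - prior)
    likelihood_ratio = p_e_given_guilt / p_e_given_innocence
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# A prior of 1 in 1000 combined with a likelihood ratio of 9900
# yields a high posterior -- strong support, but not certainty.
p_guilt = posterior_prob(0.001, 0.99, 0.0001)
```

The odds form makes explicit what connects the evidence to the conclusion: the likelihood ratio measures how much more probable the evidence is under guilt than under innocence.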
This paper claims that adoption of Bayes's theorem as the schema for the appraisal of scientific theories can greatly reduce the distance between Kuhnians and logical empiricists. It is argued that plausibility considerations, which Kuhn considered outside of the logic of science, can be construed as prior probabilities, which play an indispensable role in the logic of science. Problems concerning likelihoods, especially the likelihood on the "catchall," are also considered. Severe difficulties concerning the significance of this probability arise in the evaluation of individual theories, but they can be avoided by restricting our judgments to comparative assessments of competing theories.
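The comparative move in the final sentence can be shown directly: in ratio form, Bayes's theorem compares two specific theories without ever needing the likelihood on the catchall. The numbers below are illustrative assumptions.

```python
# Ratio form of Bayes's theorem: comparing two specific theories requires
# only their priors and their likelihoods on the evidence E -- the term
# P(E), and hence the likelihood on the catchall, cancels out.
def posterior_ratio(prior1, prior2, like1, like2):
    # P(T1|E) / P(T2|E) = [P(T1)/P(T2)] * [P(E|T1)/P(E|T2)]
    return (prior1 / prior2) * (like1 / like2)

# Illustrative plausibility judgments entering as prior probabilities.
r = posterior_ratio(prior1=0.3, prior2=0.1, like1=0.8, like2=0.4)
```

Note that the priors here play exactly the role the paper assigns to Kuhnian plausibility considerations.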
Gödel’s theorem than he has often been credited with. Substantively, they find in Wittgenstein’s remarks “a philosophical claim of great interest,” and they argue that, when this claim is properly assessed, it helps to vindicate some of Wittgenstein’s broader views on Gödel.
This chapter discusses the Bayesian analysis of miracles. It is set in the context of the eighteenth-century debate on miracles. The discussion is focused on the probable response of Thomas Bayes to David Hume's celebrated argument against miracles. The chapter presents the claim that the criticisms Richard Price made of Hume's argument against miracles were largely solid.
Jeremy Gwiazda made two criticisms of my formulation, in terms of Bayes’s theorem, of my probabilistic argument for the existence of God. The first criticism depends on his assumption that I claim that the intrinsic probabilities of all propositions depend almost entirely on their simplicity; however, my claim is that this holds only insofar as those propositions are explanatory hypotheses. The second criticism depends on a claim that the intrinsic probabilities of exclusive and exhaustive explanatory hypotheses of a phenomenon must sum to 1; however, it is only those probabilities plus the intrinsic probability of the non-occurrence of the phenomenon which must sum to 1.
In the curve fitting problem, two conflicting desiderata, simplicity and goodness-of-fit, pull in opposite directions. To solve this problem, two proposals are discussed: the first based on a Bayes's theorem criterion (BTC), and the second, advocated by Forster and Sober, based on Akaike's Information Criterion (AIC). We show that AIC, which is frequentist in spirit, is logically equivalent to BTC, provided that a suitable choice of priors is made. We evaluate the charges against Bayesianism and contend that the AIC approach has shortcomings. We also discuss the relationship between Schwarz's Bayesian Information Criterion and BTC.
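For reference, the two model-selection criteria in play can be written in their common textbook forms; the paper's BTC derivation itself is not reproduced here, and the model names and numbers below are hypothetical.

```python
import math

# Textbook forms of the criteria: AIC = 2k - 2 ln L, BIC = k ln n - 2 ln L,
# where k is the number of adjustable parameters, L the maximized
# likelihood, and n the sample size. Both trade goodness-of-fit (ln L)
# against simplicity (k); lower scores are better.
def aic(k, log_likelihood):
    return 2 * k - 2 * log_likelihood

def bic(k, n, log_likelihood):
    return k * math.log(n) - 2 * log_likelihood

# Illustrative numbers: a cubic fits slightly better than a line,
# but both criteria can still prefer the simpler model.
line  = {"k": 2, "logL": -52.0}
cubic = {"k": 4, "logL": -51.5}
prefer_line_aic = aic(line["k"], line["logL"]) < aic(cubic["k"], cubic["logL"])
prefer_line_bic = bic(line["k"], 100, line["logL"]) < bic(cubic["k"], 100, cubic["logL"])
```

The k terms are what penalize complexity: a better fit only wins when the gain in log-likelihood outweighs the extra parameters.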
The idea of ensembles which are both pre- and post-selected was introduced by Aharonov, Bergmann, and Lebowitz and developed by Aharonov and his school. To derive formulae for the probabilities of outcomes of a measurement performed on such an ensemble at a time intermediate between pre-selection and post-selection, the latter group introduces a two-vector formulation of quantum mechanics, one vector propagating in the forward direction in time and one in the backward direction. The formulae which they obtain by this radical generalization are vindicated by a rigorous derivation using Bayes’s theorem together with standard quantum mechanical predictions regarding ensembles that are only pre-selected. Their own two-vector derivation, however, suffers from a serious lacuna.
Analyses of the argument from design in Hume's Dialogues concerning Natural Religion have generally treated that argument as an example of reasoning by analogy. In this paper I examine whether it is in accord with Hume's thinking about the argument to subsume the version of it given in the Dialogues under the model of probabilistic reasoning offered by Bayes's theorem. Wesley Salmon attempted this project in 1978. In related projects, David Owen as well as Philip Dawid and Donald Gillies have more recently attempted to construct Bayesian analyses of Hume's argument concerning testimony in "Of Miracles."
This article deals with the design argument for the existence of God as it is discussed in Hume's "Dialogues concerning Natural Religion". Using Bayes's theorem in the probability calculus--which Hume almost certainly could not have known as such--it shows how the various arguments advanced by Philo and Cleanthes fit neatly into a comprehensive logical structure. The conclusion is drawn that the empirical evidence not only fails to support the theistic hypothesis, but also renders the atheistic hypothesis quite highly probable. A postscript speculates upon the historical question of Hume's own attitude toward the design argument.
Bell’s theorem admits several interpretations or ‘solutions’, the standard interpretation being ‘indeterminism’, a next one ‘nonlocality’. In this article two further solutions are investigated, termed here ‘superdeterminism’ and ‘supercorrelation’. The former is especially interesting for philosophical reasons, if only because it is always rejected on the basis of extra-physical arguments. The latter, supercorrelation, will be studied here by investigating model systems that can mimic it, namely spin lattices. It is shown that in these systems the Bell inequality can be violated, even if they are local according to usual definitions. Violation of the Bell inequality is retraced to violation of ‘measurement independence’. These results emphasize the importance of studying the premises of the Bell inequality in realistic systems.
First I employ Bayes's Theorem to give some precision to the atheologian's thesis that it is improbable that God exists given the amount of evil in the world (E). Two arguments result from this: (1) E disconfirms God's existence, and (2) E tends to disconfirm God's existence. Secondly, I evaluate these inductive arguments, suggesting against (1) that the atheologian has abstracted from and hence failed to consider the total evidence, and against (2) that the atheologian's evidence adduced to support his thesis regarding the relevant probabilities is inadequate.
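The disconfirmation relation in (1) can be anchored in a small sketch: E disconfirms G exactly when P(E|G) < P(E|not-G), so the posterior falls below the prior. The likelihoods below are made up purely for illustration.

```python
# E disconfirms G when P(G|E) < P(G), which holds exactly when
# P(E|G) < P(E|not-G) -- i.e., when the evidence of evil is less to be
# expected on theism than on its denial. All numbers are hypothetical.
def posterior(prior, like_h, like_not_h):
    return like_h * prior / (like_h * prior + like_not_h * (1 - prior))

prior_g = 0.5
p_e_given_g, p_e_given_not_g = 0.3, 0.7   # hypothetical likelihoods
post = posterior(prior_g, p_e_given_g, p_e_given_not_g)
disconfirms = post < prior_g
```

Note that this only formalizes the relation; the paper's objections concern whether the atheologian can justify likelihoods of this shape on the total evidence.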
The idea of a probabilistic logic of inductive inference based on some form of the principle of indifference has always retained a powerful appeal. However, up to now all modifications of the principle failed. In this paper, a new formulation of such a principle is provided that avoids generating paradoxes and inconsistencies. Because of these results, the thesis that probabilities cannot be logical quantities, determined in an objective way through some form of the principle of indifference, is no longer supportable. Later, the paper investigates some implications of the new principle of indifference. To conclude, a re-examination of the foundations of the so-called objective Bayesian inference is called for.
In this short survey article, I discuss Bell’s theorem and some strategies that attempt to avoid the conclusion of non-locality. I focus on two that intersect with the philosophy of probability: (1) quantum probabilities and (2) superdeterminism. The issues they raise not only apply to a wide class of no-go theorems about quantum mechanics but are also of general philosophical interest.
A Bayesian articulation of Hume's views is offered based on a form of the Bayes-Laplace theorem that is superficially like a formula of Condorcet's. Infinitesimal probabilities are employed for miracles against which there are 'proofs' that are not opposed by 'proofs'. Objections made by Richard Price are dealt with, and recent experiments conducted by Amos Tversky and Daniel Kahneman are considered in which persons tend to discount prior improbabilities when assessing reports of witnesses.
In response to recent work on the aggregation of individual judgments on logically connected propositions into collective judgments, it is often asked whether judgment aggregation is a special case of Arrowian preference aggregation. We argue for the converse claim. After proving two impossibility theorems on judgment aggregation (using "systematicity" and "independence" conditions, respectively), we construct an embedding of preference aggregation into judgment aggregation and prove Arrow's theorem (stated for strict preferences) as a corollary of our second result. Although we thereby provide a new proof of Arrow's theorem, our main aim is to identify the analogue of Arrow's theorem in judgment aggregation, to clarify the relation between judgment and preference aggregation, and to illustrate the generality of the judgment aggregation model. JEL Classification: D70, D71.
A layman's guide to the mechanics of Gödel's proof together with a lucid discussion of the issues which it raises. Includes an essay discussing the significance of Gödel's work in the light of Wittgenstein's criticisms.
The paper considers the claim that quantum theories with a deterministic dynamics of objects in ordinary space-time, such as Bohmian mechanics, contradict the assumption that the measurement settings can be freely chosen in the EPR experiment. That assumption is one of the premises of Bell’s theorem. I first argue that only a premise to the effect that what determines the choice of the measurement settings is independent of what determines the past state of the measured system is needed for the derivation of Bell’s theorem. Determinism as such does not undermine that independence. Only entanglement could do so. However, generic entanglement without collapse on the level of the universal wave-function can go together with effective wave-functions for subsystems of the universe, as in Bohmian mechanics. The paper argues that such effective wave-functions are sufficient for the mentioned independence premise to hold.