The so-called "non-commutativity" of probability kinematics has caused much unjustified concern. When identical learning is properly represented, namely, by identical Bayes factors rather than identical posterior probabilities, then sequential probability-kinematical revisions behave just as they should. Our analysis is based on a variant of Field's reformulation of probability kinematics, divested of its (inessential) physicalist gloss.
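The point about Bayes factors can be made concrete. In the hypothetical sketch below (the world set, events, numbers, and function names are all invented for illustration), two successive Jeffrey revisions commute when each experience is encoded as a Bayes factor, but generally not when each is encoded as a fixed posterior probability:

```python
def jeffrey_by_bayes_factor(p, E, beta):
    # encode the experience as a Bayes factor: weight worlds in E by beta,
    # leave the rest alone, then renormalize
    w = {x: p[x] * (beta if x in E else 1.0) for x in p}
    z = sum(w.values())
    return {x: v / z for x, v in w.items()}

def jeffrey_by_posterior(p, E, qE):
    # classical Jeffrey conditioning: force the new probability of E to qE
    pE = sum(p[x] for x in E)
    return {x: p[x] * (qE / pE if x in E else (1 - qE) / (1 - pE)) for x in p}

prior = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}
E1, E2 = {1, 2}, {2, 3}

a = jeffrey_by_bayes_factor(jeffrey_by_bayes_factor(prior, E1, 3.0), E2, 0.5)
b = jeffrey_by_bayes_factor(jeffrey_by_bayes_factor(prior, E2, 0.5), E1, 3.0)
assert all(abs(a[x] - b[x]) < 1e-12 for x in prior)  # Bayes factors commute

c = jeffrey_by_posterior(jeffrey_by_posterior(prior, E1, 0.7), E2, 0.6)
d = jeffrey_by_posterior(jeffrey_by_posterior(prior, E2, 0.6), E1, 0.7)
assert any(abs(c[x] - d[x]) > 1e-6 for x in prior)   # fixed posteriors do not
```

The Bayes-factor updates commute because each amounts to multiplying atom weights by a constant on one cell of a partition, and multiplication is order-independent; fixing posterior probabilities instead lets the later revision overwrite the earlier one.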
We establish a probabilized version of modus tollens, deriving from p(E|H) = a and p(Ē) = b the best possible bounds on p(H̄). In particular, we show that p(H̄) → 1 as a, b → 1, and also as a, b → 0.
1 Introduction
2 Probabilities of conditionals
3 Conditional probabilities
3.1 Adams' thesis
3.2 Modus ponens for conditional probabilities
3.3 Modus tollens for conditional probabilities
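These bounds can be checked numerically. The sketch below is a brute-force search under stated assumptions: the grid size and tolerance are arbitrary choices, and the closed-form lower bound is reconstructed here from the constraints p(E|H) = a and p(Ē) = b, not quoted from the paper.

```python
def min_p_not_H(a, b, n=200000):
    # brute force: with p(E|H) = a and p(not-E) = b fixed, scan candidate
    # priors p(H) and keep those whose implied p(E|not-H) is a probability
    best = None
    for i in range(n):
        h = i / n                              # candidate p(H), strictly below 1
        e = (1.0 - b - a * h) / (1.0 - h)      # implied p(E|not-H)
        if -1e-9 <= e <= 1.0 + 1e-9:
            best = h                           # feasible p(H) form an interval from 0
    return 1.0 - best                          # smallest attainable p(not-H)

def lower_bound(a, b):
    # reconstructed closed-form lower bound on p(not-H)
    return (a + b - 1) / a if a + b >= 1 else (1 - a - b) / (1 - a)

for a, b in [(0.9, 0.9), (0.1, 0.1), (0.7, 0.8)]:
    assert abs(min_p_not_H(a, b) - lower_bound(a, b)) < 1e-3
```

As a and b approach 1, the bound (a + b - 1)/a approaches 1; as they approach 0, the bound (1 - a - b)/(1 - a) does likewise, matching the limiting behavior stated above.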
It has often been recommended that the differing probability distributions of a group of experts should be reconciled in such a way as to preserve each instance of independence common to all of their distributions. When probability pooling is subject to a universal domain condition, along with state-wise aggregation, there are severe limitations on implementing this recommendation. In particular, when the individuals are epistemic peers whose probability assessments are to be accorded equal weight, universal preservation of independence is, with a few exceptions, impossible. Under more reasonable restrictions on pooling, however, there is a natural method of preserving the independence of any fixed finite family of countable partitions, and hence of any fixed finite family of discrete random variables.
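The tension is easy to exhibit for equal-weight linear pooling, one simple state-wise aggregation rule (the rule and all numbers are my illustrative choices): each of two experts regards events A and B as independent, yet their average does not.

```python
# each expert's distribution over the four atoms (A?, B?), chosen so that
# A and B are independent for both experts
p1 = {(True, True): 0.04, (True, False): 0.16,
      (False, True): 0.16, (False, False): 0.64}   # p1(A) = p1(B) = 0.2
p2 = {(True, True): 0.64, (True, False): 0.16,
      (False, True): 0.16, (False, False): 0.04}   # p2(A) = p2(B) = 0.8

# equal-weight linear pool (state-wise average)
pool = {w: 0.5 * p1[w] + 0.5 * p2[w] for w in p1}

def marginal_A(p): return sum(v for (a, _), v in p.items() if a)
def marginal_B(p): return sum(v for (_, b), v in p.items() if b)

# independence holds for each expert...
assert abs(p1[(True, True)] - marginal_A(p1) * marginal_B(p1)) < 1e-12
assert abs(p2[(True, True)] - marginal_A(p2) * marginal_B(p2)) < 1e-12
# ...but fails in the pool: pool(A and B) = 0.34, not 0.5 * 0.5 = 0.25
assert abs(pool[(True, True)] - marginal_A(pool) * marginal_B(pool)) > 0.05
```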
A simple rule of probability revision ensures that the final result of a sequence of probability revisions is undisturbed by an alteration in the temporal order of the learning prompting those revisions. This Uniformity Rule dictates that identical learning be reflected in identical ratios of certain new-to-old odds, and is grounded in the old Bayesian idea that such ratios represent what is learned from new experience alone, with prior probabilities factored out. The main theorem of this paper includes as special cases (i) Field's theorem on commuting probability-kinematical revisions and (ii) the equivalence of two strategies for generalizing Jeffrey's solution to the old evidence problem to the case of uncertain old evidence and probabilistic new explanation.
Rueger and Sharp, and Koperski, have recently sought to show that certain procedural accounts of model confirmation are compromised by non-linear dynamics. We suggest that the issues raised are better approached by considering whether chaotic data analysis methods allow for reliable inference from data. We provide a framework and an example of this approach.
Additional results are reported on the author's earlier generalization of Richard Jeffrey's solution to the problem of old evidence and new explanation.
Jeffrey has devised a probability revision method that increases the probability of hypothesis H when it is discovered that H implies previously known evidence E. A natural extension of Jeffrey's method likewise increases the probability of H when E has been established with sufficiently high probability and it is then discovered, quite apart from this, that H confers sufficiently higher probability on E than does its logical negation H̄.
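This is not Jeffrey's revision method itself, just a minimal illustration of why such evidence should raise the probability of H: if H confers higher probability on E than H̄ does, the Bayes factor p(E|H)/p(E|H̄) exceeds 1, and multiplying the odds on H by that factor increases p(H). The function name and numbers below are hypothetical.

```python
def update_by_bayes_factor(pH, bf):
    # multiply the odds on H by bf = p(E|H) / p(E|not-H), then convert back
    odds = pH / (1.0 - pH)
    new_odds = bf * odds
    return new_odds / (1.0 + new_odds)

pH, bf = 0.3, 2.5                                   # hypothetical prior and Bayes factor
assert update_by_bayes_factor(pH, bf) > pH          # bf > 1 raises p(H)
assert abs(update_by_bayes_factor(pH, 1.0) - pH) < 1e-12  # bf = 1 changes nothing
```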
Garber (1983) and Jeffrey (1991, 1995) have both proposed solutions to the old evidence problem. Jeffrey's solution, based on a new probability revision method called reparation, has been generalized to the case of uncertain old evidence and probabilistic new explanation in Wagner 1997, 1999. The present paper reformulates some of the latter work, highlighting the central role of Bayes factors and their associated uniformity principle, and extending the analysis to the case in which an hypothesis bears on a countable family of evidentiary propositions. This extension shows that no Garber-type approach is capable of reproducing the results of generalized reparation.
Jeffrey conditionalization is generalized to the case in which new evidence bounds the possible revisions of a prior below by a Dempsterian lower probability. Classical probability kinematics arises within this generalization as the special case in which the evidentiary focal elements of the bounding lower probability are pairwise disjoint.
Evidentiary propositions E1 and E2, each p-positively relevant to some hypothesis H, are mutually corroborating if p(H|E1 ∧ E2) > p(H|Ei), i = 1, 2. Failures of such mutual corroboration are instances of what may be called the corroboration paradox. This paper assesses two rather different analyses of the corroboration paradox due, respectively, to John Pollock and Jonathan Cohen. Pollock invokes a particular embodiment of the principle of insufficient reason to argue that instances of the corroboration paradox are of negligible probability, and that it is therefore defeasibly reasonable to assume that items of evidence positively relevant to some hypothesis are mutually corroborating. Taking a different approach, Cohen seeks to identify supplementary conditions that are sufficient to ensure that such items of evidence will be mutually corroborating, and claims to have identified conditions which account for most cases of mutual corroboration. Combining a proposed common framework for the general study of paradoxes of positive relevance with a simulation experiment, we conclude that neither Pollock's nor Cohen's claims stand up to detailed scrutiny.
I am quite prepared to be told… "oh, that is an extreme case: it could never really happen!" Now I have observed that this answer is always given instantly, with perfect confidence, and without any examination of the proposed case. It must therefore rest on some general principle: the mental process being something like this—"I have formed a theory. This case contradicts my theory. Therefore, this is an extreme case, and would never occur in practice." (Rev. Charles L. Dodgson)
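A concrete instance of the paradox can be written down directly. The joint distribution below is hypothetical, chosen so that E1 and E2 each confirm H while their conjunction strongly disconfirms it:

```python
# hypothetical joint probabilities over atoms (H, E1, E2); 1 = true, 0 = false
joint = {
    (1, 1, 1): 0.01, (0, 1, 1): 0.09,
    (1, 1, 0): 0.20, (0, 1, 0): 0.00,
    (1, 0, 1): 0.20, (0, 0, 1): 0.00,
    (1, 0, 0): 0.05, (0, 0, 0): 0.45,
}

def prob(pred):
    # total mass of the atoms satisfying pred
    return sum(v for w, v in joint.items() if pred(w))

pH      = prob(lambda w: w[0])                                  # 0.46
pH_E1   = prob(lambda w: w[0] and w[1]) / prob(lambda w: w[1])  # 0.70
pH_E2   = prob(lambda w: w[0] and w[2]) / prob(lambda w: w[2])  # 0.70
pH_both = (prob(lambda w: w[0] and w[1] and w[2])
           / prob(lambda w: w[1] and w[2]))                     # 0.10

assert pH_E1 > pH and pH_E2 > pH     # each proposition confirms H...
assert pH_both < min(pH_E1, pH_E2)   # ...yet jointly they fail to corroborate
```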
Given someone’s fully specified posterior probability distribution q and information about the revision method that they employed to produce q, what can you infer about their prior probabilistic commitments? This question provides an entrée into a thoroughgoing discussion of a class of parameterizations of Jeffrey conditioning in which the parameters furnish information above and beyond that incorporated in q. Our analysis highlights the ubiquity of Bayes factors in the study of probability revision.
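For the special case of a Jeffrey revision on a binary partition {E, Ē} reported together with its Bayes factor β, the inference runs in closed form: the posterior arises from the prior by weighting E-worlds by β and renormalizing, so dividing by β and renormalizing recovers the prior. A minimal sketch with invented numbers and function names:

```python
def jeffrey(p, E, beta):
    # Jeffrey revision on {E, not-E}, parameterized by the Bayes factor beta:
    # weight worlds in E by beta and renormalize
    w = {x: p[x] * (beta if x in E else 1.0) for x in p}
    z = sum(w.values())
    return {x: v / z for x, v in w.items()}

def invert_jeffrey(q, E, beta):
    # given the posterior q and the Bayes factor beta, recover the prior
    w = {x: q[x] / (beta if x in E else 1.0) for x in q}
    z = sum(w.values())
    return {x: v / z for x, v in w.items()}

prior = {1: 0.1, 2: 0.2, 3: 0.3, 4: 0.4}   # hypothetical prior
E = {1, 2}
posterior = jeffrey(prior, E, 4.0)
recovered = invert_jeffrey(posterior, E, 4.0)
assert all(abs(recovered[x] - prior[x]) < 1e-12 for x in prior)
```

Knowing only the posterior probability q(E), by contrast, would not pin down the prior: the Bayes factor carries exactly the extra information needed.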
In framing the concept of rational consensus, decision theorists have tended to defer to an older, established literature on social welfare theory for guidance on how to proceed. But the uncritical adoption of standards meant to regulate the reconciliation of differing interests has unduly burdened the development of rational methods for the synthesis of differing judgments. In particular, the universality conditions typically postulated in social welfare theory, which derive from fundamentally ethical considerations, preclude a sensitive treatment of special cases when carried over to the realm of judgment aggregation, especially in the case of probabilistic judgment.
It is shown that the Fisher smoking problem and Newcomb's problem are decision-theoretically identical, each having at its core an identical case of Simpson's paradox for certain probabilities. From this perspective, incorrect solutions to these problems arise from treating them as cases of decision-making under risk, while adopting certain global empirical conditional probabilities as the relevant subjective probabilities. The most natural correct solutions employ the methodology of decision-making under uncertainty with lottery acts, with certain local empirical conditional probabilities adopted as the relevant subjective probabilities.
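The Simpson's-paradox core can be exhibited with invented numbers. In the sketch below, a genotype G raises the probability of both smoking S and cancer C; conditional on G, smoking is irrelevant to cancer (the "local" probabilities), yet unconditionally p(C|S) > p(C|¬S) (the "global" probabilities):

```python
# hypothetical joint distribution over (genotype G, smoking S, cancer C):
# within each genotype, C is independent of S; genotype drives both
pG = 0.5
pS_given = {True: 0.8, False: 0.2}   # p(S | G) and p(S | not-G)
pC_given = {True: 0.8, False: 0.2}   # p(C | G) and p(C | not-G), regardless of S

joint = {}
for g in (True, False):
    pg = pG if g else 1 - pG
    for s in (True, False):
        ps = pS_given[g] if s else 1 - pS_given[g]
        for c in (True, False):
            pc = pC_given[g] if c else 1 - pC_given[g]
            joint[(g, s, c)] = pg * ps * pc

def cond(event, given):
    # conditional probability computed from the joint distribution
    num = sum(v for w, v in joint.items() if event(w) and given(w))
    den = sum(v for w, v in joint.items() if given(w))
    return num / den

# locally (within each genotype) smoking is irrelevant to cancer...
for g in (True, False):
    pc_s  = cond(lambda w: w[2], lambda w: w[0] == g and w[1])
    pc_ns = cond(lambda w: w[2], lambda w: w[0] == g and not w[1])
    assert abs(pc_s - pc_ns) < 1e-12
# ...but globally it is positively relevant: p(C|S) = 0.68 vs p(C|not-S) = 0.32
assert cond(lambda w: w[2], lambda w: w[1]) > cond(lambda w: w[2], lambda w: not w[1])
```

Conditioning on the global probabilities makes smoking look harmful; the local probabilities, which screen off the common cause, show the act itself to be irrelevant.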
The right interpretation of subjective probability is implicit in the theories of upper and lower odds, and upper and lower previsions, developed, respectively, by Cedric Smith (1961) and Peter Walley (1991). On this interpretation you are free to assign contingent events the probability 1 (and thus to employ conditionalization as a method of probability revision) without becoming vulnerable to a weak Dutch book.
Richard Jeffrey’s “Conditioning, Kinematics, and Exchangeability” is one of the foundational documents of probability kinematics. However, the section entitled “Successive Updating” contains a subtle error regarding the applicability of updating by so-called relevance quotients in order to ensure the commutativity of successive probability kinematical revisions. Upon becoming aware of this error, Jeffrey formulated the appropriate remedy, but he never discussed the issue in print. To head off any confusion, it seems worthwhile to alert readers of Jeffrey’s “Conditioning, Kinematics, and Exchangeability” to the aforementioned error and to document his remedy, placing it in the context of both earlier and subsequent work on commuting probability kinematical revisions.