Summary
In Bayesian epistemology, an updating principle is a
principle that specifies or puts restrictions on the changes in an agent’s
belief state that follow (or should follow) some initial change in the agent’s
belief state (usually – but maybe not always – as a result of the agent being
exposed to new evidence). Although in other academic fields a great deal of the
discussion regarding updating principles touches upon their empirical fit to
the way people actually update their beliefs, much of the relevant
philosophical literature is normative. The central questions are whether, why
and in which contexts obeying different updating principles is rationally
required. In the simplest (but not uncommon) case, where the agent’s belief
state can be represented by a single probability distribution over a set of
propositions, and the initial change is that of learning a new proposition
(represented as raising the probability of the learnt proposition to 1), the
most popular updating rule is Bayesian Conditionalization. Richard Jeffrey
offered a generalization of Bayesian Conditionalization, usually called
“Jeffrey’s conditionalization”, to cases in which, although there is some
initial change in the agent’s belief state, the probability of no proposition
in the set is raised to 1. Others have introduced, discussed, and explored the
formal features of further updating principles. These principles usually either
cover cases to which Jeffrey's conditionalization does not apply (such as cases
of "growing awareness", in which the initial change is represented as the
addition of new propositions to the set, or cases in which the agent's initial
belief state cannot be represented by a single probability distribution over a
set of propositions), or constitute generalizations of or alternatives to
Bayesian Conditionalization and Jeffrey's conditionalization in specific
contexts (such as Adams' conditionalization for the case of learning
conditional probabilities, Imaging, which in some contexts seems to fit better
with other intuitive epistemic principles, or different types of pooling
methods for the case of learning other agents' beliefs).
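The two rules mentioned above can be made concrete with a small computation. The following is a minimal sketch, assuming a finite set of possible worlds and a toy uniform prior (the world names and numbers are illustrative, not drawn from the article): Bayesian Conditionalization sends the probability of the learnt proposition to 1 and renormalizes within it, while Jeffrey's conditionalization mixes the conditional distributions over a partition according to the new probabilities the experience assigns to its cells.

```python
def conditionalize(prior, evidence):
    """Bayesian Conditionalization: P_new(w) = P(w | E).

    `prior` maps worlds to probabilities; `evidence` is the set of
    worlds compatible with the learnt proposition E, whose new
    probability becomes 1.
    """
    p_e = sum(p for w, p in prior.items() if w in evidence)
    return {w: (p / p_e if w in evidence else 0.0)
            for w, p in prior.items()}

def jeffrey_conditionalize(prior, partition_weights):
    """Jeffrey's conditionalization over a partition {E_i}:
    P_new(w) = sum_i q_i * P(w | E_i), where the q_i are the new
    probabilities of the cells (they must sum to 1) and no cell
    need be raised to probability 1.
    """
    posterior = {w: 0.0 for w in prior}
    for cell, q in partition_weights:
        p_cell = sum(p for w, p in prior.items() if w in cell)
        for w in cell:
            posterior[w] += q * prior[w] / p_cell
    return posterior

# Toy example: three worlds with a uniform prior.
prior = {"w1": 1 / 3, "w2": 1 / 3, "w3": 1 / 3}

# Learning E = {w1, w2} with certainty: w1 and w2 each get 1/2.
post = conditionalize(prior, {"w1", "w2"})

# A Jeffrey shift: experience makes {w1, w2} 0.8 probable and
# {w3} 0.2 probable, without making any proposition certain.
jpost = jeffrey_conditionalize(prior, [({"w1", "w2"}, 0.8),
                                       ({"w3"}, 0.2)])
```

Note that when one cell of the partition receives weight 1, the Jeffrey update reduces to Bayesian Conditionalization on that cell, which is the sense in which Jeffrey's rule generalizes the Bayesian one.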