Abstract
Though not monolithic, Bayesianism offers a powerful and compelling set of methods for drawing inductive inferences. Its unifying ideas are (a) Pascal’s recognition that uncertainty is best expressed probabilistically and that values of unknown quantities are best estimated using the principle of mathematical expectation, and (b) Bayes’s insight that learning and inductive inference can be fruitfully modeled using conditional probabilities and Bayes’s theorem. The two central challenges for Bayesianism are the problem of the priors and the development of general methods for Bayesian conditioning. Bayesians have responded to the problem of the priors by proposing ignorance priors that are justified a priori, by embracing a radical subjectivism in which probabilities are mere degrees of coherent credence, or by seeking refuge in the idea that subjective prejudices will wash out as evidence accumulates. On the conditioning front, Jeffrey has extended Bayes’s basic approach to account for non-dogmatic learning experiences, and further developments based on measures of divergence among probabilities seem promising. Bayesians have a vexed relationship with objective chance. Some reject the notion outright and portray chances as projections of personal inductive tastes onto the world. Others hope to make room for chances by developing chance/credence principles that clarify and explain the evidential relationships between the two kinds of probability. At bottom, however, all Bayesians agree that inductive reasoning involves drawing conclusions from new data on the basis of prior information, using update rules that require conditioning on the evidence.
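For reference, the formal machinery the abstract invokes can be stated in its standard textbook form (the notation, including the labels P_old and P_new, is ours and not drawn from the body of the paper). Bayes’s theorem, for a hypothesis H and evidence E with P(E) > 0:

\[ P(H \mid E) = \frac{P(E \mid H)\, P(H)}{P(E)} \]

Strict conditionalization, which models dogmatic learning in which the agent becomes certain of E:

\[ P_{\text{new}}(H) = P_{\text{old}}(H \mid E) \]

Jeffrey conditioning, which generalizes this to non-dogmatic learning across a partition \( \{E_1, \dots, E_n\} \) whose new probabilities are fixed directly by experience:

\[ P_{\text{new}}(H) = \sum_{i=1}^{n} P_{\text{old}}(H \mid E_i)\, P_{\text{new}}(E_i) \]

Jeffrey conditioning reduces to strict conditionalization in the special case where experience makes some cell certain, i.e., where \( P_{\text{new}}(E_i) = 1 \) for a single i.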