Abstract
Bayesian epistemology provides a popular and powerful framework for modeling rational norms on credences, including how rational agents should respond to evidence. The framework is built on the assumption that ideally rational agents have credences, or degrees of belief, that are representable by numbers obeying the axioms of probability. From there, further constraints are proposed regarding which credence assignments are rationally permissible and how rational agents’ credences should change upon learning new evidence. While the details are hotly disputed, all flavors of Bayesianism purport to give us norms of ideal rationality. This raises the question of how exactly these norms apply to you and me, since perfect compliance with them is out of reach for human thinkers. A common response is that Bayesian norms are ideals that human reasoners are supposed to approximate: the closer they come to being ideally rational, the better. To make this claim plausible, we need to make it more precise. In what sense is it better to be closer to ideal rationality, and what is an appropriate measure of such closeness? This article sketches some possible answers to these questions.