We can have credences in infinitely many propositions; that is, our opinion set can be infinite. Accuracy-first epistemologists have devoted themselves to evaluating credal states with the help of the concept of ‘accuracy’. Unfortunately, under several innocuous assumptions, infinite opinion sets yield several undesirable results, some of which are even fatal to accuracy-first epistemology. Moreover, accuracy-first epistemologists cannot circumvent these difficulties in any standard way. In this regard, we suggest a non-standard approach to accuracy-first epistemology, called a relativistic approach, and show that it can successfully circumvent the undesirable results while enjoying some advantages over the standard approach.
It seems intuitive that our credal states are improved if we obtain evidence favoring a truth over any falsehood. In this regard, Fallis and Lewis have recently provided and discussed some formal versions of this intuition, which they name ‘the Monotonicity Principle’ and ‘Elimination’. They argue, with those principles in hand, that the Brier rule, one of the most popular rules of accuracy, is not a good measure, and that accuracy-firsters cannot underwrite both probabilism and conditionalization. In this paper, I argue that their conclusions are somewhat hasty. Specifically, I demonstrate that there is another version of the Monotonicity Principle that can be satisfied by some additive rules of accuracy, such as the Brier rule. Moreover, I argue that their version of the principle has some undesirable features regarding epistemic betterness. Therefore, their criticisms can hardly jeopardize accuracy-firsters until further justification of their versions of the Monotonicity Principle and Elimination is provided.
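For readers unfamiliar with the Brier rule mentioned above, here is a minimal sketch of the standard squared-distance form of Brier inaccuracy; the function name and example numbers are illustrative, not taken from the paper:

```python
def brier_inaccuracy(credences, truths):
    """Brier inaccuracy of a credal state: the sum of squared distances
    between each credence and the truth value of its proposition
    (1 if true, 0 if false). Lower scores mean greater accuracy."""
    return sum((c - t) ** 2 for c, t in zip(credences, truths))

# A credal state over three propositions, of which only the first is true:
print(brier_inaccuracy([0.9, 0.2, 0.1], [1, 0, 0]))  # ≈ 0.06
```

Additivity here just means the total score is a sum of per-proposition scores, which is the feature at issue in the abstract's discussion of additive rules.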
In this article, I present an argument that seems to show a conflict between the reflection principle and conditionalization. In particular, I show that when the reflection principle is formulated in the standard way, it conflicts with Jeffrey conditionalization, and I argue that the source of the conflict resides in an ambiguity in the standard formulation. Furthermore, I attempt to rescue the principle using Bayes factors; that is, I suggest a new formulation of the principle that avoids the conflict.
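The Bayes factors appealed to here can be illustrated with their standard odds-ratio characterization of a credal update; this is not the paper's specific reformulation of the reflection principle, and the function name and numbers are illustrative:

```python
def bayes_factor(old_credence, new_credence):
    """Bayes factor of an update on a hypothesis H: the ratio of the
    agent's new odds on H to her old odds on H. A factor of 1 means
    the update left the odds on H unchanged."""
    old_odds = old_credence / (1 - old_credence)
    new_odds = new_credence / (1 - new_credence)
    return new_odds / old_odds

# Moving from credence 0.5 to 0.8 quadruples the odds on H:
print(bayes_factor(0.5, 0.8))  # ≈ 4.0
```

Because Bayes factors record how an update shifts odds rather than the absolute posterior credences, they are a natural device for restating principles that relate present and future credences.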
Several candidates have been proposed as measures of the degree to which evidence incrementally confirms a hypothesis. This paper provides an argument for one of them: the log-likelihood ratio measure. For this purpose, I suggest a plausible requirement that I call the Requirement of Collaboration, and then show that, of the various candidates, only the log-likelihood ratio measure \(l\) satisfies this requirement. Using this result, Jeffrey conditionalization is reformulated so as to disclose explicitly what determines new credences after experience.
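The measure \(l\) has a standard definition, \(l(H, E) = \log\frac{P(E \mid H)}{P(E \mid \neg H)}\); a minimal sketch, with the function name and example likelihoods chosen for illustration:

```python
import math

def log_likelihood_ratio(p_e_given_h, p_e_given_not_h):
    """The log-likelihood ratio measure l(H, E) = log[P(E|H) / P(E|not-H)].
    Positive when E favors H, negative when E favors not-H, and zero
    when E is evidentially neutral between them."""
    return math.log(p_e_given_h / p_e_given_not_h)

# Evidence twice as likely under H as under its negation:
print(log_likelihood_ratio(0.8, 0.4))  # log 2 ≈ 0.693
```

Note that \(l\) depends only on the two likelihoods, not on the prior credence in \(H\), which is one respect in which it differs from rival confirmation measures.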
Lewis’s Principal Principle is widely recognized as a rationality constraint that our credences should satisfy throughout our epistemic life. In practice, however, our credences often fail to satisfy this principle because of our various epistemic limitations. Facing such violations, we should correct our credences in accordance with the principle. In this paper, I formulate a way of correcting our credences, which I call the Adams Correcting Rule, and then show that this rule yields non-commutativity between conditionalizing and correcting. With the help of the notion of ‘accuracy’, I then attempt to provide a vindication of the Adams Correcting Rule and show how we can respond to the non-commutativity in question.
This paper discusses simultaneous belief updates. I argue that modeling such belief updates using the Principle of Minimum Information can be regarded as applying Jeffrey conditionalization successively, and thus that, contrary to what many probabilists have thought, simultaneous belief updates can be successfully modeled by means of Jeffrey conditionalization.
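Jeffrey conditionalization itself has a standard statement: given a partition \(\{E_i\}\) over which experience fixes new credences \(q_i\), the posterior is \(P_{\text{new}}(A) = \sum_i q_i\, P(A \mid E_i)\). A minimal sketch over a finite space of worlds, with all names and numbers illustrative rather than taken from the paper:

```python
def jeffrey_update(prior, partition, new_weights):
    """Jeffrey conditionalization on a finite space of worlds.
    prior: dict mapping each world to its prior probability.
    partition: list of disjoint sets of worlds covering the space.
    new_weights: new credences assigned to the partition cells.
    Each world's posterior is its prior rescaled within its cell, so
    that the cell's total probability equals its new weight while
    credences conditional on each cell are preserved."""
    posterior = {}
    for cell, q in zip(partition, new_weights):
        cell_prob = sum(prior[w] for w in cell)
        for w in cell:
            posterior[w] = prior[w] * q / cell_prob
    return posterior

prior = {"w1": 0.25, "w2": 0.25, "w3": 0.25, "w4": 0.25}
partition = [{"w1", "w2"}, {"w3", "w4"}]
post = jeffrey_update(prior, partition, [0.7, 0.3])
print(post["w1"])  # ≈ 0.35
```

Successive application, as the abstract describes, just means feeding the posterior of one such update in as the prior of the next, once for each partition on which experience bears.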