Jim Joyce argues for two amendments to probabilism. The first is the doctrine that credences are rational, or not, in virtue of their accuracy or “closeness to the truth” (1998). The second is a shift from a numerically precise model of belief to an imprecise model represented by a set of probability functions (2010). We argue that the two amendments cannot be jointly satisfied. To do so, we employ a (slightly generalized) impossibility theorem of Seidenfeld, Schervish, and Kadane (2012), who show that there is no strictly proper scoring rule for imprecise probabilities.

The question then is what should give way. Joyce, who is well aware of this no-go result, thinks that a quantifiability constraint on epistemic accuracy should be relaxed to accommodate imprecision. We argue instead that another Joycean assumption, called strict immodesty, should be rejected, and we prove a representation theorem that characterizes all “mildly” immodest measures of inaccuracy.
Does rationality require imprecise credences? Many hold that it does: imprecise evidence requires correspondingly imprecise credences. I argue that this is false. The imprecise view faces the same arbitrariness worries that were meant to motivate it in the first place. It faces these worries because it incorporates a certain idealization. But doing away with this idealization effectively collapses the imprecise view into a particular kind of precise view. On this alternative, our attitudes should reflect a kind of normative uncertainty: uncertainty about what to believe. This view refutes the claim that precise credences are inappropriately informative or committal. Some argue that indeterminate evidential support requires imprecise credences; but I argue that indeterminate evidential support instead places indeterminate requirements on credences, and is compatible with the claim that rational credences may always be precise.
Unspecific evidence calls for imprecise credence. My aim is to vindicate this thought. First, I will pin down what it is that makes one's imprecise credences more or less epistemically valuable. Then I will use this account of epistemic value to delineate a class of reasonable epistemic scoring rules for imprecise credences. Finally, I will show that if we plump for one of these scoring rules as our measure of epistemic value or utility, then a popular family of decision rules recommends imprecise credences. In particular, a range of Hurwicz criteria, which generalise the Maximin decision rule, recommend imprecise credences. If correct, the moral is this: an agent who adopts precise credences, rather than imprecise ones, in the face of unspecific and incomplete evidence, goes wrong by gambling with the epistemic utility of her doxastic state in too risky a fashion. Precise credences represent an overly risky epistemic bet, according to the Hurwicz criteria.
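The Hurwicz family of decision rules mentioned in this abstract can be sketched in a few lines. The example below is a toy illustration under invented numbers, not the author's actual scoring rule or cases: the utility table, the candidate credal states, and the parameter values are all hypothetical.

```python
# Toy illustration of the Hurwicz-alpha decision rule, which weights the
# worst case against the best case: H_alpha(option) = alpha*min + (1-alpha)*max.
# Maximin is the special case alpha = 1.

def hurwicz_score(utilities, alpha):
    """Hurwicz-alpha value of an option whose possible utilities are listed."""
    return alpha * min(utilities) + (1 - alpha) * max(utilities)

def hurwicz_choice(options, alpha):
    """Pick the option with the highest Hurwicz-alpha value."""
    return max(options, key=lambda name: hurwicz_score(options[name], alpha))

# Hypothetical epistemic utilities of two credal states across two ways the
# world might be: a precise credence does very well if it happens to lie near
# the truth and very badly otherwise; an imprecise state is middling either way.
options = {
    "precise":   [1.0, -1.0],
    "imprecise": [0.3,  0.3],
}

# A sufficiently risk-averse alpha favours the imprecise state...
print(hurwicz_choice(options, alpha=0.9))   # imprecise
# ...while a risk-seeking alpha favours the precise one.
print(hurwicz_choice(options, alpha=0.1))   # precise
```

This mirrors the abstract's moral: whether precise credences count as an overly risky epistemic bet depends on how much weight the decision rule gives the worst case.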
A number of recent arguments purport to show that imprecise credences are incompatible with accuracy-first epistemology. If correct, this conclusion suggests a conflict between evidential a...
A lot of conventional work in formal epistemology presupposes that subjects have precise credences. The advent of imprecise credence models has left much of this work surprisingly intact, as traditional requirements of rationality are often simply reinterpreted as constraints on the individual functions in your representor. But when it comes to agents with imprecise credences, the requirements of rationality needn’t take this form. Whether you are rational might just as easily depend on global features of your representor. What it takes for a group of people to lift a piano is not the same as what it takes for each individual member of the group to lift it. Similarly, what it takes for an imprecise agent to be rational might not be for each member of her representor to satisfy familiar constraints on precise credence functions.

This paper is an extended investigation of global rules of rationality. I begin by distinguishing and defining various notions of globalness. Then I discuss three applications of these notions, using them to address several serious challenges that have been raised for fans of imprecise credences. Sections 2 and 3 discuss cases in which it seems like imprecise agents are forced to make bad choices about whether to gather evidence. Section 4 discusses the notorious problem of belief inertia, according to which certain imprecise agents are unable to engage in inductive learning. Finally, section 5 addresses the well-known objection that imprecise agents are doomed to violate the rational principle of Reflection. In response to each challenge, fans of imprecise credences have more argumentative resources at their disposal than previously thought, resources brought out by the observation that the rules of rationality could be global in character. Hence the notion of a global constraint is not only theoretically interesting, but also significant for the development and defense of the epistemology of imprecise credences.
Can we extend accuracy-based epistemic utility theory to imprecise credences? There's no obvious way of proceeding: some stipulations will be necessary for either (i) the notion of accuracy or (ii) the epistemic decision rule. With some prima facie plausible stipulations, imprecise credences are always required. With others, they’re always impermissible. Care is needed to reach the familiar evidential view of imprecise credence: that whether precise or imprecise credences are required depends on the character of one's evidence. I propose an epistemic utility theoretic defense of a common view about how evidence places demands on imprecise credence: that your spread of credence should cover the range of chance hypotheses left open by your evidence. I argue that objections to the form of epistemic utility theoretic argument that I use will extend to the standard motivation for epistemically mandatory imprecise credences.
What is it for an imprecise credence to be justified? It might be thought that this is not a particularly urgent question for friends of imprecise credences to answer. For one might think that its answer just depends on how a well-trodden issue in epistemology plays out, namely, that of which theory of doxastic justification, be it reliabilism, evidentialism, or some other theory, is correct. I’ll argue, however, that it’s difficult for reliabilists to accommodate imprecise credences, at least if we understand such credences to be determinate first-order attitudes. If I’m right, reliabilists will have to reject imprecise credences, and friends of imprecise credences will have to reject reliabilism. Near the end of the paper, I’ll also consider whether reliabilism can accommodate indeterminate credences.
It has been claimed that, in response to certain kinds of evidence, agents ought to adopt imprecise credences: doxastic states that are represented by sets of credence functions rather than single ones. In this paper I argue that, given some plausible constraints on accuracy measures, accuracy-centered epistemologists must reject the requirement to adopt imprecise credences. I then show that even the claim that imprecise credences are permitted is problematic for accuracy-centered epistemology. It follows that if imprecise credal states are permitted or required in the cases that their defenders appeal to, then the requirements of rationality can outstrip what would be warranted by an interest in accuracy.
We offer a new motivation for imprecise probabilities. We argue that there are propositions to which precise probability cannot be assigned, but to which imprecise probability can be assigned. In such cases the alternative to imprecise probability is not precise probability, but no probability at all. And an imprecise probability is substantially better than no probability at all. Our argument is based on the mathematical phenomenon of non-measurable sets. Non-measurable propositions cannot receive precise probabilities, but there is a natural way for them to receive imprecise probabilities. The mathematics of non-measurable sets is arcane, but its epistemological import is far-reaching; even apparently mundane propositions are liable to be affected by non-measurability. The phenomenon of non-measurability dramatically reshapes the dialectic between critics and proponents of imprecise credence. Non-measurability offers natural rejoinders to prominent critics of imprecise credence. Non-measurability even reverses some of the critics’ arguments: by the very lights that have been used to argue against imprecise credences, imprecise credences are better than precise credences.
In this paper I investigate an alternative to imprecise probabilism. Imprecise probabilism is a popular revision of orthodox Bayesianism: while the orthodox Bayesian claims that a rational agent’s belief-state can be represented by a single credence function, the imprecise probabilist claims instead that a rational agent’s belief-state can be represented by a set of such functions. The alternative that I put forward in this paper is to claim that the expression ‘credence’ is vague, and then apply the theory of supervaluationism to sentences containing this expression. This gives us a viable alternative to imprecise probabilism, and I end by comparing the two accounts. I show that supervaluationism has a simpler way of handling sentences relating the belief-states of two different people, or of the same person at two different times; that both accounts may have the resources to develop plausible decision theories; and finally that the supervaluationist can accommodate higher-order vagueness in a way that is not available to the imprecise probabilist.
Adam Elga (Philosophers’ Imprint, 10(5), 1–11, 2010) presents a diachronic puzzle to supporters of imprecise credences and argues that no acceptable decision rule for imprecise credences can deliver the intuitively correct result. Elga concludes that agents should not hold imprecise credences. In this paper, I argue for a two-part thesis. First, I show that Elga’s argument is incomplete: there is an acceptable decision rule that delivers the intuitive result. Next, I repair the argument by offering a more elaborate diachronic puzzle that is more difficult for imprecise Bayesians to avoid.
According to the Imprecise Credence Framework (ICF), a rational believer's doxastic state should be modelled by a set of probability functions rather than a single probability function, namely, the set of probability functions allowed by the evidence (Joyce [2005]). Roger White ([2010]) has recently given an arresting argument against the ICF, which has garnered a number of responses. In this article, I attempt to cast doubt on his argument. First, I point out that it's not an argument against the ICF per se, but an argument for the Principle of Indifference. Second, I present an argument that's analogous to White's. I argue that if White's premises are true, the premises of this argument are too. But the premises of my argument entail something obviously false. Therefore, White's premises must not all be true.
This paper is about a tension between two theses. The first is Value of Evidence: roughly, the thesis that it is always rational for an agent to gather and use cost-free evidence for making decisions. The second is Rationality of Imprecision: the thesis that an agent can be rationally required to adopt doxastic states that are imprecise, i.e., not representable by a single credence function. While others have noticed this tension, I offer a new diagnosis of it. I show that it arises when an agent with an imprecise doxastic state engages in an unreflective inquiry, an inquiry where they revise their beliefs using an updating rule that doesn't satisfy a weak reflection principle. In such an unreflective inquiry, certain synchronic norms of instrumental rationality can make it instrumentally irrational for an agent to gather and use cost-free evidence. I then go on to propose a diachronic norm of instrumental rationality that preserves Value of Evidence in unreflective inquiries. This, I suggest, may help us reconcile this thesis with Rationality of Imprecision.
After presenting a simple expressivist account of reports of probabilistic judgements, I explore a classic problem for it, namely the Frege-Geach problem. I argue that it is a problem not just for expressivism but for any reasonable account of ascriptions of graded judgements. I suggest that the problem can be resolved by appropriately modelling imprecise credences.
Traditional Bayesianism requires that an agent’s degrees of belief be represented by a real-valued, probabilistic credence function. However, in many cases it seems that our evidence is not rich enough to warrant such precision. In light of this, some have proposed that we instead represent an agent’s degrees of belief as a set of credence functions. This way, we can respect the evidence by requiring that the set, often called the agent’s credal state, includes all credence functions that are in some sense compatible with the evidence. One known problem for this evidentially motivated imprecise view is that in certain cases, our imprecise credence in a particular proposition will remain the same no matter how much evidence we receive. In this article I argue that the problem is much more general than has been appreciated so far, and that it’s difficult to avoid it without compromising the initial evidentialist motivation. 1 Introduction 2 Precision and Its Problems 3 Imprecise Bayesianism and Respecting Ambiguous Evidence 4 Local Belief Inertia 5 From Local to Global Belief Inertia 6 Responding to Global Belief Inertia 7 Conclusion.
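The belief-inertia phenomenon this abstract discusses can be seen in a toy computation. The sketch below is a standard textbook-style illustration, not the author's own example: it assumes a representor of Beta priors over a coin's bias, including arbitrarily extreme ones, as an evidentially motivated imprecise view might require.

```python
# Belief inertia in a toy imprecise model: the representor contains Beta(a, b)
# priors over a coin's bias. After observing k heads in n tosses, each prior's
# predictive probability of heads on the next toss is (a + k) / (a + b + n).

def posterior_predictive(a, b, heads, tosses):
    """P(next toss lands heads) for a Beta(a, b) prior, given the data."""
    return (a + heads) / (a + b + tosses)

# A representor with very extreme (but admissible) priors alongside a moderate one.
priors = [(0.001, 10000.0), (1.0, 1.0), (10000.0, 0.001)]

# Even after 10 heads in 10 tosses, the spread of predictive probabilities
# across the representor barely narrows: the interval still nearly spans [0, 1].
predictions = [posterior_predictive(a, b, heads=10, tosses=10) for a, b in priors]
print(min(predictions), max(predictions))
```

Because the representor always contains priors extreme enough to swamp any finite body of data, the resulting credal interval never tightens, which is the inertia worry in miniature.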
Many have claimed that epistemic rationality sometimes requires us to have imprecise credal states (i.e. credal states representable only by sets of credence functions) rather than precise ones (i.e. credal states representable by single credence functions). Some writers have recently argued that this claim conflicts with accuracy-centered epistemology, i.e., the project of justifying epistemic norms by appealing solely to the overall accuracy of the doxastic states they recommend. But these arguments are far from decisive. In this essay, we prove some new results, which show that there is little hope for reconciling the rationality of credal imprecision with accuracy-centered epistemology.
It is natural to think of precise probabilities as being special cases of imprecise probabilities, the special case being when one’s lower and upper probabilities are equal. I argue, however, that it is better to think of the two models as representing two different aspects of our credences, which are often vague to some degree. I show that by combining the two models into one model, and understanding that model as a model of vague credence, a natural interpretation arises that suggests a hypothesis concerning how we can improve the accuracy of aggregate credences. I present empirical results in support of this hypothesis. I also discuss how this modeling interpretation of imprecise probabilities bears upon a philosophical objection that has been raised against them, the so-called inductive learning problem.
A number of Bayesians claim that, if one has no evidence relevant to a proposition P, then one's credence in P should be spread over the interval [0, 1]. Against this, I argue: first, that it is inconsistent with plausible claims about comparative levels of confidence; second, that it precludes inductive learning in certain cases. Two motivations for the view are considered and rejected. A discussion of alternatives leads to the conjecture that there is an in-principle limitation on formal representations of belief: they cannot be both fully accurate and maximally specific.
Many philosophers regard the imprecise credence framework as a more realistic model of probabilistic inferences with imperfect empirical information than the traditional precise credence framework. Hence, it is surprising that the literature lacks any discussion on how to update one’s imprecise credences when the given evidence itself is imprecise. To fill this gap, I consider two updating principles. Unfortunately, each of them faces a serious problem. The first updating principle, which I call “generalized conditionalization,” sometimes forces an agent to change her imprecise degrees of belief even though she does not have new evidence. The second updating principle, which I call “the generalized dynamic Keynesian model,” may result in a very precise credal state although the agent does not have sufficiently strong evidence to justify such an informative doxastic state. This means that it is much more difficult to come up with an acceptable updating principle for the imprecise credence framework than one might have thought it would be.
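For orientation, the baseline rule that the abstract's two principles generalize is pointwise conditionalization on precise evidence: every function in the representor is updated by Bayes' rule on the same proposition. The sketch below illustrates only this baseline; the hypothesis names, priors, and likelihoods are all hypothetical.

```python
# Pointwise conditionalization for an imprecise credal state: each probability
# function in the representor is updated by Bayes' rule on the same evidence.

def conditionalize(prior, likelihood):
    """Bayes update of one credence function (a dict over hypotheses)."""
    unnorm = {h: prior[h] * likelihood[h] for h in prior}
    total = sum(unnorm.values())
    return {h: p / total for h, p in unnorm.items()}

# A representor: a set of priors over two hypotheses (hypothetical numbers).
representor = [
    {"H1": 0.2, "H2": 0.8},
    {"H1": 0.5, "H2": 0.5},
    {"H1": 0.7, "H2": 0.3},
]

# Likelihood of the observed (precise) evidence under each hypothesis.
likelihood = {"H1": 0.9, "H2": 0.1}

posteriors = [conditionalize(p, likelihood) for p in representor]
lo = min(q["H1"] for q in posteriors)
hi = max(q["H1"] for q in posteriors)
print(f"credence in H1 after updating: [{lo:.3f}, {hi:.3f}]")
```

The open question the abstract presses is what replaces the single `likelihood` table when the evidence itself is imprecise; neither of its two candidate generalizations is implemented here.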
Many have argued that a rational agent's attitude towards a proposition may be better represented by a probability range than by a single number. I show that in such cases an agent will have unstable betting behaviour, and so will behave in an unpredictable way. I use this point to argue against a range of responses to the ‘two bets’ argument for sharp probabilities.
There is a trade-off between specificity and accuracy in existing models of belief. Descriptions of agents in the tripartite model, which recognizes only three doxastic attitudes—belief, disbelief, and suspension of judgment—are typically accurate, but not sufficiently specific. The orthodox Bayesian model, which requires real-valued credences, is perfectly specific, but often inaccurate: we often lack precise credences. I argue, first, that a popular attempt to fix the Bayesian model by using sets of functions is also inaccurate, since it requires us to have interval-valued credences with perfectly precise endpoints. We can see this problem as analogous to the problem of higher order vagueness. Ultimately, I argue, the only way to avoid these problems is to endorse Insurmountable Unclassifiability. This principle has some surprising and radical consequences. For example, it entails that the trade-off between accuracy and specificity is in-principle unavoidable: sometimes it is simply impossible to characterize an agent’s doxastic state in a way that is both fully accurate and maximally specific. What we can do, however, is improve on both the tripartite and existing Bayesian models. I construct a new model of belief—the minimal model—that allows us to characterize agents with much greater specificity than the tripartite model, and yet which remains, unlike existing Bayesian models, perfectly accurate.
One recent topic of debate in Bayesian epistemology has been the question of whether imprecise credences can be rational. I argue that one account of imprecise credences, the orthodox treatment as defended by James M. Joyce, is untenable. Despite Joyce’s claims to the contrary, a puzzle introduced by Roger White shows that the orthodox account, when paired with Bas C. van Fraassen’s Reflection Principle, can lead to inconsistent beliefs. Proponents of imprecise credences, then, must either provide a compelling reason to reject Reflection or admit that the rational credences in White’s case are precise.
The traditional solutions to the Sleeping Beauty problem say that Beauty should have either a sharp 1/3 or sharp 1/2 credence that the coin flip was heads when she wakes. But Beauty’s evidence is incomplete so that it doesn’t warrant a precise credence, I claim. Instead, Beauty ought to have a properly imprecise credence when she wakes. In particular, her representor ought to assign \(R(H\!eads)=[0,1/2]\). I show, perhaps surprisingly, that this solution can account for many of the intuitions that motivate the traditional solutions. I also offer a new objection to Elga’s restricted version of the principle of indifference, which an opponent may try to use to collapse the imprecision.
There is currently much discussion about how decision making should proceed when an agent's degrees of belief are imprecise, represented by a set of probability functions. I show that decision rules recently discussed by Sarah Moss, Susanna Rinard and Rohan Sud all suffer from the same defect: they all struggle to rationalize diachronic ambiguity aversion. Since ambiguity aversion is among the motivations for imprecise credence, this suggests that the search for an adequate imprecise decision rule is not yet over.
Impermissivists hold that an agent with a given body of evidence has at most one rationally permitted attitude that she should adopt towards any particular proposition. Permissivists deny this, often motivating permissivism by describing scenarios that pump our intuitions that the agent could reasonably take one of several attitudes toward some proposition. We criticize the following impermissivist response: while it seems like any of that range of attitudes is permissible, what is actually required is the single broad attitude that encompasses all of these single attitudes. While this might seem like an easy way to win over permissivists, we argue that this impermissivist response leads to an indefensible epistemology; permissive intuitions are not so easily co-opted.
Rational credence should be coherent in the sense that your attitudes should not leave you open to a sure loss. Rational credence should be such that you can learn when confronted with relevant evidence. Rational credence should not be sensitive to irrelevant differences in the presentation of the epistemic situation. We explore the extent to which orthodox probabilistic approaches to rational credence can satisfy these three desiderata and find them wanting. We demonstrate that an imprecise probability approach does better. Along the way we shall demonstrate that the problem of “belief inertia” is not an issue for a large class of IP credences, and provide a solution to van Fraassen’s box factory puzzle.
Towards the end of Decision Theory with a Human Face, Richard Bradley discusses various ways a rational yet human agent, who, due to lack of evidence, is unable to make some fine-grained credibility judgments, may nonetheless make systematic decisions. One proposal is that such an agent can simply “reach judgments” on the fly, as needed for decision making. In effect, she can adopt a precise probability function to serve as proxy for her imprecise credences at the point of decision, and then subsequently abandon the proxy as she proceeds to learn more about the world. Contra Bradley, I argue that an agent who employs this strategy does not necessarily act like a precise Bayesian, since she is not necessarily immune to sure loss in diachronic, as well as synchronic, settings. I go on to suggest a method for determining a proxy probability function whereby the agent does act like a precise Bayesian, so understood.
Sometimes different partitions of the same space each seem to divide that space into propositions that call for equal epistemic treatment. Famously, equal treatment in the form of equal point-valued credence leads to incoherence. Some have argued that equal treatment in the form of equal interval-valued credence solves the puzzle. This paper shows that, once we rule out intervals with extreme endpoints, this proposal also leads to incoherence.
According to van Fraassen, inference to the best explanation is incompatible with Bayesianism. Many philosophers have argued to the contrary by proposing hybrid models of scientific reasoning with both explanationist and probabilistic elements. This paper offers another such model with two novel features. First, its Bayesian component is imprecise. Second, the domain of credence functions can be extended.
The basic Bayesian model of credence states, where each individual’s belief state is represented by a single probability measure, has been criticized as psychologically implausible, unable to represent the intuitive distinction between precise and imprecise probabilities, and normatively unjustifiable due to a need to adopt arbitrary, unmotivated priors. These arguments are often used to motivate a model on which imprecise credal states are represented by sets of probability measures. I connect this debate with recent work in Bayesian cognitive science, where probabilistic models are typically provided with explicit hierarchical structure. Hierarchical Bayesian models are immune to many classic arguments against single-measure models. They represent grades of imprecision in probability assignments automatically, have strong psychological motivation, and can be normatively justified even when certain arbitrary decisions are required. In addition, hierarchical models show much more plausible learning behavior than flat representations in terms of sets of measures, which, on standard assumptions about update, rule out simple cases of learning from a starting point of total ignorance.
In this paper I offer an alternative to the standard account of imprecise probabilism: the ‘dispositional account’. Whereas for the imprecise probabilist, an agent’s credal state is modelled by a set of credence functions, on the dispositional account an agent’s credal state is modelled by a set of sets of credence functions. On the face of it, the dispositional account looks less elegant than the standard account, so why should we be interested? I argue that the dispositional account is actually simpler, because the dispositional choice behaviour that fixes an agent’s credal state is faithfully depicted in the model of that agent’s credal state. I explore some of the implications of the account, including a surprising implication for the debate over dilation.
According to the traditional Bayesian view of credence, its structure is that of precise probability, its objects are descriptive propositions about the empirical world, and its dynamics are given by conditionalization. Each of the three essays that make up this thesis deals with a different variation on this traditional picture. The first variation replaces precise probability with sets of probabilities. The resulting imprecise Bayesianism is sometimes motivated on the grounds that our beliefs should not be more precise than the evidence calls for. One known problem for this evidentially motivated imprecise view is that in certain cases, our imprecise credence in a particular proposition will remain the same no matter how much evidence we receive. In the first essay I argue that the problem is much more general than has been appreciated so far, and that it’s difficult to avoid without compromising the initial evidentialist motivation. The second variation replaces descriptive claims with moral claims as the objects of credence. I consider three standard arguments for probabilism with respect to descriptive uncertainty—representation theorem arguments, Dutch book arguments, and accuracy arguments—in order to examine whether such arguments can also be used to establish probabilism with respect to moral uncertainty. In the second essay, I argue that by and large they can, with some caveats. First, I don’t examine whether these arguments can be given sound non-cognitivist readings, and any conclusions therefore only hold conditional on cognitivism. Second, decision-theoretic representation theorems are found to be less convincing in the moral case, because there they implausibly commit us to thinking that intertheoretic comparisons of value are always possible. Third and finally, certain considerations may lead one to think that imprecise probabilism provides a more plausible model of moral epistemology.
The third variation considers whether, in addition to conditionalization, agents may also change their minds by becoming aware of propositions they had not previously entertained, and therefore not previously assigned any probability. More specifically, I argue that if we wish to make room for reflective equilibrium in a probabilistic moral epistemology, we must allow for awareness growth. In the third essay, I sketch the outline of such a Bayesian account of reflective equilibrium. Given that this account gives a central place to awareness growth, and that the rationality constraints on belief change by awareness growth are much weaker than those on belief change by conditionalization, it follows that the rationality constraints on the credences of agents who are seeking reflective equilibrium are correspondingly weaker.
Pragmatic factors encroach on epistemic predicates not solely because the threshold for actionable belief may shift with an epistemic agent’s context, as an evidential Bayesian might insist, but also because what our inductive policy should be may shift with that context. I argue for this thesis in the context of imprecise probabilities, maintaining that the choice of the defining hyperparameter for an Imprecise Dirichlet Model for nonparametric predictive inference may correspond to the choice of an eager versus reticent inductive policy in a way that can be sensitive to stakes and other practical circumstances.
Gordon Belot has recently developed a novel argument against Bayesianism. He shows that there is an interesting class of problems that, intuitively, no rational belief forming method is likely to get right. But a Bayesian agent’s credence, before the problem starts, that she will get the problem right has to be 1. This is an implausible kind of immodesty on the part of Bayesians. My aim is to show that while this is a good argument against traditional, precise Bayesians, the argument doesn’t neatly extend to imprecise Bayesians. As such, Belot’s argument is a reason to prefer imprecise Bayesianism to precise Bayesianism.
This paper focuses on radical pooling, or the question of how to aggregate credences when there is a fundamental disagreement about which is the relevant logical space for inquiry. The solution advanced is based on the notion of consensus as common ground, where agents can find it by suspending judgment on logical possibilities. This is exemplified with cases of scientific revolution. On a formal level, the proposal uses algebraic joins and imprecise probabilities, and is shown to be compatible with the principles of marginalization, rigidity, reverse Bayesianism, and minimum divergence commonly endorsed in these contexts. Furthermore, I extend results from previous work to show that pooling sets of imprecise probabilities can satisfy important pooling axioms.
This paper has two main parts. In the first part, we motivate a kind of indeterminate, suppositional credences by discussing the prospect for a subjective interpretation of a causal Bayesian network, an important tool for causal reasoning in artificial intelligence. A CBN consists of a causal graph and a collection of interventional probabilities. The subjective interpretation in question would take the causal graph in a CBN to represent the causal structure that is believed by an agent, and interventional probabilities in a CBN to represent suppositional credences. We review a difficulty noted in the literature with such an interpretation, and suggest that a natural way to address the challenge is to go for a generalization of CBN that allows indeterminate credences. In the second part, we develop a decision-theoretic foundation for such indeterminate suppositional credences, by generalizing a theory of coherent choice functions to accommodate some form of act-state dependence. The upshot is a decision-theoretic framework that is not only rich enough to, so to speak, ground the probabilities in a subjectively interpreted causal network, but also interesting in its own right, in that it accommodates both act-state dependence and imprecise probabilities.
Recently many have argued that agents must sometimes have credences that are imprecise, represented by a set of probability measures. But opponents claim that fans of imprecise credences cannot provide a decision theory that protects agents who follow it from forgoing sure money. In particular, agents with imprecise credences appear doomed to act irrationally in diachronic cases, where they are called to make decisions at earlier and later times. I respond to this claim on behalf of imprecise credence fans. Once we appreciate the complexity of our intuitions about rational decision making, we can see that diachronic cases are in fact evidence for the essential claims motivating imprecise credence models. I argue that our decision theory for imprecise agents should mirror our decision theory for agents in moral dilemmas, and I develop permissive norms that explain our intuitions about both sorts of agents.
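The formal object recurring across these abstracts — an imprecise credal state represented by a set of probability measures — can be made concrete with a toy sketch. The credal set and event labels below are hypothetical illustrations, not drawn from any of the papers above; the only standard assumption is that lower and upper probabilities are the infimum and supremum over the set.

```python
# Illustrative sketch: an imprecise credal state as a (finite) set of
# probability functions over a two-element space {"H", "T"}.
# The credal set itself is made up for illustration.

credal_set = [
    {"H": 0.3, "T": 0.7},
    {"H": 0.5, "T": 0.5},
    {"H": 0.6, "T": 0.4},
]

def lower_prob(event, credal_set):
    """Lower probability: the infimum of P(event) over the credal set."""
    return min(p[event] for p in credal_set)

def upper_prob(event, credal_set):
    """Upper probability: the supremum of P(event) over the credal set."""
    return max(p[event] for p in credal_set)

# The agent's attitude toward "H" is the interval [0.3, 0.6] rather
# than a single number.
print(lower_prob("H", credal_set))  # 0.3
print(upper_prob("H", credal_set))  # 0.6
```

A precise credal state is then just the degenerate case in which the set is a singleton, so the lower and upper probabilities coincide.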
Standard accuracy-based approaches to imprecise credences have the consequence that it is rational to move between precise and imprecise credences arbitrarily, without gaining any new evidence. Building on the Educated Guessing Framework of Horowitz (2019), we develop an alternative accuracy-based approach to imprecise credences that does not have this shortcoming. We argue that it is always irrational to move from a precise state to an imprecise state arbitrarily, though it can be rational to move from an imprecise state to a precise state arbitrarily.
In several papers, John Norton has argued that Bayesianism cannot handle ignorance adequately, owing to its inability to distinguish between neutral and disconfirming evidence. He argues that this inability sows confusion in, e.g., anthropic reasoning in cosmology and the Doomsday argument, by allowing one to draw unwarranted conclusions from a lack of knowledge. Norton has suggested criteria for a candidate representation of neutral support. Imprecise credences (families of credal probability functions) constitute a Bayesian-friendly framework that allows us to avoid inadequate neutral priors and better handle ignorance. The imprecise model generally agrees with Norton's representation of ignorance but requires that his criterion of self-duality be reformulated or abandoned.
Dogmatism is sometimes thought to be incompatible with Bayesian models of rational learning. I show that the best model for updating imprecise credences is compatible with dogmatism.
It is a prevalent, if not popular, thesis in the metaphysics of belief that facts about an agent's beliefs depend entirely upon facts about that agent's underlying credal state. Call this thesis 'credal reductivism' and any view that endorses it a 'credal reductivist view'. An adequate credal reductivist view will accurately predict both when belief occurs and which beliefs are held appropriately, on the basis of credal facts alone. Several well-known (and some lesser-known) objections to credal reductivism turn on the inability of standard credal reductivist views to get the latter, normative, results right. This paper presents and defends a novel credal reductivist view according to which belief is a type of "imprecise credence" that escapes these objections by including an extreme credence of 1.
A review of some major topics of debate in normative decision theory from circa 2007 to 2019. Topics discussed include the ongoing debate between causal and evidential decision theory, decision instability, risk-weighted expected utility theory, decision-making with incomplete preferences, and decision-making with imprecise credences.
On an attractive, naturalistically respectable theory of intentionality, mental contents are a form of measurement system for representing behavioral and psychological dispositions. This chapter argues that a consequence of this view is that the content/attitude distinction is relative to a measurement system. As a result, there is substantial arbitrariness in the content/attitude distinction. Whether some measurement of mental states counts as characterizing the content of those states or the attitude is not a question of empirical discovery but of theoretical utility. If correct, this observation has ramifications for the theory of rationality. Some epistemologists and decision theorists have argued that imprecise credences are rationally impermissible, while others have argued that precise credences are rationally impermissible. If the measure theory of mental content is correct, however, then neither imprecise credences nor precise credences can be rationally impermissible.
This paper defends a constraint that any satisfactory decision theory must satisfy. I show how this constraint is violated by all of the decision theories endorsed in the literature that are designed to deal with cases in which opinions or values are represented by a set of functions rather than a single one. Such a decision theory is necessary to account for the existence of what Ruth Chang has called "parity" (as well as for cases in which agents have incomplete preferences or imprecise credences). The problem with all of the decision theories that have been defended to account for parity is that they are committed to a claim I call unanimity: when all of the functions in the set agree that an agent ought to do A, then the agent ought to do A. A decision theory committed to unanimity violates the constraint I defend in this paper. Thus, if parity exists, a new approach to decision theory is necessary.