About this topic
Summary

Scoring rules play an important role in statistics, decision theory, and formal epistemology.  They underpin techniques for eliciting a person's credences in statistics.  And they have been exploited in epistemology to give arguments for various norms that are thought to govern credences, such as Probabilism, Conditionalization, the Reflection Principle, the Principal Principle, and Principles of Indifference, as well as accounts of peer disagreement and the Sleeping Beauty puzzle.

A scoring rule is a function that assigns a penalty to an agent's credence (or partial belief or degree of belief) in a given proposition.  The penalty depends on whether the proposition is true or false.  Typically, if the proposition is true, the penalty increases as the credence decreases (the less confident you are in a true proposition, the more you will be penalised); and if the proposition is false, the penalty increases as the credence increases (the more confident you are in a false proposition, the more you will be penalised).

In statistics and the theory of eliciting credences, we usually interpret the penalty assigned to a credence by a scoring rule as the monetary loss incurred by an agent with that credence.  In epistemology, we sometimes interpret it as the so-called 'gradational inaccuracy' of the agent's credence:  just as a full belief in a true proposition is more accurate than a full disbelief in that proposition, a higher credence in a true proposition is more accurate than a lower one; and just as a full disbelief in a false proposition is more accurate than a full belief, a lower credence in a false proposition is more accurate than a higher one.
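The behaviour just described can be illustrated with the quadratic (Brier) score, one standard example of a scoring rule. A minimal sketch in Python (the function name is ours, purely for illustration):

```python
# The Brier (quadratic) score: the penalty for credence c in a proposition
# is (1 - c)^2 if the proposition is true, and c^2 if it is false, so the
# penalty grows as the credence moves away from the actual truth-value.

def brier_penalty(credence: float, truth: bool) -> float:
    """Return the Brier penalty for a single credence."""
    truth_value = 1.0 if truth else 0.0
    return (truth_value - credence) ** 2

# Lower credence in a truth is penalised more...
assert brier_penalty(0.9, True) < brier_penalty(0.3, True)
# ...and higher credence in a falsehood is penalised more.
assert brier_penalty(0.9, False) > brier_penalty(0.3, False)
```

The Brier score is only one member of the family; the same qualitative behaviour holds for any truth-directed scoring rule.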
Sometimes, in epistemology, we interpret the penalty given by a scoring rule more generally:  we take it to be the loss in so-called 'cognitive utility' incurred by an agent with that credence, where this is intended to incorporate a measure of the accuracy of the credence, but also measures of any other doxastic virtues it might have.

Scoring rules assign losses or penalties to individual credences.  But we can use them to define loss or penalty functions for credence functions as well:  the loss assigned to a credence function is just the sum of the losses assigned to the individual credences it gives.  Using this, we can argue for such doxastic norms as Probabilism, Conditionalization, the Principal Principle, the Principle of Indifference, the Reflection Principle, norms for resolving peer disagreement, norms for responding to higher-order evidence, and so on.  For instance, for a large collection of scoring rules, the following holds:  if a credence function violates Probabilism, then there is a credence function that satisfies Probabilism and incurs a lower penalty regardless of how the world turns out.  That is, any non-probabilistic credence function is dominated by a probabilistic one.  Also, for the same large collection of scoring rules, the following holds:  if one's current credence function is a probability function, one will expect updating by Conditionalization to incur a lower penalty than updating by any other rule.  There is a substantial and growing body of work on how scoring rules can be used to establish other doxastic norms.
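The dominance result can be seen in a toy case (our own illustration, not drawn from any particular paper): take credences in p and its negation that sum to less than 1, violating Probabilism, and project them orthogonally onto the probability simplex. Under the Brier score, the projected credences incur a lower total penalty in every world.

```python
# Toy accuracy-dominance illustration over the two propositions p, not-p.

def total_brier(credences, world):
    """Sum of Brier penalties at a world.

    `credences` are the credences in (p, not-p); `world` gives their
    truth-values as 0/1.
    """
    return sum((v - c) ** 2 for c, v in zip(credences, world))

b = (0.2, 0.3)             # violates Probabilism: 0.2 + 0.3 != 1
shift = (1 - sum(b)) / 2   # orthogonal projection onto the line x + y = 1
c = tuple(x + shift for x in b)   # (0.45, 0.55): satisfies Probabilism

for world in [(1, 0), (0, 1)]:    # p true, or p false
    assert total_brier(c, world) < total_brier(b, world)
```

This is an instance of the general theorem (for a large class of scoring rules, including the Brier score) that every non-probabilistic credence function is accuracy-dominated by a probabilistic one.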
Key works

Leonard Savage (Savage 1971) and Bruno de Finetti (de Finetti 1970) introduced the notion of a scoring rule independently.  The notion was introduced into epistemology by Jim Joyce (Joyce 1998) and Graham Oddie (Oddie 1997).  Joyce used it to justify Probabilism; Oddie used it to justify Conditionalization.  Since then, authors have improved and generalized both arguments.  Improved arguments for Probabilism can be found in (Joyce 2009), (Leitgeb & Pettigrew 2010a), (Leitgeb & Pettigrew 2010b), (Predd et al 2009), (Schervish et al manuscript), and (Pettigrew 2016).  Improved arguments for Conditionalization can be found in (Greaves & Wallace 2006), (Easwaran 2013), (Schoenfield 2017), and (Pettigrew 2016).  Furthermore, other norms have been considered, such as the Principal Principle (Pettigrew 2012), (Pettigrew 2013), the Principle of Indifference (Pettigrew 2016), the Reflection Principle (Huttegger 2013), norms for resolving peer disagreement (Moss 2011), (Levinstein 2015), (Levinstein 2017), and norms for responding to higher-order evidence (Schoenfield 2018).
Introductions

Pettigrew, Richard (2011) 'Epistemic Utility Arguments for Probabilism', Stanford Encyclopedia of Philosophy.
Contents
  1. Geometric Pooling: A User's Guide. Richard Pettigrew & Jonathan Weisberg - forthcoming - British Journal for the Philosophy of Science.
    Much of our information comes to us indirectly, in the form of conclusions others have drawn from evidence they gathered. When we hear these conclusions, how can we modify our own opinions so as to gain the benefit of their evidence? In this paper we study the method known as geometric pooling. We consider two arguments in its favour, raising several objections to one, and proposing an amendment to the other.
  2. What is justified credence? Richard Pettigrew - 2021 - Episteme 18 (1):16-30.
    In this paper, we seek a reliabilist account of justified credence. Reliabilism about justified beliefs comes in two varieties: process reliabilism (Goldman, 1979, 2008) and indicator reliabilism (Alston, 1988, 2005). Existing accounts of reliabilism about justified credence come in the same two varieties: Jeff Dunn (2015) proposes a version of process reliabilism, while Weng Hong Tang (2016) offers a version of indicator reliabilism. As we will see, both face the same objection. If they are right about what justification is, it (...)
  3. On the pragmatic and epistemic virtues of inference to the best explanation. Richard Pettigrew - 2021 - Synthese 199 (5-6):12407-12438.
    In a series of papers over the past twenty years, and in a new book, Igor Douven has argued that Bayesians are too quick to reject versions of inference to the best explanation that cannot be accommodated within their framework. In this paper, I survey their worries and attempt to answer them using a series of pragmatic and purely epistemic arguments that I take to show that Bayes’ Rule really is the only rational way to respond to your evidence.
  4. Partial belief, full belief, and accuracy–dominance. Branden Fitelson & Kenny Easwaran - manuscript
    Arguments for probabilism aim to undergird/motivate a synchronic probabilistic coherence norm for partial beliefs. Standard arguments for probabilism are all of the form: An agent S has a non-probabilistic partial belief function b iff (⇐⇒) S has some “bad” property B (in virtue of the fact that their p.b.f. b has a certain kind of formal property F). These arguments rest on Theorems (⇒) and Converse Theorems (⇐): b is non-Pr ⇐⇒ b has formal property F.
  5. Self-locating belief and the goal of accuracy. Richard Pettigrew - manuscript
    The goal of a partial belief is to be accurate, or close to the truth. By appealing to this norm, I seek norms for partial beliefs in self-locating and non-self-locating propositions. My aim is to find norms that are analogous to the Bayesian norms, which, I argue, only apply unproblematically to partial beliefs in non-self-locating propositions. I argue that the goal of a set of partial beliefs is to minimize the expected inaccuracy of those beliefs. However, in the self-locating framework, (...)
  6. A non-pragmatic dominance argument for conditionalization. Robert Williams - manuscript
    In this paper, I provide an accuracy-based argument for conditionalization (via reflection) that does not rely on norms of maximizing expected accuracy. -/- (This is a draft of a paper that I wrote in 2013. It stalled for no very good reason. I still believe the content is right).
  7. The Foundations of Epistemic Decision Theory. Jason Konek & Ben Levinstein - 2017
    According to accuracy-first epistemology, accuracy is the fundamental epistemic good. Epistemic norms — Probabilism, Conditionalization, the Principal Principle, etc. — have their binding force in virtue of helping to secure this good. To make this idea precise, accuracy-firsters invoke Epistemic Decision Theory (EpDT) to determine which epistemic policies are the best means toward the end of accuracy. Hilary Greaves and others have recently challenged the tenability of this programme. Their arguments purport to show that EpDT encourages obviously epistemically irrational behavior. (...)
  8. Epistemic Conservativity and Imprecise Credence. Jason Konek - forthcoming - Philosophy and Phenomenological Research.
    Unspecific evidence calls for imprecise credence. My aim is to vindicate this thought. First, I will pin down what it is that makes one's imprecise credences more or less epistemically valuable. Then I will use this account of epistemic value to delineate a class of reasonable epistemic scoring rules for imprecise credences. Finally, I will show that if we plump for one of these scoring rules as our measure of epistemic value or utility, then a popular family of decision rules (...)
  9. Best Laid Plans: Idealization and the Rationality–Accuracy Bridge. Brett Topey - forthcoming - British Journal for the Philosophy of Science.
    Hilary Greaves and David Wallace argue that conditionalization maximizes expected accuracy and so is a rational requirement, but their argument presupposes a particular picture of the bridge between rationality and accuracy: the Best-Plan-to-Follow picture. And theorists such as Miriam Schoenfield and Robert Steel argue that it's possible to motivate an alternative picture—the Best-Plan-to-Make picture—that does not vindicate conditionalization. I show that these theorists are mistaken: it turns out that, if an update procedure maximizes expected accuracy on the Best-Plan-to-Follow picture, it's (...)
  10. Consequences of Calibration. Robert Williams & Richard Pettigrew - forthcoming - British Journal for the Philosophy of Science.
    Drawing on a passage from Ramsey's Truth and Probability, we formulate a simple, plausible constraint on evaluating the accuracy of credences: the Calibration Test. We show that any additive, continuous accuracy measure that passes the Calibration Test will be strictly proper. Strictly proper accuracy measures are known to support the touchstone results of accuracy-first epistemology, for example vindications of probabilism and conditionalization. We show that our use of Calibration is an improvement on previous such appeals by showing how it answers (...)
  11. Necessary and Sufficient Conditions for Domination Results for Proper Scoring Rules. Alexander R. Pruss - 2024 - Review of Symbolic Logic 17 (1):132-143.
    Scoring rules measure the deviation between a forecast, which assigns degrees of confidence to various events, and reality. Strictly proper scoring rules have the property that for any forecast, the mathematical expectation of the score of a forecast p by the lights of p is strictly better than the mathematical expectation of any other forecast q by the lights of p. Forecasts need not satisfy the axioms of the probability calculus, but Predd et al. [9] have shown that given a (...)
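The strict propriety condition described in this abstract can be checked numerically for the Brier score (a sketch of ours, not from the paper; the grid search is purely illustrative): the expected Brier score of a forecast q, computed by the lights of a credence p, is uniquely minimised at q = p.

```python
# By the lights of credence p, the expected Brier score of forecast q is
#   E_p[S(q)] = p * (1 - q)^2 + (1 - p) * q^2.
# Strict propriety: this expectation is uniquely minimised at q = p.

def expected_brier(p: float, q: float) -> float:
    return p * (1 - q) ** 2 + (1 - p) * q ** 2

p = 0.7
grid = [i / 100 for i in range(101)]   # candidate forecasts 0.00, ..., 1.00
best = min(grid, key=lambda q: expected_brier(p, q))
assert best == p                       # the honest forecast wins
```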
  12. Downwards Propriety in Epistemic Utility Theory. Alejandro Pérez Carballo - 2023 - Mind 132 (525):30-62.
    Epistemic Utility Theory is often identified with the project of *axiology-first epistemology*—the project of vindicating norms of epistemic rationality purely in terms of epistemic value. One of the central goals of axiology-first epistemology is to provide a justification of the central norm of Bayesian epistemology, Probabilism. The first part of this paper presents a new challenge to axiology-first epistemology: I argue that in order to justify Probabilism in purely axiological terms, proponents of axiology-first epistemology need to justify a (...)
  13. Generalized Immodesty Principles in Epistemic Utility Theory. Alejandro Pérez Carballo - 2023 - Ergo: An Open Access Journal of Philosophy 10 (31):874–907.
    Epistemic rationality is typically taken to be immodest at least in this sense: a rational epistemic state should always take itself to be doing at least as well, epistemically and by its own lights, as any alternative epistemic state. If epistemic states are probability functions and their alternatives are other probability functions defined over the same collection of propositions, we can capture the relevant sense of immodesty by claiming that epistemic utility functions are (strictly) proper. In this paper I examine (...)
  14. When the (Bayesian) ideal is not ideal. Danilo Fraga Dantas - 2023 - Logos and Episteme 15 (3):271-298.
    Bayesian epistemologists support the norms of probabilism and conditionalization using Dutch book and accuracy arguments. These arguments assume that rationality requires agents to maximize practical or epistemic value in every doxastic state, which is evaluated from a subjective point of view (e.g., the agent’s expectancy of value). The accuracy arguments also presuppose that agents are opinionated. The goal of this paper is to discuss the assumptions of these arguments, including the measure of epistemic value. I have designed AI agents based (...)
  15. Epistemic Consequentialism, Veritism, and Scoring Rules. Marc-Kevin Daoust & Charles Côté-Bouchard - 2023 - Erkenntnis 88 (4):1741-1765.
    We argue that there is a tension between two monistic claims that are the core of recent work in epistemic consequentialism. The first is a form of monism about epistemic value, commonly known as veritism: accuracy is the sole final objective to be promoted in the epistemic domain. The other is a form of monism about a class of epistemic scoring rules: that is, strictly proper scoring rules are the only legitimate measures of inaccuracy. These two monisms, we argue, are (...)
  16. Meta-Inductive Probability Aggregation. Christian J. Feldbacher-Escamilla & Gerhard Schurz - 2023 - Theory and Decision 95 (4):663-689.
    There is a plurality of formal constraints for aggregating probabilities of a group of individuals. Different constraints characterise different families of aggregation rules. In this paper, we focus on the families of linear and geometric opinion pooling rules which consist in linear, respectively, geometric weighted averaging of the individuals’ probabilities. For these families, it is debated which weights exactly are to be chosen. By applying the results of the theory of meta-induction, we want to provide a general rationale, namely, optimality, (...)
  17. Updating without evidence. Yoaav Isaacs & Jeffrey Sanford Russell - 2023 - Noûs 57 (3):576-599.
    Sometimes you are unreliable at fulfilling your doxastic plans: for example, if you plan to be fully confident in all truths, probably you will end up being fully confident in some falsehoods by mistake. In some cases, there is information that plays the classical role of evidence—your beliefs are perfectly discriminating with respect to some possible facts about the world—and there is a standard expected‐accuracy‐based justification for planning to conditionalize on this evidence. This planning‐oriented justification extends to some cases where (...)
  18. A scoring rule and global inaccuracy measure for contingent varying importance. Pavel Janda - 2023 - Philosophical Studies 180 (12):3323-3352.
    Levinstein recently presented a challenge to accuracy-first epistemology. He claims that there is no strictly proper, truth-directed, additive, and differentiable scoring rule that recognises the contingency of varying importance, i.e., the fact that an agent might value the inaccuracy of her credences differently at different possible worlds. In my response, I will argue that accuracy-first epistemology can capture the contingency of varying importance while maintaining its commitment to additivity, propriety, truth-directedness, and differentiability. I will construct a scoring rule — a (...)
  19. On Accuracy and Coherence with Infinite Opinion Sets. Mikayla Kelley - 2023 - Philosophy of Science 90 (1):92-128.
    There is a well-known equivalence between avoiding accuracy dominance and having probabilistically coherent credences (see, e.g., de Finetti 1974, Joyce 2009, Predd et al. 2009, Pettigrew 2016). However, this equivalence has been established only when the set of propositions on which credence functions are defined is finite. In this paper, I establish connections between accuracy dominance and coherence when credence functions are defined on an infinite set of propositions. In particular, I establish the necessary results to extend the classic accuracy (...)
  20. Accuracy and infinity: a dilemma for subjective Bayesians. Mikayla Kelley & Sven Neth - 2023 - Synthese 201 (12):1-14.
    We argue that subjective Bayesians face a dilemma: they must offend against the spirit of their permissivism about rational credence or reject the principle that one should avoid accuracy dominance.
  21. Accuracy, Deference, and Chance. Benjamin A. Levinstein - 2023 - Philosophical Review 132 (1):43-87.
    Chance both guides our credences and is an objective feature of the world. How and why we should conform our credences to chance depends on the underlying metaphysical account of what chance is. I use considerations of accuracy (how close your credences come to truth-values) to propose a new way of deferring to chance. The principle I endorse, called the Trust Principle, requires chance to be a good guide to the world, permits modest chances, tells us how to listen to (...)
  22. Evidence and the epistemic betterness. Ilho Park - 2023 - Synthese 202 (4):1-25.
    It seems intuitive that our credal states are improved if we obtain evidence favoring truth over any falsehood. In this regard, Fallis and Lewis have recently provided and discussed some formal versions of such an intuition, which they name ‘the Monotonicity Principle’ and ‘Elimination’. They argue, with those principles in hand, that the Brier rule, one of the most popular rules of accuracy, is not a good measure, and that accuracy-firsters cannot underwrite both probabilism and conditionalization. In this paper, I (...)
  23. Bayesian updating when what you learn might be false. Richard Pettigrew - 2023 - Erkenntnis 88 (1):309-324.
    Rescorla (Erkenntnis, 2020) has recently pointed out that the standard arguments for Bayesian Conditionalization assume that whenever I become certain of something, it is true. Most people would reject this assumption. In response, Rescorla offers an improved Dutch Book argument for Bayesian Conditionalization that does not make this assumption. My purpose in this paper is two-fold. First, I want to illuminate Rescorla’s new argument by giving a very general Dutch Book argument that applies to many cases of updating beyond those (...)
  24. The dialectics of accuracy arguments for probabilism. Alexander R. Pruss - 2023 - Synthese 201 (5):1-26.
    Scoring rules measure the deviation between a credence assignment and reality. Probabilism holds that only those credence assignments that satisfy the axioms of probability are rationally admissible. Accuracy-based arguments for probabilism observe that given certain conditions on a scoring rule, the score of any non-probability is dominated by the score of a probability. The conditions in the arguments we will consider include propriety: the claim that the expected accuracy of _p_ is not beaten by the expected accuracy of any other (...)
  25. Just As Planned: Bayesianism, Externalism, and Plan Coherence. Pablo Zendejas Medina - 2023 - Philosophers' Imprint 23.
    Two of the most influential arguments for Bayesian updating ("Conditionalization") -- Hilary Greaves' and David Wallace's Accuracy Argument and David Lewis' Diachronic Dutch Book Argument -- turn out to impose a strong and surprising limitation on rational uncertainty: that one can never be rationally uncertain of what one's evidence is. Many philosophers ("externalists") reject that claim, and now seem to face a difficult choice: either to endorse the arguments and give up Externalism, or to reject the arguments and lose some of (...)
  26. Degrees of incoherence, Dutch bookability & guidance value. Jason Konek - 2022 - Philosophical Studies 180 (2):395-428.
    Why is it good to be less, rather than more incoherent? Julia Staffel, in her excellent book “Unsettled Thoughts,” answers this question by showing that if your credences are incoherent, then there is some way of nudging them toward coherence that is guaranteed to make them more accurate and reduce the extent to which they are Dutch-bookable. This seems to show that such a nudge toward coherence makes them better fit to play their key epistemic and practical roles: representing the (...)
  27. On the Best Accuracy Arguments for Probabilism. Michael Nielsen - 2022 - Philosophy of Science 89 (3):621-630.
    In a recent paper, Pettigrew reports a generalization of the celebrated accuracy-dominance theorem due to Predd et al., but Pettigrew’s proof is incorrect. I will explain the mistakes and provide a correct proof.
  28. Epistemic Risk and the Demands of Rationality. Richard Pettigrew - 2022 - Oxford, UK: Oxford University Press.
    How much does rationality constrain what we should believe on the basis of our evidence? According to this book, not very much. For most people and most bodies of evidence, there is a wide range of beliefs that rationality permits them to have in response to that evidence. The argument, which takes inspiration from William James' ideas in 'The Will to Believe', proceeds from two premises. The first is a theory about the basis of epistemic rationality. It's called epistemic utility (...)
  29. Accuracy-First Epistemology Without Additivity. Richard Pettigrew - 2022 - Philosophy of Science 89 (1):128-151.
    Accuracy arguments for the core tenets of Bayesian epistemology differ mainly in the conditions they place on the legitimate ways of measuring the inaccuracy of our credences. The best existing arguments rely on three conditions: Continuity, Additivity, and Strict Propriety. In this paper, I show how to strengthen the arguments based on these conditions by showing that the central mathematical theorem on which each depends goes through without assuming Additivity.
  30. Higher-Order Evidence and the Dynamics of Self-Location: An Accuracy-Based Argument for Calibrationism. Brett Topey - 2022 - Erkenntnis 89 (4):1407-1433.
    The thesis that agents should calibrate their beliefs in the face of higher-order evidence—i.e., should adjust their first-order beliefs in response to evidence suggesting that the reasoning underlying those beliefs is faulty—is sometimes thought to be in tension with Bayesian approaches to belief update: in order to obey Bayesian norms, it’s claimed, agents must remain steadfast in the face of higher-order evidence. But I argue that this claim is incorrect. In particular, I motivate a minimal constraint on a reasonable treatment (...)
  31. Strict propriety is weak. Catrin Campbell-Moore & Benjamin A. Levinstein - 2021 - Analysis 81 (1):8-13.
    Considerations of accuracy – the epistemic good of having credences close to truth-values – have led to the justification of a host of epistemic norms. These arguments rely on specific ways of measuring accuracy. In particular, the accuracy measure should be strictly proper. However, the main argument for strict propriety supports only weak propriety. But strict propriety follows from weak propriety given strict truth directedness and additivity. So no further argument is necessary.
  32. Scoring, truthlikeness, and value. Igor Douven - 2021 - Synthese 199 (3-4):8281-8298.
    There is an ongoing debate about which rule we ought to use for scoring probability estimates. Much of this debate has been premised on scoring-rule monism, according to which there is exactly one best scoring rule. In previous work, I have argued against this position. The argument given there was based on purely a priori considerations, notably the intuition that scoring rules should be sensitive to truthlikeness relations if, and only if, such relations are present among whichever hypotheses are at (...)
  33. The value of cost-free uncertain evidence. Patryk Dziurosz-Serafinowicz & Dominika Dziurosz-Serafinowicz - 2021 - Synthese 199 (5-6):13313-13343.
    We explore the question of whether cost-free uncertain evidence is worth waiting for in advance of making a decision. A classical result in Bayesian decision theory, known as the value of evidence theorem, says that, under certain conditions, when you update your credences by conditionalizing on some cost-free and certain evidence, the subjective expected utility of obtaining this evidence is never less than the subjective expected utility of not obtaining it. We extend this result to a type of update method, (...)
  34. Symmetry and partial belief geometry. Stefan Lukits - 2021 - European Journal for Philosophy of Science 11 (3):1-24.
    When beliefs are quantified as credences, they are related to each other in terms of closeness and accuracy. The “accuracy first” approach in formal epistemology wants to establish a normative account for credences based entirely on the alethic properties of the credence: how close it is to the truth. To pull off this project, there is a need for a scoring rule. There is widespread agreement about some constraints on this scoring rule, but not whether a unique scoring rule stands (...)
  35. XIII—Dutch Book and Accuracy Theorems. Anna Mahtani - 2021 - Proceedings of the Aristotelian Society 120 (3):309-327.
    Dutch book and accuracy arguments are used to justify certain rationality constraints on credence functions. Underlying these Dutch book and accuracy arguments are associated theorems, and I show that the interpretation of these theorems can vary along a range of dimensions. Given that the theorems can be interpreted in a variety of different ways, what is the status of the associated arguments? I consider three possibilities: we could aggregate the results of the differently interpreted theorems in some way, and motivate (...)
  36. Accuracy-dominance and conditionalization. Michael Nielsen - 2021 - Philosophical Studies 178 (10):3217-3236.
    Epistemic decision theory produces arguments with both normative and mathematical premises. I begin by arguing that philosophers should care about whether the mathematical premises (1) are true, (2) are strong, and (3) admit simple proofs. I then discuss a theorem that Briggs and Pettigrew (2020) use as a premise in a novel accuracy-dominance argument for conditionalization. I argue that the theorem and its proof can be improved in a number of ways. First, I present a counterexample that shows that one (...)
  37. A deference model of epistemic authority. Sofia Ellinor Bokros - 2020 - Synthese 198 (12):12041-12069.
    How should we adjust our beliefs in light of the testimony of those who are in a better epistemic position than ourselves, such as experts and other epistemic superiors? In this paper, I develop and defend a deference model of epistemic authority. The paper attempts to resolve the debate between the preemption view and the total evidence view of epistemic authority by taking an accuracy-first approach to the issue of how we should respond to authoritative and expert testimony. I argue (...)
  38. An Accuracy‐Dominance Argument for Conditionalization. R. A. Briggs & Richard Pettigrew - 2020 - Noûs 54 (1):162-181.
    Epistemic decision theorists aim to justify Bayesian norms by arguing that these norms further the goal of epistemic accuracy—having beliefs that are as close as possible to the truth. The standard defense of Probabilism appeals to accuracy dominance: for every belief state that violates the probability calculus, there is some probabilistic belief state that is more accurate, come what may. The standard defense of Conditionalization, on the other hand, appeals to expected accuracy: before the evidence is in, one should expect (...)
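The accuracy-dominance claim for Probabilism summarized in this abstract can be checked numerically in a toy case. The sketch below is an illustration of the general result, not the authors' own construction: it uses the Brier score over a single proposition p and its negation, where credences summing to more than 1 violate Probabilism and turn out to be strictly less accurate in every world than a nearby probabilistic pair.

```python
# Toy illustration of accuracy dominance under the Brier score.
# Credences over (p, not-p) that sum to 1.2 violate Probabilism and are
# strictly Brier-dominated, in every world, by a probabilistic alternative.

def brier_inaccuracy(credences, world):
    """Sum of squared distances between credences and truth values (1/0)."""
    return sum((truth - c) ** 2 for c, truth in zip(credences, world))

incoherent = (0.6, 0.6)    # c(p) = c(not-p) = 0.6: not a probability function
coherent = (0.5, 0.5)      # a probabilistic alternative

for world in [(1, 0), (0, 1)]:   # p true / p false
    assert brier_inaccuracy(coherent, world) < brier_inaccuracy(incoherent, world)
    print(world, brier_inaccuracy(incoherent, world), brier_inaccuracy(coherent, world))
```

Symmetry makes the dominating pair easy to spot here; in the general theorem it is found by projecting the incoherent credence function onto the set of probability functions.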
  39. Time-Slice Rationality and Self-Locating Belief. David Builes - 2020 - Philosophical Studies 177 (10):3033-3049.
    The epistemology of self-locating belief concerns itself with how rational agents ought to respond to certain kinds of indexical information. I argue that those who endorse the thesis of Time-Slice Rationality ought to endorse a particular view about the epistemology of self-locating belief, according to which ‘essentially indexical’ information is never evidentially relevant to non-indexical matters. I close by offering some independent motivations for endorsing Time-Slice Rationality in the context of the epistemology of self-locating belief.
  40. Dilating and contracting arbitrarily. David Builes, Sophie Horowitz & Miriam Schoenfield - 2020 - Noûs 56 (1):3-20.
    Standard accuracy-based approaches to imprecise credences have the consequence that it is rational to move between precise and imprecise credences arbitrarily, without gaining any new evidence. Building on the Educated Guessing Framework of Horowitz (2019), we develop an alternative accuracy-based approach to imprecise credences that does not have this shortcoming. We argue that it is always irrational to move from a precise state to an imprecise state arbitrarily; however, it can be rational to move from an imprecise state to a (...)
  41. Accuracy Monism and Doxastic Dominance: Reply to Steinberger. Matt Hewson - 2020 - Analysis 80 (3):450-456.
    Given the standard dominance conditions used in accuracy theories for outright belief, epistemologists must invoke epistemic conservatism if they are to avoid licensing belief in both a proposition and its negation. Florian Steinberger (2019) charges the committed accuracy monist — the theorist who thinks that the only epistemic value is accuracy — with being unable to motivate this conservatism. I show that the accuracy monist can avoid Steinberger’s charge by moving to a subtly different set of dominance conditions. Having done (...)
  42. What is conditionalization, and why should we do it? Richard Pettigrew - 2020 - Philosophical Studies 177 (11):3427-3463.
    Conditionalization is one of the central norms of Bayesian epistemology. But there are a number of competing formulations, and a number of arguments that purport to establish it. In this paper, I explore which formulations of the norm are supported by which arguments. In their standard formulations, each of the arguments I consider here depends on the same assumption, which I call Deterministic Updating. I will investigate whether it is possible to amend these arguments so that they no longer depend (...)
  43. An Accuracy Argument in Favor of Ranking Theory. Eric Raidl & Wolfgang Spohn - 2020 - Journal of Philosophical Logic 49 (2):283-313.
    Fitelson and McCarthy have proposed an accuracy measure for confidence orders which favors probability measures and Dempster-Shafer belief functions as accounts of degrees of belief and excludes ranking functions. Their accuracy measure only penalizes mistakes in confidence comparisons. We propose an alternative accuracy measure that also rewards correct confidence comparisons. Thus we conform to both of William James’ maxims: “Believe truth! Shun error!” We combine the two maxims, penalties and rewards, into one criterion that we call prioritized accuracy optimization. That (...)
  44. Proper scoring rules in epistemic decision theory. Maomei Wang - 2020 - Dissertation, Lingnan University
    Epistemic decision theory (EpDT) aims to defend a variety of epistemic norms in terms of their facilitation of epistemic ends. One of the most important components of EpDT is the scoring rule. This thesis addresses some problems about scoring rules in EpDT. I consider scoring rules both for precise credences and for imprecise credences. For scoring rules in the context of precise credences, I examine the rationale for requiring a scoring rule to be strictly proper, and argue that no (...)
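Strict propriety, the requirement this thesis scrutinizes, says an agent should uniquely minimize her expected penalty by reporting her true credence. A minimal numerical sketch (my own illustration, with an assumed truth-probability of 0.7) contrasts the strictly proper Brier score with the improper absolute-value score, under which the expected penalty is minimized by an extreme report rather than an honest one:

```python
# Expected penalty of reporting credence x when the truth-probability is q.

def expected_brier(report, q):
    # q * penalty-if-true + (1 - q) * penalty-if-false
    return q * (1 - report) ** 2 + (1 - q) * report ** 2

def expected_absolute(report, q):
    return q * (1 - report) + (1 - q) * report

q = 0.7
grid = [i / 100 for i in range(101)]
best_brier = min(grid, key=lambda x: expected_brier(x, q))
best_abs = min(grid, key=lambda x: expected_absolute(x, q))
print(best_brier, best_abs)  # 0.7 1.0: only the Brier score rewards honesty
```

Under the absolute-value score, every agent with credence above 0.5 expects to do best by reporting certainty, which is why elicitation and epistemic-utility arguments both insist on strict propriety.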
  45. The uniqueness of local proper scoring rules: the logarithmic family. Jingni Yang - 2020 - Theory and Decision 88 (2):315-322.
    Local proper scoring rules provide convenient tools for measuring subjective probabilities. Savage (1971, pp. 783–801) has shown that the only local proper scoring rule for more than two exclusive events is the logarithmic family. We generalize Savage's result by relaxing properness and the domain, and provide a simpler proof.
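Locality, the property at issue in Savage's theorem, means the penalty depends only on the credence assigned to the outcome that actually obtains, as the logarithmic penalty -log r_i does. The sketch below (a numerical illustration with made-up distributions, not Savage's proof) checks propriety of the log score over three exclusive events: expected log loss against a true distribution q is minimized by reporting q itself, an instance of Gibbs' inequality.

```python
import math

# The log score is local: the penalty -log(r[i]) depends only on the
# credence given to the outcome i that obtains. Expected log loss of a
# reported distribution r against the true distribution q is the
# cross-entropy, which q itself minimizes.

def expected_log_loss(report, q):
    return -sum(qi * math.log(ri) for qi, ri in zip(q, report))

q = (0.5, 0.3, 0.2)
rivals = [(0.4, 0.4, 0.2), (1 / 3, 1 / 3, 1 / 3), (0.6, 0.2, 0.2)]
honest = expected_log_loss(q, q)
for r in rivals:
    assert honest < expected_log_loss(r, q)  # honesty beats every rival here
print(round(honest, 4))
```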
  46. A Theory of Epistemic Risk. Boris Babic - 2019 - Philosophy of Science 86 (3):522-550.
    I propose a general alethic theory of epistemic risk according to which the riskiness of an agent’s credence function encodes her relative sensitivity to different types of graded error. After motivating and mathematically developing this approach, I show that the epistemic risk function is a scaled reflection of expected inaccuracy. This duality between risk and information enables us to explore the relationship between attitudes to epistemic risk, the choice of scoring rules in epistemic utility theory, and the selection of priors (...)
  47. Accuracy and Credal Imprecision. Dominik Berger & Nilanjan Das - 2019 - Noûs 54 (3):666-703.
    Many have claimed that epistemic rationality sometimes requires us to have imprecise credal states (i.e. credal states representable only by sets of credence functions) rather than precise ones (i.e. credal states representable by single credence functions). Some writers have recently argued that this claim conflicts with accuracy-centered epistemology, i.e., the project of justifying epistemic norms by appealing solely to the overall accuracy of the doxastic states they recommend. But these arguments are far from decisive. In this essay, we prove some (...)
  48. Accuracy and ur-prior conditionalization. Nilanjan Das - 2019 - Review of Symbolic Logic 12 (1):62-96.
    Recently, several epistemologists have defended an attractive principle of epistemic rationality, which we shall call Ur-Prior Conditionalization. In this essay, I ask whether we can justify this principle by appealing to the epistemic goal of accuracy. I argue that any such accuracy-based argument will be in tension with Evidence Externalism, i.e., the view that an agent's evidence may entail non-trivial propositions about the external world. This is because any such argument will crucially require the assumption that, independently of all empirical evidence, (...)
  49. Lockeans Maximize Expected Accuracy. Kevin Dorst - 2019 - Mind 128 (509):175-211.
    The Lockean Thesis says that you must believe p iff you’re sufficiently confident of it. On some versions, the 'must' asserts a metaphysical connection; on others, it asserts a normative one. On some versions, 'sufficiently confident' refers to a fixed threshold of credence; on others, it varies with proposition and context. Claim: the Lockean Thesis follows from epistemic utility theory—the view that rational requirements are constrained by the norm to promote accuracy. Different versions of this theory generate different versions of (...)
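The link between credal thresholds and expected accuracy can be illustrated with a toy calculation (the utilities below are my own illustrative choices, not Dorst's): if a true outright belief is worth R, a false one costs W, and suspending scores 0, then believing p maximizes expected epistemic utility exactly when one's credence c satisfies c*R - (1 - c)*W >= 0, that is, when c >= W / (R + W).

```python
# A toy Lockean threshold derived from expected epistemic utility.
# Assumed (illustrative) utilities: true belief scores R, false belief
# scores -W, suspending scores 0.

def should_believe(c, R=1.0, W=3.0):
    """Believing p has non-negative expected utility given credence c."""
    return c * R - (1 - c) * W >= 0  # equivalent to c >= W / (R + W)

threshold = 3.0 / (1.0 + 3.0)  # 0.75 for these assumed utilities
assert not should_believe(0.7)
assert should_believe(0.8)
assert should_believe(threshold)
```

Varying R and W moves the threshold, which matches the abstract's point that different versions of epistemic utility theory generate different versions of the Lockean Thesis.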
  50. Accuracy, conditionalization, and probabilism. Don Fallis & Peter J. Lewis - 2019 - Synthese 198 (5):4017-4033.
    Accuracy-based arguments for conditionalization and probabilism appear to have a significant advantage over their Dutch Book rivals. They rely only on the plausible epistemic norm that one should try to decrease the inaccuracy of one’s beliefs. Furthermore, conditionalization and probabilism apparently follow from a wide range of measures of inaccuracy. However, we argue that there is an under-appreciated diachronic constraint on measures of inaccuracy which limits the measures from which one can prove conditionalization, and none of the remaining measures allow (...)