Richard Pettigrew offers an extended investigation into a particular way of justifying the rational principles that govern our credences. The main principles that he justifies are the central tenets of Bayesian epistemology, though many other related principles are discussed along the way. Pettigrew looks to decision theory in order to ground his argument. He treats an agent's credences as if they were a choice she makes between different options, gives an account of the purely epistemic utility enjoyed by different sets of credences, and then appeals to the principles of decision theory to show that, when epistemic utility is measured in this way, the credences that violate the principles listed above are ruled out as irrational. The account of epistemic utility set out here is the veritist's: the sole fundamental source of epistemic utility for credences is their accuracy. Thus, Pettigrew conducts an investigation in the version of epistemic utility theory known as accuracy-first epistemology.
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its sequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm. Accuracy: An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In this paper, we make this norm mathematically precise in various ways. We describe three epistemic dilemmas that an agent might face if she attempts to follow Accuracy, and we show that the only inaccuracy measures that do not give rise to such dilemmas are the quadratic inaccuracy measures. In the sequel, we derive the main tenets of Bayesianism from the relevant mathematical versions of Accuracy to which this characterization of the legitimate inaccuracy measures gives rise, but we also show that Jeffrey conditionalization has to be replaced by a different method of update in order for Accuracy to be satisfied.
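For concreteness, the quadratic inaccuracy measures singled out here include the familiar Brier score. Here is a minimal sketch; the function and proposition names are illustrative, not the paper's own notation:

```python
# A minimal sketch of a quadratic (Brier) inaccuracy measure of the kind
# the paper characterises; names are illustrative.

def brier_inaccuracy(credences, truth_values):
    """Sum of squared distances between credences and actual truth values."""
    return sum((credences[p] - truth_values[p]) ** 2 for p in credences)

# Example: credence 0.7 that it will rain; in fact it rains (truth value 1).
print(brier_inaccuracy({"rain": 0.7}, {"rain": 1}))  # ~0.09
```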
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its prequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm. Accuracy: An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In the prequel, we made this norm mathematically precise; in this paper, we derive its consequences. We show that the two core tenets of Bayesianism follow from the norm, while the characteristic claim of the Objectivist Bayesian follows from the norm along with an extra assumption. Finally, we consider Richard Jeffrey’s proposed generalization of conditionalization. We show not only that his rule cannot be derived from the norm, unless the requirement of Rigidity is imposed from the start, but further that the norm reveals it to be illegitimate. We end by deriving an alternative updating rule for those cases in which Jeffrey’s is usually supposed to apply.
In Bayesian epistemology, the problem of the priors is this: How should we set our credences (or degrees of belief) in the absence of evidence? That is, how should we set our prior or initial credences, the credences with which we begin our credal life? David Lewis liked to call an agent at the beginning of her credal journey a superbaby. The problem of the priors asks for the norms that govern these superbabies.

The Principle of Indifference gives a very restrictive answer. It demands that such an agent divide her credences equally over all possibilities. That is, according to the Principle of Indifference, only one initial credence function is permissible, namely, the uniform distribution. In this paper, we offer a novel argument for the Principle of Indifference. We call it the Argument from Accuracy.
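As a concrete illustration of the Principle's demand, here is a minimal sketch of the uniform initial credence function it mandates; the setup and names are illustrative:

```python
# A minimal sketch of the single prior the Principle of Indifference
# permits: equal credence in each of n mutually exclusive and jointly
# exhaustive possibilities.

def indifferent_prior(possibilities):
    """Return the uniform credence function over the possibilities."""
    n = len(possibilities)
    return {w: 1 / n for w in possibilities}

print(indifferent_prior(["heads", "tails"]))  # {'heads': 0.5, 'tails': 0.5}
```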
What we value, like, endorse, want, and prefer changes over the course of our lives. Richard Pettigrew presents a theory of rational decision making for agents who recognise that their values will change over time and whose decisions will affect those future times.
In this paper, we seek a reliabilist account of justified credence. Reliabilism about justified beliefs comes in two varieties: process reliabilism (Goldman, 1979, 2008) and indicator reliabilism (Alston, 1988, 2005). Existing accounts of reliabilism about justified credence come in the same two varieties: Jeff Dunn (2015) proposes a version of process reliabilism, while Weng Hong Tang (2016) offers a version of indicator reliabilism. As we will see, both face the same objection. If they are right about what justification is, it is mysterious why we care about justification, for neither of the accounts explains how justification is connected to anything of epistemic value. We will call this the Connection Problem. I begin by describing Dunn’s process reliabilism and Tang’s indicator reliabilism. I argue that, understood correctly, they are, in fact, extensionally equivalent. That is, Dunn and Tang reach the top of the same mountain, albeit by different routes. However, I argue that both face the Connection Problem. In response, I offer my own version of reliabilism, which is both process and indicator, and I argue that it solves that problem. Furthermore, I show that it is also extensionally equivalent to Dunn’s reliabilism and Tang’s. Thus, I reach the top of the same mountain as well.
Our beliefs come in degrees. I'm 70% confident it will rain tomorrow, and 0.001% sure my lottery ticket will win. What's more, we think these degrees of belief should abide by certain principles if they are to be rational. For instance, you shouldn't believe that a person's taller than 6ft more strongly than you believe that they're taller than 5ft, since the former entails the latter. In Dutch Book arguments, we try to establish the principles of rationality for degrees of belief by appealing to their role in guiding decisions. In particular, we show that degrees of belief that don't satisfy the principles will always guide action in some way that is bad or undesirable. In this Element, we present Dutch Book arguments for the principles of Probabilism, Conditionalization, and the Reflection Principle, among others, and we formulate and consider the most serious objections to them.
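To illustrate the style of argument, here is a toy Dutch Book against an agent whose credences in a proposition and its negation sum to less than 1; the numbers and betting conventions are illustrative assumptions, not the Element's own examples:

```python
# A toy Dutch Book: an agent whose credences in A and not-A sum to less
# than 1 will sell bets at prices that guarantee her a net loss.

cred_A, cred_notA = 0.4, 0.4        # violates additivity: 0.4 + 0.4 < 1

# The agent treats credence * stake as the fair price for a bet paying
# the stake if the proposition is true. She sells both bets at £1 stakes.
income = cred_A * 1 + cred_notA * 1  # the bookie pays her £0.80 in total

for A_is_true in (True, False):
    payout = 1                       # exactly one of A, not-A wins
    print(f"A is {A_is_true}: agent's net = {income - payout:+.2f}")
# Either way the agent loses £0.20: a sure loss, i.e. a Dutch Book.
```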
In ‘A Non-Pragmatic Vindication of Probabilism’, Jim Joyce attempts to ‘depragmatize’ de Finetti’s prevision argument for the claim that our partial beliefs ought to satisfy the axioms of the probability calculus. In this paper, I adapt Joyce’s argument to give a non-pragmatic vindication of various versions of David Lewis’ Principal Principle, such as the version based on Isaac Levi's account of admissibility, Michael Thau and Ned Hall's New Principle, and Jenann Ismael's Generalized Principal Principle. Joyce enumerates properties that must be had by any measure of the distance from a set of partial beliefs to the set of truth values; he shows that, on any such measure, and for any set of partial beliefs that violates the probability axioms, there is a set that satisfies those axioms that is closer to every possible set of truth values. I replace truth values by objective chances in his argument; I show that for any set of partial beliefs that violates the probability axioms or a version of the Principal Principle, there is a set that satisfies them that is closer to every possible set of objective chances.
We often ask for the opinion of a group of individuals. How strongly does the scientific community believe that the rate at which sea levels are rising has increased over the last 200 years? How likely does the UK Treasury think it is that there will be a recession if the country leaves the European Union? What are these group credences that such questions request? And how do they relate to the individual credences assigned by the members of the particular group in question? According to the credal judgement aggregation principle known as linear pooling, the credence function of a group should be a weighted average or linear pool of the credence functions of the individuals in the group. In this chapter, I give an argument for linear pooling based on considerations of accuracy. And I respond to two standard objections to the aggregation principle.
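A minimal sketch of linear pooling itself, assuming equal weights and illustrative credences:

```python
# Linear pooling: the group credence in each proposition is a weighted
# average of the members' credences. Weights must be non-negative and
# sum to 1.

def linear_pool(credence_functions, weights):
    """Pointwise weighted average of the individual credence functions."""
    props = credence_functions[0].keys()
    return {p: sum(w * c[p] for w, c in zip(weights, credence_functions))
            for p in props}

experts = [{"recession": 0.2}, {"recession": 0.6}]
print(linear_pool(experts, [0.5, 0.5]))  # {'recession': 0.4}
```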
According to certain normative theories in epistemology, rationality requires us to be logically omniscient. Yet this prescription clashes with our ordinary judgments of rationality. How should we resolve this tension? In this paper, I focus particularly on the logical omniscience requirement in Bayesian epistemology. Building on a key insight by Hacking (1967), I develop a version of Bayesianism that permits logical ignorance. This includes: an account of the synchronic norms that govern a logically ignorant individual at any given time; an account of how we reduce our logical ignorance by learning logical facts and how we should update our credences in response to such evidence; and an account of when logical ignorance is irrational and when it isn’t. At the end, I explain why the requirement of logical omniscience remains true of ideal agents with no computational, processing, or storage limitations.
Accuracy arguments for the core tenets of Bayesian epistemology differ mainly in the conditions they place on the legitimate ways of measuring the inaccuracy of our credences. The best existing arguments rely on three conditions: Continuity, Additivity, and Strict Propriety. In this paper, I show how to strengthen the arguments based on these conditions by showing that the central mathematical theorem on which each depends goes through without assuming Additivity.
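For reference, Strict Propriety admits a standard formulation along the following lines (the notation here is mine, not the paper's):

```latex
% Strict Propriety, in a standard formulation: an inaccuracy measure
% $\mathfrak{I}$ is strictly proper iff each probabilistic credence
% function expects itself to be least inaccurate.
\[
\sum_{w} p(w)\,\mathfrak{I}(c, w) \;>\; \sum_{w} p(w)\,\mathfrak{I}(p, w)
\quad \text{for all probabilistic } p \text{ and all credence functions } c \neq p.
\]
```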
Conditionalization is one of the central norms of Bayesian epistemology. But there are a number of competing formulations, and a number of arguments that purport to establish it. In this paper, I explore which formulations of the norm are supported by which arguments. In their standard formulations, each of the arguments I consider here depends on the same assumption, which I call Deterministic Updating. I will investigate whether it is possible to amend these arguments so that they no longer depend on it. As I show, whether this is possible depends on the formulation of the norm under consideration.
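For concreteness, here is the norm in its simplest deterministic form, where the evidence E is learned with certainty; the example numbers are illustrative:

```python
# Conditionalization: on learning evidence E with certainty, the new
# credence in A should be the old conditional credence
# c(A | E) = c(A & E) / c(E), provided c(E) > 0.

def conditionalize(c_A_and_E, c_E):
    """New credence in A after learning E, from c(A & E) and c(E) > 0."""
    return c_A_and_E / c_E

# Example: c(rain & cloudy) = 0.3, c(cloudy) = 0.5; then I learn 'cloudy'.
print(conditionalize(0.3, 0.5))  # new credence in rain: 0.6
```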
Jim Joyce has presented an argument for Probabilism based on considerations of epistemic utility [Joyce, 1998]. In a recent paper, I adapted this argument to give an argument for Probabilism and the Principal Principle based on similar considerations [Pettigrew, 2012]. Joyce’s argument assumes that a credence in a true proposition is better the closer it is to maximal credence, whilst a credence in a false proposition is better the closer it is to minimal credence. By contrast, my argument in that paper assumed (roughly) that a credence in a proposition is better the closer it is to the objective chance of that proposition. In this paper, I present an epistemic utility argument for Probabilism and the Principal Principle that retains Joyce’s assumption rather than the alternative I endorsed in the earlier paper. I argue that this results in a superior argument for these norms.
When is a belief justified? There are three families of arguments we typically use to support different accounts of justification: arguments from our intuitive responses to vignettes that involve the concept; arguments from the theoretical role we would like the concept to play in epistemology; and arguments from the practical, moral, and political uses to which we wish to put the concept. I focus particularly on the third sort, and specifically on arguments of this sort offered by Clayton Littlejohn in Justification and the Truth-Connection and Amia Srinivasan in ‘Radical Externalism’ (2018) in favour of externalism. I counter Srinivasan’s argument in two ways: first, I show that the internalist’s concept of justification might figure just as easily in the sorts of structural explanation Srinivasan thinks our political goals require us to give; second, I argue that the internalist’s concept is needed for a particular political task, namely, to help us build more effective defences against what I call epistemic weapons. I conclude that we should adopt an Alstonian pluralism about the concept of justification.
The Dutch Book Argument for Probabilism assumes Ramsey's Thesis (RT), which purports to determine the prices an agent is rationally required to pay for a bet. Recently, a new objection to Ramsey's Thesis has emerged (Hedden 2013, Wronski & Godziszewski 2017, Wronski 2018); I call this the Expected Utility Objection. According to this objection, it is Maximise Subjective Expected Utility (MSEU) that determines the prices an agent is required to pay for a bet, and this often disagrees with Ramsey's Thesis. I suggest two responses to the Expected Utility Objection. First, we might be permissive: agents are permitted to pay any price that is required or permitted by RT, and they are permitted to pay any price that is required or permitted by MSEU. This allows us to give a revised version of the Dutch Book Argument for Probabilism, which I call the Permissive Dutch Book Argument. Second, I suggest that even the proponent of the Expected Utility Objection should admit that RT gives the correct answer in certain very limited cases, and I show that, together with MSEU, this very restricted version of RT gives a new pragmatic argument for Probabilism, which I call the Bookless Pragmatic Argument.
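A toy case of the disagreement between RT and MSEU, assuming an illustrative risk-averse (concave) utility function and made-up wealth and stakes:

```python
# Where RT and MSEU come apart: for a risk-averse agent, the maximum
# price MSEU sanctions for a bet falls below the price RT fixes.
import math

wealth, stake, cred = 100.0, 100.0, 0.5
u = math.sqrt                   # concave, i.e. risk-averse, utility

rt_price = cred * stake         # RT: fair price = credence * stake

# MSEU: the highest whole-pound price at which buying the bet is no
# worse in expected utility than abstaining.
mseu_price = max(p for p in range(0, 101)
                 if cred * u(wealth - p + stake)
                    + (1 - cred) * u(wealth - p) >= u(wealth))

print(rt_price, mseu_price)     # RT says 50.0; MSEU tops out at 43
```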
Questions about the relation between identity and discernibility are important both in philosophy and in model theory. We show how a philosophical question about identity and discernibility can be ‘factorized’ into a philosophical question about the adequacy of a formal language to the description of the world, and a mathematical question about discernibility in this language. We provide formal definitions of various notions of discernibility and offer a complete classification of their logical relations. Some new and surprising facts are proved; for instance, that weak discernibility corresponds to discernibility in a language with constants for every object, and that weak discernibility is the most discerning nontrivial discernibility relation.
A decision theory is self-recommending if, when you ask it which decision theory you should use, it considers itself to be among the permissible options. I show that many alternatives to expected utility theory are not self-recommending, and I argue that this tells against them.
Famously, William James held that there are two commandments that govern our epistemic life: Believe truth! Shun error! In this paper, I give a formal account of James' claim using the tools of epistemic utility theory. I begin by giving the account for categorical doxastic states – that is, full belief, full disbelief, and suspension of judgment. Then I will show how the account plays out for graded doxastic states – that is, credences. The latter part of the paper thus answers a question left open in Pettigrew.
Beliefs come in different strengths. An agent's credence in a proposition is a measure of the strength of her belief in that proposition. Various norms for credences have been proposed. Traditionally, philosophers have tried to argue for these norms by showing that any agent who violates them will be led by her credences to make bad decisions. In this article, we survey a new strategy for justifying these norms. The strategy begins by identifying an epistemic utility function and a decision-theoretic norm; we then show that the decision-theoretic norm applied to the epistemic utility function yields the norm for credences that we wish to justify. We survey results already obtained using this strategy, and we suggest directions for future research.
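A small worked instance of this strategy, taking the Brier score as the epistemic utility function and dominance as the decision-theoretic norm; the credences are illustrative:

```python
# The non-probabilistic credences (0.4, 0.4) in A and not-A are
# accuracy-dominated by the probabilistic credences (0.5, 0.5):
# the latter are less inaccurate however the world turns out.

def brier(c_A, c_notA, A_true):
    t_A, t_notA = (1, 0) if A_true else (0, 1)
    return (c_A - t_A) ** 2 + (c_notA - t_notA) ** 2

for world in (True, False):
    print(brier(0.4, 0.4, world), ">", brier(0.5, 0.5, world))
# ~0.52 > 0.5 in both worlds, so dominance rules out (0.4, 0.4).
```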
If numbers were identified with any of their standard set-theoretic realizations, then they would have various non-arithmetical properties that mathematicians are reluctant to ascribe to them. Dedekind and later structuralists conclude that we should refrain from ascribing to numbers such ‘foreign’ properties. We first rehearse why it is hard to provide an acceptable formulation of this conclusion. Then we investigate some forms of abstraction meant to purge mathematical objects of all ‘foreign’ properties. One form is inspired by Frege; the other by Dedekind. We argue that both face problems.
Veritism says that the fundamental source of epistemic value for a doxastic state is the extent to which it represents the world correctly: that is, its fundamental epistemic value is deter...
There are decision problems where the preferences that seem rational to many people cannot be accommodated within orthodox decision theory in the natural way. In response, a number of alternatives to the orthodoxy have been proposed. In this paper, I offer an argument against those alternatives and in favour of the orthodoxy. I focus on preferences that seem to encode sensitivity to risk. And I focus on the alternative to the orthodoxy proposed by Lara Buchak: risk-weighted expected utility theory. I will show that the orthodoxy can be made to accommodate all of the preferences that Buchak’s theory can accommodate.
In a recent paper in this journal, James Hawthorne, Jürgen Landes, Christian Wallmann, and Jon Williamson argue that the principal principle entails the principle of indifference. In this article, I argue that it does not. Lewis’s version of the principal principle notoriously depends on a notion of admissibility, which Lewis uses to restrict its application. HLWW base their argument on certain intuitions concerning when one proposition is admissible for another: Conditions 1 and 2. There are two ways of reading their argument, depending on how you understand the status of these conditions. Reading 1: The correct account of admissibility is determined independently of these two principles, and yet these two principles follow from that correct account. Reading 2: The correct account of admissibility is determined in part by these two principles, so that the principles follow from that account but only because the correct account is constrained so that it must satisfy them. HLWW show that given an account of admissibility on which Conditions 1 and 2 hold, the principal principle entails the principle of indifference. I argue that on either reading of the argument, it fails. First, I argue that there is a plausible account of admissibility on which Conditions 1 and 2 are false. That defeats Reading 1. Next, I argue that the intuitions that lead us to assent to Condition 2 also lead us to assent to other very closely related principles that are inconsistent with Condition 2. This, I claim, casts doubt on the reliability of those intuitions, and thus removes our justification for Condition 2. This defeats Reading 2 of the HLWW argument. Thus, the argument fails.
Consider Phoebe and Daphne. Phoebe has credences in 1 million propositions. Daphne, on the other hand, has credences in all of these propositions, but she's also got credences in 999 million other propositions. Phoebe's credences are all very accurate. Each of Daphne's credences, in contrast, is not very accurate at all; each is a little more accurate than it is inaccurate, but not by much. Whose doxastic state is better, Phoebe's or Daphne's? It is clear that this question is analogous to a question that has exercised ethicists over the past thirty years. How do we weigh a population consisting of some number of exceptionally happy and satisfied individuals against another population consisting of a much greater number of people whose lives are only just worth living? This is the question that occasions population ethics. In this paper, I go in search of the correct population ethics for credal states.
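A toy numerical version of the comparison, with made-up accuracy scores, brings out the analogy:

```python
# Does total accuracy or average accuracy settle whose doxastic state is
# better? Scores are illustrative 'accuracy' values; higher is better.

phoebe = [0.99] * 10        # few credences, each highly accurate
daphne = [0.51] * 10_000    # many credences, each barely more accurate
                            # than inaccurate

print(sum(phoebe), sum(daphne))                  # totals: ~9.9 vs ~5100
print(sum(phoebe) / len(phoebe),
      sum(daphne) / len(daphne))                 # averages: ~0.99 vs ~0.51
# Totalism favours Daphne; averagism favours Phoebe: the credal analogue
# of the repugnant conclusion in population ethics.
```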
In “A Nonpragmatic Vindication of Probabilism”, Jim Joyce argues that our credences should obey the axioms of the probability calculus by showing that, if they don't, there will be alternative credences that are guaranteed to be more accurate than ours. But it seems that accuracy is not the only goal of credences: there is also the goal of matching one's credences to one's evidence. I will consider four ways in which we might make this latter goal precise: on the first, the norms to which this goal gives rise act as ‘side constraints’ on our choice of credences; on the second, matching credences to evidence is a goal that is weighed against accuracy to give the overall cognitive value of credences; on the third, as on the second, proximity to the evidential goal and proximity to the goal of accuracy are both sources of value, but this time they are incomparable; on the fourth, the evidential goal is not an independent goal at all, but rather a byproduct of the goal of accuracy. All but the fourth way of making the evidential goal precise are pluralist about credal virtue: there is the virtue of being accurate and there is the virtue of matching the evidence and neither reduces to the other. The fourth way is monist about credal virtue: there is just the virtue of being accurate. The pluralist positions lead to problems for Joyce's argument; the monist position avoids them. I endorse the latter.
Rescorla (Erkenntnis, 2020) has recently pointed out that the standard arguments for Bayesian Conditionalization assume that whenever I become certain of something, it is true. Most people would reject this assumption. In response, Rescorla offers an improved Dutch Book argument for Bayesian Conditionalization that does not make this assumption. My purpose in this paper is two-fold. First, I want to illuminate Rescorla’s new argument by giving a very general Dutch Book argument that applies to many cases of updating beyond those covered by Conditionalization, and then showing how Rescorla’s version follows as a special case of that. Second, I want to show how to generalise R. A. Briggs and Richard Pettigrew’s Accuracy Dominance argument to avoid the assumption that Rescorla has identified (Briggs and Pettigrew in Noûs, 2018). In both cases, these arguments proceed by first establishing a very general reflection principle.
How does logic relate to rational belief? Is logic normative for belief, as some say? What, if anything, do facts about logical consequence tell us about norms of doxastic rationality? In this paper, we consider a range of putative logic-rationality bridge principles. These purport to relate facts about logical consequence to norms that govern the rationality of our beliefs and credences. To investigate these principles, we deploy a novel approach, namely, epistemic utility theory. That is, we assume that doxastic attitudes have different epistemic value depending on how accurately they represent the world. We then use the principles of decision theory to determine which of the putative logic-rationality bridge principles we can derive from considerations of epistemic utility.
(This is for the series Elements of Decision Theory published by Cambridge University Press and edited by Martin Peterson)

Our beliefs come in degrees. I believe some things more strongly than I believe others. I believe very strongly that global temperatures will continue to rise during the coming century; I believe slightly less strongly that the European Union will still exist in 2029; and I believe much less strongly that Cardiff is east of Edinburgh. My credence in something is a measure of the strength of my belief in it; it represents my level of confidence in it. These are the states of mind we report when we say things like ‘I’m 20% confident I switched off the gas before I left’ or ‘I’m 99.9% confident that it is raining outside’.

There are laws that govern these credences. For instance, I shouldn't be more confident that sea levels will rise by over 2 metres in the next 100 years than I am that they'll rise by over 1 metre, since the latter is true if the former is. This book is about a particular way we might try to establish these laws of credence: the Dutch Book arguments. (For briefer overviews of these arguments, see Alan Hájek’s entry in the Oxford Handbook of Rational and Social Choice and Susan Vineberg’s entry in the Stanford Encyclopaedia.)

We begin, in Chapter 2, with the standard formulation of the various Dutch Book arguments that we'll consider: arguments for Probabilism, Countable Additivity, Regularity, and the Principal Principle. In Chapter 3, we subject this standard formulation to rigorous stress-testing, and make some small adjustments so that it can withstand various objections. What we are left with is still recognisably the orthodox Dutch Book argument. In Chapter 4, we set out the Dutch Strategy argument for Conditionalization. In Chapters 5 and 6, we consider two objections to Dutch Book arguments that cannot be addressed by making small adjustments. Instead, we must completely redesign those arguments, replacing them with ones that share a general approach but few specific details. In Chapter 7, we consider a further objection to which I do not have a response. In Chapter 8, we ask what happens to the Dutch Book arguments if we change certain features of the basic framework in which we've been working: first, we ask how Dutch Book arguments fare when we consider credences in self-locating propositions, such as It is Monday; second, we lift the assumption that the background logic is classical and explore Dutch Book arguments for non-classical logics; third, we lift the assumption that an agent's credal state can be represented by a single assignment of numerical values to the propositions she considers. In Chapter 9, we present the mathematical results that underpin these arguments.
In a series of papers over the past twenty years, and in a new book, Igor Douven has argued that Bayesians are too quick to reject versions of inference to the best explanation that cannot be accommodated within their framework. In this paper, I survey these worries and attempt to answer them using a series of pragmatic and purely epistemic arguments that I take to show that Bayes’ Rule really is the only rational way to respond to your evidence.
Probabilism says an agent is rational only if her credences are probabilistic. This paper is concerned with the so-called Accuracy Dominance Argument for Probabilism. This argument begins with the claim that the sole fundamental source of epistemic value for a credence is its accuracy. It then shows that, however we measure accuracy, any non-probabilistic credences are accuracy-dominated: that is, there are alternative credences that are guaranteed to be more accurate than them. It follows that non-probabilistic credences are irrational. In this paper, I identify and explore a lacuna in this argument. I grant that, if the only doxastic attitudes are credal attitudes, the argument succeeds. But many philosophers say that, alongside credences, there are other doxastic attitudes, such as full beliefs. What's more, those philosophers typically claim, these other doxastic attitudes are closely connected to credences, either as a matter of necessity or normatively. Now, since full beliefs are also doxastic attitudes, it seems that, like credences, the sole source of epistemic value for them is their accuracy. Thus, if we wish to measure the epistemic value of an agent's total doxastic state, we must include not only the accuracy of her credences, but also the accuracy of her full beliefs. However, if this is the case, there is a problem for the Accuracy Dominance Argument for Probabilism. For all the argument says, there might be non-probabilistic credences such that there is no total doxastic state that accuracy-dominates the total doxastic state to which those credences belong.
Does category theory provide a foundation for mathematics that is autonomous with respect to the orthodox foundation in a set theory such as ZFC? We distinguish three types of autonomy: logical, conceptual, and justificatory. Focusing on a categorical theory of sets, we argue that a strong case can be made for its logical and conceptual autonomy. Its justificatory autonomy turns on whether the objects of a foundation for mathematics should be specified only up to isomorphism, as is customary in other branches of contemporary mathematics. If such a specification suffices, then a category-theoretical approach will be highly appropriate. But if sets have a richer ‘nature’ than is preserved under isomorphism, then such an approach will be inadequate.
There are many kinds of epistemic experts to which we might wish to defer in setting our credences. These include: highly rational agents, objective chances, our own future credences, our own current credences, and evidential probabilities. But exactly what constraint does a deference requirement place on an agent's credences? In this paper we consider three answers, inspired by three principles that have been proposed for deference to objective chances. We consider how these options fare when applied to the other kinds of epistemic experts mentioned above. Of the three deference principles we consider, we argue that two of the options face insuperable difficulties. The third, on the other hand, fares well, at least when it is applied in a particular way.
With his Humean thesis on belief, Leitgeb seeks to say how beliefs and credences ought to interact with one another. To argue for this thesis, he enumerates the roles beliefs must play and the properties they must have if they are to play them, together with norms that beliefs and credences intuitively must satisfy. He then argues that beliefs can play these roles and satisfy these norms if, and only if, they are related to credences in the way set out in the Humean thesis. I begin by raising questions about the roles that Leitgeb takes beliefs to play and the properties he thinks they must have if they are to play them successfully. After that, I question the assumption that, if there are categorical doxastic states at all, then there is just one kind of them—to wit, beliefs—such that the states of that kind must play all of these roles and conform to all of these norms. Instead, I will suggest, if there are categorical doxastic states, there may be many different kinds of such state such that, for each kind, the states of that kind play some of the roles Leitgeb takes belief to play and satisfy some of the norms he lists. As I will argue, the usual reasons for positing categorical doxastic states alongside credences all tell equally in favour of accepting a plurality of kinds of them. This is the thesis I dub pluralism about belief states.
In formal epistemology, we use mathematical methods to explore the questions of epistemology and rational choice. What can we know? What should we believe and how strongly? How should we act based on our beliefs and values? We begin by modelling phenomena like knowledge, belief, and desire using mathematical machinery, just as a biologist might model the fluctuations of a pair of competing populations, or a physicist might model the turbulence of a fluid passing through a small aperture. Then, we explore, discover, and justify the laws governing those phenomena, using the precision that mathematical machinery affords. For example, we might represent a person by the strengths of their beliefs, and we might measure these using real numbers, which we call credences. Having done this, we might ask what the norms are that govern that person when we represent them in that way. How should those credences hang together? How should the credences change in response to evidence? And how should those credences guide the person’s actions? This is the approach of the first six chapters of this handbook. In the second half, we consider different representations—the set of propositions a person believes; their ranking of propositions by their plausibility. And in each case we ask again what the norms are that govern a person so represented. Or, we might represent them as having both credences and full beliefs, and then ask how those two representations should interact with one another.

This handbook is incomplete, as such ventures often are. Formal epistemology is a much wider topic than we present here. One omission, for instance, is social epistemology, where we consider not only individual believers but also the epistemic aspects of their place in a social world. Michael Caie’s entry on doxastic logic touches on one part of this topic, but there is much more. Relatedly, there is no entry on epistemic logic, nor any on knowledge more generally. There are still more gaps. These omissions should not be taken as ideological choices. This material is missing, not because it is any less valuable or interesting, but because we failed to secure it in time. Rather than delay publication further, we chose to go ahead with what is already a substantial collection. We anticipate a further volume in the future that will cover more ground.

Why an open access handbook on this topic? A number of reasons. The topics covered here are large and complex and need the space allowed by the sort of 50 page treatment that many of the authors give. We also wanted to show that, using free and open software, one can overcome a major hurdle facing open access publishing, even on topics with complex typesetting needs. With the right software, one can produce attractive, clear publications at reasonably low cost. Indeed this handbook was created on a budget of exactly £0 (≈ $0). Our thanks to PhilPapers for serving as publisher, and to the authors: we are enormously grateful for the effort they put into their entries.
How should your opinion change in response to the opinion of an epistemic peer? We show that the pooling rule known as "upco" is the unique answer satisfying some natural desiderata. If your revised opinion will influence your opinions on other matters by Jeffrey conditionalization, then upco is the only standard pooling rule that ensures the order in which peers are consulted makes no difference. Popular proposals like linear pooling, geometric pooling, and harmonic pooling cannot boast the same. In fact, no alternative to upco can if it possesses four minimal properties which these proposals share.
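A minimal sketch of upco for a single proposition, assuming the standard multiplicative formulation (multiply across peers and renormalise); the names are illustrative:

```python
# Upco for a single proposition A: multiply the peers' credences in A,
# multiply their credences in not-A, then renormalise.

def upco(credences_in_A):
    """Pool credences in A by multiplying across peers and renormalising."""
    yes = no = 1.0
    for c in credences_in_A:
        yes *= c
        no *= 1 - c
    return yes / (yes + no)

# The order of consultation makes no difference: multiplication commutes.
print(upco([0.8, 0.6]))   # ~0.857
print(upco([0.6, 0.8]))   # same result
```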
Philosophers of mathematics agree that the only interpretation of arithmetic that takes that discourse at 'face value' is one on which the expressions 'N', '0', '1', '+', and '×' are treated as proper names. I argue that the interpretation on which these expressions are treated as akin to free variables has an equal claim to be the default interpretation of arithmetic. I show that no purely syntactic test can distinguish proper names from free variables, and I observe that any semantic test that can must beg the question. I draw the same conclusion concerning areas of mathematics beyond arithmetic. This paper is a greatly extended version of my response to Stewart Shapiro's paper in the conference 'Structuralism in physics and mathematics' held in Bristol on 2–3 December, 2006.
to appear in Lambert, E. and J. Schwenkler (eds.) Transformative Experience (OUP)

L. A. Paul (2014, 2015) argues that the possibility of epistemically transformative experiences poses serious and novel problems for the orthodox theory of rational choice, namely, expected utility theory — I call her argument the Utility Ignorance Objection. In a pair of earlier papers, I responded to Paul’s challenge (Pettigrew 2015, 2016), and a number of other philosophers have responded in similar ways (Dougherty et al. 2015, Harman 2015) — I call our argument the Fine-Graining Response. Paul has her own reply to this response, which we might call the Authenticity Reply. But Sarah Moss has recently offered an alternative reply to the Fine-Graining Response on Paul’s behalf (Moss 2017) — we’ll call it the No Knowledge Reply. This appeals to the knowledge norm of action, together with Moss’ novel and intriguing account of probabilistic knowledge. In this paper, I consider Moss’ reply and argue that it fails. I argue first that it fails as a reply made on Paul’s behalf, since it forces us to abandon many of the features of Paul’s challenge that make it distinctive and with which Paul herself is particularly concerned. Then I argue that it fails as a reply independent of its fidelity to Paul’s intentions.
In this paper, we explore how we should aggregate the degrees of belief of a group of agents to give a single coherent set of degrees of belief, when at least some of those agents might be probabilistically incoherent. There are a number of ways of aggregating degrees of belief, and there are a number of ways of fixing incoherent degrees of belief. When we have picked one of each, should we aggregate first and then fix, or fix first and then aggregate? Or should we try to do both at once? And when do these different procedures agree with one another? In this paper, we focus particularly on the final question.
We often learn the credences of others without getting to hear the evidence on which they’re based. And, in these cases, it is often unfeasible or overly onerous to update on this social evidence by conditionalizing on it. How, then, should we respond to it? We consider four methods for aggregating your credences with the credences of others: arithmetic, geometric, multiplicative, and harmonic pooling. Each performs well for some purposes and poorly for others. We describe these in Sections 1-4. In Section 5, we explore three specific applications of our general results: How should we understand cases in which each individual raises their credences in response to learning the credences of the others (Section 5.1)? How do the updating rules used by individuals affect the epistemic performance of the group as a whole (Section 5.2)? How does a population that obeys the Uniqueness Thesis perform compared to one that doesn’t (Section 5.3)?
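A minimal sketch of the four pooling methods for a single proposition with equal weights; these are the standard formulations, and the renormalisation required for full credence functions is omitted here:

```python
# Four ways of pooling two peers' credences in a single proposition A.
from statistics import geometric_mean, harmonic_mean, mean

creds = [0.2, 0.6]                  # the two peers' credences in A

arithmetic = mean(creds)            # linear pooling: 0.4
geometric = geometric_mean(creds)   # geometric pooling (unnormalised): ~0.346
harmonic = harmonic_mean(creds)     # harmonic pooling (unnormalised): 0.3
# Multiplicative pooling: multiply and renormalise against not-A.
multiplicative = (0.2 * 0.6) / (0.2 * 0.6 + 0.8 * 0.4)   # ~0.273

print(arithmetic, geometric, harmonic, multiplicative)
```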
Hierarchical Bayesian models (HBMs) provide an account of Bayesian inference in a hierarchically structured hypothesis space. Scientific theories are plausibly regarded as organized into hierarchies in many cases, with higher levels sometimes called ‘paradigms’ and lower levels encoding more specific or concrete hypotheses. Therefore, HBMs provide a useful model for scientific theory change, showing how higher-level theory change may be driven by the impact of evidence on lower levels. HBMs capture features described in the Kuhnian tradition, particularly the idea that higher-level theories guide learning at lower levels. In addition, they help resolve certain issues for Bayesians, such as scientific preference for simplicity and the problem of new theories.
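A toy two-level model of the kind described, with made-up paradigms, hypotheses, and data:

```python
# A toy hierarchical Bayesian model: a higher-level 'paradigm' fixes a
# lower-level hypothesis (here, a coin's bias), and evidence at the
# bottom level propagates up to revise credence in the paradigms.

paradigms = {"fair-ish": {"prior": 0.5, "bias": 0.5},
             "trick":    {"prior": 0.5, "bias": 0.9}}

data = [1, 1, 1, 0, 1]   # observed flips: 1 = heads, 0 = tails

def likelihood(bias, flips):
    out = 1.0
    for f in flips:
        out *= bias if f else 1 - bias
    return out

joint = {k: v["prior"] * likelihood(v["bias"], data)
         for k, v in paradigms.items()}
total = sum(joint.values())
for k in joint:
    print(k, joint[k] / total)   # posterior over paradigms (~0.32 vs ~0.68),
                                 # driven by evidence at the lower level
```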
This paper examines the relationship between the KK principle and the epistemological theses of externalism and internalism. In particular, we examine arguments from Okasha (2013) and Greco (2014) which deny that we can derive the denial of the KK principle from externalism.
In the first half of this paper, I argue that group belief ascriptions are highly ambiguous. What's more, in many cases, neither the available contextual factors nor known pragmatic considerations are sufficient to allow the audience to identify which of the many possible meanings is intended. In the second half, I argue that this ambiguity often has bad consequences when a group belief ascription is heard and taken as testimony. And indeed it has these consequences even when the ascription is true on the speaker's intended interpretation, when the speaker does not intend to mislead and indeed intends to cooperatively inform, and when the audience incorporates the evidence from the testimony as they should. I conclude by arguing that these consequences should lead us to stop using such ascriptions.