to appear in Lambert, E. and J. Schwenkler (eds.) Transformative Experience (OUP)

L. A. Paul (2014, 2015) argues that the possibility of epistemically transformative experiences poses serious and novel problems for the orthodox theory of rational choice, namely, expected utility theory — I call her argument the Utility Ignorance Objection. In a pair of earlier papers, I responded to Paul’s challenge (Pettigrew 2015, 2016), and a number of other philosophers have responded in similar ways (Dougherty et al. 2015, Harman 2015) — I call our argument the Fine-Graining Response. Paul has her own reply to this response, which we might call the Authenticity Reply. But Sarah Moss has recently offered an alternative reply to the Fine-Graining Response on Paul’s behalf (Moss 2017) — we’ll call it the No Knowledge Reply. This appeals to the knowledge norm of action, together with Moss’ novel and intriguing account of probabilistic knowledge. In this paper, I consider Moss’ reply and argue that it fails. I argue first that it fails as a reply made on Paul’s behalf, since it forces us to abandon many of the features of Paul’s challenge that make it distinctive and with which Paul herself is particularly concerned. Then I argue that it fails as a reply independent of its fidelity to Paul’s intentions.
Descartes says that the Meditations contains the foundations of his physics. But how does the work advance his geometrical view of the corporeal world? His argument for this view of matter is often taken to be concluded with the proof of the existence of bodies in the Sixth Meditation. This paper focuses on the work that follows the proof, where Descartes pursues the question of what we should think about qualities such as light, sound and pain, as well as the size and shape of particular bodies. His inquiry makes crucial use of the notion of a teaching of nature originating from God, as contrasted with an apparent teaching of nature originating from habit. I attempt to reconstruct Descartes's use of these notions in order to clarify the way in which he makes space for his geometrical conception of the corporeal world.
We have been teaching gender issues and feminist theory for many years, and we know that there is certainly a diversity of views among women, and men, about what counts as feminist or as good for women. Some may see a competent woman running for V.P. as inevitably a step forward for women's equality. But consider this.
Jim Joyce has presented an argument for Probabilism based on considerations of epistemic utility [Joyce, 1998]. In a recent paper, I adapted this argument to give an argument for Probabilism and the Principal Principle based on similar considerations [Pettigrew, 2012]. Joyce’s argument assumes that a credence in a true proposition is better the closer it is to maximal credence, whilst a credence in a false proposition is better the closer it is to minimal credence. By contrast, my argument in that paper assumed (roughly) that a credence in a proposition is better the closer it is to the objective chance of that proposition. In this paper, I present an epistemic utility argument for Probabilism and the Principal Principle that retains Joyce’s assumption rather than the alternative I endorsed in the earlier paper. I argue that this results in a superior argument for these norms.
Richard Pettigrew offers an extended investigation into a particular way of justifying the rational principles that govern our credences. The main principles that he justifies are the central tenets of Bayesian epistemology, though many other related principles are discussed along the way. Pettigrew looks to decision theory in order to ground his argument. He treats an agent's credences as if they were a choice she makes between different options, gives an account of the purely epistemic utility enjoyed by different sets of credences, and then appeals to the principles of decision theory to show that, when epistemic utility is measured in this way, the credences that violate those principles are ruled out as irrational. The account of epistemic utility set out here is the veritist's: the sole fundamental source of epistemic utility for credences is their accuracy. Thus, Pettigrew conducts an investigation in the version of epistemic utility theory known as accuracy-first epistemology.
In Bayesian epistemology, the problem of the priors is this: How should we set our credences (or degrees of belief) in the absence of evidence? That is, how should we set our prior or initial credences, the credences with which we begin our credal life? David Lewis liked to call an agent at the beginning of her credal journey a superbaby. The problem of the priors asks for the norms that govern these superbabies.

The Principle of Indifference gives a very restrictive answer. It demands that such an agent divide her credences equally over all possibilities. That is, according to the Principle of Indifference, only one initial credence function is permissible, namely, the uniform distribution. In this paper, we offer a novel argument for the Principle of Indifference. We call it the Argument from Accuracy.
The Protestant theologian Karl Girgensohn first came to public attention in 1903 with his early work on the nature of religion, which expresses a strong religious-philosophical standpoint. Its core is a cognitive theory of the religious, in which the idea of God is central. Taking Girgensohn’s biography into account, the present contribution addresses this early study on the nature of religion and outlines the author’s transition from a philosophical to an experimental-introspective approach to the study of religiosity, which then became the foundation for the Dorpat school of the psychology of religion. Based on Girgensohn’s early work, implications for contemporary empirical theology are finally proposed.
In ‘A Non-Pragmatic Vindication of Probabilism’, Jim Joyce attempts to ‘depragmatize’ de Finetti’s prevision argument for the claim that our partial beliefs ought to satisfy the axioms of probability calculus. In this paper, I adapt Joyce’s argument to give a non-pragmatic vindication of various versions of David Lewis’ Principal Principle, such as the version based on Isaac Levi's account of admissibility, Michael Thau and Ned Hall's New Principle, and Jenann Ismael's Generalized Principal Principle. Joyce enumerates properties that must be had by any measure of the distance from a set of partial beliefs to the set of truth values; he shows that, on any such measure, and for any set of partial beliefs that violates the probability axioms, there is a set that satisfies those axioms that is closer to every possible set of truth values. I replace truth values by objective chances in his argument; I show that for any set of partial beliefs that violates the probability axioms or a version of the Principal Principle, there is a set that satisfies them that is closer to every possible set of objective chances.
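Joyce's dominance result, which the paper above adapts, can be illustrated numerically. The following is a minimal sketch of my own, not taken from either paper: it assumes a toy agent with credences in just two propositions, p and not-p, and uses squared Euclidean distance as the measure of distance from the truth-value vectors.

```python
# Toy illustration (my own numbers): an incoherent credence pair lies farther
# from BOTH possible truth-value vectors than some coherent pair does.

def sq_dist(a, b):
    """Squared Euclidean distance between two credence/truth-value vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

incoherent = (0.2, 0.3)  # credences in (p, not-p); they sum to 0.5, violating the axioms
coherent = (0.45, 0.55)  # the nearest pair summing to 1 (orthogonal projection)

# Truth-value vectors: the world where p is true, and the world where not-p is true.
worlds = [(1.0, 0.0), (0.0, 1.0)]

for w in worlds:
    # The coherent credences are strictly closer to the truth in every world.
    assert sq_dist(coherent, w) < sq_dist(incoherent, w)

print([round(sq_dist(coherent, w), 3) for w in worlds],
      [round(sq_dist(incoherent, w), 3) for w in worlds])
```

The same pattern, with truth-value vectors replaced by possible chance functions, is the shape of the argument the abstract describes.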
In this unconventional article, Sarah Banet-Weiser, Rosalind Gill and Catherine Rottenberg conduct a three-way ‘conversation’ in which they all take turns outlining how they understand the relationship among postfeminism, popular feminism and neoliberal feminism. It begins with a short introduction, and then Ros, Sarah and Catherine each define the term they have become associated with. This is followed by another round in which they discuss the overlaps, similarities and disjunctures among the terms, and the article ends with how each one understands the current mediated feminist landscape.
Michael Rescorla (2020) has recently pointed out that the standard arguments for Bayesian Conditionalization assume that whenever you take yourself to learn something with certainty, it's true. Most people would reject this assumption. In response, Rescorla offers an improved Dutch Book argument for Bayesian Conditionalization that does not make this assumption. My purpose in this paper is two-fold. First, I want to illuminate Rescorla's new argument by giving a very general Dutch Book argument that applies to many cases of updating beyond those covered by Conditionalization, and then showing how Rescorla's version follows as a special case of that. Second, I want to show how to generalise Briggs and Pettigrew's Accuracy Dominance argument to avoid the assumption that Rescorla has identified (Briggs & Pettigrew 2018).
What we value, like, endorse, want, and prefer changes over the course of our lives. Richard Pettigrew presents a theory of rational decision making for agents who recognise that their values will change over time and whose decisions will affect those future times.
In this paper, we seek a reliabilist account of justified credence. Reliabilism about justified beliefs comes in two varieties: process reliabilism (Goldman, 1979, 2008) and indicator reliabilism (Alston, 1988, 2005). Existing accounts of reliabilism about justified credence come in the same two varieties: Jeff Dunn (2015) proposes a version of process reliabilism, while Weng Hong Tang (2016) offers a version of indicator reliabilism. As we will see, both face the same objection. If they are right about what justification is, it is mysterious why we care about justification, for neither of the accounts explains how justification is connected to anything of epistemic value. We will call this the Connection Problem. I begin by describing Dunn’s process reliabilism and Tang’s indicator reliabilism. I argue that, understood correctly, they are, in fact, extensionally equivalent. That is, Dunn and Tang reach the top of the same mountain, albeit by different routes. However, I argue that both face the Connection Problem. In response, I offer my own version of reliabilism, which is both process and indicator, and I argue that it solves that problem. Furthermore, I show that it is also extensionally equivalent to Dunn’s reliabilism and Tang’s. Thus, I reach the top of the same mountain as well.
In “A Nonpragmatic Vindication of Probabilism”, Jim Joyce argues that our credences should obey the axioms of the probability calculus by showing that, if they don't, there will be alternative credences that are guaranteed to be more accurate than ours. But it seems that accuracy is not the only goal of credences: there is also the goal of matching one's credences to one's evidence. I will consider four ways in which we might make this latter goal precise: on the first, the norms to which this goal gives rise act as ‘side constraints’ on our choice of credences; on the second, matching credences to evidence is a goal that is weighed against accuracy to give the overall cognitive value of credences; on the third, as on the second, proximity to the evidential goal and proximity to the goal of accuracy are both sources of value, but this time they are incomparable; on the fourth, the evidential goal is not an independent goal at all, but rather a byproduct of the goal of accuracy. All but the fourth way of making the evidential goal precise are pluralist about credal virtue: there is the virtue of being accurate and there is the virtue of matching the evidence and neither reduces to the other. The fourth way is monist about credal virtue: there is just the virtue of being accurate. The pluralist positions lead to problems for Joyce's argument; the monist position avoids them. I endorse the latter.
Traditional philosophical discussions of knowledge have focused on the epistemic status of full beliefs. In this book, Moss argues that in addition to full beliefs, credences can constitute knowledge. For instance, your .4 credence that it is raining outside can constitute knowledge, in just the same way that your full beliefs can. In addition, you can know that it might be raining, and that if it is raining then it is probably cloudy, where this knowledge is not knowledge of propositions, but of probabilistic contents.

The notion of probabilistic content introduced in this book plays a central role not only in epistemology, but in the philosophy of mind and language as well. Just as tradition holds that you believe and assert propositions, you can believe and assert probabilistic contents. Accepting that we can believe, assert, and know probabilistic contents has significant consequences for many philosophical debates, including debates about the relationship between full belief and credence, the semantics of epistemic modals and conditionals, the contents of perceptual experience, peer disagreement, pragmatic encroachment, perceptual dogmatism, and transformative experience. In addition, accepting probabilistic knowledge can help us discredit negative evaluations of female speech, explain why merely statistical evidence is insufficient for legal proof, and identify epistemic norms violated by acts of racial profiling. Hence the central theses of this book not only help us better understand the nature of our own mental states, but also help us better understand the nature of our responsibilities to each other.
Accuracy arguments for the core tenets of Bayesian epistemology differ mainly in the conditions they place on the legitimate ways of measuring the inaccuracy of our credences. The best existing arguments rely on three conditions: Continuity, Additivity, and Strict Propriety. In this paper, I show how to strengthen the arguments based on these conditions by showing that the central mathematical theorem on which each depends goes through without assuming Additivity.
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its sequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In this paper, we make this norm mathematically precise in various ways. We describe three epistemic dilemmas that an agent might face if she attempts to follow Accuracy, and we show that the only inaccuracy measures that do not give rise to such dilemmas are the quadratic inaccuracy measures. In the sequel, we derive the main tenets of Bayesianism from the relevant mathematical versions of Accuracy to which this characterization of the legitimate inaccuracy measures gives rise, but we also show that Jeffrey conditionalization has to be replaced by a different method of update in order for Accuracy to be satisfied.
Leitgeb and Pettigrew argue that (1) agents should minimize the expected inaccuracy of their beliefs and (2) inaccuracy should be measured via the Brier score. They show that in certain diachronic cases, these claims require an alternative to Jeffrey Conditionalization. I claim that this alternative is an irrational updating procedure and that the Brier score, and quadratic scoring rules generally, should be rejected as legitimate measures of inaccuracy.
We often ask for the opinion of a group of individuals. How strongly does the scientific community believe that the rate at which sea levels are rising has increased over the last 200 years? How likely does the UK Treasury think it is that there will be a recession if the country leaves the European Union? What are these group credences that such questions request? And how do they relate to the individual credences assigned by the members of the particular group in question? According to the credal judgement aggregation principle, linear pooling, the credence function of a group should be a weighted average or linear pool of the credence functions of the individuals in the group. In this chapter, I give an argument for linear pooling based on considerations of accuracy. And I respond to two standard objections to the aggregation principle.
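Linear pooling is simple to state: the group's credence in each proposition is a weighted average of the members' credences in it. Here is a minimal sketch, with agents, propositions, credences, and weights all of my own invention for illustration:

```python
# Linear pooling: group credence = weighted average of individual credences.
# All names and numbers below are illustrative, not from the chapter.

def linear_pool(credence_functions, weights):
    """Pool a list of credence functions (dicts over the same propositions)."""
    assert abs(sum(weights) - 1.0) < 1e-9, "weights must sum to 1"
    propositions = credence_functions[0].keys()
    return {
        p: sum(w * c[p] for c, w in zip(credence_functions, weights))
        for p in propositions
    }

alice = {"recession": 0.8, "sea_level_rise_accelerating": 0.9}
bob = {"recession": 0.4, "sea_level_rise_accelerating": 0.7}

group = linear_pool([alice, bob], weights=[0.5, 0.5])
print(group)  # equal-weight pool: 0.6 for recession, 0.8 for sea-level rise
```

A useful property of this rule: if every member's credence function satisfies the probability axioms, so does any linear pool of them.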
According to certain normative theories in epistemology, rationality requires us to be logically omniscient. Yet this prescription clashes with our ordinary judgments of rationality. How should we resolve this tension? In this paper, I focus particularly on the logical omniscience requirement in Bayesian epistemology. Building on a key insight by Hacking (1967: 311–325), I develop a version of Bayesianism that permits logical ignorance. This includes: an account of the synchronic norms that govern a logically ignorant individual at any given time; an account of how we reduce our logical ignorance by learning logical facts and how we should update our credences in response to such evidence; and an account of when logical ignorance is irrational and when it isn’t. At the end, I explain why the requirement of logical omniscience remains true of ideal agents with no computational, processing, or storage limitations.
Beliefs come in different strengths. An agent's credence in a proposition is a measure of the strength of her belief in that proposition. Various norms for credences have been proposed. Traditionally, philosophers have tried to argue for these norms by showing that any agent who violates them will be led by her credences to make bad decisions. In this article, we survey a new strategy for justifying these norms. The strategy begins by identifying an epistemic utility function and a decision-theoretic norm; we then show that the decision-theoretic norm applied to the epistemic utility function yields the norm for credences that we wish to justify. We survey results already obtained using this strategy, and we suggest directions for future research.
One of the fundamental problems of epistemology is to say when the evidence in an agent’s possession justifies the beliefs she holds. In this paper and its prequel, we defend the Bayesian solution to this problem by appealing to the following fundamental norm: Accuracy An epistemic agent ought to minimize the inaccuracy of her partial beliefs. In the prequel, we made this norm mathematically precise; in this paper, we derive its consequences. We show that the two core tenets of Bayesianism follow from the norm, while the characteristic claim of the Objectivist Bayesian follows from the norm along with an extra assumption. Finally, we consider Richard Jeffrey’s proposed generalization of conditionalization. We show not only that his rule cannot be derived from the norm, unless the requirement of Rigidity is imposed from the start, but further that the norm reveals it to be illegitimate. We end by deriving an alternative updating rule for those cases in which Jeffrey’s is usually supposed to apply.
Conditionalization is one of the central norms of Bayesian epistemology. But there are a number of competing formulations, and a number of arguments that purport to establish it. In this paper, I explore which formulations of the norm are supported by which arguments. In their standard formulations, each of the arguments I consider here depends on the same assumption, which I call Deterministic Updating. I will investigate whether it is possible to amend these arguments so that they no longer depend on it. As I show, whether this is possible depends on the formulation of the norm under consideration.
In a series of papers over the past twenty years, and in a new book, Igor Douven has argued that Bayesians are too quick to reject versions of inference to the best explanation that cannot be accommodated within their framework. In this paper, I survey these worries and attempt to answer them using a series of pragmatic and purely epistemic arguments that I take to show that Bayes’ Rule really is the only rational way to respond to your evidence.
Since Mill's seminal work On Liberty, philosophers and political theorists have accepted that we should respect the decisions of individual agents when those decisions affect no one other than themselves. Indeed, to respect autonomy is often understood to be the chief way to bear witness to the intrinsic value of persons. In this book, Sarah Conly rejects the idea of autonomy as inviolable. Drawing on sources from behavioural economics and social psychology, she argues that we are so often irrational in making our decisions that our autonomous choices often undercut the achievement of our own goals. Thus in many cases it would advance our goals more effectively if government were to prevent us from acting in accordance with our decisions. Her argument challenges widely held views of moral agency, democratic values and the public/private distinction, and will interest readers in ethics, political philosophy, political theory and philosophy of law.
The Dutch Book Argument for Probabilism assumes Ramsey's Thesis (RT), which purports to determine the prices an agent is rationally required to pay for a bet. Recently, a new objection to Ramsey's Thesis has emerged (Hedden 2013, Wronski & Godziszewski 2017, Wronski 2018)--I call this the Expected Utility Objection. According to this objection, it is Maximise Subjective Expected Utility (MSEU) that determines the prices an agent is required to pay for a bet, and this often disagrees with Ramsey's Thesis. I suggest two responses to Hedden's objection. First, we might be permissive: agents are permitted to pay any price that is required or permitted by RT, and they are permitted to pay any price that is required or permitted by MSEU. This allows us to give a revised version of the Dutch Book Argument for Probabilism, which I call the Permissive Dutch Book Argument. Second, I suggest that even the proponent of the Expected Utility Objection should admit that RT gives the correct answer in certain very limited cases, and I show that, together with MSEU, this very restricted version of RT gives a new pragmatic argument for Probabilism, which I call the Bookless Pragmatic Argument.
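To see how RT and MSEU can come apart, consider an agent whose credences are probabilistically incoherent. The numbers below are my own illustration, not Hedden's: a bet paying 1 unit if p, faced by an agent with credence 0.7 in p and 0.7 in its negation.

```python
# Illustration (my own numbers): for an incoherent agent, Ramsey's Thesis (RT)
# and Maximise Subjective Expected Utility (MSEU) set different maximum prices
# for a bet that pays 1 if p and 0 otherwise.

stake = 1.0
cr_p, cr_not_p = 0.7, 0.7  # incoherent: credences in p and not-p sum to 1.4

# RT: the agent's fair price for the bet is her credence in p times the stake.
rt_price = cr_p * stake

# MSEU: buying at price x has expected utility cr_p*(stake - x) + cr_not_p*(-x),
# which is non-negative iff x <= cr_p*stake / (cr_p + cr_not_p).
mseu_price = cr_p * stake / (cr_p + cr_not_p)

print(rt_price, mseu_price)  # RT says 0.7; MSEU says 0.5
```

For a coherent agent, cr_p + cr_not_p = 1 and the two prices coincide, which is why the disagreement only bites in the cases the objection targets.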
In this paper, we explore how we should aggregate the degrees of belief of a group of agents to give a single coherent set of degrees of belief, when at least some of those agents might be probabilistically incoherent. There are a number of ways of aggregating degrees of belief, and there are a number of ways of fixing incoherent degrees of belief. When we have picked one of each, should we aggregate first and then fix, or fix first and then aggregate? Or should we try to do both at once? And when do these different procedures agree with one another? In this paper, we focus particularly on the final question.
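A small worked example makes the order-dependence vivid. This is my own illustration, assuming "fixing" means rescaling an agent's credences over {p, not-p} so they sum to 1, and "aggregating" means equal-weight linear pooling:

```python
# Illustration (my own numbers): fixing incoherence by normalisation and
# aggregating by equal-weight linear pooling do not generally commute.

def normalise(c):
    """Fix an incoherent credence pair over {p, not-p} by rescaling to sum to 1."""
    total = sum(c)
    return tuple(x / total for x in c)

def pool(c1, c2):
    """Equal-weight linear pool of two credence pairs."""
    return tuple((x + y) / 2 for x, y in zip(c1, c2))

alice = (0.2, 0.3)  # incoherent: sums to 0.5
bob = (0.6, 0.4)    # coherent

fix_then_aggregate = pool(normalise(alice), bob)  # (0.5, 0.5)
aggregate_then_fix = normalise(pool(alice, bob))  # roughly (0.533, 0.467)

print(fix_then_aggregate, aggregate_then_fix)
assert fix_then_aggregate != aggregate_then_fix  # order matters here
```

Other choices of fixing rule (e.g. nearest coherent point under squared distance) can behave differently, which is exactly why the question of when the procedures agree is non-trivial.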
Our beliefs come in degrees. I'm 70% confident it will rain tomorrow, and 0.001% sure my lottery ticket will win. What's more, we think these degrees of belief should abide by certain principles if they are to be rational. For instance, you shouldn't believe that a person's taller than 6ft more strongly than you believe that they're taller than 5ft, since the former entails the latter. In Dutch Book arguments, we try to establish the principles of rationality for degrees of belief by appealing to their role in guiding decisions. In particular, we show that degrees of belief that don't satisfy the principles will always guide action in some way that is bad or undesirable. In this Element, we present Dutch Book arguments for the principles of Probabilism, Conditionalization, and the Reflection Principle, among others, and we formulate and consider the most serious objections to them.
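The core of a Dutch Book argument fits in a few lines. As a minimal sketch with my own numbers: an agent whose credences in p and not-p sum to more than 1 will pay a total of more than 1 unit for a pair of bets that jointly return exactly 1 unit in every possible world.

```python
# Illustration (my own numbers): credences of 0.7 in p and 0.7 in not-p
# violate Probabilism, and a bookie can exploit them for a guaranteed loss.

cr_p, cr_not_p = 0.7, 0.7              # incoherent: they sum to 1.4
price_p, price_not_p = cr_p, cr_not_p  # agent treats credence as a fair price

cost = price_p + price_not_p  # agent buys both bets, paying 1.4 in total

# Payouts in the two possible worlds: (bet-on-p pays, bet-on-not-p pays).
worlds = {"p true": (1.0, 0.0), "not-p true": (0.0, 1.0)}

for name, (pay_p, pay_not_p) in worlds.items():
    net = (pay_p + pay_not_p) - cost
    print(name, "net:", round(net, 2))  # -0.4 in both worlds: a sure loss
    assert net < 0
```

Whichever way the world turns out, the bets pay exactly 1 unit back, so the agent is guaranteed to be 0.4 units worse off; that sure loss is the "bad or undesirable" action-guidance the abstract mentions.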
Generic generalizations such as ‘mosquitoes carry the West Nile virus’ or ‘sharks attack bathers’ are often accepted by speakers despite the fact that very few members of the kinds in question have the predicated property. Previous work suggests that such low-prevalence generalizations may be accepted when the properties in question are dangerous, harmful, or appalling. This paper argues that the study of such generic generalizations sheds light on a particular class of prejudiced social beliefs, and points to new ways in which those beliefs might be undermined and combatted.
In a recent paper in this journal, James Hawthorne, Jürgen Landes, Christian Wallmann, and Jon Williamson (HLWW) argue that the principal principle entails the principle of indifference. In this article, I argue that it does not. Lewis’s version of the principal principle notoriously depends on a notion of admissibility, which Lewis uses to restrict its application. HLWW base their argument on certain intuitions concerning when one proposition is admissible for another: Conditions 1 and 2. There are two ways of reading their argument, depending on how you understand the status of these conditions. Reading 1: The correct account of admissibility is determined independently of these two principles, and yet these two principles follow from that correct account. Reading 2: The correct account of admissibility is determined in part by these two principles, so that the principles follow from that account but only because the correct account is constrained so that it must satisfy them. HLWW show that given an account of admissibility on which Conditions 1 and 2 hold, the principal principle entails the principle of indifference. I argue that on either reading of the argument, it fails. First, I argue that there is a plausible account of admissibility on which Conditions 1 and 2 are false. That defeats Reading 1. Next, I argue that the intuitions that lead us to assent to Condition 2 also lead us to assent to other very closely related principles that are inconsistent with Condition 2. This, I claim, casts doubt on the reliability of those intuitions, and thus removes our justification for Condition 2. This defeats Reading 2 of the HLWW argument. Thus, the argument fails.
Consider Phoebe and Daphne. Phoebe has credences in 1 million propositions. Daphne, on the other hand, has credences in all of these propositions, but she's also got credences in 999 million other propositions. Phoebe's credences are all very accurate. Each of Daphne's credences, in contrast, is not very accurate at all; each is a little more accurate than it is inaccurate, but not by much. Whose doxastic state is better, Phoebe's or Daphne's? It is clear that this question is analogous to a question that has exercised ethicists over the past thirty years. How do we weigh a population consisting of some number of exceptionally happy and satisfied individuals against another population consisting of a much greater number of people whose lives are only just worth living? This is the question that occasions population ethics. In this paper, I go in search of the correct population ethics for credal states.
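The tension can be put numerically. This is my own toy model, not the paper's: score each credence in a truth by 1 minus its Brier penalty, so a credence c in a true proposition earns accuracy 1 - (1 - c)^2, and then compare total and average accuracy across the two credal states.

```python
# Toy model (my own numbers): total accuracy favours Daphne's vast, mediocre
# credal state, while average accuracy favours Phoebe's small, excellent one.

def accuracy(credence_in_truth):
    """Accuracy of a credence in a true proposition: 1 minus Brier penalty."""
    return 1.0 - (1.0 - credence_in_truth) ** 2

phoebe_n, phoebe_cr = 1_000_000, 0.95      # 1 million credences, very accurate
daphne_n, daphne_cr = 1_000_000_000, 0.55  # a billion credences, barely accurate

phoebe_total = phoebe_n * accuracy(phoebe_cr)
daphne_total = daphne_n * accuracy(daphne_cr)

assert daphne_total > phoebe_total                # totals favour Daphne
assert accuracy(phoebe_cr) > accuracy(daphne_cr)  # averages favour Phoebe
```

This is the credal analogue of total versus average utilitarianism in population ethics, which is why the paper's question is not settled by picking an accuracy measure alone.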
The sharpest corner of the cutting edge of recent epistemology is to be found in Richard Pettigrew’s Accuracy and the Laws of Credence. In this fine book Pettigrew argues that a certain kind of accuracy-based value monism entails that rational credence manifests a host of features emphasized by anti-externalists in epistemology. Specifically, he demonstrates how a particular version of accuracy-based value monism—to be discussed at length below—when combined with some not implausible views about how epistemic value and rationality relate to one another, ensures that rational credence manifests many of the structural properties emphasized by those who give evidence pride of place in the theory of rationality. A major goal of Pettigrew’s book, then, is to make clear how accuracy-based value monism fits together with the phenomena used by those who argue against accuracy-based externalism.
In this incisive study Sarah Broadie gives an argued account of the main topics of Aristotle's ethics: eudaimonia, virtue, voluntary agency, practical reason, akrasia, pleasure, and the ethical status of theoria. She explores the sense of "eudaimonia," probes Aristotle's division of the soul and its virtues, and traces the ambiguities in "voluntary." Fresh light is shed on his comparison of practical wisdom with other kinds of knowledge, and a realistic account is developed of Aristotelian deliberation. The concept of pleasure as value-judgment is expounded, and the problem of akrasia is argued to be less of a problem to Aristotle than to his modern interpreters. Showing that the theoretic ideal of Nicomachean Ethics X is in step with the earlier emphasis on practice, as well as with the doctrine of the Eudemian Ethics, this work makes a major contribution towards the understanding of Aristotle's ethics.
This paper examines the relationship between the KK principle and the epistemological theses of externalism and internalism. In particular, we examine arguments from Okasha (2013: 80–86) and Greco (2014: 169–197) which deny that we can derive the denial of the KK principle from externalism.
There are decision problems where the preferences that seem rational to many people cannot be accommodated within orthodox decision theory in the natural way. In response, a number of alternatives to the orthodoxy have been proposed. In this paper, I offer an argument against those alternatives and in favour of the orthodoxy. I focus on preferences that seem to encode sensitivity to risk. And I focus on the alternative to the orthodoxy proposed by Lara Buchak, namely her risk-weighted expected utility theory. I will show that the orthodoxy can be made to accommodate all of the preferences that Buchak’s theory can accommodate.
How does logic relate to rational belief? Is logic normative for belief, as some say? What, if anything, do facts about logical consequence tell us about norms of doxastic rationality? In this paper, we consider a range of putative logic-rationality bridge principles. These purport to relate facts about logical consequence to norms that govern the rationality of our beliefs and credences. To investigate these principles, we deploy a novel approach, namely, epistemic utility theory. That is, we assume that doxastic attitudes have different epistemic value depending on how accurately they represent the world. We then use the principles of decision theory to determine which of the putative logic-rationality bridge principles we can derive from considerations of epistemic utility. The pluralist positions lead to problems for Joyce's argument; the monist position avoids them.