One popular approach to statistical mechanics understands statistical mechanical probabilities as measures of rational indifference. Naive formulations of this "indifference approach" face reversibility worries: while they yield the right prescriptions regarding future events, they yield the wrong prescriptions regarding past events. This paper begins by showing how the indifference approach can overcome the standard reversibility worries by appealing to the Past Hypothesis. But, the paper argues, positing a Past Hypothesis doesn't free the indifference approach from all reversibility worries. For while appealing to the Past Hypothesis allows it to escape one kind of reversibility worry, it makes it susceptible to another: the Meta-Reversibility Objection. And there is no easy way for the indifference approach to escape the Meta-Reversibility Objection. As a result, reversibility worries pose a steep challenge to the viability of the indifference approach.
This paper examines the debate between permissive and impermissive forms of Bayesianism. It briefly discusses some considerations that might be offered by both sides of the debate, and then replies to some new arguments in favor of impermissivism offered by Roger White. First, it argues that White’s defense of Indifference Principles is unsuccessful. Second, it contends that White’s arguments against permissive views do not succeed.
This paper examines three accounts of the sleeping beauty case: an account proposed by Adam Elga, an account proposed by David Lewis, and a third account defended in this paper. It provides two reasons for preferring the third account. First, this account does a good job of capturing the temporal continuity of our beliefs, while the accounts favored by Elga and Lewis do not. Second, Elga’s and Lewis’s treatments of the sleeping beauty case lead to highly counterintuitive consequences. The proposed account also leads to counterintuitive consequences, but they’re not as bad as those of Elga’s account, and no worse than those of Lewis’s account.
Representation theorems are often taken to provide the foundations for decision theory. First, they are taken to characterize degrees of belief and utilities. Second, they are taken to justify two fundamental rules of rationality: that we should have probabilistic degrees of belief and that we should act as expected utility maximizers. We argue that representation theorems cannot serve either of these foundational purposes, and that recent attempts to defend the foundational importance of representation theorems are unsuccessful. As a result, we should reject these claims, and lay the foundations of decision theory on firmer ground.
Deference principles are principles that describe when, and to what extent, it’s rational to defer to others. Recently, some authors have used such principles to argue for Evidential Uniqueness, the claim that for every batch of evidence, there’s a unique doxastic state that it’s permissible for subjects with that total evidence to have. This paper has two aims. The first aim is to assess these deference-based arguments for Evidential Uniqueness. I’ll show that these arguments only work given a particular kind of deference principle, and I’ll argue that there are reasons to reject these kinds of principles. The second aim of this paper is to spell out what a plausible generalized deference principle looks like. I’ll start by offering a principled rationale for taking deference to constrain rational belief. Then I’ll flesh out the kind of deference principle suggested by this rationale. Finally, I’ll show that this principle is both more plausible and more general than the principles used in the deference-based arguments for Evidential Uniqueness.
Conditionalization is a widely endorsed rule for updating one’s beliefs. But a sea of complaints has been raised about it, including worries regarding how the rule handles error correction, changing desiderata of theory choice, evidence loss, self-locating beliefs, learning about new theories, and confirmation. In light of such worries, a number of authors have suggested replacing Conditionalization with a different rule — one that appeals to what I’ll call “ur-priors”. But different authors have understood the rule in different ways, and these different understandings solve different problems. In this paper, I aim to map out the terrain regarding these issues. I survey the different problems that might motivate the adoption of such a rule, flesh out the different understandings of the rule that have been proposed, and assess their pros and cons. I conclude by suggesting that one particular batch of proposals, proposals that appeal to what I’ll call “loaded evidential standards”, is especially promising.
This paper examines two mistakes regarding David Lewis’s Principal Principle that have appeared in the recent literature. These particular mistakes are worth looking at for several reasons: The thoughts that lead to these mistakes are natural ones, the principles that result from these mistakes are untenable, and these mistakes have led to significant misconceptions regarding the role of admissibility and time. After correcting these mistakes, the paper discusses the correct roles of time and admissibility. With these results in hand, the paper concludes by showing that one way of formulating the chance–credence relation has a distinct advantage over its rivals.
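For orientation, here is a standard textbook statement of the principle at issue (the notation is mine, not the paper’s): the Principal Principle requires that a rational initial credence function $C$ satisfy

$$C(A \mid \langle ch_t(A) = x \rangle \wedge E) = x,$$

where $ch_t(A)$ is the chance of $A$ at time $t$ and $E$ is any proposition admissible at $t$. The roles of time and admissibility that the paper sorts out correspond to the index $t$ and the restriction on $E$ in this statement.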
Several variants of Lewis's Best System Account of Lawhood have been proposed that avoid its commitment to perfectly natural properties. There has been little discussion of the relative merits of these proposals, and little discussion of how one might extend this strategy to provide natural property-free variants of Lewis's other accounts, such as his accounts of duplication, intrinsicality, causation, counterfactuals, and reference. We undertake these projects in this paper. We begin by providing a framework for classifying and assessing the variants of the Best System Account. We then evaluate these proposals, and identify the most promising candidates. We go on to develop a proposal for systematically modifying Lewis's other accounts so that they, too, avoid commitment to perfectly natural properties. We conclude by briefly considering a different route one might take to developing natural property-free versions of Lewis's other accounts, drawing on recent work by Williams.
In Reasons and Persons, Parfit (1984) posed a challenge: provide a satisfying normative account that solves the Non-Identity Problem, avoids the Repugnant and Absurd Conclusions, and solves the Mere-Addition Paradox. In response, some have suggested that we look toward person-affecting views of morality for a solution. But the person-affecting views that have been offered so far have been unable to satisfy Parfit's four requirements, and these views have been subject to a number of independent complaints. This paper describes a person-affecting account which meets Parfit's challenge. The account satisfies Parfit's four requirements, and avoids many of the criticisms that have been raised against person-affecting views.
At the heart of Bayesianism is a rule, Conditionalization, which tells us how to update our beliefs. Typical formulations of this rule are underspecified. This paper considers how, exactly, this rule should be formulated. It focuses on three issues: when a subject’s evidence is received, whether the rule prescribes sequential or interval updates, and whether the rule is narrow or wide scope. After examining these issues, it argues that there are two distinct and equally viable versions of Conditionalization to choose from. And which version we choose has interesting ramifications, bearing on issues such as whether Conditionalization can handle continuous evidence, and whether Jeffrey Conditionalization is really a generalization of Conditionalization.
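For orientation, the textbook form of the rule (the notation is mine, and is deliberately naive relative to the refinements the paper considers): upon receiving total evidence $E$ with $Cr_{old}(E) > 0$, Conditionalization directs a subject to adopt the new credences

$$Cr_{new}(H) = Cr_{old}(H \mid E) = \frac{Cr_{old}(H \wedge E)}{Cr_{old}(E)}.$$

As stated, this says nothing about when $E$ is received, whether updates apply at instants or over intervals, or what scope the requirement takes; these are precisely the points of underspecification the paper examines.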
A number of cases involving self-locating beliefs have been discussed in the Bayesian literature. I suggest that many of these cases, such as the sleeping beauty case, are entangled with issues that are independent of self-locating beliefs per se. In light of this, I propose a division of labor: we should address each of these issues separately before we try to provide a comprehensive account of belief updating. By way of example, I sketch some ways of extending Bayesianism in order to accommodate these issues. Then, putting these other issues aside, I sketch some ways of extending Bayesianism in order to accommodate self-locating beliefs. Finally, I propose a constraint on updating rules, the "Learning Principle", which rules out certain kinds of troubling belief changes, and I use this principle to assess some of the available options.
In “Bayesianism, Infinite Decisions, and Binding”, Arntzenius et al. (Mind 113:251–283, 2004) present cases in which agents who cannot bind themselves are driven by standard decision theory to choose sequences of actions with disastrous consequences. They defend standard decision theory by arguing that if a decision rule leads agents to disaster only when they cannot bind themselves, this should not be taken to be a mark against the decision rule. I show that this claim has surprising implications for a number of other debates in decision theory. I then assess the plausibility of this claim, and suggest that it should be rejected.
I argue that the theory of chance proposed by David Lewis has three problems: (i) it is time asymmetric in a manner incompatible with some of the chance theories of physics, (ii) it is incompatible with statistical mechanical chances, and (iii) the content of Lewis's Principal Principle depends on how admissibility is cashed out, but there is no agreement as to what admissible evidence should be. I propose two modifications of Lewis's theory which resolve these difficulties. I conclude by tentatively proposing a third modification of Lewis's theory, one which explains many of the common features shared by the chance theories of physics.
Some of the most interesting recent work in formal epistemology has focused on developing accuracy-based approaches to justifying Bayesian norms. These approaches are interesting not only because they offer new ways to justify these norms, but because they potentially offer a way to justify all of these norms by appeal to a single, attractive epistemic goal: having accurate beliefs. Recently, Easwaran & Fitelson (2012) have raised worries regarding whether such “all-accuracy” or “purely alethic” approaches can accommodate and justify evidential Bayesian norms. In response, proponents of purely alethic approaches, such as Pettigrew (2013b) and Joyce (2016), have argued that scoring rule arguments provide us with compatible and purely alethic justifications for the traditional Bayesian norms, including evidential norms. In this paper I raise several challenges to this claim. First, I argue that many of the justifications these scoring rule arguments provide are not compatible. Second, I raise worries for the claim that these scoring rule arguments provide purely alethic justifications. Third, I turn to assess the more general question of whether purely alethic justifications for evidential norms are even possible, and argue that, without making some contentious assumptions, they are not. Fourth, I raise some further worries for the possibility of providing purely alethic justifications for content-sensitive evidential norms, like the Principal Principle.
This pair of articles provides a critical commentary on contemporary approaches to statistical mechanical probabilities. These articles focus on the two ways of understanding these probabilities that have received the most attention in the recent literature: the epistemic indifference approach, and the Lewis-style regularity approach. These articles describe these approaches, highlight the main points of contention, and make some attempts to advance the discussion. The first of these articles provides a brief sketch of statistical mechanics, and discusses the indifference approach to statistical mechanical probabilities.
Standard decision theory has trouble handling cases involving acts without finite expected values. This paper has two aims. First, building on earlier work by Colyvan (2008), Easwaran (2014), and Lauwers and Vallentyne (2016), it develops a proposal for dealing with such cases, Difference Minimizing Theory. Difference Minimizing Theory provides satisfactory verdicts in a broader range of cases than its predecessors. And it vindicates two highly plausible principles of standard decision theory, Stochastic Equivalence and Stochastic Dominance. The second aim is to assess some recent arguments against Stochastic Equivalence and Stochastic Dominance. If successful, these arguments refute Difference Minimizing Theory. This paper contends that these arguments are not successful.
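For orientation, the quantity that fails in the cases at issue (textbook notation, not the paper’s): the expected value of an act $A$ defined over countably many states $s_1, s_2, \ldots$ is

$$EV(A) = \sum_i P(s_i)\, U(A, s_i),$$

and this sum need not converge to a finite value. In the classic St. Petersburg game, for instance, a payoff of $2^i$ with probability $2^{-i}$ gives $\sum_i 2^{-i} \cdot 2^i = \sum_i 1 = \infty$. Proposals like Difference Minimizing Theory aim to deliver verdicts where this quantity is infinite or undefined.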
Theories that use expected utility maximization to evaluate acts have difficulty handling cases with infinitely many utility contributions. In this paper I present and motivate a way of modifying such theories to deal with these cases, employing what I call “Direct Difference Taking”. This proposal has a number of desirable features: it’s natural and well-motivated, it satisfies natural dominance intuitions, and it yields plausible prescriptions in a wide range of cases. I then compare my account to the most plausible alternative, a proposal offered by Arntzenius (2014). I argue that while Arntzenius's proposal has many attractive features, it runs into a number of problems which Direct Difference Taking avoids.
In recent work, Callender and Cohen (2009) and Hoefer (2007) have proposed variants of the account of chance proposed by Lewis (1994). One of the ways in which these accounts diverge from Lewis’s is that they allow special sciences and the macroscopic realm to have chances that are autonomous from those of physics and the microscopic realm. A worry for these proposals is that autonomous chances may place incompatible constraints on rational belief. I examine this worry, and attempt to determine (i) what kinds of conflicts would be problematic, and (ii) whether these proposals lead to problematic conflicts. After working through a pair of cases, I conclude that these proposals do give rise to problematic conflicts.
Clark and Shackel have recently argued that previous attempts to resolve the two-envelope paradox fail, and that we must look to symmetries of the relevant expected-value calculations for a solution. Clark and Shackel also argue for a novel solution to the peeking case, a variant of the two-envelope scenario in which you are allowed to look in your envelope before deciding whether or not to swap. Whatever the merits of these solutions, they go beyond accepted decision theory, even contradicting it in the peeking case. Thus if we are to take their solutions seriously, we must understand Clark and Shackel to be proposing a revision of standard decision theory. Understood as such, we will argue, their proposal is both implausible and unnecessary.
This pair of articles provides a critical commentary on contemporary approaches to statistical mechanical probabilities. These articles focus on the two ways of understanding these probabilities that have received the most attention in the recent literature: the epistemic indifference approach, and the Lewis-style regularity approach. These articles describe these approaches, highlight the main points of contention, and make some attempts to advance the discussion. The second of these articles discusses the regularity approach to statistical mechanical probabilities, and describes some areas where further research is needed.
Evidential Uniqueness is the thesis that, for any batch of evidence, there’s a unique doxastic state that a subject with that evidence should have. Among the most common objections to views that violate Evidential Uniqueness are arbitrariness objections – objections to the effect that views that don’t satisfy Evidential Uniqueness lead to unacceptable arbitrariness. The goal of this paper is to examine a variety of arbitrariness objections that have appeared in the literature, and to assess the extent to which these objections bolster the case for Evidential Uniqueness. After examining a number of different arbitrariness objections, I’ll conclude that, by and large, these objections do little to bolster the case for Evidential Uniqueness.
In this paper we apply the popular Best System Account of laws to typical eternal worlds – both classical eternal worlds and eternal worlds of the kind posited by popular contemporary cosmological theories. We show that, according to the Best System Account, such worlds will have no laws that meaningfully constrain boundary conditions. It’s generally thought that lawful constraints on boundary conditions are required to avoid skeptical arguments. Thus the lack of such laws given the Best System Account may seem like a severe problem for the view. We show, however, that at eternal worlds, lawful constraints on boundary conditions do little to help fend off skeptical worries. So with respect to handling these skeptical worries, the proponent of the Best System Account is no worse off than their competitors.
How should our beliefs change over time? The standard answer to this question is the Bayesian one. But while the Bayesian account works well with respect to beliefs about the world, it breaks down when applied to self-locating or de se beliefs. In this work I explore ways to extend Bayesianism in order to accommodate de se beliefs. I begin by assessing, and ultimately rejecting, attempts to resolve these issues by appealing to Dutch books and chance-credence principles. I then propose and examine several accounts of the dynamics of de se beliefs. These examinations suggest that an extension of Bayesianism to de se beliefs will require some uncomfortable choices. I conclude by laying out the options available, and assessing the prospects of each.
A number of criticisms of Utilitarianism – such as “nearest and dearest” objections, “demandingness” objections, and “altruistic” objections – arise because Utilitarianism doesn’t permit partially or wholly disregarding the utility of certain subjects. A number of authors, including Sider, Portmore and Vessel, have responded to these objections by suggesting we adopt “dual-maximizing” theories which provide a way to incorporate disregarding. And in response to “altruistic” objections in particular – objections noting that it seems permissible to make utility-decreasing sacrifices – these authors have suggested adopting a dual-maximizing theory that permits disregarding one’s own utility. In this paper I’ll defend two claims. First, I’ll argue that dual-maximizing theories are a poor way to incorporate disregarding. Instead, I’ll suggest that “variable-disregarding” theories provide a more attractive way to incorporate disregarding. Second, I’ll argue that the right way to handle these “altruistic” objections isn’t to permit disregarding one’s own utility, it’s to permit disregarding the utility of those who consent. Together, these two claims entail that the best way to modify Utilitarianism to handle “altruistic” objections is to adopt a variable-disregarding view that disregards the utility of those who consent.
An adequate account of laws should satisfy at least five desiderata: it should provide a unified account of laws and chances, it should yield plausible relations between laws and chances, it should vindicate numerical chance assignments, it should accommodate dynamical and non-dynamical chances, and it should accommodate a plausible range of nomic possibilities. No extant account of laws satisfies these desiderata. This paper presents a non-Humean account of laws, the Nomic Likelihood Account, that does.
This is the editors' introduction to an edited volume devoted to the relation between phenomenology and naturalism across several philosophical domains, including epistemology, metaphysics, history of philosophy, philosophy of science, and ethics.
The letters collected in Volume 4 show Gottsched at the height of his fame and recognition as a theorist of poetry, linguist, philosopher, theater reformer, and publicist. Recurring themes in the correspondence, alongside the introduction of German-language instruction in secondary schools, include questions of poetic theory, the translation of foreign-language books, and the printing of works by Gottsched and his correspondents. An increasingly serious problem for Gottsched, one that threatened his professional livelihood, was his dispute with representatives of Lutheran orthodoxy, to which the letters bear detailed witness.
In the years 1738/39, Gottsched was mostly concerned with two events: his departure from the Deutsche Gesellschaft which he had been heading and the resulting developments, and the continuation of his disputes on the philosophy of Christian Wolff which he had been conducting with the Lutheran-Orthodox theologians. Through the support of the influential Imperial Count Ernst von Manteuffel, Gottsched now acquired strong political backing. This is documented by 52 of the total of 204 letters published in this volume, a correspondence in which Mrs Gottsched also soon became involved. The letters of other correspondents also deal with Wolff's rationalist philosophy, as well as other very varied themes such as theater, the teaching of the German language in schools, the problems of Leipzig students, newspaper polemics, planned translation projects, and the competing editions of the writings of Martin Opitz, the father of German poetry, that were undertaken in Leipzig and Zurich.
Philosophers from Hume, Kant, and Wittgenstein to the recent realists and antirealists have sought to answer the question, What are concepts? This book provides a detailed, systematic, and accessible introduction to an original philosophical theory of concepts that Christopher Peacocke has developed in recent years to explain facts about the nature of thought, including its systematic character, its relations to truth and reference, and its normative dimension. Particular concepts are also treated within the general framework: perceptual concepts, logical concepts, and the concept of belief are discussed in detail. The general theory is further applied in answering the question of how the ontology of concepts can be of use in classifying mental states, and in discussing the proper relation between philosophical and psychological theories of concepts. Finally, the theory of concepts is used to motivate a nonverificationist theory of the limits of intelligible thought. Peacocke treats content as broad rather than narrow, and his account is nonreductive and non-Quinean. Yet Peacocke also argues for an interactive relationship between philosophical and psychological theories of concepts, and he plots many connections with work in cognitive psychology.
Mathematics plays a central role in much of contemporary science, but philosophers have struggled to understand what this role is or how significant it might be for mathematics and science. In this book Christopher Pincock tackles this perennial question in a new way by asking how mathematics contributes to the success of our best scientific representations. In the first part of the book this question is posed and sharpened using a proposal for how we can determine the content of a scientific representation. Several different sorts of contributions from mathematics are then articulated. Pincock argues that each contribution can be understood as broadly epistemic, so that what mathematics ultimately contributes to science is best connected with our scientific knowledge. In the second part of the book, Pincock critically evaluates alternative approaches to the role of mathematics in science. These include the potential benefits for scientific discovery and scientific explanation. A major focus of this part of the book is the indispensability argument for mathematical platonism. Using the results of part one, Pincock argues that this argument can at best support a weak form of realism about the truth-value of the statements of mathematics. The book concludes with a chapter on pure mathematics and the remaining options for making sense of its interpretation and epistemology. Thoroughly grounded in case studies drawn from scientific practice, this book aims to bring together current debates in both the philosophy of mathematics and the philosophy of science and to demonstrate the philosophical importance of applications of mathematics.
This book revives the study of conventional implicatures in natural language semantics. H. Paul Grice first defined the concept. Since then his definition has seen much use and many redefinitions, but it has never enjoyed a stable place in linguistic theory. Christopher Potts returns to the original and uses it as a key into two presently under-studied areas of natural language: supplements and expressives. The account of both depends on a theory in which sentence meanings can be multidimensional. The theory is logically and intuitively compositional, and it minimally extends a familiar kind of intensional logic, thereby providing an adaptable, highly useful tool for semantic analysis. The result is a linguistic theory that is accessible not only to linguists of all stripes, but also philosophers of language, logicians, and computer scientists who have linguistic applications in mind.
We live in a morally flawed world. Our lives are complicated by what other people do, and by the harms that flow from our social, economic and political institutions. Our relations as individuals to these collective harms constitute the domain of complicity. This book examines the relationship between collective responsibility and individual guilt. It presents a rigorous philosophical account of the nature of our relations to the social groups in which we participate, and uses that account in a discussion of contemporary moral theory. Christopher Kutz shows that the two prevailing theories of moral philosophy, Kantianism and consequentialism, both have difficulties resolving problems of complicity. He then argues for a richer theory of accountability in which any real understanding of collective action not only allows but demands individual responsibility.
Being Known is a response to a philosophical challenge which arises for every area of thought: to reconcile a plausible account of what is involved in the truth of statements in a given area with a credible account of how we can know those statements. Christopher Peacocke presents a framework for addressing the challenge, a framework which links both the theory of knowledge and the theory of truth with the theory of concept-possession.
This is a conversation held at the book launch for Christopher Insole’s Kant and the Divine: From Contemplation to the Moral Law, hosted jointly, in November 2020, by the Centre for Catholic Studies, Durham University, and the Australian Catholic University. The conversation covers the claim made by Insole that Kant believes in God, but is not a Christian, the way in which reason itself is divine for Kant, and the suggestion that reading Kant can open up new possibilities for dialogue between Christian thinkers and contemporary forms of secular religiosity.
Aristotle has qualms about the movement of the soul. He contends directly, indeed, that ‘it is impossible that motion should belong to the soul’ (DA 406a2). This is surprising in both large and small ways. Still, when we appreciate the explanatory framework set by his hylomorphic analysis of change, we can see why Aristotle should think of the soul's motion as involving a kind of category mistake: not the putative Rylean mistake, but rather the mistake of treating a change as itself capable of changing.
The Neo-Aristotelian ethical naturalism of Philippa Foot and Rosalind Hursthouse purports to establish a naturalistic criterion for the virtues. Specifically, by developing a parallel between the natural ends of nonhuman animals and the natural ends of human beings, they argue that character traits are justified as virtues by the extent to which they promote and do not inhibit natural ends such as self-preservation, reproduction, and the well-being of one’s social group. I argue that the approach of Foot and Hursthouse cannot provide a basis for moral universalism, the widely-accepted idea that each human being has moral worth and thus deserves significant moral consideration. Foot and Hursthouse both depict a virtuous agent as implicitly acting in accord with moral universalism. However, with respect to charity, a virtue they both emphasize, their naturalistic criterion at best provides a warrant for a restricted form of charity that extends only to a limited number of persons. There is nothing in the natural ends of human beings, as Foot and Hursthouse understand these, that gives us a reason for having any concern for the well-being of human beings as such.
Christopher G. Timpson provides the first full-length philosophical treatment of quantum information theory and the questions it raises for our understanding of the quantum world. He argues for an ontologically deflationary account of the nature of quantum information, which is grounded in a revisionary analysis of the concepts of information.
In this book, Christopher Evan Franklin develops and defends a novel version of event-causal libertarianism. This view is a combination of libertarianism--the view that humans sometimes act freely and that those actions are the causal upshots of nondeterministic processes--and agency reductionism--the view that the causal role of the agent in exercises of free will is exhausted by the causal role of mental states and events (e.g., desires and beliefs) involving the agent. Franklin boldly counteracts a dominant theory that has similar aims, put forth by well-known philosopher Robert Kane. Many philosophers contend that event-causal libertarians have no advantage over compatibilists when it comes to securing a distinctively valuable kind of freedom and responsibility. To Franklin, this position is mistaken. Assuming agency reductionism is true, event-causal libertarians need only adopt the most plausible compatibilist theory and add indeterminism at the proper juncture in the genesis of human action. The result is minimal event-causal libertarianism: a model of free will with the metaphysical simplicity of compatibilism and the intuitive power of libertarianism. And yet a worry remains: toward the end of the book, Franklin reconsiders his assumption of agency reductionism, arguing that this picture faces a hitherto unsolved problem. This problem, however, has nothing to do with indeterminism or determinism, or even libertarianism or compatibilism, but with how to understand the nature of the self and its role in the genesis of action. Crucially, if this problem proves unsolvable, then not only is event-causal libertarianism untenable, so also is event-causal compatibilism.
Ad hominem arguments are generally dismissed on the grounds that they are not attempts to engage in rational discourse, but are rather aimed at undermining argument by diverting attention from claims made to assessments of the character of persons making claims. The manner of this dismissal, however, is based upon an unlikely paradigm of rationality: it is based upon the presumption that our intellectual capacities are not as limited as in fact they are, and do not vary as much as they do between rational people. When we understand rationality in terms of intellectual virtues, however, which recognize these limitations and provide for the complexity of our thinking, ad hominem considerations can sometimes be relevant to assessing arguments.