According to Bayesian epistemology, the epistemically rational agent updates her beliefs by conditionalization: that is, her posterior subjective probability after taking account of evidence X, p_new, is to be set equal to her prior conditional probability p_old(·|X). Bayesians can be challenged to provide a justification for their claim that conditionalization is recommended by rationality—whence the normative force of the injunction to conditionalize? There are several existing justifications for conditionalization, but none directly addresses the idea that conditionalization will be epistemically rational if and only if it can reasonably be expected to lead to epistemically good outcomes. We apply the approach of cognitive decision theory to provide a justification for conditionalization using precisely that idea. We assign epistemic utility functions to epistemically rational agents; an agent’s epistemic utility is to depend both upon the actual state of the world and on the agent’s credence distribution over possible states. We prove that, under independently motivated conditions, conditionalization is the unique updating rule that maximizes expected epistemic utility.
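The claim that conditionalization maximizes expected epistemic utility can be illustrated numerically. The sketch below is my own toy example, not the paper's proof: it assumes a logarithmic scoring rule (one example of an epistemic utility function) and illustrative numbers, and checks that the conditionalized credences score better, in prior-conditional expectation, than a rival posterior.

```python
import math

# Prior over four states; evidence X is the proposition {state 0, state 1}.
# All numbers are illustrative assumptions.
prior = [0.1, 0.3, 0.4, 0.2]
X = {0, 1}

def conditionalize(p, X):
    # Bayesian updating: renormalize the prior over the states in X.
    pX = sum(p[w] for w in X)
    return [p[w] / pX if w in X else 0.0 for w in range(len(p))]

def expected_log_utility(p, X, q):
    # Expected epistemic utility (log score) of holding credences q,
    # averaged over the states in X with prior-conditional weights.
    post = conditionalize(p, X)
    return sum(post[w] * math.log(q[w]) for w in X if post[w] > 0)

best = conditionalize(prior, X)   # q = p(.|X), i.e. conditionalization
rival = [0.5, 0.5, 0.0, 0.0]      # some other credence distribution over X

# Gibbs' inequality guarantees conditionalization wins under the log score.
assert expected_log_utility(prior, X, best) > expected_log_utility(prior, X, rival)
```

The inequality holds for any rival distinct from the conditional probabilities, which is the sense in which (under this scoring rule) conditionalization uniquely maximizes expected epistemic utility.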
I explore the prospects for modelling epistemic rationality (in the probabilist setting) via an epistemic decision theory, in a consequentialist spirit. Previous work has focused on cases in which the truth-values of the propositions over which the agent is selecting credences do not depend, either causally or merely evidentially, on the agent’s choice of credences. Relaxing that restriction leads to a proliferation of puzzle cases and theories to deal with them, including epistemic analogues of evidential and causal decision theory, and of the Newcomb Problem and ‘Psychopath Button’ Problem. A variant of causal epistemic decision theory deals well with most cases. However, there is a recalcitrant class of problem cases for which no epistemic decision theory seems able to match our intuitive judgements of epistemic rationality. This lends both precision and credence to the view that there is a fundamental mismatch between epistemic consequentialism and the intuitive notion of epistemic rationality; the implications for understanding the latter are briefly discussed.
Population axiology is the study of the conditions under which one state of affairs is better than another, when the states of affairs in question may differ over the numbers and the identities of the persons who ever live. Extant theories include totalism, averagism, variable value theories, critical level theories, and “person-affecting” theories. Each of these theories is open to objections that are at least prima facie serious. A series of impossibility theorems shows that this is no coincidence: it can be proved, for various sets of prima facie intuitively compelling desiderata, that no axiology can simultaneously satisfy all the desiderata on the list. One’s choice of population axiology appears to be a choice of which intuition one is least unwilling to give up.
Decisions, whether moral or prudential, should be guided at least in part by considerations of the consequences that would result from the various available actions. For any given action, however, the majority of its consequences are unpredictable at the time of decision. Many have worried that this leaves us, in some important sense, clueless. In this paper, I distinguish between ‘simple’ and ‘complex’ possible sources of cluelessness. In terms of this taxonomy, the majority of the existing literature on cluelessness focusses on the simple sources. I argue, contra James Lenman in particular, that these would-be sources of cluelessness are unproblematic, on the grounds that indifference-based reasoning is far less problematic than Lenman (along with many others) supposes. However, there does seem to be a genuine phenomenon of cluelessness associated with the ‘complex’ sources; here, indifference-based reasoning is inapplicable by anyone’s lights. This ‘complex problem of cluelessness’ is vivid and pressing, in particular, in the context of Effective Altruism. This motivates a more thorough examination of the precise nature of cluelessness, and the precise source of the associated phenomenology of discomfort in forced-choice situations. The latter parts of the paper make some initial explorations in those directions.
The Repugnant Conclusion served an important purpose in catalyzing and inspiring the pioneering stage of population ethics research. We believe, however, that the Repugnant Conclusion now receives too much focus. Avoiding the Repugnant Conclusion should no longer be the central goal driving population ethics research, despite its importance to the fundamental accomplishments of the existing literature.
It is widely recognized that ‘global’ symmetries, such as the boost invariance of classical mechanics and special relativity, can give rise to direct empirical counterparts such as the Galileo-ship phenomenon. However, conventional wisdom holds that ‘local’ symmetries, such as the diffeomorphism invariance of general relativity and the gauge invariance of classical electromagnetism, have no such direct empirical counterparts. We argue against this conventional wisdom. We develop a framework for analysing the relationship between Galileo-ship empirical phenomena on the one hand, and physical theories that model such phenomena on the other, that renders the relationship between theoretical and empirical symmetries transparent, and from which it follows that both global and local symmetries can give rise to Galileo-ship phenomena. In particular, we use this framework to exhibit an analogue of Galileo’s ship for the local gauge invariance of electromagnetism.
1 Introduction
2 Analogues of Galileo’s Ship? Faraday’s Cage and ’t Hooft’s Beam-Splitter
2.1 Faraday’s cage
2.2 ’t Hooft’s beam-splitter
3 A Framework for Symmetries I: Systems and Subsystems
4 An Example: Coulombic Electrostatics
5 A Framework for Symmetries II: The Relationship between Theoretical and Empirical Symmetries
6 Newtonian Gravity
7 Local Symmetries that Are Not Boundary-Preserving: Classical Electromagnetism and Faraday’s Cage
8 Local Boundary-Preserving Symmetries: Klein-Gordon-Maxwell Gauge Theory and ’t Hooft’s Beam-Splitter
9 Summary
10 Conclusions
Difficulties over probability have often been considered fatal to the Everett interpretation of quantum mechanics. Here I argue that the Everettian can have everything she needs from `probability' without recourse to indeterminism, ignorance, primitive identity over time or subjective uncertainty: all she needs is a particular *rationality principle*. The decision-theoretic approach recently developed by Deutsch and Wallace claims to provide just such a principle. But, according to Wallace, decision theory is itself applicable only if the correct attitude to a future Everettian measurement outcome is subjective uncertainty. I argue that subjective uncertainty is not to be had, but I offer an alternative interpretation that enables the Everettian to live without uncertainty: we can justify Everettian decision theory on the basis that an Everettian should *care about* all her future branches. The probabilities appearing in the decision-theoretic representation theorem can then be interpreted as the degrees to which the rational agent cares about each future branch. This reinterpretation, however, reduces the intuitive plausibility of one of the Deutsch-Wallace axioms.
This article is a critical survey of the debate over the value of the social discount rate, with a particular focus on climate change. The majority of the material surveyed is from the economics rather than from the philosophy literature, but the emphasis of the survey itself is on foundations in ethical and other normative theory rather than highly technical details. I begin by locating the standard approach to discounting within the overall landscape of ethical theory, and explaining the assumptions and simplifications that are needed in order to arrive at the model that is standard in the discounting literature. The article then covers the general theory of the Ramsey equation and its relationship to observed interest rates, arguments for and against a positive rate of pure time preference, the consumption elasticity of utility, and the effect of various sorts of uncertainty on the discount rate. Finally, it turns specifically to the application of this debate to the case of climate change, focussing on the recent controversy over the low discount rate used in the Stern Review of the Economics of Climate Change.
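For orientation, the Ramsey equation referred to above can be stated in its simplest deterministic form. It relates the consumption discount rate r to the rate of pure time preference, the consumption elasticity of marginal utility, and the growth rate of per-capita consumption:

```latex
% Ramsey equation (simplest deterministic form):
% r      -- consumption discount rate
% \delta -- rate of pure time preference
% \eta   -- consumption elasticity of marginal utility
% g      -- growth rate of per-capita consumption
r = \delta + \eta g
```

Much of the debate the survey covers can then be read as a debate over the appropriate values of each of the three terms on the right-hand side.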
The Everett (many-worlds) interpretation of quantum mechanics faces a prima facie problem concerning quantum probabilities. Research in this area has been fast-paced over the last few years, following a controversial suggestion by David Deutsch that decision theory can solve the problem. This article provides a non-technical introduction to the decision-theoretic program, and a sketch of the current state of the debate.
Prioritarianism is supposed to be a theory of the overall good that captures the common intuition of priority to the worse off. But it is difficult to give precise content to the prioritarian claim. Over the past few decades, prioritarians have increasingly responded to this by formulating prioritarianism not in terms of an alleged primitive notion of quantity of well-being, but instead in terms of von Neumann-Morgenstern utility. This, I argue, is a retrograde step.
Much of the evidence for quantum mechanics is statistical in nature. The Everett interpretation, if it is to be a candidate for serious consideration, must be capable of doing justice to reasoning in which statistical evidence, in the form of observed relative frequencies that closely match calculated probabilities, counts as evidence in favour of a theory from which the probabilities are calculated. Since, on the Everett interpretation, all outcomes with nonzero amplitude are actualized on different branches, it is not obvious that sense can be made of ascribing probabilities to outcomes of experiments, and this poses a prima facie problem for statistical inference. It is incumbent on the Everettian either to make sense of ascribing probabilities to outcomes of experiments in the Everett interpretation, or to find a substitute on which the usual statistical analysis of experimental results continues to count as evidence for quantum mechanics, and, since it is the very evidence for quantum mechanics that is at stake, this must be done in a way that does not presuppose the correctness of Everettian quantum mechanics. This requires an account of theory confirmation that applies to branching-universe theories but does not presuppose the correctness of any such theory. In this paper, we supply and defend such an account. The account has the consequence that statistical evidence can confirm a branching-universe theory such as Everettian quantum mechanics in the same way in which it can confirm a probabilistic theory.
Recent work in the Everett interpretation has suggested that the problem of probability can be solved by understanding probability in terms of rationality. However, there are *two* problems relating to probability in Everett --- one practical, the other epistemic --- and the rationality-based program *directly* addresses only the practical problem. One might therefore worry that the problem of probability is only `half solved' by this approach. This paper aims to dispel that worry: a solution to the epistemic problem follows from the rationality-based solution to the practical problem.
Richard Feynman has claimed that anti-particles are nothing but particles `propagating backwards in time'; that time reversing a particle state always turns it into the corresponding anti-particle state. According to standard quantum field theory textbooks this is not so: time reversal does not turn particles into anti-particles. Feynman's view is interesting because, in particular, it suggests a nonstandard, and possibly illuminating, interpretation of the CPT theorem. In this paper, we explore a classical analog of Feynman's view, in the context of the recent debate between David Albert and David Malament over time reversal in classical electromagnetism.
Given the deep disagreement surrounding population axiology, one should remain uncertain about which theory is best. However, this uncertainty need not leave one neutral about which acts are better or worse. We show that, as the number of lives at stake grows, the Expected Moral Value approach to axiological uncertainty systematically pushes one toward choosing the option preferred by the Total View and critical-level views, even if one’s credence in those theories is low.
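The mechanism can be illustrated with a toy calculation. Everything below is my own illustrative assumption (the numbers, the rival "averagist" axiology, and the assumption that the theories' value scales are intercomparable); it is not taken from the paper. The point it shows: because the Total View's verdict scales linearly with the number of lives at stake, even a small credence in it comes to dominate the expected moral value as stakes grow.

```python
# Option A creates n extra lives at positive well-being level w; option B
# creates none. We compare A against B under axiological uncertainty.

def total_view_value(n, w):
    # Total View: value scales linearly with the number of lives.
    return n * w

def average_view_value(n, w, base_pop=10**10, base_avg=1.0):
    # A rival (averagist) view: adding n lives at level w shifts the
    # average well-being of a large background population only slightly.
    return (base_pop * base_avg + n * w) / (base_pop + n)

def emv_of_A_minus_B(n, w=0.5, credence_total=0.1):
    # Expected Moral Value of choosing A over B, with only 10% credence
    # in the Total View and 90% in the averagist rival.
    dv_total = total_view_value(n, w)
    dv_avg = average_view_value(n, w) - average_view_value(0, w)
    return credence_total * dv_total + (1 - credence_total) * dv_avg

assert emv_of_A_minus_B(10) < 1            # small stakes: little pressure
assert emv_of_A_minus_B(10**12) > 10**10   # huge stakes: Total View dominates
```

As n grows, the Total View's unbounded contribution swamps the bounded contribution of the rival view, which is the systematic push the abstract describes.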
An important objection to preference-satisfaction theories of well-being is that these theories cannot make sense of interpersonal comparisons of well-being. A tradition dating back to Harsanyi attempts to respond to this objection by appeal to so-called extended preferences: very roughly, preferences over situations whose description includes agents’ preferences. This paper examines the prospects for defending the preference-satisfaction theory via this extended preferences program. We argue that making conceptual sense of extended preferences is less problematic than others have supposed, but that even so extended preferences do not provide a promising way for the preference-satisfaction theorist to make interpersonal well-being comparisons. Our main objection takes the form of a trilemma: depending on how the theory based on extended preferences is developed, either the result will be inconsistent with ordinary preference-satisfaction theory, or it will fail to recover sufficiently rich interpersonal well-being comparisons, or it will take on a number of other arguably odd and undesirable commitments.
Harsanyi claimed that his Aggregation and Impartial Observer Theorems provide a justification for utilitarianism. This claim has been strongly resisted, notably by Sen and Weymark, who argue that while Harsanyi has perhaps shown that overall good is a linear sum of individuals’ von Neumann-Morgenstern utilities, he has done nothing to establish any connection between the notion of von Neumann-Morgenstern utility and that of well-being, and hence that utilitarianism does not follow. The present article defends Harsanyi against the Sen-Weymark critique. I argue that, far from being a term with precise and independent quantitative content whose relationship to von Neumann-Morgenstern utility is then a substantive question, terms such as ‘well-being’ suffer (or suffered) from indeterminacy regarding precisely which quantity they refer to. If so, then (on the issue that this article focuses on) Harsanyi has gone as far towards defending ‘utilitarianism in the original sense’ as could coherently be asked.
An important objection to preference-satisfaction theories of well-being is that they cannot make sense of interpersonal comparisons. A tradition dating back to Harsanyi (1953: 434) attempts to solve this problem by appeal to people’s so-called extended preferences. This paper presents a new problem for the extended preferences program, related to Arrow’s celebrated impossibility theorem. We consider three ways in which the extended-preference theorist might avoid this problem, and recommend that she pursue one: developing aggregation rules that violate Arrow’s Independence of Irrelevant Alternatives condition.
It is often claimed that reducing population size would be advantageous for climate change mitigation, on the grounds that lower population would naturally correspond to lower emissions. This apparently obvious claim is in fact seriously misleading. Reducing population size would indeed, other suitable things being equal, reduce the emissions rate. But it is well recognised that the primary determinant of the eventual amount of climate change is not the emissions rate, but rather cumulative emissions. It is far less clear whether reducing population size would reduce cumulative emissions, or would in any other way prove an advantage for reasons related to climate change. This paper identifies and briefly discusses the issues relevant to assessing that less clear question.
This chapter makes the case for strong longtermism: the claim that, in many situations, impact on the long-run future is the most important feature of our actions. Our case begins with the observation that an astronomical number of people could exist in the aeons to come. Even on conservative estimates, the expected future population is enormous. We then add a moral claim: all the consequences of our actions matter. In particular, the moral importance of what happens does not depend on when it happens. That pushes us toward strong longtermism. We then address a few potential concerns, the first of which is that it is impossible to have any sufficiently predictable influence on the course of the long-run future. We argue that this is not true. Some actions can reasonably be expected to improve humanity’s long-term prospects. These include reducing the risk of human extinction, preventing climate change, guiding the development of artificial intelligence, and investing funds for later use. We end by arguing that these actions are more than just extremely effective ways to do good. Since the benefits of longtermist efforts are large and the personal costs are comparatively small, we are morally required to take up these efforts.
The CPT theorem of quantum field theory states that any relativistic (Lorentz-invariant) quantum field theory must also be invariant under CPT, the composition of charge conjugation, parity reversal and time reversal. This paper sketches a puzzle that seems to arise when one puts the existence of this sort of theorem alongside a standard way of thinking about symmetries, according to which spacetime symmetries (at any rate) are associated with features of the spacetime structure. The puzzle is, roughly, that the existence of a CPT theorem seems to show that it is not possible for a well-formulated theory that does not make use of a preferred frame or foliation to make use of a temporal orientation. Since a manifold with only a Lorentzian metric can be temporally orientable—capable of admitting a temporal orientation—this seems to be an odd sort of necessary connection between distinct existences. The paper then suggests a solution to the puzzle: it is suggested that the CPT theorem arises because temporal orientation is unlike other pieces of spacetime structure, in that one cannot represent it by a tensor field. To avoid irrelevant technical details, the discussion is carried out in the setting of classical field theory, using a little-known classical analog of the CPT theorem.
I argue that excessive reliance on the notion of “the badness of death” tends to lead theorists astray when thinking about healthcare prioritisation. I survey two examples: the confusion surrounding the “time-relative interests account” of the badness of death, and a confusion in the recent literature on cost-benefit analyses for family planning interventions. In both cases, the confusions in question would have been avoided if (instead of attempting to theorise in terms of the badness of death) theorists had forced themselves first to write down an appropriate value function, and then focused on the question of how to maximize value.
This is the first collective study of the thinking behind the effective altruism movement. This movement comprises a growing global community of people who organise significant parts of their lives around the two key concepts represented in its name. Altruism is the idea that if we use a significant portion of the resources in our possession—whether money, time, or talents—with a view to helping others then we can improve the world considerably. When we do put such resources to altruistic use, it is crucial to focus on how much good this or that intervention is reasonably expected to do per unit of resource expended (as a gauge of effectiveness). We can try to rank various possible actions against each other to establish which will do the most good with the resources expended. Thus we could aim to rank various possible kinds of action to alleviate poverty against one another, or against actions aimed at very different types of outcome, focused perhaps on animal welfare or future generations. The scale and organisation of the effective altruism movement encourage careful dialogue on questions that have perhaps long been there, throwing them into new and sharper relief, and giving rise to previously unnoticed questions. In this volume a team of internationally recognised philosophers, economists, and political theorists present refined and in-depth explorations of issues that arise once one takes seriously the twin ideas of altruistic commitment and effectiveness.
In carrying out cost-benefit or cost-effectiveness analysis, a discount rate should be applied to some kinds of future benefits and costs. It is controversial, though, whether future health is in this class. I argue that one of the standard arguments for discounting (from diminishing marginal returns) is inapplicable to the case of health, while another (favouring a pure rate of time preference) is unsound in any case. However, there are two other reasons that might support a positive discount rate for future health: one relating to uncertainty, and the other relating to the instrumental benefits of improved health. While the latter considerations could be modelled via a discount rate, they could alternatively be modelled more explicitly, in other ways; I briefly discuss which modelling method is preferable. Finally, I argue against the common claims that failing to discount future health would lead to paradox, and/or to inconsistency with the way future cash flows are treated.
Overpopulation is often identified as one of the key drivers of climate change. Further, it is often thought that the mechanism behind this is obvious: 'more people means more greenhouse gas emissions'. However, in light of the fact that climate change depends most closely on cumulative emissions rather than on emissions rates, the relationship between population size and climate change is more subtle than this. Reducing the size of instantaneous populations can fruitfully be thought of as spreading out a fixed number of people more thinly over time, and (in light of the significance of cumulative emissions) it is not immediately clear whether or how such a 'spreading' would help with climate change. To bring the point into sharp relief, I first set out a simple model according to which population reduction would not lead to any climate-change-related improvement. I then critically examine the assumptions of the model. If population reduction would lead to a significant climate-change-related improvement, this must be because (i) population reduction would significantly reduce even cumulative emissions, and/or (ii) climate damages are, to a significant extent, driven by the pace of climate change, and not only the eventual extent of the change. (shrink)
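A toy model in the spirit of the one the abstract describes (the specific setup and numbers are my own assumptions, not the paper's): suppose emissions are driven by a fixed recoverable stock of fossil fuel that is eventually burned regardless of population size. A smaller population then burns the stock more slowly, reducing the emissions rate, but cumulative emissions are unchanged.

```python
# Fixed total burnable carbon; per-capita emissions per year (arbitrary units).
FOSSIL_STOCK = 1000.0
EMISSIONS_PER_CAPITA = 1.0

def cumulative_emissions(population, years=10**6):
    # Burn the stock at a rate proportional to population until exhausted.
    burned = 0.0
    for _ in range(years):
        if burned >= FOSSIL_STOCK:
            break
        burned += min(population * EMISSIONS_PER_CAPITA,
                      FOSSIL_STOCK - burned)
    return burned

# Halving the population halves the emissions *rate* (the stock lasts twice
# as long) but leaves *cumulative* emissions unchanged.
assert cumulative_emissions(10.0) == cumulative_emissions(5.0) == FOSSIL_STOCK
```

On this model, population reduction merely spreads a fixed amount of burning over a longer period; any climate benefit must therefore come from relaxing one of the model's assumptions, as the paper's points (i) and (ii) indicate.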
Much public policy analysis requires us to place a monetary value on the badness of a premature human death. Currently dominant approaches to determining this ‘value of a life’ focus exclusively on the ‘self-regarding’ value of life — that is, the value of a person’s life to the person whose death is in question — and altogether ignore effects on other people. This procedure would be justified if, as seems intuitively plausible, other-regarding effects were negligible in comparison with self-regarding ones. I argue that in the light of the issue of overpopulation, that intuitively plausible condition is at best highly questionable. Unless the world is in fact underpopulated, the social disvalue of a premature death is likely to be significantly lower than the current estimates.
Rights-based approaches and consequentialist approaches to ethics are often seen as being diametrically opposed to one another. In one sense, they are. In another sense, however, they can be reconciled: a ‘global’ form of consequentialism might supply consequentialist foundations for a derivative morality that is non-consequentialist, and perhaps rights-based, in content. By way of case study to illustrate how this might work, I survey what a global consequentialist should think about a recent dispute between Jeff McMahan and Henry Shue on the morality and laws of war.
We provide a careful development and rigorous proof of the CPT theorem within the framework of mainstream quantum field theory. This is in contrast to the usual rigorous proofs in purely axiomatic frameworks, and non-rigorous proof-sketches in the mainstream approach. We construct the CPT transformation for a general field directly, without appealing to the enumerative classification of representations, and in a manner that is clearly related to the requirements of our proof. Our approach applies equally in Minkowski spacetimes of any dimension at least three, and is in principle neutral between classical and quantum field theories: the quantum CPT theorem has a natural classical analogue. The key mathematical tool is that of complexification; this tool is central to the existing axiomatic proofs, but plays no overt role in the usual mainstream approaches to CPT.
The debate between substantivalists and relationists about spacetime was given a new lease of life approximately twenty years ago, when John Earman and John Norton published an argument for the conclusion that, in the light of general relativity, substantivalism is untenable. Responses to Earman and Norton’s argument generated a proliferation of ‘substantivalisms’, and a debate between them that was, to the ears of at least some, distinctively metaphysical in character.
This dissertation explores several issues related to the CPT theorem. Chapter 2 explores the meaning of spacetime symmetries in general and time reversal in particular. It is proposed that a third conception of time reversal, 'geometric time reversal', is more appropriate for certain theoretical purposes than the existing 'active' and 'passive' conceptions. It is argued that, in the case of classical electromagnetism, a particular nonstandard time reversal operation is at least as defensible as the standard view. This unorthodox time reversal operation is of interest because it is the classical counterpart of a view according to which the so-called 'CPT theorem' of quantum field theory is better called 'PT theorem'; on this view, a puzzle about how an operation as apparently non-spatio-temporal as charge conjugation can be linked to spacetime symmetries in as intimate a way as a CPT theorem would seem to suggest dissolves. In chapter 3, we turn to the question of whether the CPT theorem is an essentially quantum-theoretic result. We state and prove a classical analogue of the CPT theorem for systems of tensor fields. This classical analogue, however, appears not to extend to systems of spinor fields. The intriguing answer to our question thus appears to be that the CPT theorem for spinors is essentially quantum-theoretic, but that the CPT theorem for tensor fields applies equally to the classical and quantum cases. Chapter 4 explores a puzzle that arises when one puts the CPT theorem alongside a standard way of understanding spacetime symmetries, according to which spacetime symmetries are to be understood in terms of background spacetime structure. The puzzle is that a 'PT theorem' amounts to a statement that the theory may not make essential use of a preferred direction of time, and this seems odd. We propose a solution to that puzzle for the case of tensor field theories.
ordinary electron, except it’s attracted to normal electrons – we say it has positive charge. For this reason it’s called a ‘positron’. The positron is a sister..
(a) How to design a nuclear power plant
3. Deutsch/Wallace solution to the practical problem
(a) Argue that the rational Everettian agent makes decisions by maximizing expected utility, where the expectation value is an average over branches
4. The semantics of branching - two options..
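The decision rule in 3(a) can be given a minimal formal gloss. The sketch below is my own toy illustration (the numbers and the bet are invented): an act is valued by averaging the utility of each future branch, weighted by that branch's Born weight.

```python
def expected_utility(branches):
    # branches: list of (born_weight, utility) pairs; weights sum to 1.
    # The Everettian agent's valuation is the branch-weighted average utility.
    return sum(w * u for w, u in branches)

# A bet paying 10 on spin-up (Born weight 0.6) and -5 on spin-down (0.4).
bet = [(0.6, 10.0), (0.4, -5.0)]
assert expected_utility(bet) == 4.0
```

Formally this is just orthodox expected utility; the interpretive work lies in reading the weights as degrees of caring about branches rather than as probabilities of uncertain outcomes.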