The Repugnant Conclusion served an important purpose in catalyzing and inspiring the pioneering stage of population ethics research. We believe, however, that the Repugnant Conclusion now receives too much focus. Avoiding the Repugnant Conclusion should no longer be the central goal driving population ethics research, despite its importance to the fundamental accomplishments of the existing literature.
Empirical work has lately confirmed what many philosophers have taken to be true: people are ‘biased toward the future’. All else being equal, we usually prefer to have positive experiences in the future, and negative experiences in the past. According to one hypothesis, the temporal metaphysics hypothesis, future-bias is explained either by our beliefs about temporal metaphysics (the temporal belief hypothesis) or by our temporal phenomenology (the temporal phenomenology hypothesis). We empirically investigate a particular version of the temporal belief hypothesis, according to which future-bias is explained by the belief that time robustly passes. Our results do not match the apparent predictions of this hypothesis, and so provide evidence against it. But we also find that people give more future-biased responses when asked to simulate a belief in robust passage. We take this to suggest that the phenomenology that attends simulation of that belief may be partially responsible for future-bias, and we examine the implications of these results for debates about the rationality of future-bias.
In the growing literature on decision-making under moral uncertainty, a number of skeptics have argued that there is an insuperable barrier to rational "hedging" for the risk of moral error, namely the apparent incomparability of moral reasons given by rival theories like Kantianism and utilitarianism. Various general theories of intertheoretic value comparison have been proposed to meet this objection, but each suffers from apparently fatal flaws. In this paper, I propose a more modest approach that aims to identify classes of moral theories that share common principles strong enough to establish bases for intertheoretic comparison. I show that, contra the claims of skeptics, there are often rationally perspicuous grounds for precise, quantitative value comparisons within such classes. In light of this fact, I argue, the existence of some apparent incomparabilities between widely divergent moral theories cannot serve as a general argument against hedging for one's moral uncertainties.
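To make the kind of hedging at issue concrete, the sketch below computes credence-weighted expected choiceworthiness for two theories assumed to fall within a common class that shares a unit of value (here, welfare-equivalents). The theories, credences, and numbers are all invented for illustration.

```python
# Minimal sketch of moral hedging, assuming two theories that share a common
# unit of value and so admit direct intertheoretic comparison. All inputs
# are invented for illustration.
credences = {"total_view": 0.7, "priority_view": 0.3}

# Choiceworthiness of two options, expressed in the shared unit:
choiceworthiness = {
    "total_view":    {"A": 10.0, "B": 12.0},
    "priority_view": {"A": 14.0, "B": 6.0},
}

for option in ("A", "B"):
    ev = sum(credences[t] * choiceworthiness[t][option] for t in credences)
    print(option, ev)  # A: 11.2, B: 10.2 -- hedging favors A
```

Once a shared unit is granted, the aggregation itself is trivial; the work lies in showing when such a unit is rationally available within a class of theories.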
Philosophers have long noted, and empirical psychology has lately confirmed, that most people are “biased toward the future”: we prefer to have positive experiences in the future, and negative experiences in the past. At least two explanations have been offered for this bias: belief in temporal passage and the practical irrelevance of the past resulting from our inability to influence past events. We set out to test the latter explanation. In a large survey, we find that participants exhibit significantly less future bias when asked to consider scenarios where they can affect their own past experiences. This supports the “practical irrelevance” explanation of future bias. It also suggests that future bias is not an inflexible preference hardwired by evolution, but results from a more general disposition to “accept the things we cannot change”. However, participants still exhibited substantial future bias in scenarios in which they could affect the past, leaving room for complementary explanations. Beyond the main finding, our results also indicate that future bias is stake-sensitive and that participants endorse the normative correctness of their future-biased preferences and choices. In combination, these results shed light on philosophical debates over the rationality of future bias, suggesting that it may be a rational response to empirical realities rather than a brute, arational disposition.
Defenders of deontological constraints in normative ethics face a challenge: how should an agent decide what to do when she is uncertain whether some course of action would violate a constraint? The most common response to this challenge has been to defend a threshold principle on which it is subjectively permissible to act iff the agent's credence that her action would be constraint-violating is below some threshold t. But the threshold approach seems arbitrary and unmotivated: what could possibly determine where the threshold should be set, and why should there be any precise threshold at all? Threshold views also seem to violate ought agglomeration, since a pair of actions each of which is below the threshold for acceptable moral risk can, in combination, exceed that threshold. In this paper, I argue that stochastic dominance reasoning can vindicate and lend rigor to the threshold approach: given characteristically deontological assumptions about the moral value of acts, it turns out that morally safe options will stochastically dominate morally risky alternatives when and only when the likelihood that the risky option violates a moral constraint is greater than some precisely definable threshold (in the simplest case, .5). I also show how, in combination with the observation that deontological moral evaluation is relativized to particular choice situations, this approach can overcome the agglomeration problem. This allows the deontologist to give a precise and well-motivated response to the problem of uncertainty.
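The paper's deontological model is not reproduced here, but a minimal sketch conveys the dominance test at issue. Assume, purely for illustration, just two moral value levels (constraint violation and compliance) and compare two risky options by their violation probabilities.

```python
# Minimal sketch (not the paper's model): with only two moral value levels,
# option A first-order stochastically dominates option B iff A's probability
# of constraint violation never exceeds B's and is lower somewhere.
VIOLATION, COMPLIANCE = -1.0, 1.0

def cdf(p_violation, x):
    """CDF of a two-point moral prospect at value x."""
    if x < VIOLATION:
        return 0.0
    if x < COMPLIANCE:
        return p_violation
    return 1.0

def dominates(p_viol_a, p_viol_b):
    points = (VIOLATION, COMPLIANCE)
    at_least = all(cdf(p_viol_a, x) <= cdf(p_viol_b, x) for x in points)
    strictly = any(cdf(p_viol_a, x) < cdf(p_viol_b, x) for x in points)
    return at_least and strictly

print(dominates(0.3, 0.6))  # True: the safer option dominates
print(dominates(0.6, 0.3))  # False
```

The paper's precise probability thresholds emerge from richer, characteristically deontological value assignments than this toy binary case.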
In ‘Normative Uncertainty as a Voting Problem’, William MacAskill argues that positive credence in ordinal-structured or intertheoretically incomparable normative theories does not prevent an agent from rationally accounting for her normative uncertainties in practical deliberation. Rather, such an agent can aggregate the theories in which she has positive credence by methods borrowed from voting theory—specifically, MacAskill suggests, by a kind of weighted Borda count. The appeal to voting methods opens up a promising new avenue for theories of rational choice under normative uncertainty. The Borda rule, however, is open to at least two serious objections. First, it seems implicitly to ‘cardinalize’ ordinal theories, and so does not fully face up to the problem of merely ordinal theories. Second, the Borda rule faces a problem of option individuation. MacAskill attempts to solve this problem by invoking a measure on the set of practical options. But it is unclear that there is any natural way of defining such a measure that will not make the output of the Borda rule implausibly sensitive to irrelevant empirical features of decision-situations. After developing these objections, I suggest an alternative: the McKelvey uncovered set, a Condorcet method that selects all and only the maximal options under a strong pairwise defeat relation. This decision rule has several advantages over Borda and mostly avoids the force of MacAskill’s objection to Condorcet methods in general.
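For concreteness, here is a minimal sketch of a credence-weighted Borda count of the general kind MacAskill proposes; the rankings and credences are invented.

```python
# Credence-weighted Borda count: each theory scores an option by the number
# of options it ranks below it, and scores are weighted by credence.
def weighted_borda(rankings, credences):
    """rankings: one best-to-worst list of options per theory."""
    scores = {option: 0.0 for option in rankings[0]}
    for ranking, credence in zip(rankings, credences):
        n = len(ranking)
        for position, option in enumerate(ranking):
            scores[option] += credence * (n - 1 - position)
    return scores

# Two rival theories over three options, credences 0.6 and 0.4:
print(weighted_borda([["a", "b", "c"], ["c", "b", "a"]], [0.6, 0.4]))
# {'a': 1.2, 'b': 1.0, 'c': 0.8}
```

The option-individuation worry is visible already in this sketch: splitting one option into two near-duplicates adds a row to every ranking and shifts every score, so the output depends on how the option set happens to be carved up.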
The principle that rational agents should maximize expected utility or choiceworthiness is intuitively plausible in many ordinary cases of decision-making under uncertainty. But it is less plausible in cases of extreme, low-probability risk (like Pascal's Mugging), and intolerably paradoxical in cases like the St. Petersburg and Pasadena games. In this paper I show that, under certain conditions, stochastic dominance reasoning can capture most of the plausible implications of expectational reasoning while avoiding most of its pitfalls. Specifically, given sufficient background uncertainty about the choiceworthiness of one's options, many expectation-maximizing gambles that do not stochastically dominate their alternatives "in a vacuum" become stochastically dominant in virtue of that background uncertainty. But, even under these conditions, stochastic dominance will not require agents to accept options whose expectational superiority depends on sufficiently small probabilities of extreme payoffs. The sort of background uncertainty on which these results depend looks unavoidable for any agent who measures the choiceworthiness of her options in part by the total amount of value in the resulting world. At least for such agents, then, stochastic dominance offers a plausible general principle of choice under uncertainty that can explain more of the apparent rational constraints on such choices than has previously been recognized.
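As an illustrative numerical check (not a proof; the payoffs and background distribution are invented), one can compare the exact CDFs of two options after adding a heavy-tailed background term:

```python
# Illustrative numerical check, assuming a heavy-tailed Student-t background
# distribution; payoffs are invented for the purpose.
import numpy as np
from scipy import stats

# Option A: sure payoff +1. Option B: 50/50 gamble on 0 or +10 (EV = 5).
# "In a vacuum", neither dominates: A beats B exactly when B comes up 0.
background = stats.t(df=3, loc=0.0, scale=100.0)
grid = np.linspace(-5000.0, 5000.0, 20001)

cdf_a = background.cdf(grid - 1.0)                                      # A + background
cdf_b = 0.5 * background.cdf(grid) + 0.5 * background.cdf(grid - 10.0)  # B + background

# B stochastically dominates A iff its CDF never exceeds A's (and is lower somewhere).
print(np.all(cdf_b <= cdf_a), np.any(cdf_b < cdf_a))  # True True on this grid
```

Without the background term, neither option dominates; with a sufficiently wide, heavy-tailed background, the higher-expected-value gamble becomes stochastically dominant, which is the phenomenon the paper exploits.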
How should an agent decide what to do when she is uncertain not just about morally relevant empirical matters, like the consequences of some course of action, but about the basic principles of morality itself? This question has only recently been taken up in a systematic way by philosophers. Advocates of moral hedging claim that an agent should weigh the reasons put forward by each moral theory in which she has positive credence, considering both the likelihood that that theory is true and the strength of the reasons it posits. The view that it is sometimes rational to hedge for one's moral uncertainties, however, has recently come under attack both from those who believe that an agent should always be guided by the dictates of the single moral theory she deems most probable and from those who believe that an agent's moral beliefs are simply irrelevant to what she ought to do. Among the many objections to hedging that have been pressed in the recent literature is the worry that there is no non-arbitrary way of making the intertheoretic comparisons of moral value necessary to aggregate the value assignments of rival moral theories into a single ranking of an agent's options.

This dissertation has two principal objectives: First, I argue that, contra these recent objections, an agent's moral beliefs and uncertainties are relevant to what she rationally ought to do, and more particularly, that agents are at least sometimes rationally required to hedge for their moral uncertainties. My principal argument for these claims appeals to the enkratic conception of rationality, according to which the requirements of practical rationality derive from an agent's beliefs about the objective, desire-independent value or choiceworthiness of her options. Second, I outline a new general theory of rational choice under moral uncertainty. Central to this theory is the idea of content-based aggregation, that the principles according to which an agent should compare and aggregate rival moral theories are grounded in the content of those theories themselves, including not only their value assignments but also the metaethical and other non-surface-level propositions that underlie, justify, or explain those value assignments.
Longtermists claim that what we ought to do is mainly determined by how our actions might affect the very long-run future. A natural objection to longtermism is that these effects may be nearly impossible to predict -- perhaps so close to impossible that, despite the astronomical importance of the far future, the expected value of our present actions is mainly determined by near-term considerations. This paper aims to precisify and evaluate one version of this epistemic objection to longtermism. To that end, I develop two simple models for comparing "longtermist" and "neartermist" interventions, incorporating the idea that it is harder to make a predictable difference to the further future. These models yield mixed conclusions: if we simply aim to maximize expected value, and don't mind premising our choices on minuscule probabilities of astronomical payoffs, the case for longtermism looks robust. But on some prima facie plausible empirical worldviews, the expectational superiority of longtermist interventions depends heavily on these "Pascalian" probabilities. So the case for longtermism may depend either on plausible but non-obvious empirical claims or on a tolerance for Pascalian fanaticism.
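The paper's two models are not reproduced here, but a deliberately crude toy model (all numbers invented) conveys the shape of the comparison, with the probability of making the intended difference decaying over temporal distance:

```python
# Toy model (not the paper's): an intervention's effect must persist to the
# target horizon, with some probability of "washing out" each year.
def expected_value(payoff, horizon_years, annual_persistence):
    return payoff * annual_persistence ** horizon_years

near = expected_value(payoff=1e3,  horizon_years=1,    annual_persistence=0.999)
far  = expected_value(payoff=1e12, horizon_years=1000, annual_persistence=0.99)
print(near)  # ~999
print(far)   # ~4.3e7: larger, but driven by a ~4e-5 probability of success
```

With these numbers the longtermist intervention wins on expected value, but only via a minuscule probability of an astronomical payoff: the "Pascalian" structure the abstract describes.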
I describe a thought experiment in which an agent must choose between suffering a greater pain in the past or a lesser pain in the future. This case demonstrates that the ‘temporal value asymmetry’ – our disposition to attribute greater significance to future pleasures and pains than to past ones – can have consequences for the rationality of actions as well as attitudes. This fact, I argue, blocks attempts to vindicate the temporal value asymmetry as a useful heuristic tied to the asymmetry of causation. Since the two standard arguments for the rationality of the temporal value asymmetry appeal to causal asymmetry and the passage of time respectively, the failure of the causal asymmetry explanation suggests that the B-theory, which rejects temporal passage, has substantial revisionary implications concerning our attitudes toward past and future experience.
How should you decide what to do when you're uncertain about basic normative principles (e.g., Kantianism vs. utilitarianism)? A natural suggestion is to follow some "second-order" norm: e.g., "comply with the first-order norm you regard as most probable" or "maximize expected choiceworthiness". But what if you're uncertain about second-order norms too -- must you then invoke some third-order norm? If so, it seems that any norm-guided response to normative uncertainty is doomed to a vicious regress. In this paper, I aim to rescue second-order norms from this threat of regress. I first elaborate and defend the suggestion some philosophers have entertained that the regress problem forces us to accept normative externalism, the view that at least one norm is incumbent on agents regardless of their beliefs or evidence concerning that norm. But, I then argue, we need not accept externalism about first-order (e.g., moral) norms, which would close off any question of what an agent should do in light of her normative beliefs. Rather, it is more plausible to ascribe external force to a single, second-order rational norm: the enkratic principle, correctly formulated. This modest form of externalism, I argue, is both intrinsically well-motivated and sufficient to head off the threat of regress.
I argue that the use of a social discount rate to assess the consequences of climate policy is unhelpful and misleading. I consider two lines of justification for discounting: (i) ethical arguments for a "pure rate of time preference" and (ii) economic arguments that take time as a proxy for economic growth and the diminishing marginal utility of consumption. In both cases I conclude that, given the long time horizons, distinctive uncertainties, and particular costs and benefits at stake in the climate context, discount rates are at best a poor proxy for the normative considerations they are meant to represent.
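These two lines of justification correspond to the two terms of the standard Ramsey discounting rule, r = δ + ηg, where δ is the pure rate of time preference, η the elasticity of marginal utility of consumption, and g the growth rate. A quick calculation with invented parameter values shows how much turns on these choices at climate-policy horizons:

```python
import math

def present_value(future_benefit, years, delta, eta, growth):
    """Ramsey rule: discount rate r = delta + eta * growth."""
    r = delta + eta * growth
    return future_benefit * math.exp(-r * years)

# The same benefit, realized 200 years out, under two parameterizations:
print(present_value(1e12, 200, delta=0.001, eta=1.0, growth=0.01))  # ~1.1e11
print(present_value(1e12, 200, delta=0.02,  eta=2.0, growth=0.02))  # ~6.1e6
```

Two defensible-looking parameterizations value the same benefit more than four orders of magnitude apart, which is exactly the sensitivity that makes the discount rate a poor proxy over long horizons.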
All else being equal, most of us typically prefer to have positive experiences in the future rather than the past and negative experiences in the past rather than the future. Recent empirical evidence tends not only to support the idea that people have these preferences, but further, that people tend to prefer more painful experiences in their past rather than fewer in their future (and mutatis mutandis for pleasant experiences). Are such preferences rationally permissible, or are they, as time-neutralists contend, rationally impermissible? And what grounds whatever normative status they have? We consider two sorts of arguments regarding the normative status of future-biased preferences. The first appeals to the supposed arbitrariness of these preferences, and the second appeals to their upshot. We evaluate these arguments in light of the recent empirical research on future-bias.
People are ‘biased toward the future’: all else being equal, we typically prefer to have positive experiences in the future, and negative experiences in the past. Several explanations have been suggested for this pattern of preferences. Adjudicating among these explanations can, among other things, shed light on the rationality of future-bias: For instance, if our preferences are explained by unjustified beliefs or an illusory phenomenology, we might conclude that they are irrational. This paper investigates one hypothesis, according to which future-bias is explained by our having a phenomenology that we describe, or conceive of, as being as of time robustly passing. We empirically tested this hypothesis and found no evidence in its favour. Our results present a puzzle, however, when compared with the results of an earlier study. We conclude that although robust passage phenomenology on its own probably does not explain future-bias, having this phenomenology and taking it to be veridical may contribute to future-bias.
Even if I think it very likely that some morally good act is supererogatory rather than obligatory, I may nonetheless be rationally required to perform that act. This claim follows from an apparently straightforward dominance argument, which parallels Jacob Ross's argument for 'rejecting' moral nihilism. These arguments face analogous pairs of objections that illustrate general challenges for dominance reasoning under normative uncertainty, but (I argue) these objections can be largely overcome. This has practical consequences for the ethics of philanthropy -- in particular, it means that donors are often rationally required to maximize the positive impact of their donations.
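A toy reconstruction of the argument's structure (the choiceworthiness values are invented) makes the dominance form explicit:

```python
# Two hypotheses about a good act (e.g., a donation), and the relative
# choiceworthiness of performing vs. not performing it under each:
hypotheses = {
    "act is obligatory":     {"perform": 1.0, "refrain": -1.0},
    "act is supererogatory": {"perform": 1.0, "refrain":  0.0},
}

# Performing dominates: at least as choiceworthy under every hypothesis,
# strictly more under at least one -- so the agent's credences drop out.
dominates = (all(v["perform"] >= v["refrain"] for v in hypotheses.values())
             and any(v["perform"] > v["refrain"] for v in hypotheses.values()))
print(dominates)  # True
```

Because dominance holds whatever the credences, even a donor who thinks the act is probably supererogatory can be rationally required to perform it.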
Average utilitarianism and several related axiologies, when paired with the standard expectational theory of decision-making under risk and with reasonable empirical credences, can find their practical prescriptions overwhelmingly determined by the minuscule probability that the agent assigns to solipsism -- i.e., to the hypothesis that there is only one welfare subject in the world, viz., herself. This either (i) constitutes a reductio of these axiologies, (ii) suggests that they require bespoke decision theories, or (iii) furnishes a novel argument for ethical egoism.
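A back-of-the-envelope calculation with invented numbers shows how the swamping happens:

```python
# Average utilitarianism under uncertainty about how many welfare subjects
# exist. The act raises the agent's own welfare by 1 unit; numbers invented.
credence_solipsism = 1e-6
population_otherwise = 1e10   # welfare subjects if solipsism is false
delta_own_welfare = 1.0

# Expected change in average welfare, split by hypothesis:
ev_solipsism = credence_solipsism * delta_own_welfare                    # 1e-06
ev_otherwise = (1 - credence_solipsism) * delta_own_welfare / population_otherwise
print(ev_solipsism / ev_otherwise)  # ~10,000: the solipsism term dominates
```

Under solipsism the agent's own welfare just is the average, so even a one-in-a-million credence in that hypothesis outweighs the diluted contribution the act makes to the average in a world of ten billion subjects.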
Decision-making under normative uncertainty requires an agent to aggregate the assessments of options given by rival normative theories into a single assessment that tells her what to do in light of her uncertainty. But what if the assessments of rival theories differ not just in their content but in their structure -- e.g., some are merely ordinal while others are cardinal? This paper describes and evaluates three general approaches to this "problem of structural diversity": structural enrichment, structural depletion, and multi-stage aggregation. All three approaches have notable drawbacks, but I tentatively defend multi-stage aggregation as the least bad of the three.
Is the overall value of a world just the sum of values contributed by each value-bearing entity in that world? Additively separable axiologies (like total utilitarianism, prioritarianism, and critical level views) say 'yes', but non-additive axiologies (like average utilitarianism, rank-discounted utilitarianism, and variable value views) say 'no'. This distinction is practically important: additive axiologies support 'arguments from astronomical scale' which suggest (among other things) that it is overwhelmingly important for humanity to avoid premature extinction and ensure the existence of a large future population, while non-additive axiologies need not. We show, however, that when there is a large enough 'background population' unaffected by our choices, a wide range of non-additive axiologies converge in their implications with some additive axiology -- for instance, average utilitarianism converges to critical-level utilitarianism and various egalitarian theories converge to prioritarianism. We further argue that real-world background populations may be large enough to make these limit results practically significant. This means that arguments from astronomical scale, and other arguments in practical ethics that seem to presuppose additive separability, may be truth-preserving in practice whether or not we accept additive separability as a basic axiological principle.
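A minimal calculation (numbers invented) illustrates the flavor of these limit results for average utilitarianism: against a large background population with mean welfare w̄, adding one person of welfare w changes the average by (w − w̄)/(N + 1), so the view agrees in sign with a critical-level view whose critical level is w̄.

```python
# Change in average welfare from adding one person of welfare w to a
# background population of N people with mean welfare w_bar.
def delta_average(N, w_bar, w):
    total = N * w_bar
    return (total + w) / (N + 1) - w_bar   # equals (w - w_bar) / (N + 1)

N, w_bar = 10**12, 5.0
for w in (3.0, 5.0, 8.0):
    print(w, delta_average(N, w_bar, w))
# Negative, zero, positive: the sign of (w - w_bar), as critical-level
# utilitarianism with critical level w_bar would prescribe.
```

The larger the background population, the more the background mean behaves like a fixed critical level, which is the convergence the abstract reports.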
Rohana faces a choice where she can produce either a better outcome by lying or a worse outcome by telling the truth. She justifiably, but falsely, believes in.