It has been argued that Prioritarianism violates Risky Non-Antiegalitarianism, a condition stating roughly that an alternative is socially better than another if it both makes everyone better off in expectation and leads to more equality. I show that Risky Non-Antiegalitarianism is in fact compatible with Prioritarianism as ordinarily defined, but that it violates some other conditions that may be attractive to prioritarians. While I argue that the latter conditions are not core principles of Prioritarianism, the choice between these conditions and Risky Non-Antiegalitarianism nonetheless constitutes an important intramural debate for prioritarians.
Rights-based approaches and consequentialist approaches to ethics are often seen as being diametrically opposed to one another. In one sense, they are. In another sense, however, they can be reconciled: a ‘global’ form of consequentialism might supply consequentialist foundations for a derivative morality that is non-consequentialist, and perhaps rights-based, in content. By way of case study to illustrate how this might work, I survey what a global consequentialist should think about a recent dispute between Jeff McMahan and Henry Shue on the morality and laws of war.
The possibility of social and technological collapse has been the focus of science fiction tropes for decades, but more recent focus has been on specific sources of existential and global catastrophic risk. Because these scenarios are simple to understand and envision, they receive more attention than risks due to complex interplay of failures, or risks that cannot be clearly specified. In this paper, we discuss the possibility that complexity of a certain type leads to fragility, which can function as a source of catastrophic or even existential risk. The paper first reviews a hypothesis by Bostrom about inevitable technological risks, named the vulnerable world hypothesis. This paper next hypothesizes that fragility may not only be a possible risk, but could be inevitable, and would therefore be a subclass or example of Bostrom’s vulnerable worlds. After introducing the titular fragile world hypothesis, the paper details the conditions under which it would be correct, and presents arguments for why the conditions may in fact apply. Finally, the assumptions and potential mitigations of the new hypothesis are contrasted with those Bostrom suggests.
All ordinary decisions involve some risk. If I go outside for a walk, I may trip and injure myself. But if I don’t go for a walk, I slightly increase my chances of cardiovascular disease. Typically, we disregard most small risks. When, for practical purposes, is it appropriate for one to ignore risk? This issue looms large because many activities performed by those in wealthy societies, such as driving a car, in some way risk contributing to climate harms. Are these activities morally appropriate? In this paper, I first summarize and respond to some arguments that purport to show that it is appropriate to ignore or discount very small risks. I argue that because our rationality is bounded, it is impossible for us to include every small risk in our decision-making process, and so we may reasonably use heuristics to guide many decisions. However, contrary to some thinkers, I argue that this does not violate the spirit of expected value theory; it merely shows that we should adopt a so-called "two-level" view. Our use of heuristics allows for the reasonable ignoring of some risks, and this perhaps explains why one might be inclined to think that individual climate-related risks are negligible. However, virtually all greenhouse-gas emitting activities in fact have some climate risk on the negative side of the ledger, and the use of heuristics does not permit the general ignoring of climate-change-related risk by individuals on grounds of expediency of judgment and decision-making.
The article develops a general theory of the goals of free moral commitment. Its theoretical point of departure is the strict pursuit of efficiency demanded by the movement and theory of effective altruism. A detailed example shows prima facie counterintuitive consequences of this pursuit of efficiency, the analysis of which reveals various problems: merely point-like rather than structural commitment; radical universalism; and the violation of established moral standards and institutions. The article takes these problems as an occasion to develop a general theory of moral investment, with moral guidelines and planning instruments for its implementation, such as: efficiency; the preservation of existing moral standards and obligations, especially towards those close to one; rooted universalism, with adequate consideration of all beneficiaries of one's moral concern through the allocation of separate budgets; and real efficiency through the inclusion of strategic and organic investments.
Prevailing opinion—defended by Jason Brennan and others—is that voting to change the outcome is irrational, since although the payoffs of tipping an election can be quite large, the probability of doing so is extraordinarily small. This paper argues that prevailing opinion is incorrect. Voting is shown to be rational so long as two conditions are satisfied: First, the average social benefit of electing the better candidate must be at least twice as great as the individual cost of voting, and second, the chance of casting the decisive vote must be at least 1/N, where N stands for the number of citizens. It is argued that both of these conditions are often satisfied in the real world.
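The expected-value arithmetic behind these two conditions can be sketched as follows. The notation is my own illustrative assumption, not the paper's: N for the number of citizens, b for the average social benefit of electing the better candidate, c for the individual cost of voting, p for the probability of casting the decisive vote, and the identification of the total social benefit with N·b.

```latex
% Illustrative sketch only, under the assumptions stated above.
% The paper's two conditions: b >= 2c and p >= 1/N.
\begin{align*}
\text{Expected net payoff of voting}
  &= p \cdot N b - c \\
  &\ge \tfrac{1}{N} \cdot N b - c && \text{(since } p \ge 1/N\text{)} \\
  &= b - c \\
  &\ge 2c - c = c > 0 && \text{(since } b \ge 2c\text{)}.
\end{align*}
```

On these assumptions, whenever the two stated conditions hold, voting has a positive expected net payoff, which is the sense in which the abstract claims voting is rational.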
I explore the debate about whether consequentialist theories can adequately accommodate the moral force of promissory obligation. I outline a straightforward act consequentialist account grounded in the value of satisfying expectations, and raise and assess three objections to this account: that it counterintuitively predicts that certain promises should be broken when commonsense morality insists that they should be kept, that the account is circular, and Michael Cholbi’s argument that this account problematically implies that promise-making is frequently obligatory. I then discuss alternative act consequentialist accounts, including Philip Pettit’s suggestion that promise-keeping is an intrinsic good and Michael Smith’s agent-relative account. I outline Brad Hooker’s rule consequentialist account of promissory obligation and raise a challenge for it. I conclude that appeals to intuitions about cases will not settle the dispute, and that consequentialists and their critics must instead engage in substantive debate about the nature and stringency of promissory obligation.
An ethical theory is alienating if accepting the theory inhibits the agent from fitting participation in some normative ideal, such as some ideal of integrity, friendship, or community. Many normative ideals involve non-consequentialist behavior of some form or another. If such ideals are normatively authoritative, they constitute counterexamples to consequentialism unless their authority can be explained or explained away. We address a range of attempts to avoid such counterexamples and argue that consequentialism cannot by itself account for the normative authority of all plausible such ideals. At best, consequentialism can find a more modest place in an ethical theory that includes non-consequentialist principles with their own normative authority.
The aim of the consequentializing project is to show that, for every plausible ethical theory, there is a version of consequentialism that is extensionally equivalent to it. One challenge this project faces is that there are common-sense ethical theories that posit moral dilemmas. There has been some speculation about how the consequentializers should react to these theories, but so far there has not been a systematic treatment of the topic. In this article, I show that there are at least five ways in which we can construct versions of consequentialism that are extensionally equivalent to the ethical theories that contain moral dilemmas. I argue that all these consequentializing strategies face a dilemma: either they must posit moral dilemmas in unintuitive cases or they must rely on unsupported assumptions about value, permissions, requirements, or options. I also consider this result's consequences for the consequentializing project.
Elinor Mason draws on ethics and responsibility theory to present a pluralistic view of both wrongness and blameworthiness. Mason argues that our moral concepts, rightness and wrongness, must be connected to our responsibility concepts. But the connection is not simple. She identifies three different ways to be blameworthy, corresponding to different ways of acting wrongly. The paradigmatic way to be blameworthy is to act subjectively wrongly. Mason argues for an account of subjective obligation that is connected to the notion of trying: to act rightly is to try to do well by morality; to act wrongly (and to be blameworthy) is to fail to try hard enough. Trying involves understanding morality; those who do not grasp morality are in a different category. So agents might also be blameworthy for being oriented away from what really matters. In that case, agents are blameworthy in a different sense, the detached sense. Finally, we can become blameworthy by taking responsibility in cases where our agency is ambiguous. In the final section, Mason gives us an account of taking responsibility and argues that this is an important part of our responsibility practices.
In his Introduction to the Principles of Morals and Legislation, Bentham considers a difficulty. If the immediate aim of punishment is to deter agents considering breaking the law, then the severity of the threat of punishment must increase if they are strongly tempted to offend. But it seems intuitively that some people who were strongly tempted to offend should be punished leniently. Bentham argues in response that all potential offenders capable of being deterred must be deterred. He makes three mistakes. First, it is possible that it would produce the most happiness at t2 to punish an offender who could have been deterred at t1, but was not. Second, the Principle of Utility might condemn the threats that would be needed to deter all potential offenders who can be deterred. Third, given the dispositions to reoffend of some strongly tempted offenders, their punishments should be relatively lenient. There is more room for leniency in Bentham's theory than he realized.
I challenge the common picture of the “Standard Story” of Action as a neutral account of action within which debates in normative ethics can take place. I unpack three commitments that are implicit in the Standard Story, and demonstrate that these commitments together entail a teleological conception of reasons, upon which all reasons to act are reasons to bring about states of affairs. Such a conception of reasons, in turn, supports a consequentialist framework for the evaluation of action, upon which the normative status of actions is properly determined through appeal to rankings of states of affairs as better and worse. This covert support for consequentialism from the theory of action, I argue, has had a distorting effect on debates in normative ethics. I then present challenges to each of these three commitments, a challenge to the first commitment by T.M. Scanlon, a challenge to the second by recent interpreters of Anscombe, and a new challenge to the third commitment that requires only minimal and prima facie plausible modifications to the Standard Story. The success of any one of the challenges, I demonstrate, is sufficient to block support from the theory of action for the teleological conception of reasons and the consequentialist evaluative framework. I close by demonstrating the pivotal role that such arguments grounded in the theory of action play in the current debate between evaluator-relative consequentialists and their critics.
Actualists hold that contrary-to-duty scenarios give rise to deontic dilemmas and provide counterexamples to the transmission principle, according to which we ought to take the necessary means to actions we ought to perform. In an earlier article, I argued, contrary to actualism, that the notion of ‘ought’ that figures in conclusions of practical deliberation does not allow for deontic dilemmas and validates the transmission principle. Here I defend these claims, together with my possibilist account of contrary-to-duty scenarios, against Stephen White’s recent criticism.
In a recent article, Xiaofei Liu seeks to defend, from the standpoint of consequentialism, the Doctrine of Doing and Allowing (DDA). While there are various conceptions of DDA, Liu understands it as the view that it is more difficult to justify doing harm than allowing harm. Liu argues that a typical harm doing involves the production of one more evil and one less good than a typical harm allowing. Thus, prima facie, it takes a greater amount of good to justify doing a certain harm than it does to justify allowing that same harm. In this reply, I argue that Liu fails to show, from within a consequentialist framework, that there is an asymmetry between the evils produced by doing and allowing harm. I conclude with some brief remarks on what may establish such an asymmetry.
In his new book, The Dimensions of Consequentialism, Martin Peterson proposes a version of multi-dimensional consequentialism according to which risk is one among several dimensions. We argue that Peterson’s treatment of risk is unsatisfactory. More precisely, we want to show that all problems of one-dimensional (objective or subjective) consequentialism are also problems for Peterson’s proposal, although it may fall prey to them less often. In ending our paper, we address the objection that our discussion overlooks the fact that Peterson’s proposal is not the best version of multi-dimensional consequentialism. Our reply is that the possibilities of improving multi-dimensional consequentialism are very limited as far as risk is concerned.
In his recent book, The Dimensions of Consequentialism, Martin Peterson puts forward a new version of consequentialism that he dubs ‘multidimensional consequentialism’. The defining thesis of the new theory is that there are irreducible moral aspects that jointly determine the deontic status of an act. In defending his particular version of multidimensional consequentialism, Peterson advocates the thesis—he calls it DEGREE—that if two or more moral aspects clash, the act under consideration is right to some non-extreme degree. This goes against the orthodoxy according to which—Peterson calls this RESOLUTION—each act is always either entirely right or entirely wrong. The argument against RESOLUTION appeals to the existence of so-called deontic leaps: the idea is that endorsing RESOLUTION would not give each relevant moral aspect its due in the final analysis. Our paper argues, contrary to Peterson, that: first, all moral aspects remain visible in what can properly be called the final analysis of a moral theory that involves RESOLUTION; second, moral aspects do not have to remain visible in judgements of all-things-considered rightness or wrongness; and third, the introduction of what Peterson calls verdictive reasons does not change the overall picture in favour of DEGREE. We conclude that multi-dimensional consequentialists should accept RESOLUTION rather than DEGREE.
I argue that Alvin Goldman has failed to save process reliabilism from my critique in earlier work of consequentialist or teleological epistemic theories. First, Goldman misconstrues the nature of my challenge: two of the cases he discusses I never claimed to be counterexamples to process reliabilism. Second, Goldman’s reply to the type of case I actually claimed to be a counterexample to process reliabilism is unsuccessful. He proposes a variety of responses, but all of them either feature an implausible restriction on process types, or fail to rule out cases with the sort of structure that generates the worry, or both.
Prioritarianism is the moral view that a fixed improvement in someone's well-being matters more the worse off they are. Its supporters argue that it best captures our intuitions about unequal distributions of well-being. I show that prioritarianism sometimes recommends acts that will make things more unequal while simultaneously lowering total well-being and making things worse for everyone ex ante. Intuitively, there is little to recommend such acts, and I take this to be a serious counterexample to prioritarianism.
Suppose you can save only one of two groups of people from harm, with one person in one group, and five persons in the other group. Are you obligated to save the greater number? While common sense seems to say ‘yes’, the numbers skeptic says ‘no’. Numbers Skepticism has been partly motivated by the anti-consequentialist thought that the goods, harms and well-being of individual people do not aggregate in any morally significant way. However, even many non-consequentialists think that Numbers Skepticism goes too far in rejecting the claim that you ought to save the greater number. Besides the prima facie implausibility of Numbers Skepticism, Michael Otsuka has developed an intriguing argument against this position. Otsuka argues that Numbers Skepticism, in conjunction with an independently plausible moral principle, leads to inconsistent choices regarding what ought to be done in certain circumstances. This inconsistency in turn provides us with a good reason to reject Numbers Skepticism. Kirsten Meyer offers a notable challenge to Otsuka’s argument. I argue that Meyer’s challenge can be met, and then offer my own reasons for rejecting Otsuka’s argument. In light of these criticisms, I then develop an improved, yet structurally similar argument to Otsuka’s argument. I argue for the slightly different conclusion that the view proposed by John Taurek that ‘the numbers don’t count’ leads to inconsistent choices, which in turn provides us with a good reason to reject Taurek’s position.
When it comes to epistemic normativity, should we take the good to be prior to the right? That is, should we ground facts about what we ought and ought not believe on a given occasion in facts about the value of being in certain cognitive states (such as, for example, the value of having true beliefs)? The overwhelming answer among contemporary epistemologists is “Yes, we should.” This essay argues to the contrary. Just as taking the good to be prior to the right in ethics often leads one to sanction implausible trade-offs when determining what an agent should do, so too, this essay argues, taking the good to be prior to the right in epistemology leads one to sanction implausible trade-offs when determining what a subject should believe. Epistemic value—and, by extension, epistemic goals—are not the explanatory foundation upon which all other normative notions in epistemology rest.
Williams argues that impartial moral theories undermine agents’ integrity by making them responsible for allowings as well as doings. I argue that in some cases of allowings, where there is an intervening agent, the agent has been coerced, and so is not fully responsible. I provide an analysis of coercion. Whether an agent is coerced depends on various things (the coercer must provide strong reasons, and the coercer must have a mens rea), and crucially, the coercee’s action is rendered less than fully voluntary by the coercion. The attack on voluntariness is usually explained by limiting coercion to threats rather than offers. I argue that this approach cannot work. Instead I argue that non-voluntariness (and thus coercion) must be understood in terms of the subjective state of the victim. It is a necessary condition of coercion that the coercee actually suffers alienation from her own actions as a result of domination by the coercer. I defend this account and show that it provides an explanation for why agents who are coerced do not act in a fully voluntary way.
Ms. Dimitriou's motivist view has a simple upshot: for at least some cases, our moral assessment of an action should depend on the motives behind it (Dimitriou, passim). This may be contrasted with the antimotivist position, the view that motives should not figure into our moral assessment of an action. She presents two provocative cases where an agent’s motive “infects” the concomitant action. One example involves racist thinking and the other a form of sexual self-gratification. Given that we would never find the action that accompanies these motives morally acceptable once we know what the motives are, Ms. Dimitriou has argued that we ought to embrace motivism. In this brief commentary, I would like to present a few cases that seemingly show the motivist position is flawed. I want my comments to generate a discussion of how Ms. Dimitriou’s position can handle these weird cases, even though my presentation will likely come off as a direct assault on her view.
Recently two distinct forms of rule-utilitarianism have been introduced that differ on how to measure the consequences of rules. Brad Hooker advocates fixed-rate rule-utilitarianism, while Michael Ridge advocates variable-rate rule-utilitarianism. I argue that both of these are inferior to a new proposal, optimum-rate rule-utilitarianism. According to optimum-rate rule-utilitarianism, an ideal code is the code whose optimum acceptance level is no lower than that of any alternative code. I then argue that all three forms of rule-utilitarianism fall prey to two fatal problems that leave us without any viable form of rule-utilitarianism.
Manuscript originally written in 1995. Discusses various attempts to characterize alternatives relevant for deliberation and for the formulation of act-consequentialist accounts of what actions ought to be performed.
One popular line of argument put forward in support of the principle that the right is prior to the good is to show that teleological theories, which put the good prior to the right, lead to implausible normative results. There are situations, it is argued, in which putting the good prior to the right entails that we ought to do things that cannot be right for us to do. Consequently, goodness cannot (always) explain an action's rightness. This indicates that what is right must be determined independently of the good. In this paper, I argue that these purported counterexamples to teleology fail to establish that the right must be prior to the good. In fact, putting the right prior to the good can lead to sets of ought statements which potentially conflict with the principle that ‘ought’ implies ‘can’. I argue that no plausible ethical theory can determine what is right independently of a notion of value or goodness. Every plausible ethical theory needs a mapping from goodness to rightness, which implies that the right cannot be prior to the good.
Consequentialism, many philosophers have claimed, asks too much of us to be a plausible ethical theory. Indeed, the theory's severe demandingness is often claimed to be its chief flaw. My thesis is that as we come to better understand this objection, we see that, even if it signals or tracks the existence of a real problem for Consequentialism, it cannot itself be a fundamental problem with the view. The objection cannot itself provide good reason to break with Consequentialism, because it must presuppose prior and independent breaks with the view. The way the objection measures the demandingness of an ethical theory reflects rather than justifies being in the grip of key anti-Consequentialist conclusions. We should reject Consequentialism independently of the objection or not at all. Thus, we can reduce by one the list of worrisome fundamental complaints against Consequentialism.
Consequentialists typically think that the moral quality of one's conduct depends on the difference one makes. But consequentialists may also think that even if one is not making a difference, the moral quality of one's conduct can still be affected by whether one is participating in an endeavour that does make a difference. Derek Parfit discusses this issue – the moral significance of what I call ‘participation’ – in the chapter of Reasons and Persons that he devotes to what he calls ‘moral mathematics’. In my paper, I expose an inconsistency in Parfit's discussion of moral mathematics by showing how it gives conflicting answers to the question of whether participation matters. I conclude by showing how an appreciation of Parfit's error sheds some light on consequentialist thought generally, and on the debate between act- and rule-consequentialists specifically.
Frank Jackson claims that consequentialists should hold the view that Derek Parfit labels the second ‘mistake in moral mathematics’, which is the view that “If some act is right or wrong because of . . . effects, the only relevant effects are the effects of this particular act.” But each of the three arguments that Jackson offers is unsound. The root of the problem is that in order to argue for the conclusion Jackson aims to establish (that consequentialists should not regard the second “mistake” as a mistake), one must presuppose an overly narrow, and hence distorted, understanding of what consequentialism is.
Moral puzzles about actions which bring about very small or what are said to be imperceptible harms or benefits for each of a large number of people are well known. Less well known is an argument by Warren Quinn that standard theories of rationality can lead an agent to end up torturing himself or herself in a completely foreseeable way, and that this shows that standard theories of rationality need to be revised. We show where Quinn's argument goes wrong, and apply this to the moral puzzles.
Cliff Landesman provides a vivid description of a case where we have no best outcome available to us. He poses this as a problem for utilitarians who advise us to do the best we can. This does indeed make such advice impractical. I begin by contrasting older versions of utilitarianism with newer ones that have appeared in deontic logic and that were designed precisely to accommodate Landesman's sort of scenario. (I cast matters in terms of the Limit Assumption and world-theoretic versions of utilitarianism.) I then make three points. First, Landesman's problem does not pose any special problem for these newer theories. Second, I note that it is an interesting consequence of these newer theories being utilitarian theories that, contrary to the tradition, utilitarianism isn't automatically a no-conflicts theory of obligation. Third, and most importantly, I identify a new, deeper and wider theoretical problem: "The Confinement Problem". This problem infests the newer versions of utilitarianism. Worse still, the infestation spreads to satisficing consequentialism (cf. Scheffler, Slote), the direction Landesman points to for a solution to his problem, and this new problem is one where the theoretical rulings of these theories clearly conflict with intuition.
Traditional utilitarianism, when applied, implies a surprising prediction about the future, viz., that all experience of pleasure and pain must end once and for all, or infinitely dwindle. Not only is this implication surprising, it should render utilitarianism unacceptable to persons who hold any of the following theses: that evaluative propositions may not imply descriptive, factual propositions; that evaluative propositions may not imply contingent factual propositions about the future; that there will always exist beings who experience pleasure or pain.
"From the Proceedings of the British Academy, London, volume LXV (1979)" - title page. Series: Henrietta Hertz Trust annual philosophical lecture -- 1978 Other Titles: Proceedings of the British Academy. Vol.65: 1979.