Commonsense Consequentialism is a book about morality, rationality, and the interconnections between the two. In it, Douglas W. Portmore defends a version of consequentialism that both comports with our commonsense moral intuitions and shares with other consequentialist theories the same compelling teleological conception of practical reasons. Broadly construed, consequentialism is the view that an act's deontic status is determined by how its outcome ranks relative to those of the available alternatives on some evaluative ranking. Portmore argues that outcomes should be ranked, not according to their impersonal value, but according to how much reason the relevant agent has to desire that each outcome obtains and that, when outcomes are ranked in this way, we arrive at a version of consequentialism that can better account for our commonsense moral intuitions than even many forms of deontology can. What's more, Portmore argues that we should accept this version of consequentialism, because we should accept both that an agent can be morally required to do only what she has most reason to do and that what she has most reason to do is to perform the act that would produce the outcome that she has most reason to want to obtain. Although the primary aim of the book is to defend a particular moral theory, Portmore defends this theory as part of a coherent whole concerning our commonsense views about the nature and substance of both morality and rationality. Thus, it will be of interest not only to those working on consequentialism and other areas of normative ethics, but also to those working in metaethics. Beyond offering an account of morality, Portmore offers accounts of practical reasons, practical rationality, and the objective/subjective obligation distinction.
The book concerns what I take to be the least controversial normative principle concerning action: you ought to perform your best option—best, that is, in terms of whatever ultimately matters. The book sets aside the question of what ultimately matters so as to focus on more basic issues, such as: What are our options? Do I have the option of typing out the cure for cancer if that’s what I would in fact do if I had the right intentions at the right times (e.g., the intention to type the letter T at t1, the intention to type the letter H at t2, the intention to type the letter E at t3, etc.)? If I can’t form intentions voluntarily, does that mean that I don’t have the option of forming the intention that I ought to form? Which options do we assess directly in terms of their own goodness and which do we assess in terms of their relations to the goodness of other options? What do we hold fixed when assessing how good an option is? Do we, for instance, hold fixed the agent’s future beliefs, desires, and intentions? And do we hold fixed the agent’s predictable future misbehavior? Lastly, how do the things that ultimately matter determine the goodness of an option? If one of the things that ultimately matters in determining the goodness of an option is that the option doesn’t involve violating anyone’s rights, do we evaluate the option itself in terms of whether it involves violating anyone’s rights or do we evaluate the option’s prospect in terms of this and then the option in terms of its prospect? And what if there is indeterminacy or uncertainty with regard to whether an option would involve violating someone’s rights?
Maximalism is the view that an agent is permitted to perform a certain type of action if and only if she is permitted to perform some instance of this type, where φ-ing is an instance of ψ-ing if and only if φ-ing entails ψ-ing but not vice versa. Now, the aim of this paper is not to defend maximalism, but to defend a certain account of our options that when combined with maximalism results in a theory that accommodates the idea that a moral theory ought to be morally harmonious—that is, ought to be such that the agents who satisfy the theory, whoever and however numerous they may be, are guaranteed to produce the morally best world that they have the option of producing. I argue that, for something to count as an option for an agent, it must, in the relevant sense, be under her control. And I argue that the relevant sort of control is the sort that we exercise over our reasons-responsive attitudes by being both receptive and reactive to reasons. I call this sort of control rational control, and I call the view that φ-ing is an option for a subject if and only if she has rational control over whether she φs rationalism. When we combine this view with maximalism, we get rationalist maximalism, which I argue is a promising moral theory.
To consequentialize a non-consequentialist theory, take whatever considerations the non-consequentialist theory holds to be relevant to determining the deontic statuses of actions and insist that those considerations are relevant to determining the proper ranking of outcomes. In this way, the consequentialist can produce an ordering of outcomes that, when combined with her criterion of rightness, yields the same set of deontic verdicts that the non-consequentialist theory yields. In this paper, I argue that any plausible non-consequentialist theory can be consequentialized. I explain the motivation for the consequentializing project and defend it against recent criticisms by Mark Schroeder and others.
A growing trend of thought has it that any plausible nonconsequentialist theory can be consequentialized, which is to say that it can be given a consequentialist representation. In this essay, I explore both whether this claim is true and what its implications are. I also explain the procedure for consequentializing a nonconsequentialist theory and give an account of the motivation for doing so.
Some right acts have what philosophers call moral worth. A right act has moral worth if and only if its agent deserves credit for having acted rightly in this instance. And I argue that an agent deserves credit for having acted rightly if and only if her act issues from an appropriate set of concerns, where the appropriateness of these concerns is a function of what her ultimate moral concerns should be. Two important upshots of the resulting account of moral worth are that (1) an act can have moral worth even if it doesn’t manifest a concern for doing what’s right and that (2) an act can lack moral worth even if it is performed for the right reasons.
We ought to perform our best option—that is, the option that we have most reason, all things considered, to perform. This is perhaps the most fundamental and least controversial of all normative principles concerning action. Yet, it is not, I believe, well understood. For even setting aside questions about what our options are and what our reasons are, there are prior questions concerning how best to formulate the principle. In this paper, I address these questions. One of the more interesting upshots of this inquiry is that the deontic statuses (e.g., obligatory, optional, and impermissible) of individual actions are determined by the deontic statuses of the larger sets of actions of which they are a part. And, as I show, this has a number of interesting implications both for normative theory and for our understanding of practical reasons.
Blame is multifarious. It can be passionate or dispassionate. It can be expressed or kept private. We blame both the living and the dead. And we blame ourselves as well as others. What’s more, we blame ourselves, not only for our moral failings, but also for our non-moral failings: for our aesthetic bad taste, gustatory self-indulgence, or poor athletic performance. And we blame ourselves both for things over which we exerted agential control (e.g., our voluntary acts) and for things over which we lacked such control (e.g., our desires, beliefs, and intentions). I argue that, despite this manifest diversity in our blaming practices, it’s possible to provide a comprehensive account of blame. Indeed, I propose a set of necessary and sufficient conditions that aims to specify blame’s extension in terms of its constitution as opposed to its function. And I argue that this proposal has a number of advantages beyond accounting for blame in all its disparate forms. For one, it can account for the fact that one’s having had control over whether one was to φ is a necessary condition for one’s being fittingly blamed for having φ-ed. For another, it can account for why, unlike fitting shame, fitting blame is always deserved, which in turn explains why there is something morally problematic about ridding oneself of one’s fitting self-blame (e.g., one’s fitting guilt).
Many philosophers hold that the achievement of one's goals can contribute to one's welfare apart from whatever independent contributions the objects of those goals or the processes by which they are achieved make. Call this the Achievement View, and call those who accept it achievementists. In this paper, I argue that achievementists should accept both that one factor that affects how much the achievement of a goal contributes to one’s welfare is the amount that one has invested in that goal and that the amount that one has invested in a goal is a function of how much one has personally sacrificed for its sake, not a function of how much effort one has put into achieving it. So I will, contrary to at least one achievementist, be arguing against the view that the greater the amount of productive effort that goes into achieving a goal, the more its achievement contributes to one's welfare. Furthermore, I argue that the reason that the achievement of those goals for which one has personally sacrificed matters more to one’s welfare is that, in general, the redemption of one's self-sacrifices in itself contributes to one’s welfare. Lastly, I argue that the view that the redemption of one's self-sacrifices in itself contributes to one's welfare is plausible independent of whether or not we find the Achievement View plausible. We should accept this view so as to account both for the Shape of a Life Phenomenon and for the rationality of honoring "sunk costs."
In this paper, I argue that those moral theorists who wish to accommodate agent-centered options and supererogatory acts must accept both that the reason an agent has to promote her own interests is a nonmoral reason and that this nonmoral reason can prevent the moral reason she has to sacrifice those interests for the sake of doing more to promote the interests of others from generating a moral requirement to do so. These theorists must, then, deny that moral reasons morally override nonmoral reasons, such that even the weakest moral reason trumps the strongest nonmoral reason in the determination of an act’s moral status (e.g., morally permissible or impermissible). If this is right, then it seems that these theorists have their work cut out for them. It will not be enough for them to provide a criterion of rightness that accommodates agent-centered options and supererogatory acts, for, in doing so, they incur a debt. As I will show, in accommodating agent-centered options, they commit themselves to the view that moral reasons are not morally overriding, and so they owe us an account of how both moral reasons and nonmoral reasons come together to determine an act’s moral status.
On what I take to be the standard account of supererogation, an act is supererogatory if and only if it is morally optional and there is more moral reason to perform it than to perform some permissible alternative. And, on this account, an agent has more moral reason to perform one act than to perform another if and only if she morally ought to prefer how things would be if she were to perform the one to how things would be if she were to perform the other. I argue that this account has two serious problems. The first, which I call the latitude problem, is that it has counterintuitive implications in cases where the duty to be exceeded is one that allows for significant latitude in how to comply with it. The second, which I call the transitivity problem, is that it runs afoul of the plausible idea that the one-reason-morally-justifies-acting-against-another relation is transitive. What’s more, I argue that both problems can be overcome by an alternative account, which I call the maximalist account.
It seems that we can be directly accountable for our reasons-responsive attitudes—e.g., our beliefs, desires, and intentions. Yet, we rarely, if ever, have volitional control over such attitudes, volitional control being the sort of control that we exert over our intentional actions. This presents a trilemma: (Horn 1) deny that we can be directly accountable for our reasons-responsive attitudes, (Horn 2) deny that φ’s being under our control is necessary for our being directly accountable for φ-ing, or (Horn 3) deny that the relevant sort of control is volitional control. This paper argues that we should take Horn 3.
In this paper, I argue that maximizing act-consequentialism (MAC)—the theory that holds that agents ought always to act so as to produce the best available state of affairs—can accommodate both agent-centered options and supererogatory acts. Thus I will show that MAC can accommodate the view that agents often have the moral option of either pursuing their own personal interests or sacrificing those interests for the sake of the impersonal good. And I will show that MAC can accommodate the idea that certain acts are supererogatory in the sense of not being morally required even though they are what the agent has most moral reason to do. These two theses are surprising in themselves, but even more surprising is how I arrive at them. I argue that anyone generally concerned to accommodate, in some coherent fashion, our pre-theoretical moral intuitions at both the normative and meta-ethical levels will have to give a certain account of agent-centered options and supererogatory acts and that this account is the very one that allows for the maximizing act-consequentialist to accommodate both. So my paper will not only be of interest to those concerned with the tenability of consequentialism, but also to anyone interested in giving a coherent account of our pre-theoretical moral intuitions.
There is, on a given moral view, a constraint against performing acts of a certain type if that view prohibits agents from performing an instance of that act-type even to prevent two or more others from each performing a morally comparable instance of that act-type. The fact that commonsense morality includes many such constraints has been seen by several philosophers as a decisive objection against consequentialism. Despite this, I argue that constraints are more plausibly accommodated within a consequentialist framework than within the more standard side-constraint framework. For I argue that when we combine agent-relative consequentialism with a Kantian theory of value, we arrive at a version of consequentialism, which I call 'Kantsequentialism', that has several advantages over the standard side-constraint approach to accommodating constraints. What’s more, I argue that Kantsequentialism doesn’t have any of the disadvantages that critics of consequentializing have presumed that such a theory must have.
In this paper, I take it for granted both that there are two types of blameworthiness—accountability blameworthiness and attributability blameworthiness—and that avoidability is necessary only for the former. My task, then, is to explain why avoidability is necessary for accountability blameworthiness but not for attributability blameworthiness. I argue that what explains this is both the fact that these two types of blameworthiness make different sorts of reactive attitudes fitting and that only one of these two types of attitudes requires having been able to refrain from φ-ing in order for them to be fitting.
Consequentialism is an agent-neutral teleological theory, and deontology is an agent-relative non-teleological theory. I argue that a certain hybrid of the two—namely, non-egoistic agent-relative teleological ethics (NATE)—is quite promising. This hybrid takes what is best from both consequentialism and deontology while leaving behind the problems associated with each. Like consequentialism and unlike deontology, NATE can accommodate the compelling idea that it is always permissible to bring about the best available state of affairs. Yet unlike consequentialism and like deontology, NATE accords well with our commonsense moral intuitions.
A theory is agent neutral if it gives every agent the same set of aims and agent relative otherwise. Most philosophers take act-consequentialism to be agent-neutral, but I argue that at the heart of consequentialism is the idea that all acts are morally permissible in virtue of their propensity to promote value and that, given this, it is possible to have a theory that is both agent-relative and act-consequentialist. Furthermore, I demonstrate that agent-relative act-consequentialism can avoid the counterintuitive implications associated with utilitarianism while maintaining the compelling idea that it is never wrong to bring about the best outcome.
This paper argues that the standard account of posthumous harm is untenable. The standard account presupposes the desire-fulfillment theory of welfare, but I argue that no plausible version of this theory can allow for the possibility of posthumous harm. I argue that there are at least two problems with the standard account from the perspective of a desire-fulfillment theorist. First, as most desire-fulfillment theorists acknowledge, the theory must be restricted in such a way that only those desires that pertain to one’s own life count in determining one’s welfare. The problem is that no one has yet provided a plausible account of which desires these are such that desires for posthumous prestige and the like are included. Second and more importantly, if the desire-fulfillment theory is going to be at all plausible, it must, I argue, restrict itself not only to those desires that pertain to one’s own life but also to those desires that are future independent, and this would rule out the possibility of posthumous harm. If I’m right, then even the desire-fulfillment theorist should reject the standard account of posthumous harm. We cannot plausibly account for posthumous harm in terms of desire fulfillment (or the lack thereof).
Common-sense morality includes various agent-centred constraints, including ones against killing unnecessarily and breaking a promise. However, it's not always clear whether, had an agent ϕ-ed, she would have violated a constraint. And sometimes the reason for this is not that we lack knowledge of the relevant facts, but that there is no fact about whether her ϕ-ing would have constituted a constraint-violation. What, then, is a constraint-accepting theory to say about whether it would have been permissible for her to have ϕ-ed? In this paper, I canvass various possible approaches to answering this question and I argue that teleology offers the most plausible approach—teleology being the view that every act has its deontic status in virtue of how its outcome ranks, relative to those of its alternatives. So although, until recently, it had been thought that only deontological theories can accommodate constraints, it turns out that teleological theories not only can accommodate constraints, but can do so more plausibly than deontological theories can.
Dual-ranking act-consequentialism (DRAC) is a rather peculiar version of act-consequentialism. Unlike more traditional forms of act-consequentialism, DRAC doesn’t take the deontic status of an action to be a function of some evaluative ranking of outcomes. Rather, it takes the deontic status of an action to be a function of some non-evaluative ranking that is in turn a function of two auxiliary rankings that are evaluative. I argue that DRAC is promising in that it can accommodate certain features of commonsense morality that no single-ranking version of act-consequentialism can: supererogation, agent-centered options, and the self-other asymmetry. I also defend DRAC against three objections: (1) that its dual-ranking structure is ad hoc, (2) that it denies (putatively implausibly) that it is always permissible to make self-sacrifices that don’t make things worse for others, and (3) that it violates certain axioms of expected utility theory, viz., transitivity and independence.
In this article, I argue that Brad Hooker's rule-consequentialism implausibly implies that what earthlings are morally required to sacrifice for the sake of helping their less fortunate brethren depends on whether or not other people exist on some distant planet even when these others would be too far away for earthlings to affect.
In this paper, I make a presumptive case for moral rationalism: the view that agents can be morally required to do only what they have decisive reason to do, all things considered. And I argue that this view leads us to reject all traditional versions of act-consequentialism. I begin by explaining how moral rationalism leads us to reject utilitarianism.
This paper concerns Warren Quinn’s famous “The Puzzle of the Self-Torturer.” I argue that even if we accept his assumption that practical rationality is purely instrumental, such that what the self-torturer ought to do is simply a function of how the relevant options compare to each other in terms of satisfying his actual preferences, it doesn’t follow that every explanation as to why he shouldn’t advance to the next level must appeal to the idea that so advancing would be suboptimal in terms of the satisfaction of his actual preferences. Rather, we can admit that his advancing would always be optimal, but argue that advancing isn’t always what he ought to do given that advancing sometimes fails to meet some necessary condition for being what he ought to do. For instance, something can be what he ought to do only if it’s an option for him. What’s more, something can be what he ought to do only if it’s something that he can do without responding inappropriately to his reasons—or so I argue. Thus, the solution to the puzzle is, I argue, to realize that, in certain circumstances, advancing is not what the self-torturer ought to do given that he can do so only by responding inappropriately to his reasons.
Maximalism is the view that if an agent is permitted to perform a certain type of action (say, baking), this is in virtue of the fact that she is permitted to perform some instance of this type (say, baking a pie), where φ-ing is an instance of ψ-ing if and only if φ-ing entails ψ-ing but not vice versa. Now, the point of this paper is not to defend maximalism, but to defend a certain account of our options that when combined with maximalism results in a theory that both avoids the sorts of objections that have typically been levelled against maximalism and accommodates the plausible idea that a moral theory must be collectively successful in the sense that everyone’s satisfying the theory guarantees that our theory-given aims will be best achieved. I argue that, for something to count as an option for an agent, it must, in the relevant sense, be under her control. And I argue that the relevant sort of control is the sort that we exercise over our reasons-responsive attitudes (e.g., our beliefs, desires, and intentions) by being both receptive and reactive to reasons. I call this sort of control rational control, and I call the view that φ-ing is an option for an agent if and only if she has rational control over whether she φs rationalism. When we combine this view with maximalism, we get rationalist maximalism, which I argue is a promising moral theory.
Agents often face a choice of what to do. And it seems that, in most of these choice situations, the relevant reasons do not require performing some particular act, but instead permit performing any of numerous act alternatives. This is known as the basic belief. Below, I argue that the best explanation for the basic belief is not that the relevant reasons are incommensurable (Raz) or that their justifying strength exceeds the requiring strength of opposing reasons (Gert), but that they are imperfect reasons—reasons that do not support performing any particular act, but instead support choosing any of the numerous alternatives that would each achieve the same worthy end. In the process, I develop and defend a novel theory of objective rationality, arguing that it is superior to its two most notable rivals.
The performance of one option can entail the performance of another. For instance, baking an apple pie entails baking a pie. Now, suppose that both of these options—baking a pie and baking an apple pie—are permissible. This raises the issue of which, if either, is more fundamental than the other. Is baking a pie permissible because it’s permissible to bake an apple pie? Or is baking an apple pie permissible because it’s permissible to bake a pie? Or are they equally fundamental, as they would be if they were both permissible because, say, they both accord with Kant’s categorical imperative? I defend the view that the permissibility of an option that entails another is more fundamental than the permissibility of the option that it entails. That is, I defend maximalism: the view that if an agent is permitted to perform a certain type of action (say, baking a pie), this is in virtue of the fact that she is permitted to perform some instance of this type (say, baking an apple pie), where φ-ing is an instance of ψ-ing if and only if φ-ing entails ψ-ing but not vice versa. If maximalism is correct, then, as I show, most theories of morality and rationality must be revised.
I argue that when determining whether an agent ought to perform an act, we should not hold fixed the fact that she’s going to form certain attitudes (and, here, I’m concerned with only reasons-responsive attitudes such as beliefs, desires, and intentions). For, as I argue, agents have, in the relevant sense, just as much control over which attitudes they form as which acts they perform. This is important because what effect an act will have on the world depends not only on which acts the agent will simultaneously and subsequently perform, but also on which attitudes she will simultaneously and subsequently form. And this all leads me to adopt a new type of practical theory, which I call rational possibilism. On this theory, we first evaluate the entire set of things over which the agent exerts control, where this includes the formation of certain attitudes as well as the performance of certain acts. And, then, we evaluate individual acts as being permissible if and only if, and because, there is such a set that is itself permissible and that includes that act as a proper part. Importantly, this theory has two unusual features. First, it is not exclusively act-orientated, for it requires more from us than just the performance of certain voluntary acts. It requires, in addition, that we involuntarily form certain attitudes. Second, it is attitude-dependent in that it holds that which acts we’re required to perform depends on which attitudes we’re required to form. I then show how these two features can help us both to address certain puzzling cases of rational choice and to understand why most typical practical theories (utilitarianism, virtue ethics, rational egoism, Rossian deontology, etc.) are problematic.
On the Total Principle, the best state of affairs (ceteris paribus) is the one with the greatest net sum of welfare value. Parfit rejects this principle, because he believes that it implies the Repugnant Conclusion, the conclusion that for any large population of people, all with lives well worth living, there will be some much larger population whose existence would be better, even though its members all have lives that are only barely worth living. Recently, however, a number of philosophers have suggested that the Total Principle does not imply the Repugnant Conclusion provided that a certain axiological view (namely, the ‘Discontinuity View’) is correct. Nevertheless, as I point out, there are three different versions of the Repugnant Conclusion, and it appears that the Total Principle will imply two of the three even if the Discontinuity View is correct. I then go on to argue, first, that one of these two remaining versions turns out not to be repugnant after all and, second, that the last remaining version is not, as it turns out, implied by the Total Principle. Thus, my arguments show that the Total Principle has no repugnant implications.
Consequentialism is usually thought to be unable to accommodate many of our commonsense moral intuitions. In particular, it has seemed incompatible with the intuition that agents should not violate someone's rights even in order to prevent numerous others from committing comparable rights violations. Nevertheless, I argue that a certain form of consequentialism can accommodate this intuition: agent-relative consequentialism--the view according to which agents ought always to bring about what is, from their own individual perspective, the best available outcome. Moreover, I argue that the consequentialist's agent-focused account of the impermissibility of such preventive violations is more plausible than the deontologist's victim-focused account. Contrary to Frances Kamm, I argue that agent-relative consequentialism can adequately deal with single-agent cases, cases where an agent would have to commit one rights violation now in order to minimize her commissions of such rights violations over time.
The performance of one option can entail the performance of another. For instance, I have the option of baking a pumpkin pie as well as the option of baking a pie, and the former entails the latter. Now, suppose that I have both reason to bake a pie and reason to bake a pumpkin pie. This raises the question: Which, if either, is more fundamental than the other? Do I have reason to bake a pie because I have reason to perform some instance of pie-baking—perhaps, pumpkin-pie baking? Or do I have reason to bake a pumpkin pie because I have reason to bake a pie? Or are they equally fundamental, as they would be if, say, I had reason to do each because each would have optimal consequences? The aim of this paper is to compare two possible answers to this question—omnism and maximalism—and to argue that the latter is preferable. Roughly speaking, maximalism is the view that only those options that are not entailed by any other option are to be assessed in terms of whether they have some feature (such as that of having optimal consequences), whereas omnism is the view that all options are to be assessed in terms of whether they have this feature. I argue that there are at least two reasons to prefer maximalism, for it is able to overcome two critical problems with omnism.
In this paper, I argue that we have obligations not only to perform certain actions, but also to have certain attitudes (such as desires, beliefs, and intentions), and this despite the fact that we rarely, if ever, have direct voluntary control over our attitudes. Moreover, I argue that whatever obligations we have with respect to actions derive from our obligations with respect to attitudes. More specifically, I argue that an agent is obligated to perform an action if and only if it’s the action that she would perform if she were to have the attitudes that she ought to have. This view, which I call attitudism, has three important implications. First, it implies that an adequate practical theory must not be exclusively act-orientated. That is, it must require more of us than just the performance of certain voluntary acts. Second, it implies that an adequate practical theory must be attitude-dependent. That is, it must hold that what we ought to do depends on what attitudes we ought to have. Third, it implies that no adequate practical theory can require us to perform acts that we would not perform even if we were to have the attitudes that we ought to have. I then show how these implications can help us both to address certain puzzling cases of rational choice and to understand why most typical practical theories (utilitarianism, rational egoism, virtue ethics, Rossian deontology, etc.) are mistaken.
I argue that rule consequentialism sometimes requires us to act in ways in which we lack sufficient reason to act. And this presents a dilemma for Parfit. Either Parfit should concede that we should reject rule consequentialism (and, hence, Triple Theory, which implies it) despite the putatively strong reasons that he believes we have for accepting the view, or he should deny that morality has the importance he attributes to it. For if morality is such that we sometimes have decisive reason to act wrongly, then what we should be concerned with, practically speaking, is not the morality of our actions, but whether our actions are supported by sufficient reasons. We could, then, for all intents and purposes just ignore morality and focus on what we have sufficient reason to do, all things considered. So if my arguments are cogent, they show that Parfit’s Triple Theory is either false or relatively unimportant in that we can, for all intents and purposes, simply ignore its requirements and just do whatever it is that we have sufficient reason to do, all things considered.
In this paper, I criticize David McNaughton and Piers Rawling's formalization of the agent-relative/agent-neutral distinction. I argue that their formalization is unable to accommodate an important ethical distinction between two types of conditional obligations. I then suggest a way of revising their formalization so as to fix the problem.
We ought to perform our best option—that is, the option that we have most reason, all things considered, to perform. This is perhaps the most fundamental and least controversial of all normative principles concerning action. Yet, it is not, I believe, well understood. For even setting aside questions about what our reasons are and about how best to formulate the principle, there is a question about how we should construe our options. This question is of the utmost importance, for which option will count as being best depends on how broadly or narrowly we are to construe our options. In this paper, I argue that we ought to construe an agent’s options at a time, t, as being those actions (or sets of actions) that are scrupulously securable by her at t.
On commonsense morality, there are two types of situations where an agent is not required to maximize the impersonal good. First, there are those situations where the agent is prohibited from doing so--constraints. Second, there are those situations where the agent is permitted to do so but also has the option of doing something else--options. I argue that there are three possible explanations for the absence of a moral requirement to maximize the impersonal good and that the commonsense moralist must appeal to all three in order to account for the vast array of constraints and options we take there to be.
An act that accords with duty has moral worth if and only if the agent’s reason for performing it is the same as what would have motivated a perfectly virtuous agent to perform it. On one of the two leading accounts of moral worth, an act that accords with duty has moral worth if and only if the agent’s reason for performing it is the fact that it’s obligatory. On the other, an act that accords with duty has moral worth if and only if the agent’s reason for performing it is the fact that it has that feature of obligatory acts that makes them obligatory. I argue that both views are incorrect, providing counterexamples to each. I then argue that, on the correct account, an act can have moral worth only if its agent is motivated out of a fundamental concern for the things that ultimately matter.
I explain what teleological reasons are, distinguish between direct and indirect teleological reasons, and discuss both whether all practical reasons are teleological and whether all teleological reasons are direct.
Imagine both that (1) S1 is deliberating at t about whether or not to x at t' and that (2) although S1’s x-ing at t' would not itself have good consequences, good consequences would ensue if both S1 x's at t' and S2 y's at t", where S1 may or may not be identical to S2 and where t < t' ≤ t". In this paper, I consider how consequentialists should treat S2 and the possibility that S2 will y at t". At one end of the spectrum, consequentialists would hold that, in deciding whether or not to x at t', S1 should always treat S2 as a force of nature over which she has no control and, thus, treat the possibility that S2 will y at t" as she would the possibility that a hurricane will take a certain path. On this view, S1 is to predict whether or not S2 will y and act accordingly. At the other end of the spectrum, consequentialists would hold that S1 should always treat S2 as someone available for mutual cooperation and, thus, treat the possibility that S2 will y at t" as something to be relied upon. On this view, S1 is to rely on S2’s cooperation and so play her part in the best cooperative scheme involving the two of them. A third and intermediate position would be to hold that whether S1 should treat S2 as a force of nature or as someone available for mutual cooperation depends on whether S1 can see to it that S2 will y at t" by, say, having the right set of attitudes. I’ll argue for this third position. As we’ll see, an important implication of this view is that consequentialists should be concerned not just with an agent’s voluntary actions but also with their involuntary acquisitions of various mental attitudes, such as beliefs, desires, and intentions.
Indeed, I will argue that consequentialists should hold both that (1) an agent’s most fundamental duty is to have all those attitudes that she has decisive reason to have and only those attitudes that she has sufficient reason to have and that (2) she has a derivative duty to perform an act x if and only if her fulfilling this fundamental duty ensures that she x’s. Thus, I argue (as Donald Regan did before me) that consequentialism should not be exclusively act-orientated – that it should require agents not only to perform certain voluntary actions but also to have certain attitudes. In the process, I develop a new version of consequentialism, which I call attitude-consequentialism. (The latest version of this paper can always be found at: https://dl.dropboxusercontent.com/u/14740340/Consequentialism%20and%20Coordination%20Problems.pdf)
The performance of one option can entail the performance of another. For instance, I have the option of baking a pumpkin pie as well as the option of baking a pie, and the former entails the latter. Now, suppose that both of these options are permissible. This raises the issue of which, if either, is more fundamental than the other. Is baking a pie permissible because it’s permissible to perform some instance of pie-baking, such as pumpkin-pie baking? Or is baking a pumpkin pie permissible because it’s permissible to bake a pie? Or are they equally fundamental, as they would be if they were both permissible because, say, they both have optimal consequences? The aim of this paper is to compare two alternative responses to this issue—omnism and maximalism—and to argue that the latter is preferable. Roughly speaking, maximalism is the view that only those options that are not entailed by any other option are to be assessed in terms of whether they have some right-making feature F (such as that of having optimal consequences), whereas omnism is the view that all options are to be assessed in terms of whether they are F. I argue that maximalism is preferable to omnism because it provides a more plausible solution to the problem of act versions and is not subject to any problems of its own. And if I’m right about maximalism’s being preferable to omnism, then most moral theories, which are all versions of omnism, need significant revision.
Following Shelly Kagan’s useful terminology, foundational consequentialists are those who hold that the ranking of outcomes is at the foundation of all moral assessment. That is, they hold that moral assessments of right and wrong, virtuous and vicious, morally good and morally bad, etc. are all ultimately a function of how outcomes rank. But foundational consequentialists disagree on what is to be directly evaluated in terms of the ranking of outcomes, which is to say that they disagree on what the primary evaluative focal point is. Act-consequentialists take acts to be the primary evaluative focal point. They evaluate acts in terms of how their outcomes rank (the higher ranked the outcome, the morally better the act), but evaluate everything else in terms of the morally best acts. Thus, the morally best rules are those that would, if internalized, most reliably lead us to perform the morally best acts. Rule-consequentialists, by contrast, take rules to be the primary evaluative focal point. They evaluate rules according to how their outcomes rank and then assess everything else in terms of the morally best rules. Thus, the morally best acts are those that conform to the morally best rules. In this paper, I argue that foundational consequentialists should not take the primary evaluative focal point (or points) to be acts, rules, virtues, or even everything. In so doing, I argue against act-consequentialism, rule-consequentialism, and global consequentialism. But my project is not entirely negative, for I argue that the primary evaluative focal point should be a complex of acts and attitudes. In the end, then, I claim that foundational consequentialists should accept a new kind of consequentialism, which I call attitude-consequentialism.
When one assumes, as I will, that death marks the irrevocable end to one’s existence, it is difficult to make sense of the idea that a person could be harmed or benefited by events that take place after her death. How could a posthumous event either enhance or diminish the welfare of the deceased, who no longer exists? Yet we find that many people have a prudential (i.e., self-interested) concern for what’s going to happen after their deaths. People are, for instance, concerned that their reputations not be slandered, that their achievements not be undermined, and that their contributions not be forgotten, not even after their deaths. Of course, many philosophers would insist that such a concern for what’s going to happen after one’s death must be based on, or a remnant of, a false belief in an afterlife. I, however, will argue that even if death marks the unequivocal and permanent end to one’s existence, people have good reason to be prudentially concerned with what’s going to happen after their deaths, for, as I will show, a person’s welfare can indeed be affected by posthumous events.
In this paper, I present an argument that poses the following dilemma for moral theorists: either (a) reject at least one of three of our most firmly held moral convictions or (b) reject the view that moral reasons are morally overriding, that is, reject the view that moral reasons override non-moral reasons such that even the weakest moral reason defeats the strongest non-moral reason in determining an act’s moral status (e.g., morally permissible). I then argue that we should opt for the second horn of this dilemma, in part because we should be loath to reject such firmly held moral convictions, but also because doing so allows us to dissolve an apparent paradox regarding supererogation. If I’m right, if non-moral reasons are relevant to determining what is and isn’t morally permissible, then it would seem that moral theorists have their work cut out for them. Not only will they need to determine what the fundamental right-making and wrong-making features of actions are (i.e., what moral reasons there are), but they will also need to determine what non-moral reasons there are and which of these are relevant to determining an act’s deontic status. And moral theorists will have to account for how these two very different sorts of reasons—moral and non-moral reasons—“come together” to determine an act’s deontic status. I will not attempt to do this work here, but rather only to argue that the work needs to be done.
This is Chapter 4 of my Commonsense Consequentialism: Wherein Morality Meets Rationality. In this chapter, I argue that any plausible nonconsequentialist theory can be consequentialized, which is to say that, for any plausible nonconsequentialist theory, we can construct a consequentialist theory that yields exactly the same set of deontic verdicts.