A powerful argument against the counterfactual comparative account of harm is that it cannot distinguish harming from failing to benefit. In reply to this problem, I suggest a new account of harm. The account is a counterfactual comparative one, but it counts as harms only those events that make a person occupy his level of well-being at the world at which the event occurs. This account distinguishes harming from failing to benefit in a way that accommodates our intuitions about the standard problem cases. In laying the groundwork for this account, I also demonstrate that rival accounts of harm are able to distinguish harming from failing to benefit only if, and because, they also appeal to the distinction between making upshots happen and allowing upshots to happen. One important implication of my discussion is that preserving the moral asymmetry between harming and failing to benefit requires a commitment to the existence of a metaphysical and moral distinction between making and allowing.
We propose that the prevalent moral aversion to autonomous weapons systems (AWS) is supported by a pair of compelling objections. First, we argue that even a sophisticated robot is not the kind of thing that is capable of replicating human moral judgment. This conclusion follows if human moral judgment is not codifiable, i.e., it cannot be captured by a list of rules. Moral judgment requires either the ability to engage in wide reflective equilibrium, the ability to perceive certain facts as moral considerations, moral imagination, or the ability to have moral experiences with a particular phenomenological character. Robots cannot in principle possess these abilities, so robots cannot in principle replicate human moral judgment. If robots cannot in principle replicate human moral judgment, then it is morally problematic to deploy AWS with that aim in mind. Second, we then argue that even if it is possible for a sufficiently sophisticated robot to make ‘moral decisions’ that are extensionally indistinguishable from (or better than) human moral decisions, these ‘decisions’ could not be made for the right reasons. This means that the ‘moral decisions’ made by AWS are bound to be morally deficient in at least one respect even if they are extensionally indistinguishable from human ones. Our objections to AWS support the prevalent aversion to the employment of AWS in war. They also enjoy several significant advantages over the most common objections to AWS in the literature.
This paper argues that contemporary philosophical literature on meaning in life has important implications for the debate about our obligations to non-human animals. If animal lives can be meaningful, then practices including factory farming and animal research might be morally worse than ethicists have thought. We argue for two theses about meaning in life: that the best account of meaningful lives must take intentional action to be necessary for meaning—an individual’s life has meaning if and only if the individual acts intentionally in ways that contribute to finally valuable states of affairs; and that this first thesis does not entail that only human lives are meaningful. Because non-human animals can be intentional agents of a certain sort, our account yields the verdict that many animals’ lives can be meaningful. We conclude by considering the moral implications of these theses for common practices involving animals.
Most people believe that suffering is intrinsically bad. In conjunction with facts about our world and plausible moral principles, this yields a pro tanto obligation to reduce suffering. This is the intuitive starting point for the moral argument in favor of interventions to prevent wild animal suffering (WAS). If we accept the moral principle that we ought, pro tanto, to reduce the suffering of all sentient creatures, and we recognize the prevalence of suffering in the wild, then we seem committed to the existence of such a pro tanto obligation. Of course, competing values such as the aesthetic, scientific or moral values of species, biodiversity, naturalness or wildness, might be relevant to the all-things-considered case for or against intervention. Still, many argue that, even if we were to give some weight to such values, no plausible theory could resist the conclusion that WAS is overridingly important. This article is concerned with large-scale interventions to prevent WAS, their tractability, and the deep epistemic problem they raise. We concede that suffering gives us a reason to prevent it where it occurs, but we argue that the nature of ecosystems leaves us with no reason to predict that interventions would reduce, rather than exacerbate, suffering. We consider two interventions, based on gene editing technology, proposed as holding promise to prevent WAS; raise epistemic concerns about them; discuss their potential moral costs; and conclude by proposing a way forward: to justify interventions to prevent WAS, we need to develop models that predict the effects of interventions on biodiversity, ecosystem functioning, and animals’ well-being.
I defend a theory of the way in which death is a harm to the person who dies that fits into a larger, unified account of harm and that includes an account of the time of death's harmfulness, one that avoids the implications that death is a timeless harm and that people have levels of welfare at times at which they do not exist.
Desire satisfaction theories of well-being and deprivationism about the badness of death face similar problems: desire satisfaction theories have trouble locating the time when the satisfaction of a future or past-directed desire benefits a person; deprivationism has trouble locating a time when death is bad for a person. I argue that desire satisfaction theorists and deprivation theorists can address their respective timing problems by accepting fusionism, the view that some events benefit or harm individuals only at fusions of moments in time. Fusionism improves on existing solutions to the timing problem for deprivationism because it locates death’s badness at the same time as both the victim of death and death itself, and it accounts for all of the ways that death is bad for a person. Fusionism improves on existing solutions to the problem of temporally locating the benefit of future and past-directed desires because it respects several attractive principles, including the view that the intrinsic value of a time for someone is determined solely by states of affairs that obtain at that time and the view that intrinsically beneficial events benefit a person when they occur.
Machine Learning has become a popular tool in a variety of applications in criminal justice, including sentencing and policing. Media coverage has brought attention to the possibility of predictive policing systems causing disparate impacts and exacerbating social injustices. However, there is little academic research on the importance of fairness in machine learning applications in policing. Although prior research has shown that machine learning models can handle some tasks efficiently, they are susceptible to replicating the systemic bias of previous human decision-makers. While there is much research on fair machine learning in general, there is a need to investigate fair machine learning techniques as they pertain to predictive policing. Therefore, we evaluate the existing publications in the field of fairness in machine learning and predictive policing to arrive at a set of standards for fair predictive policing. We also review the evaluations of ML applications in the area of criminal justice and potential techniques to improve these technologies going forward. We urge that the growing literature on fairness in ML be brought into conversation with the legal and social science concerns being raised about predictive policing. Lastly, in any area, including predictive policing, the pros and cons of the technology need to be evaluated holistically to determine whether and how the technology should be used in policing.
To many, the idea of autonomous weapons systems (AWS) killing human beings is grotesque. Yet critics have had difficulty explaining why it should make a significant moral difference if a human combatant is killed by an AWS as opposed to being killed by a human combatant. The purpose of this paper is to explore the roots of various deontological concerns with AWS and to consider whether these concerns are distinct from any concerns that also apply to long-distance, human-guided weaponry. We suggest that at least one major driver of the intuitive moral aversion to lethal AWS is that their use disrespects their human targets by violating the martial contract between human combatants. On our understanding of this doctrine, service personnel cede a right not to be directly targeted with lethal violence to other human agents alone. Artificial agents, of which AWS are one example, cannot understand the value of human life. A human combatant cannot transfer his privileges of targeting enemy combatants to a robot. Therefore, the human duty-holder who deploys AWS breaches the martial contract between human combatants and disrespects the targeted combatants. We consider whether this novel deontological objection to AWS forms the foundation of several other popular yet imperfect deontological objections to AWS.
Robert Sparrow argues that several initially plausible arguments in favor of the deployment of autonomous weapons systems (AWS) in warfare fail, and that their deployment faces a serious moral objection: deploying AWS fails to express the respect for the casualties of war that morality requires. We critically discuss Sparrow’s argument from respect and respond on behalf of some objections he considers. Sparrow’s argument against AWS relies on the claim that they are distinct from accepted weapons of war in that they either fail to transmit an attitude of respect or they transmit an attitude of disrespect. We argue that this distinction between AWS and widely accepted weapons is illusory, and so cannot ground a moral difference between AWS and existing methods of waging war. We also suggest that, if deploying conventional soldiers in some situation would be permissible, and if we could expect deploying AWS to cause fewer civilian casualties, then it would be consistent with an intuitive understanding of respect to deploy AWS in this situation.
John Martin Fischer and Anthony L. Brueckner have argued that a person’s death is, in many cases, bad for him, whereas a person’s prenatal non-existence is not bad for him. Their suggestion relies on the idea that death deprives the person of pleasant experiences that it is rational for him to care about, whereas prenatal non-existence only deprives him of pleasant experiences that it is not rational for him to care about. Jens Johansson has objected to this justification of ‘The Asymmetry’ between the badness of death and prenatal non-existence on the grounds that what it is actually rational for us to care about is irrelevant to the question of whether the event is bad for us. Taylor Cyr has recently argued that Jens Johansson’s objection to Fischer’s and Brueckner’s position relies on an incoherent example, and is thus unsuccessful. I argue that Cyr’s attempt to defend Fischer and Brueckner in fact illustrates that their position is incoherent, and that Johansson’s objection therefore succeeds.
The jus ad bellum criterion of right intention (CRI) is a central guiding principle of just war theory. It asserts that a country’s resort to war is just only if that country resorts to war for the right reasons. However, there is significant confusion, and little consensus, about how to specify the CRI. We seek to clear up this confusion by evaluating several distinct ways of understanding the criterion. On one understanding, a state’s resort to war is just only if it plans to adhere to the principles of just war while achieving its just cause. We argue that the first understanding makes the CRI superfluous, because it can be subsumed under the probability of success criterion. On a second understanding, a resort to war is just only if a state’s motives, which explain its resort to war, are of the right kind. We argue that this second understanding of the CRI makes it a significant further obstacle to justifying war. However, this second understanding faces a possible infinite regress problem, wh...
A common criticism of the use of algorithms in criminal justice is that algorithms and their determinations are in some sense ‘opaque’—that is, difficult or impossible to understand, whether because of their complexity or because of intellectual property protections. Scholars have noted some key problems with opacity, including that opacity can mask unfair treatment and threaten public accountability. In this paper, we explore a different but related concern with algorithmic opacity, which centers on the role of public trust in grounding the legitimacy of criminal justice institutions. We argue that algorithmic opacity threatens the trustworthiness of criminal justice institutions, which in turn threatens their legitimacy. We first offer an account of institutional trustworthiness before showing how opacity threatens to undermine an institution’s trustworthiness. We then explore how threats to trustworthiness affect institutional legitimacy. Finally, we offer some policy recommendations to mitigate the threat to trustworthiness posed by the opacity problem.
This article introduces a non-human version of the non-identity problem and suggests that such a variation exposes weaknesses in several proposed person-focused solutions to the classic version of the problem. It suggests first that person-affecting solutions fail when applied to non-human animals and, second, that many common moral arguments against climate change should be called into question. We argue that a more inclusive version of the person-affecting principle, which we call the ‘patient-affecting principle’, captures more accurately the moral challenge posed by the non-identity problem. We argue further that the failure of person-affecting solutions to solve non-human versions of the problem lends support to impersonal solutions to the problem, which avoid issues of personhood or species identity. Finally, we conclude that some environmental arguments against climate change that rely on the notion of personal harm should be recast in impersonal terms.
The aim of this paper is to explain and defend a type of argument common in the doing/allowing literature called a “contrast argument.” I am concerned with defending a particular type of contrast argument that is intended to demonstrate the moral irrelevance of the doing/allowing distinction. This type of argument, referred to in this paper as an “irrelevance argument,” is exemplified by an argument offered by James Rachels (1975) that employs the Smith and Jones bathtub cases. My main contention in this paper is that none of the objections to the use of irrelevance arguments are successful, and that such arguments still pose a genuine challenge to defenders of the moral relevance of the doing/allowing distinction.
This paper synthesizes scholarship from several academic disciplines to identify and analyze five major ethical challenges facing data-driven policing. Because the term “data-driven policing” encompasses a broad swath of technologies, we first outline several data-driven policing initiatives currently in use in the United States. We then lay out the five ethical challenges. Certain of these challenges have received considerable attention already, while others have been largely overlooked. In many cases, the challenges have been articulated in the context of related discussions, but their distinctively ethical dimensions have not been explored in much detail. Our goal here is to articulate and clarify these ethical challenges, while also highlighting areas where these issues intersect and overlap. Ultimately, responsible data-driven policing requires collaboration between communities, academics, technology developers, police departments, and policy-makers to confront and address these challenges. And as we will see, it may also require critically reexamining the role and value of police in society.
Many social trends are conspiring to drive the adoption of greater automation in society, and we will certainly see a greater offloading of human decision-making to robots in the future. Many of these decisions are morally salient, including decisions about how benefits and burdens are distributed. Roboticists and ethicists have begun to think carefully about the moral decision-making apparatus for machines. Their concerns often center around the plausible claim that robots will lack many of the mental capacities that are indispensable in human moral decision-making, such as empathy. To the extent that robots may be robustly artificially intelligent, these concerns subside, but they give way to new worries about creating artificial agents to do our bidding, if those artificial agents have moral standing. We suggest that the question of AI consciousness poses a dilemma. Whether artificially intelligent agents will be conscious or not, we will face serious difficulties in programming them to reliably make moral decisions.
A quiet revolution is occurring in the field of transplantation. Traditionally, transplants have involved solid organs such as the kidney, heart and liver, which are transplanted to prevent recipients from dying. Now transplants of the face, hand, uterus, penis and larynx are being performed that aim at improving a recipient's quality of life. The shift away from saving lives toward making them better requires a shift in the ethical thinking that has long formed the foundation of organ transplantation. The addition of new forms of transplants requires doctors, patients, regulators and the public to rethink the risk and benefit ratio represented by trade-offs between saving life, extending life and risking the loss of life to achieve improvements in the quality of life.
Predictive policing, the practice of using algorithmic systems to forecast crime, is heralded by police departments as the new frontier of crime analysis. At the same time, it is opposed by civil rights groups, academics, and media outlets for being ‘biased’ and therefore discriminatory against communities of color. This paper argues that the prevailing focus on racial bias has overshadowed two normative factors that are essential to a full assessment of the moral permissibility of predictive policing: fairness in the social distribution of the benefits and burdens of policing, as well as the distinctive role of consent in determining fair distribution. When these normative factors are given their due attention, several requirements emerge for the fair implementation of predictive policing. Among these requirements are that police departments inform and solicit buy-in from affected communities about strategic decision-making and that departments favor non-enforcement-oriented interventions.
Two alternative accounts have emerged as viable competitors to the forerunning counterfactual comparative account in the recent debate concerning the nature of harm. These are the “non-comparative state-based account of harm” defended by Elizabeth Harman and the “event-based account of harm” defended by Matthew Hanser. I raise one simple but serious counterexample involving “non-regrettable disabilities” that applies to both of these alternative accounts but that is avoided by the counterfactual comparative account. I point out that my counterexample is one instance of a broader problem for alternatives to the counterfactual comparative account. The problem is that each of them divorces the concept of harm from the intuitive idea that we have moral and prudential reasons to avoid it.
Anthropocentric indirect arguments (AIAs), which call for specific policies or actions because of human benefits that are correlated with but not caused by benefits to the environment, are gaining increasing traction with those who take a pragmatic approach to environmental protection. I contend that nonanthropocentrists might remain justifiably uneasy about AIAs because such arguments fail to challenge prevailing speciesist moral attitudes. I close by considering whether Elliott can address this concern of nonanthropocentrists by appealing to the ability of AIAs to engender an intrinsic concern for the environment in the people they persuade.
I respond to Monika Piotrowska's argument against anthropocentric theories of moral status that they yield disparate moral verdicts about parallel cases of embryonic stem cell transplantation. I argue that anthropocentric theories of moral status may not fall prey to this problem because embryonic stem cell transplantation may constitute creation rather than mere enhancement.
I respond to David Shoemaker's arguments for the conclusion that personal identity is irrelevant for death. I contend that we can accept Shoemaker's claim that loss of personal identity is not sufficient for death while nonetheless maintaining that there is an important theoretical relationship between death and personal identity. I argue that this relationship is also of practical importance for physicians' decisions about organ reallocation.
The Precautionary Principle is frequently invoked as a guiding principle in environmental policy. In this article, I raise a couple of problems for the application of the Precautionary Principle when it comes to policies concerning Genetically Modified Organisms (GMOs). First, I argue that if we accept Stephen Gardiner’s sensible conditions under which it is appropriate to employ the Precautionary Principle for emerging technologies, it is unclear that GMOs meet those conditions. In particular, I contend that GM crops hold the potential to provide more than a mere bonus; they hold the (admittedly uncertain) potential to prevent serious harm to millions of people. This means that, if proponents of the Precautionary Principle take prevention of harm as seriously as avoidance of harm, then precaution may tell in favor of GMOs rather than against them. Second, I observe that the use of GM technology in the developing world is likely to be identity-affecting; it will cause people to exist who otherwise would not have. I argue that this undermines Precautionary Principle-based objections to GM technology that appeal to the potentially harmful effects of GMOs on future generations.
This paper introduces a novel approach to evaluating theories of the good. It proposes evaluating these theories on the basis of their compatibility with the most plausible ways of calculating the overall intrinsic value of a world. The paper evaluates the plausibility of egalitarianism using this approach, arguing that egalitarianism runs afoul of the more plausible ways of calculating the overall intrinsic value of a world. Egalitarianism conflicts with the general motivation for totalism and critical-level totalism, which is that the independent contributions of each individual’s life should be counted separately. It conflicts with the most plausible version of averagism because only the highly implausible simultaneous life-segment version of egalitarianism can make sense of inequality being disvaluable at a time. Egalitarianism combined with a diminishing marginal value theory also fails because it holds that, other things equal, the world is a better place when we reduce inequality by adding many people whose lives go very badly but whose sheer numbers lessen inequality. The discussion moves the debate about egalitarianism forward by circumventing the oft-discussed, but intractable, debate concerning the leveling down objection. It also reveals a promising new approach to critiquing theories of the good.
This book was born out of two interdisciplinary seminars held in 2014. The first one was the Climate Ethics and Climate Economics workshop in April, held as part of the European Consortium for Political Research Joint Sessions 2014 in Salamanca. Spurred on by the invigorating discussions, the participants decided to put together more workshops, with Ethical Underpinnings of Climate Economics following in Helsinki in November that same year. Without the organisers of these workshops the collaborators of this book would not have come together: Matthew Rendall, Dominic Roser, Säde Hormio, Simo Kyllönen, Aaron Maltais and Joanna Burch-Brown. We would also like to thank all the participants at the workshops for making them so enjoyable and worthwhile. The Helsinki workshop that this book was named after was organised as part of the Climate Ethics and Economics project, led by Aki Lehtinen and funded by the University of Helsinki. The three-year project is based at the Social and Moral Philosophy discipline in the Department of Political and Economic Studies. The workshop itself was made possible by funding from the Academy of Finland Centre of Excellence in the Philosophy of the Social Sciences, which also helped to host the event.