Many subscribe to an Ethic of Life, an ethical perspective on which all living things are deserving of some level of moral concern. Within philosophy, the Ethic of Life has been clarified, developed, and rigorously defended; it has also found its strongest critics. Currently, the debate is at a standstill. This book ends this stalemate by proving that the Ethic of Life must be abandoned.
In order to determine whether current (or future) machines have a welfare that we as agents ought to take into account in our moral deliberations, we must determine which capacities give rise to interests and whether current machines have those capacities. After developing an account of moral patiency, I argue that current machines should be treated as mere machines. That is, current machines should be treated as if they lack those capacities that would give rise to psychological interests. Therefore, they are moral patients only if they have non-psychological interests. I then provide an account of what I call teleo interests that constitute the most plausible type of non-psychological interest that a being might have. I then argue that even if current machines have teleo interests, they are such that agents need not concern themselves with these interests. Therefore, for all intents and purposes, current machines are not moral patients.
Synthetic organisms are at the same time organisms and artifacts. In this paper we aim to determine whether such entities have a good of their own, and so are candidates for being directly morally considerable. We argue that the good of non-sentient organisms is grounded in an etiological account of teleology, on which non-sentient organisms can come to be teleologically organized on the basis of their natural selection etiology. After defending this account of teleology, we argue that there are no grounds for excluding synthetic organisms from having a good also grounded in their teleological organization. However, this comes at a cost; traditional artifacts will also be seen as having a good of their own. We defend this as the best solution to the puzzle about what to say about the good of synthetic organisms.
This paper addresses the foundations of Teleological Individualism, the view that organisms, even non-sentient organisms, are goal-oriented systems while biological collectives, such as ecosystems or conspecific groups, are mere assemblages of organisms. Typical defenses of Teleological Individualism ground the teleological organization of organisms in the workings of natural selection. This paper shows that grounding teleological organization in natural selection is antithetical to Teleological Individualism because such views assume a view about the units of selection on which it is only individual organisms that are units of selection. However, none of the Conventionalist, Reductionist, or Multi-Level Realist theories serve to justify such an assumption. Thus, Teleological Individualism cannot be grounded in natural selection.
Robust technological enhancement of core cognitive capacities is now a realistic possibility. From the perspective of neutralism, the view that justifications for public policy should be neutral between reasonable conceptions of the good, only members of a subset of the ethical concerns serve as legitimate justifications for public policy regarding robust technological enhancement. This paper provides a framework for the legitimate use of ethical concerns in justifying public policy decisions regarding these enhancement technologies by evaluating the ethical concerns that arise in the context of testing such technologies on nonhuman animals. Traditional issues in bioethics, as well as novel concerns such as the possibility of moral status enhancement, are evaluated from the perspective of neutralism.
The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts who design and deploy them. Is it morally problematic to make use of opaque automated methods when making high-stakes decisions, like whether to issue a loan to an applicant, or whether to approve a parole request? Many scholars answer in the affirmative. However, there is no widely accepted explanation for why transparent systems are morally preferable to opaque systems. We argue that the use of automated decision-making systems sometimes violates duties of consideration that are owed by decision-makers to decision-subjects, duties that are both epistemic and practical in character. Violations of that kind generate a weighty consideration against the use of opaque decision systems. In the course of defending our approach, we show that it is able to address three major challenges sometimes leveled against attempts to defend the moral import of transparency in automated decision-making.
This chapter evaluates whether AI systems are or will be rights-holders, explaining the conditions under which people should recognize AI systems as rights-holders. It develops a skeptical stance toward the idea that current forms of artificial intelligence are holders of moral rights, beginning with an articulation of one of the most prominent and most plausible theories of moral rights: the Interest Theory of rights. On the Interest Theory, AI systems will be rights-holders only if they have interests or a well-being. Current AI systems are not bearers of well-being, and so fail to meet the necessary condition for being rights-holders. This argument is robust against a range of different objections. However, the chapter also shows why difficulties in assessing whether future AI systems might have interests or be bearers of well-being—and so be rights-holders—raise difficult ethical challenges for certain developments in AI.
Our environmental wrongdoings result in a moral debt that requires restitution. One component of restitution is reparative and another is remediative. The remediative component requires that we remediate our characters in ways that alter or eliminate the character traits that tend to lead, in their expression, to environmental wrongdoing. Restitutive restoration is a way of engaging in ecological restoration that helps to meet the remediative requirement that accompanies environmental wrongdoing. This account of restoration provides a new motivation and justification for engaging in restorative practices in addition to the standard pragmatist justification and motivations.
Embedding ethics modules within computer science courses has become a popular response to the growing recognition that CS programs need to better equip their students to navigate the ethical dimensions of computing technologies like AI, machine learning, and big data analytics. However, the popularity of this approach has outpaced the evidence of its positive outcomes. To help close that gap, this empirical study reports positive results from Northeastern’s program that embeds values analysis modules into CS courses. The resulting data suggest that such modules have a positive effect on students’ moral attitudes and that students leave the modules believing they are more prepared to navigate the ethical dimensions they’ll likely face in their eventual careers. Importantly, these gains were accomplished at an institution without a philosophy doctoral program, suggesting this strategy can be effectively employed by a wider range of institutions than many have thought.
Climate negotiations under the United Nations Framework Convention on Climate Change have so far failed to achieve a robust international agreement to reduce greenhouse gas emissions. Game theory has been used to investigate possible climate negotiation solutions and strategies for accomplishing them. Negotiations have been primarily modelled as public goods games such as the Prisoner’s Dilemma, though coordination games or games of conflict have also been used. Many of these models have solutions, in the form of equilibria, corresponding to possible positive outcomes—that is, agreements with the requisite emissions reduction commitments. Other work on large-scale social dilemmas suggests that it should be possible to resolve the climate problem. It therefore seems that equilibrium selection may be a barrier to successful negotiations. Here we use an N-player bargaining game in an agent-based model with learning dynamics to examine the past failures of and future prospects for a robust international climate agreement. The model suggests reasons why the desirable solutions identified in previous game-theoretic models have not yet been accomplished in practice and what mechanisms might be used to achieve these solutions.
Those who wish to abolish or restrict the use of non-human animals in so-called factory farming and/or experimentation often argue that these animal use practices are incommensurate with animals’ moral status. If sound, these arguments would establish that, as a matter of ethics or justice, we should voluntarily abstain from the immoral animal use practices in question. But these arguments can’t and shouldn’t be taken to establish a related conclusion: that the moral status of animals justifies political intervention to disallow or significantly diminish factory farming and animal experimentation. In this paper, we set out to do two things: First, we argue that while the arguments mentioned above may establish the moral impermissibility or injustice of the practices they condemn, they are not sufficient to justify political interventions or social policies to abolish or restrict such practices. It is one thing to argue that some moral imperative or imperative of justice exists, and quite another thing to call for the use of political power to induce compliance with that imperative. Our second task is to assess the prospects for developing an argument that is sufficient to justify political interventions to restrict or abolish the use of non-human animals in factory farming or experimentation. Beyond establishing the immorality or injustice of animal consumption or experimentation, one must show that the interventions in question constitute legitimate use of political power. Would prohibiting or discouraging animal use be legitimate? We attempt to answer this question within the context of fundamental liberal constraints on the legitimate use of coercive political power.
This book consists of thirteen chapters that address the ethical issues raised by technological intervention and design across a broad range of biological and ecological systems. Among the technologies addressed are geoengineering, human enhancement, sex selection, genetic modification, and synthetic biology.