The United States Army’s Future Combat Systems Project, which aims to manufacture a “robot army” to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of a decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally be described as a war crime. A number of possible loci of responsibility for robot war crimes are canvassed: the persons who designed or programmed the system; the commanding officer who ordered its use; and the machine itself. I argue that in fact none of these is ultimately satisfactory. Yet it is a necessary condition for fighting a just war, under the principle of jus in bello, that someone can be justly held responsible for deaths that occur in the course of the war. As this condition cannot be met in relation to deaths caused by an autonomous weapon system, it would therefore be unethical to deploy such systems in warfare.
Ever since the publication of Derek Parfit’s Reasons and Persons, bioethicists have tended to distinguish between two different ways in which reproductive technologies may have implications for the...
Despite the advent of CRISPR, safe and effective gene editing for human enhancement remains well beyond our current technological capabilities. For the discussion about enhancing human beings to be worth having, then, we must assume that gene-editing technology will improve rapidly. However, rapid progress in the development and application of any technology comes at a price: obsolescence. If the genetic enhancements we can provide children get better and better each year, then the enhancements granted to children born in any given year will rapidly go out of date. Sooner or later, every modified child will find him- or herself to be “yesterday’s child.” The impacts of such obsolescence on our individual, social, and philosophical self-understanding constitute an underexplored set of considerations relevant to the ethics of genome editing.
It is remarkable how much robotics research is promoted by appealing to the idea that the only way to deal with a looming demographic crisis is to develop robots to look after older persons. This paper surveys and assesses the claims made on behalf of robots in relation to their capacity to meet the needs of older persons. We consider each of the roles that have been suggested for robots in aged care and attempt to evaluate how successful robots might be in these roles. We do so from the perspective of writers concerned primarily with the quality of aged care, paying particular attention to the social and ethical implications of the introduction of robots, rather than from the perspective of robotics, engineering, or computer science. We emphasise the importance of the social and emotional needs of older persons—which, we argue, robots are incapable of meeting—in almost any task involved in their care. Even if robots were to become capable of filling some service roles in the aged-care sector, economic pressures on the sector would most likely ensure that the result was a decrease in the amount of human contact experienced by older persons being cared for, which itself would be detrimental to their well-being. This means that the prospects for the ethical use of robots in the aged-care sector are far fewer than first appears. More controversially, we believe that it is not only misguided, but actually unethical, to attempt to substitute robot simulacra for genuine social interaction. A subsidiary goal of this paper is to draw attention to the discourse about aged care and robotics and locate it in the context of broader social attitudes towards older persons. We conclude by proposing a deliberative process involving older persons as a test for the ethics of the use of robots in aged care.
A number of philosophers working in applied ethics and bioethics are now earnestly debating the ethics of what they term “moral bioenhancement.” I argue that the society-wide program of biological manipulations required to achieve the purported goals of moral bioenhancement would necessarily implicate the state in a controversial moral perfectionism. Moreover, the prospect of being able to reliably identify some people as, by biological constitution, significantly and consistently more moral than others would seem to pose a profound challenge to egalitarian social and political ideals. Even if moral bioenhancement should ultimately prove to be impossible, there is a chance that a bogus science of bioenhancement would lead to arbitrary inequalities in access to political power or facilitate the unjust rule of authoritarians; in the meantime, the debate about the ethics of moral bioenhancement risks reinvigorating dangerous ideas about the extent of natural inequality in the possession of the moral faculties.
The fact that real-world decisions made by artificial intelligences are often ethically loaded has led a number of authorities to advocate the development of “moral machines”. I argue that the project of building “ethics” “into” machines presupposes a flawed understanding of the nature of ethics. Drawing on the work of the Australian philosopher Raimond Gaita, I argue that ethical dilemmas are problems for particular people and not problems for everyone who faces a similar situation. Moreover, the force of an ethical claim depends in part on the life history of the person who is making it. For both these reasons, machines could at best be engineered to provide a shallow simulacrum of ethics, which would have limited utility in confronting the ethical and policy dilemmas associated with AI.
In ‘Moral Enhancement, Freedom, and the God Machine’, Savulescu and Persson argue that recent scientific findings suggest that there is a realistic prospect of achieving ‘moral enhancement’ and respond to Harris's criticism that this would threaten individual freedom and autonomy. I argue that although some pharmaceutical and neuro‐scientific interventions may influence behaviour and emotions in ways that we may be inclined to evaluate positively, describing this as ‘moral enhancement’ presupposes a particular, contested account of what it is to act morally and implies that entirely familiar drugs such as alcohol, ecstasy, and marijuana are also capable of making people ‘more moral’. Moreover, while Savulescu and Persson establish the theoretical possibility of using drugs to promote autonomy, the real threat posed to freedom by ‘moral bioenhancement’ is that the ‘enhancers’ will be wielding power over the ‘enhanced’. Drawing on Pettit's notion of ‘freedom as non‐domination’, I argue that individuals may be rendered unfree even by a hypothetical technology such as Savulescu and Persson's ‘God machine’, which would only intervene if they chose to act immorally. While it is impossible to rule out the theoretical possibility that moral enhancement might be all‐things‐considered justified even where it did threaten freedom and autonomy, I argue that any technology for biomedical shaping of behaviour and dispositions is much more likely to be used for ill rather than good.
Sex robots are likely to play an important role in shaping public understandings of sex and of relations between the sexes in the future. This paper contributes to the larger project of understanding how they will do so by examining the ethics of the “rape” of robots. I argue that the design of realistic female robots that could explicitly refuse consent to sex in order to facilitate rape fantasy would be unethical because sex with robots in these circumstances is a representation of the rape of a woman, which may increase the rate of rape, expresses disrespect for women, and demonstrates a significant character defect. Even when the intention is not to facilitate rape, the design of robots that can explicitly refuse consent is problematic due to the likelihood that some users will experiment with raping them. Designing robots that lack the capacity to explicitly refuse consent may be morally problematic depending on which of two accounts of the representational content of sex with realistic humanoid robots is correct.
There is increasing speculation within military and policy circles that the future of armed conflict is likely to include extensive deployment of robots designed to identify targets and destroy them without the direct oversight of a human operator. My aim in this paper is twofold. First, I will argue that the ethical case for allowing autonomous targeting, at least in specific restricted domains, is stronger than critics have acknowledged. Second, I will attempt to uncover, explicate, and defend the intuition that even in this context there would be something ethically problematic about such targeting. I argue that an account of the non-consequentialist foundations of the principle of distinction suggests that the use of autonomous weapon systems is unethical by virtue of failing to show appropriate respect for the humanity of our enemies. However, the success of the strongest form of this argument depends upon understanding the robot itself as doing the killing. To the extent that we believe that, on the contrary, AWS are only the means whereby those who order them into action kill, the idea that the use of AWS fails to respect the humanity of our enemy will turn upon an account of what is required by respect that is essentially conventional. Thus, while the theoretical foundations of the idea that AWS are weapons that are “evil in themselves” are weaker than critics have sometimes maintained, they are nonetheless sufficient to the task of demanding a prohibition of the development and deployment of such weapons.
In this paper I describe a future in which persons in advanced old age are cared for entirely by robots and suggest that this would be a dystopia, which we would be well advised to avoid if we can. Paying attention to the objective elements of welfare rather than to people’s happiness reveals the central importance of respect and recognition, which robots cannot provide, to the practice of aged care. A realistic appreciation of the current economics of the aged care sector suggests that the introduction of robots into an aged care setting will most likely threaten rather than enhance these goods. I argue that, as a result, the development of robotics is likely to transform aged care in accordance with a trajectory of development that leads towards this dystopian future even when this is not the intention of the engineers working to develop robots for aged care. While an argument can be made for the use of robots in aged care where the people being cared for have chosen to allow robots in this role, I suggest that over-emphasising this possibility risks rendering it a self-fulfilling prophecy, depriving those being cared for of valuable social recognition, and failing to provide respect for older persons by allowing the options available to them to be shaped by the design choices of others.
In Enhancing Evolution: The Ethical Case for Making Better People (2007), John Harris argues that a proper concern for the welfare of future human beings implies that we are morally obligated to pursue enhancements. Similarly, in “Procreative Beneficence: Why We Should Select The Best Children” (2001) and in a number of subsequent publications, Julian Savulescu has suggested that we are morally obligated to use genetic (and other) technologies to produce the best children possible. In this paper I argue that if we do have such obligations then their implications are much more radical than either Harris or Savulescu admit. There is an uneasy tension in the work of these authors, between their consequentialism and their (apparent) libertarianism when it comes to the rights of individuals to use—or not use—enhancement technologies as they see fit. Only through a very particular and not especially plausible negotiation of the tension between their moral theory and their policy prescriptions can Harris and Savulescu obscure the fact that their philosophies have implications that most people would find profoundly unattractive.
The idea that a world in which everyone was born “perfect” would be a world in which something valuable was missing often comes up in debates about the ethics of technologies of prenatal testing and preimplantation genetic diagnosis. This thought plays an important role in the “disability critique” of prenatal testing. However, the idea that human genetic variation is an important good with significant benefits for society at large is also embraced by a wide range of figures writing in the bioethics literature, including some who are notoriously hostile to the idea that we should not select against disability. By developing a number of thought experiments wherein we are to contemplate increasing genetic diversity from a lower baseline in order to secure this value, I argue that this powerful intuition is more problematic than is generally recognized, especially where the price of diversity is the well-being of particular individuals.
Nancy Jecker is right when she says that older persons ought not to be ashamed if they wish to remain sexually active in advanced old age. She offers a useful account of the role that sexuality plays in supporting key human capabilities. However, Jecker assumes an exaggerated account of what sex robots are likely to be able to offer for the foreseeable future when she suggests that we are obligated to make them available to older persons with disabilities. Moreover, whether older persons should be ashamed to desire sex robots—or, more importantly, whether we should be ashamed at the thought that we should respond to the sexual needs of older persons by providing them with sex robots—turns on a range of arguments that Jecker fails to adequately consider.
The normative significance of the distinction between therapy and enhancement has come under sustained philosophical attack in recent discussions of the ethics of shaping future persons by means of preimplantation genetic diagnosis and other advanced genetic technologies. In this paper, I argue that giving up the idea that the answer to the question as to whether a condition is “normal” should play a crucial role in assessing the ethics of genetic interventions has unrecognized and strongly counterintuitive implications when it comes to selecting what sort of children should be brought into the world. According to standard philosophical accounts of the factors one should take into account when making such ...
The cochlear implant controversy involves questions about the nature of disability and the definition of “normal” bodies; it also raises arguments about the nature and significance of culture and the rights of minority cultures. I defend the claim that there might be such a thing as “Deaf culture” and then examine how two different understandings of the role of culture in the lives of individuals can lead to different conclusions about the rights of Deaf parents in relation to their children, and about the ethics of public funding for research on cochlear implants. An argument asserting the rights of minority cultures to equal respect and consideration within a multicultural society, informed by communitarian political philosophy, offers the best prospect for the defence of the unique culture(s) of the Deaf.
Following the success of Sony Corporation’s “AIBO”, robot cats and dogs are multiplying rapidly. “Robot pets” employing sophisticated artificial intelligence and animatronic technologies are now being marketed as toys and companions by a number of large consumer electronics corporations.

It is often suggested in popular writing about these devices that they could play a worthwhile role in serving the needs of an increasingly aging and socially isolated population. Robot companions, shaped like familiar household pets, could comfort and entertain lonely older persons. This goal is misguided and unethical. While there are a number of apparent benefits that might be thought to accrue from ownership of a robot pet, the majority and the most important of these are predicated on mistaking, at a conscious or unconscious level, the robot for a real animal. For an individual to benefit significantly from ownership of a robot pet they must systematically delude themselves regarding the real nature of their relation with the animal. It requires sentimentality of a morally deplorable sort. Indulging in such sentimentality violates a (weak) duty that we have to ourselves to apprehend the world accurately. The design and manufacture of these robots is unethical in so far as it presupposes or encourages this delusion.

The invention of robot pets heralds the arrival of what might be called “ersatz companions” more generally. That is, of devices that are designed to engage in and replicate significant social and emotional relationships. The advent of robot dogs offers a valuable opportunity to think about the worth of such companions, the proper place of robots in society and the value we should place on our relationships with them.
The argument of Julian Savulescu’s 2001 paper, “Procreative Beneficence: Why We Should Select the Best Children” is flawed in a number of respects. Savulescu confuses reasons with obligations and equivocates between the claim that parents have some reason to want the best for their children and the more radical claim that they are morally obligated to attempt to produce the best child possible. Savulescu offers a prima facie implausible account of parental obligation, as even the best parents typically fail to do everything they think would be best for their children let alone everything that is in fact best for their children. The profound philosophical difficulties which beset the attempt to formulate a plausible account of the best human life constitute a further independent reason to resile from Savulescu’s conclusion. Savulescu’s argument also requires parents to become complicit with racist and homophobic oppression, which is yet another reason to reject it. Removing the equivocation from Savulescu’s argument allows us to see that the assertion of an obligation to choose the “best child” has much more in common with the “old” eugenics than Savulescu acknowledges.
Since the first sex reassignment operations were performed, individual sex has come to be, to some extent at least, a technological artifact. The existence of sperm sorting technology, and of prenatal determination of fetal sex via ultrasound along with the option of termination, means that we now have the power to choose the sex of our children. An influential contemporary line of thought about medical ethics suggests that we should use technology to serve the welfare of individuals and to remove limitations on the opportunities available to them. I argue that, if these are our goals, we may do well to move towards a “post sex” humanity. Until we have the technology to produce genuine hermaphrodites, the most efficient way to do this is to use sex selection technology to ensure that only girl children are born. There are significant restrictions on the opportunities available to men, around gestation, childbirth, and breast-feeding, which will be extremely difficult to overcome via social or technological mechanisms for the foreseeable future. Women also have longer life expectancies than men. Girl babies therefore have a significantly more “open” future than boy babies. Resisting the conclusion that we should ensure that all children are born the same sex will require insisting that sexual difference is natural to human beings and that we should not use technology to reshape humanity beyond certain natural limits. The real concern of my paper, then, is the moral significance of the idea of a normal human body in modern medicine.
In “The Turing Triage Test”, published in Ethics and Information Technology, I described a hypothetical scenario, modelled on the famous Turing Test for machine intelligence, which might serve as means of testing whether or not machines had achieved the moral standing of people. In this paper, I: (1) explain why the Turing Triage Test is of vital interest in the context of contemporary debates about the ethics of AI; (2) address some issues that complexify the application of this test; and (3) in doing so, defend a way of thinking about the question of the moral standing of intelligent machines, which takes the idea of “seriousness” seriously. This last objective is, in fact, my primary one and is motivated by the sense that, to date, much of the “philosophy” of AI has suffered from a profound failure to properly distinguish between things that we can say and things that we can really mean.
Unmanned systems (UMS) in military applications will often play a role in determining the success or failure of combat missions and thus in determining who lives and dies in times of war. Designers of UMS must therefore consider ethical, as well as operational, requirements and limits when developing UMS. I group the ethical issues involved in UMS design under two broad headings, Building Safe Systems and Designing for the Law of Armed Conflict, and identify and discuss a number of issues under each of these headings. As well as identifying issues, I offer some analysis of their implications and how they might be addressed.
A claim about continuing technological progress plays an essential, if unacknowledged, role in the philosophical literature on “human enhancement.” I argue that—should it eventuate—continuous improvement in enhancement technologies may prove more bane than benefit. A rapid increase in the power of available enhancements would mean that each cohort of enhanced individuals will find itself in danger of being outcompeted by the next in competition for important social goods—a situation I characterize as an “enhanced rat race.” Rather than risk being rendered technologically and socially obsolete by the time one is in one’s early 20s, it may be rational to prefer that a wide range of enhancements that would generate positional disadvantages that outweigh their absolute advantages be prohibited altogether. The danger of an enhanced rat race therefore constitutes a novel argument in favor of abandoning the pursuit of certain sorts of enhancements.
If, as a number of writers have predicted, the computers of the future will possess intelligence and capacities that exceed our own then it seems as though they will be worthy of a moral respect at least equal to, and perhaps greater than, human beings. In this paper I propose a test to determine when we have reached that point. Inspired by Alan Turing’s (1950) original “Turing test”, in which he argued that we would be justified in conceding that machines could think if they could fill the role of a person in a conversation, I propose a test for when computers have achieved moral standing by asking when a computer might take the place of a human being in a moral dilemma, such as a “triage” situation in which a choice must be made as to which of two human lives to save. We will know that machines have achieved moral standing comparable to a human when the replacement of one of these people with an artificial intelligence leaves the character of the dilemma intact. That is, when we might sometimes judge that it is reasonable to preserve the continuing existence of a machine over the life of a human being. This is the “Turing Triage Test”. I argue that if personhood is understood as a matter of possessing a set of important cognitive capacities then it seems likely that future AIs will be able to pass this test. However, this conclusion serves as a reductio of this account of the nature of persons. I set out an alternative account of the nature of persons, which places the concept of a person at the centre of an interdependent network of moral and affective responses, such as remorse, grief and sympathy. I argue that according to this second, superior, account of the nature of persons, machines will be unable to pass the Turing Triage Test until they possess bodies and faces with expressive capacities akin to those of the human form.
If people are inclined to attribute race to humanoid robots, as recent research suggests, then designers of social robots confront a difficult choice. Most existing social robots have white surfaces and are therefore, I suggest, likely to be perceived as White, exposing their designers to accusations of racism. However, manufacturing robots that would be perceived as Black, Brown, or Asian risks representing people of these races as slaves, especially given the historical associations between robots and slaves at the very origins of the project of robotics. The only way engineers might avoid this ethical and political dilemma is to design and manufacture robots to which people will struggle to attribute race. Doing so, however, would require rethinking the relationship between robots and “the social,” which sits at the heart of the project of social robotics. Discussion of the race politics of robots is also worthwhile because of the potential it has to generate insights about the politics of artifacts, the relationship between culture and technology, and the responsibilities of engineers.
This paper makes the case for arms control regimes to govern the development and deployment of autonomous weapon systems and long range uninhabited aerial vehicles.
What does Artificial Intelligence (AI) have to contribute to health care? And what should we be looking out for if we are worried about its risks? In this paper we offer a survey, and initial evaluation, of hopes and fears about the applications of artificial intelligence in medicine. AI clearly has enormous potential as a research tool, in genomics and public health especially, as well as a diagnostic aid. It is also highly likely to impact on the organisational and business practices of healthcare systems in ways that are perhaps under-appreciated. Enthusiasts for AI have held out the prospect that it will free physicians up to spend more time attending to what really matters to them and their patients. We will argue that this claim depends upon implausible assumptions about the institutional and economic imperatives operating in contemporary healthcare settings. We will also highlight important concerns about privacy, surveillance, and bias in big data, as well as the risks of over-trust in machines, the challenges of transparency, the deskilling of healthcare practitioners, the way AI reframes healthcare, and the implications of AI for the distribution of power in healthcare institutions. We will suggest that two questions, in particular, are deserving of further attention from philosophers and bioethicists. What does care look like when one is dealing with data as much as people? And, what weight should we give to the advice of machines in our own deliberations about medical decisions?
Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value of interpretability in the context of the use of AI in medicine. Clinicians may prefer interpretable systems over more accurate black boxes, which in turn is sufficient to give designers of AI reason to prefer more interpretable systems in order to ensure that AI is adopted and its benefits realised. Moreover, clinicians may be justified in this preference. Achieving the downstream benefits from AI is critically dependent on how the outputs of these systems are interpreted by physicians and patients. A preference for the use of highly accurate black box AI systems over less accurate but more interpretable systems may itself constitute a form of lethal prejudice that may diminish the benefits of AI to patients, and perhaps even harm them.
This paper uses the fictional case of the ‘Babel fish’ to explore and illustrate the issues involved in the controversy about the use of cochlear implants in prelinguistically deaf children. Analysis of this controversy suggests that the development of genetic tests for deafness poses a serious threat to the continued flourishing of Deaf culture. I argue that the relationships between Deaf and hearing cultures that are revealed and constructed in debates about genetic testing are themselves deserving of ethical evaluation. Making good policy about genetic testing for deafness will require addressing questions in political philosophy and anthropology about the value of culture and also thinking hard about what sorts of experiences and achievements make a human life worthwhile.
John Harris and Julian Savulescu, leading figures in the "new" eugenics, argue that parents are morally obligated to use genetic and other technologies to enhance their children. But the argument they give leads to conclusions even more radical than they acknowledge. Ultimately, the world it would lead to is not all that different from that championed by eugenicists one hundred years ago.
This article discusses the ethics of the use of preimplantation genetic diagnosis (PGD) to prevent the birth of children with intersex conditions/disorders of sex development, such as congenital adrenal hyperplasia and androgen insensitivity syndrome. While pediatric surgeries performed on children with ambiguous genitalia have been the topic of intense bioethical controversy, there has been almost no discussion to date of the ethics of the use of PGD to reduce the prevalence of these conditions. I suggest that PGD for those conditions that involve serious medical risks for those born with them is morally permissible and that PGD for other “cosmetic” variations in sexual anatomy is more defensible than might first appear. However, importantly, the arguments that establish the latter claim have radical and disturbing implications for our attitude toward diversity more generally.
In Deep Medicine, Eric Topol argues that the development of artificial intelligence (AI) for healthcare will lead to a dramatic shift in the culture and practice of medicine. Topol claims that, rather than replacing physicians, AI could function alongside them in order to allow them to devote more of their time to face-to-face patient care. Unfortunately, these high hopes for AI-enhanced medicine fail to appreciate a number of factors that, we believe, suggest a radically different picture for the future of healthcare. Far from facilitating a return to “the golden age of doctoring”, the role of economic and institutional considerations in determining how medical AI will be used means that it is likely to further erode therapeutic relationships and threaten professional and patient satisfaction.
‘Liberal eugenics’ has emerged as the most popular position amongst philosophers writing in the contemporary debate about the ethics of human enhancement. This position has been most clearly articulated by Nicholas Agar, who argues that the ‘new’ liberal eugenics can avoid the repugnant consequences associated with eugenics in the past. Agar suggests that parents should be free to make only those interventions into the genetics of their children that will benefit them no matter what way of life they grow up to endorse. I argue that Agar's attempt to distinguish the new from the old eugenics fails. Once we start to consciously determine the genetics of future persons, we will not be able to avoid controversial assumptions about the relative worth of different life plans. Liberal eugenicists therefore confront the horns of a dilemma. Whichever way they try to resolve it, the consequences of widespread use of technologies of genetic selection are likely to look more like the old eugenics than defenders of the new eugenics have acknowledged.
In so far as long-range tele-operated weapons, such as the United States’ Predator and Reaper drones, allow their operators to fight wars in what appears to be complete safety, thousands of kilometres removed from those whom they target and kill, it is unclear whether drone operators either require courage or have the opportunity to develop or exercise it. This chapter investigates the implications of the development of tele-operated warfare for the extent to which courage will remain central to the role of the warrior and for the future culture of the armed services in the age of drone warfare.
A series of recent scientific results suggest that, in the not-too-distant future, it will be possible to create viable human gametes from human stem cells. This paper discusses the potential of this technology to make possible what I call ‘in vitro eugenics’: the deliberate breeding of human beings in vitro by fusing sperm and egg derived from different stem-cell lines to create an embryo and then deriving new gametes from stem cells derived from that embryo. Repeated iterations of this process would allow scientists to proceed through multiple human generations in the laboratory. In vitro eugenics might be used to study the heredity of genetic disorders and to produce cell lines of a desired character for medical applications. More controversially, it might also function as a powerful technology of ‘human enhancement’ by allowing researchers to use all the techniques of selective breeding to produce individuals with a desired genotype.
Objectives: Machine learning (ML) has the potential to facilitate “continual learning” in medicine, in which an ML system continues to evolve in response to exposure to new data over time, even after being deployed in a clinical setting. In this article, we provide a tutorial on the range of ethical issues raised by the use of such “adaptive” ML systems in medicine that have, thus far, been neglected in the literature. Target audience: The target audiences for this tutorial are the developers of ML AI systems, healthcare regulators, the broader medical informatics community, and practicing clinicians. Scope: Discussions of adaptive ML systems to date have overlooked the distinction between two sorts of variance that such systems may exhibit—diachronic evolution (change over time) and synchronic variation (difference between cotemporaneous instantiations of the algorithm at different sites)—and underestimated the significance of the latter. We highlight the challenges that diachronic evolution and synchronic variation present for the quality of patient care, informed consent, and equity, and discuss the complex ethical trade-offs involved in the design of such systems.
In this paper, I explore the “expressivist critique” of the use of prenatal testing to select against the birth of persons with impairments. I begin by setting out the expressivist critique and then highlighting, through an investigation of an influential objection to this critique, the ways in which both critics and proponents of the use of technologies of genetic selection negotiate a difficult set of dilemmas surrounding the relationship between genes and identity. I suggest that we may be able to advance the debate about these technologies by becoming more aware of the ways in which this debate is itself in part a political contestation over this relationship. Ultimately, I will argue, the real force of the expressivist objection lies in its capacity to draw our attention to political questions about the role of the state and about relationships between different social groups rather than between parents and prospective children. That is to say, crucial issues, when evaluating the force of this criticism, turn out to be: the nature of the institutions which determine how decisions about prenatal selection are made; and how we think of each other, that is, what we take to be the defining characteristics of human beings. Paradoxically, arguments about the ethics of the “sorting society”, both supportive and critical, are an important arena in which these institutions and these ideas about identity are contested and shaped. An increased awareness of the reflexive nature of the process of debating these issues may assist us in better negotiating them.
The emergence of controlled, Maastricht Category III, non-heart-beating organ donation (NHBD) programs has the potential to greatly increase the supply of donor solid organs by increasing the number of potential donors. Category III donation involves unconscious and dying intensive care patients whose organs become available for transplant after life-sustaining treatments are withdrawn, usually on grounds of futility. The shortfall in organs from heart-beating organ donation following brain death has prompted a surge of interest in NHBD. In a recent editorial, the British Medical Journal described NHBD as representing “a challenge which the medical profession has to take up”.
Since the 1980s, a number of medical researchers have suggested that in the future it might be possible for men to become pregnant. Given the role played by the right to reproductive liberty in other debates about reproductive technologies, it will be extremely difficult to deny that this right extends to include male pregnancy. However, this constitutes a reductio ad absurdum of the idea of reproductive liberty. One therefore would be well advised to look again at the extent of this purported right in other contexts in which it is deployed.
In their paper, “Autonomy and the ethics of biological behaviour modification”, Savulescu, Douglas, and Persson discuss the ethics of a technology for improving moral motivation and behaviour that does not yet exist and will most likely never exist. At the heart of their argument sits the imagined case of a “moral technology” that magically prevents people from developing intentions to commit seriously immoral actions. It is not too much of a stretch, then, to characterise their paper as a thought experiment in service of a thought experiment. In order for an argument involving a thought experiment to advance debate in applied ethics, three things must be true. First, the thought experiment must accurately represent and illuminate a pressing ethical dilemma. Second – and most obviously – the central claims of the argument regarding the thought experiment must be plausible. Third, it must be possible to apply or develop the arguments established with reference to the thought experiment to the real-world cases the experiment is intended to illuminate. In this commentary, I argue that there are serious reasons to question the extent to which their argument meets each of these challenges involved in the use of thought experiments in applied ethics. While Savulescu et al. succeed in showing how behavioural modification might be compatible with freedom and autonomy – and perhaps justifiable even if it were not – in the fantastic case they consider, there is little we can conclude from this about any technology of “moral bioenhancement” in the foreseeable future. Indeed, there is a real danger that their argument will license attempts to manipulate behaviour through drugs and brain implants, which raise profound moral issues that they barely mention.
A number of advances in assisted reproduction have been greeted by the accusation that they would produce children ‘without parents’. In this paper I will argue that while to date these accusations have been false, there is a limited but important sense in which they would be true of children born of a reproductive technology that is now on the horizon. If our genetic parents are those individuals from whom we have inherited 50% of our genes, then, unlike in any other reproductive scenario, children who were conceived from gametes derived from stem cell lines derived from discarded IVF embryos would have no genetic parents! This paper defends this claim and investigates its ethical implications. I argue that there are reasons to think that the creation of such embryos might be morally superior to the existing alternatives in an important set of circumstances.
Would it be ethical to deploy autonomous weapon systems (AWS) if they were unable to reliably recognize when enemy forces had surrendered? I suggest that an inability to reliably recognize surrender would not prohibit the ethical deployment of AWS where there was a limited window of opportunity for targets to surrender between the launch of the AWS and its impact. However, the operations of AWS with a high degree of autonomy and/or long periods of time between release and impact are likely to remain controversial until they have the capacity to reliably recognize surrender.
A number of recent and influential accounts of military ethics have argued that there exists a distinctive “role morality” for members of the armed services—a “warrior code.” A “good warrior” is a person who cultivates and exercises the “martial” or “warrior” virtues. By transforming combat into a “desk job” that can be conducted from the safety of the home territory of advanced industrial powers without need for physical strength or martial valour, long-range robotic weapons, such as the “Predator” and “Reaper” drones fielded by the United States, call the relevance of the “martial virtues” into question. This chapter investigates the implications of these developments for conceptions of military virtue and, consequently, for the future of war.
In this paper I examine what I take to be the best case for reproductive human cloning, as a medical procedure designed to overcome infertility, and argue that it founders on an irresolvable tension in the attitude towards the importance of being ‘genetically related’ to our children implied in the desire to clone. Except in the case where couples are cloning a child they have previously conceived naturally, cloning is unable to establish the right sort of genetic relation to make couples the parents of their cloned child. If anybody is the genetic parent of a cloned child it is the natural parent(s) of the DNA donor. Paradoxically, in order to resist the claims of the parents of the donor to the cloned child, the argument for human reproductive cloning must place more weight on the intention to parent a child than we do in cases of ordinary reproduction. It must insist that the parental relation is established by the intentions of the couple who bring a clone into the world and not by their genetic relation to the child. The emphasis placed on intention as establishing the parental relationship works to undermine the justification for cloning in the first place. For cloning to play a useful role as a reproductive technology, it must allow couples to become parents who could do so no other way. However, to the extent that intention is sufficient to establish parenthood, adoption or surrogacy, which are existing alternatives to cloning, will serve equally well to allow couples to become parents.
I apply an agent-based virtue ethics to issues in environmental philosophy regarding our treatment of complex inorganic systems. I consider the ethics of terraforming: hypothetical planetary engineering on a vast scale which is aimed at producing habitable environments on otherwise “hostile” planets. I argue that the undertaking of such a project demonstrates at least two serious defects of moral character: an aesthetic insensitivity and the sin of hubris. Trying to change whole planets to suit our ends is arrogant vandalism. I maintain that these descriptions of character are coherent and important ethical concepts. Finally, I demonstrate how the arguments developed in opposition to terraforming, a somewhat farfetched example, can be used in cases closer to home to provide arguments against our use of recombinant DNA technologies and against the construction of tourist developments in wilderness areas.
In this paper, I respond to criticisms by John Harris, contained in a commentary on my article “Harris, harmed states, and sexed bodies”, which appeared in the Journal of Medical Ethics, volume 37, number 5. I argue that Harris's response to my criticisms exposes the strong eugenic tendencies in his own thought, when he suggests that the reproductive obligations of parents should be determined with reference to a claim about what would enhance ‘society’ or ‘the species’.
One day soon it may be possible to replace a failing heart, liver, or kidney with a long-lasting mechanical replacement or perhaps even with a 3-D printed version based on the patient's own tissue. Such artificial organs could make transplant waiting lists and immunosuppression a thing of the past. Supposing that this happens, what will the ongoing care of people with these implants involve? In particular, how will the need to maintain the functioning of artificial organs over an extended period affect patients and their doctors and the responsibilities of those who manufacture such devices? Drawing on lessons from the history of the cardiac pacemaker, this article offers an initial survey of the ethical issues posed by the need to maintain and service artificial organs. We briefly outline the nature and history of cardiac pacemakers, with a particular focus on the need for technical support, maintenance, and replacement of these devices. Drawing on the existing medical literature and on our conversations and correspondence with cardiologists, regulators, and manufacturers, we describe five sources of ethical issues associated with pacemaker maintenance: the location of the devices inside the human body, such that maintenance generates surgical risks; the complexity of the devices, which increases the risk of harms to patients as well as introducing potential injustices in access to treatment; the role of software—particularly software that can be remotely accessed—in the functioning of the devices, which generates privacy and security issues; the impact of continual development and improvement of the device; and the influence of commercial interests in the context of a medical device market in which there are several competing products. Finally, we offer some initial suggestions as to how these questions should be answered.
In this paper I will argue that contemporary non-Aboriginal Australians can collectively be held responsible for past injustices committed against the Aboriginal peoples of this land. An examination of the role played by history in determining the nature of the present reveals both the temporal extension of the Australian community that confronts the question of responsibility for historical injustice and the ways in which we continue to participate in those same injustices. Because existing injustices suffered by indigenous Australians are essentially continuous with the racist history of the invasion of the Australian continent and dispossession of the Australian Aboriginal peoples, we may be held responsible for the wrongs committed in the course of that history.
Disability activists influenced by queer theory and advocates of “human enhancement” have each disputed the idea that what is “normal” is normatively significant, which currently plays a key role in the regulation of pre-implantation genetic diagnosis (PGD). Previously, I have argued that the only way to avoid the implication that parents have strong reasons to select children of one sex (most plausibly, female) over the other is to affirm the moral significance of sexually dimorphic human biological norms. After outlining the logic that generates this conclusion, I investigate the extent to which it might also facilitate an alternative, progressive, opening up of the notion of the normal and of the criteria against which we should evaluate the relative merits of different forms of embodiment. This paper therefore investigates the implications of ideas derived from queer theory for the future of PGD and of PGD for the future of queerness.
This paper analyses rhetorics of scientific and corporate enthusiasm surrounding nanotechnology. I argue that enthusiasts for nanotechnologies often try to have it both ways on questions concerning the nature and possible impact of these technologies, and the inevitability of their development and use. In arguments about their nature and impact we are simultaneously informed that these are revolutionary technologies with the potential to profoundly change the world and that they merely represent the extension of existing technologies. They are revolutionary and familiar. In debates surrounding possible regulation of these technologies it is claimed both that their development is inevitable, so that regulation would be fruitless, and that increased research funding and legislative changes are necessary in order that we can enjoy their benefits. That is, they are inevitable and precarious. An increased awareness of these rhetorical contradictions may allow us better to assess the likely impact and future of nanotechnology.
The question of the morality of war is something of an embarrassment to liberal political thinkers. A philosophical tradition which aspires to found its preferred institutions in respect for individual autonomy, contract, and voluntary association, is naturally confronted by a phenomenon that is almost exclusively explained and justified in the language of States, force and territory. But the apparent difficulties involved in providing a convincing account of the nature and ethics of war in terms of relations between individuals have not prevented liberal theorists from attempting this task. This paper examines a recent attempt by Igor Primoratz to sketch out the implications of a consistent liberalism for just war doctrine and, in particular, as regards the question of who may be a legitimate target of attack in wartime. Primoratz’s paper itself is a critique of Michael Walzer’s authoritative exposition of just war theory for failing to be sufficiently and consistently liberal. The debate between these two authors is a productive site for investigating the potential and limitations of liberal theories of just war.