The power of technology to transform religions, science, and political institutions has often been presented as nothing short of revolutionary. Does technology have a similarly transformative influence on societies’ morality? Scholars have not rigorously investigated the role of technology in moral revolutions, even though existing research on technomoral change suggests that this role may be considerable. In this paper, we explore what the role of technology in moral revolutions, understood as processes of radical group-level moral change, amounts to. We do so by investigating four historical episodes of radical moral change in which technology plays a noteworthy role. Our case studies illustrate the plurality of mechanisms involved in technomoral revolutions, but also suggest general patterns of technomoral change, such as technology’s capacity to stabilize and destabilize moral systems, and to make morally salient phenomena visible or invisible. We find several leads for expanding and refining conceptual tools for analysing moral change, specifically by crystallizing the notions of ‘technomoral niche construction’ and ‘moral payoff mechanisms’. Coming to terms with the role of technology in radical moral change, we argue, enriches our understanding of moral revolutions and alerts us to the depth to which technology can change our societies in wanted and unwanted ways.
The development of highly humanoid sex robots is on the technological horizon. If sex robots are integrated into the legal community as “electronic persons”, the issue of sexual consent arises, which is essential for legally and morally permissible sexual relations between human persons. This paper explores whether it is conceivable, possible, and desirable that humanoid robots should be designed such that they are capable of consenting to sex. We consider reasons for giving both “no” and “yes” answers to these three questions by examining the concept of consent in general, as well as critiques of its adequacy in the domain of sexual ethics; the relationship between consent and free will; and the relationship between consent and consciousness. Additionally, we canvass the most influential existing literature on the ethics of sex with robots.
Some critics of sex-robots worry that their use might spread objectifying attitudes about sex, and common sense places a higher value on sex within love-relationships than on casual sex. If there could be mutual love between humans and sex-robots, this could help to ease the worries about objectifying attitudes. And mutual love between humans and sex-robots, if possible, could also help to make this sex more valuable. But is mutual love between humans and robots possible, or even conceivable? We discuss three clusters of ideas and associations commonly discussed within the philosophy of love, and relate these to the topic of whether mutual love could be achieved between humans and sex-robots: (i) the idea of love as a “good match”; (ii) the idea of valuing each other in our distinctive particularity; and (iii) the idea of a steadfast commitment. We consider relations among these ideas and the sort of agency and free will that we attribute to human romantic partners. Our conclusion is that mutual love between humans and advanced sex-robots is not an altogether impossible proposition. However, it is unlikely that we will be able to create robots sophisticated enough to be able to participate in love-relationships anytime soon.
In the last few decades, several philosophers have written on the topic of moral revolutions, distinguishing them from other kinds of society-level moral change. This article surveys recent accounts of moral revolutions in moral philosophy. Different authors use quite different criteria to pick out moral revolutions. Features treated as relevant include radicality, depth or fundamentality, pervasiveness, novelty and particular causes. We also characterize the factors that have been proposed to cause moral revolutions, including anomalies in existing moral codes, changing honour codes, art, economic conditions and individuals or groups. Finally, we discuss what accounts of moral revolutions have in common, how they differ and how moral revolutions are distinguished from other kinds of moral change, such as drift and reform.
Moral bioenhancement, nudge-designed environments, and ambient persuasive technologies may help people behave more consistently with their deeply held moral convictions. Alternatively, they may aid people in overcoming cognitive and affective limitations that prevent them from appreciating a situation’s moral dimensions. Or they may simply make it easier for them to make the morally right choice by helping them to overcome sources of weakness of will. This paper makes two assumptions. First, technologies to improve people’s moral capacities are realizable. Second, such technologies will actually help people get morality right and behave more consistently with whatever the ‘real’ right thing to do turns out to be. The paper then considers whether humanity loses anything valuable, particularly opportunities for moral progress, when being moral is made much easier by eliminating difficult moral deliberation and internal moral struggle. Ultimately, the worry that moral struggle has value as a catalyst for moral progress is rejected. Moral progress is understood here as the discovery and application of new values or sensitization to new sources of harm.
Addiction appears to be a deeply moralized concept. To understand the entwinement of addiction and morality, we briefly discuss the disease model and its alternatives in order to address the following question: Is the disease model the only path towards a ‘de-moralized’ discourse of addiction? While it is tempting to think that medical language surrounding addiction provides liberation from moralized language, evidence suggests that this is not necessarily the case. On the other hand, non-disease models of addiction may seem to resuscitate problematic forms of the moralization of addiction, including invoking blame, shame, and the wholesale rejection of addicts as people who have deep character flaws, while ignoring the complex biological and social context of addiction. This is also not necessarily the case. We argue that a deficit in reasons responsiveness as a basis for the attribution of moral responsibility can be realized by multiple different causes, disease being one; it also seems likely that alternative accounts of addiction, as developed by Flanagan, Lewis, and Levy, may involve psychological, social, and neurobiological mechanisms that can diminish reasons responsiveness. It thus seems to us that non-disease models of addiction do not necessarily involve moralization. Hence, a non-stigmatizing approach to recovery can be realized in ways that are consistent with both the disease model and alternative models of addiction.
In this chapter, we consider ethical and philosophical aspects of trust in the practice of medicine. We focus on trust within the patient-physician relationship, trust and professionalism, and trust in Western (allopathic) institutions of medicine and medical research. Philosophical approaches to trust contain important insights into medicine as an ethical and social practice. In what follows we explain several philosophical approaches and discuss their strengths and weaknesses in this context. We also highlight some relevant empirical work in the section on trust in the institutions of medicine. It is hoped that the approaches discussed here can be extended to nursing and other topics in the philosophy of medicine.
In this chapter we identify three potentially morally problematic behaviours that are common among users of dating and hook-up apps (DHAs) and provide arguments as to why they may or may not be considered (a) in a category of their own, distinct from similar behaviours outside of DHAs; (b) caused or facilitated by the affordances and business logic of DHAs; and (c) indeed morally wrong. We also consider ways in which morally problematic behaviours can be anticipated, mitigated, or even prevented by analysis of the ethical and moral dimensions of technologies and their afforded uses. Finally, we offer some possible directions for future work on these topics in particular and on the ethical consequences of DHAs in general.
Engineering an artificial intelligence to play an advisory role in morally charged decision making will inevitably introduce meta-ethical positions into the design. Some of these positions, by informing the design and operation of the AI, will introduce risks. This paper offers an analysis of these potential risks along the realism/anti-realism dimension in metaethics, and reveals that realism poses greater risks, while anti-realism undermines the motivation for engineering a moral AI in the first place.
It is not clear what the projects of creating an artificial intelligence (AI) that does ethics, is moral, or makes moral judgments amount to. In this paper we discuss some of the extant metaethical theories and debates in moral philosophy by which such projects should be informed, specifically focusing on the project of creating an AI that makes moral judgments. We argue that the scope and aims of that project depend a great deal on antecedent metaethical commitments. Metaethics, therefore, plays the role of an Archimedean fulcrum in this context, very much like the Archimedean role it is often taken to play in the context of normative ethics (Dworkin 1996; Dreier 2002; Fantl 2006; Ehrenberg 2008).
Upon entering the examination room, Caitlyn encounters a woman sitting alone and in distress. Caitlyn introduces herself as the hospital ethicist and tells the woman, Mrs. Dennis, that her aim is to help her reach a decision about whether to perform an autopsy on her recently deceased husband. Mrs. Dennis begins the encounter by telling the ethicist that she has to decide quickly, but that she is very torn about what to do. Mrs. Dennis adds, “My sons disagree about the autopsy.” As a standardized patient (SP), a specialized actor, the woman playing Mrs. Dennis has already delivered the same opening lines several times to different learners practicing their clinical ethics consultation skills. An SP encounter is a simulated patient encounter used for educational purposes that requires the standardization of verbal and behavioral responses. In the encounter, the simulator, or “patient,” uses a scripted medical history to enable the learner to employ a certain skill, say, the ability to perform a neurological exam. The use of standardized patients in the evaluation of clinical skills has become a staple in medical education. To tackle the challenge of teaching clinical ethics consultation skills, we have incorporated SP encounters into the curriculum of the Bioethics Program of The Union Graduate College and the Icahn School of Medicine at Mount Sinai. SP encounters are incorporated into one of our onsite classes, the Onsite Clinical Ethics Practicum, and they are part of the capstone examination, which all of our graduates must complete successfully. The inclusion of simulated encounters in the curriculum is one way in which we equip our students with the core competencies specified for clinical ethicists by the American Society for Bioethics and Humanities Task Force.
Artificial intelligence (AI) and systems that work with machine learning (ML) can support or replace many parts of the medical decision-making process. They could also help physicians deal with clinical moral dilemmas. AI/ML decisions can thus come to take the place of professional decisions. We argue that this has important consequences for the relationship between a patient and the medical profession as an institution, and that it will inevitably lead to an erosion of institutional trust in medicine.
This paper provides an analysis of the way in which two foundational principles of medical ethics, the trusted doctor and patient autonomy, can be undermined by the use of machine learning (ML) algorithms, and addresses the legal significance of this problem. This paper can serve as a guide for health care providers and other stakeholders on how to anticipate and in some cases mitigate ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map for what needs to be done to achieve an acceptable level of explainability in an ML algorithm when it is used in a healthcare context.
Advocates of moral enhancement through pharmacological, genetic, or other direct interventions sometimes explicitly argue, or assume without argument, that traditional moral education and development are insufficient to bring about moral enhancement. Traditional moral education grounded in a Kohlbergian theory of moral development is indeed unsuitable for that task; however, the psychology of moral development and education has come a long way since then. Recent studies support the view that moral cognition is a higher-order process, unified at a functional level, and that a specific moral faculty does not exist. It is more likely that moral cognition involves a number of different mechanisms, each connected to other cognitive and affective processes. Taking this evidence into account, we propose a novel, empirically informed approach to moral development and education, in children and adults, which is based on a cognitive-affective approach to moral dispositions. This is an interpretative approach that derives from the cognitive-affective personality system (Mischel and Shoda, 1995). This conception individuates moral dispositions by reference to the cognitive and affective processes that realise them. Conceived of in this way, moral dispositions influence an agent’s behaviour when they interact with situational factors, such as mood or social context. Understanding moral dispositions in this way lays the groundwork for proposing a range of indirect methods of moral enhancement: techniques that promise similar results to direct interventions whilst posing fewer risks.