Citations of:
This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an individual might think (...)
Artificial intelligence (AI) and machine learning (ML) systems can support or replace many parts of the medical decision-making process. They could also help physicians deal with clinical moral dilemmas. AI/ML decisions may thus come to stand in for professional decisions. We argue that this has important consequences for the relationship between a patient and the medical profession as an institution, and that it will inevitably lead to an erosion of institutional trust in medicine.
The moral enhancement of human beings is a constant theme in the history of humanity. Today, faced with the threats of a new, globalised world, concern over this matter is more pressing. For this reason, the use of biotechnology to make human beings more moral has been considered. However, this approach is dangerous and very controversial. The purpose of this article is to argue that the use of another new technology, AI, would be preferable to achieve this goal. Whilst several (...) |
This paper offers a theoretical framework that can be used to derive viable engineering strategies for the design and development of robots that can nudge people towards moral improvement. The framework relies on research in developmental psychology and insights from Stoic ethics. Stoicism recommends contemplative practices that over time help one develop dispositions to behave in ways that improve the functioning of mechanisms that are constitutive of moral cognition. Robots can nudge individuals towards these practices and can therefore help develop (...) |
The rapid adoption and implementation of artificial intelligence in medicine creates an ontologically distinct situation from prior care models. Such technology offers both potential advantages and disadvantages in advancing the interests of patients, with resulting ontological and epistemic concerns for physicians and patients relating to the instantiation of AI as a dependent, semi-autonomous, or fully autonomous agent in the encounter. The concept of libertarian paternalism potentially exercised by AI has created challenges to conventional assessments of patient and physician autonomy. (...)
Engineering an artificial intelligence to play an advisory role in morally charged decision making will inevitably introduce metaethical positions into the design. Some of these positions, by informing the design and operation of the AI, will introduce risks. This paper analyses these potential risks along the realism/anti-realism dimension in metaethics and finds that realism poses the greater risks, while anti-realism undermines the motivation for engineering a moral AI in the first place.
Moral bioenhancement, nudge-designed environments, and ambient persuasive technologies may help people behave more consistently with their deeply held moral convictions. Alternatively, they may aid people in overcoming cognitive and affective limitations that prevent them from appreciating a situation’s moral dimensions. Or they may simply make it easier for them to make the morally right choice by helping them to overcome sources of weakness of will. This paper makes two assumptions. First, technologies to improve people’s moral capacities are realizable. Second, such (...) |