In this paper, we first classify different types of second opinions and evaluate the ethical and epistemological implications of providing them in a clinical context. Second, we discuss how artificial intelligence could replace the human cognitive labour of providing such second opinions, and we find that several AI systems reach levels of accuracy and efficiency that make clarifying their use an urgent ethical issue. Third, we outline normative conditions for how AI may be used as a second opinion in clinical processes, weighing the benefits of its efficiency against concerns of responsibility attribution. Fourth, we provide a ‘rule of disagreement’ that fulfils these conditions while retaining some of the benefits of expanding the use of AI-based decision support systems in clinical contexts: the rule proposes using AI as much as possible while preserving the option of human second opinions to resolve disagreements between the AI and the physician-in-charge. Fifth, we discuss some counterarguments.
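To make the triage logic of the rule of disagreement concrete, here is a minimal Python sketch (an editorial illustration, not code from the paper); the names `rule_of_disagreement` and `Action`, and the string comparison standing in for clinical agreement, are hypothetical simplifications:

```python
from enum import Enum, auto

class Action(Enum):
    """Possible next steps after comparing diagnoses (illustrative)."""
    PROCEED_WITH_SHARED_DIAGNOSIS = auto()
    REQUEST_HUMAN_SECOND_OPINION = auto()

def rule_of_disagreement(ai_diagnosis: str, physician_diagnosis: str) -> Action:
    """Triage a single case under the rule of disagreement.

    The AI is consulted routinely as a second opinion; a human second
    opinion is requested only when the AI conflicts with the
    physician-in-charge, who retains final responsibility.
    """
    if ai_diagnosis == physician_diagnosis:
        # Agreement: the AI corroborates the physician, so the case
        # proceeds without additional human cognitive labour.
        return Action.PROCEED_WITH_SHARED_DIAGNOSIS
    # Disagreement: escalate to a human colleague for adjudication.
    return Action.REQUEST_HUMAN_SECOND_OPINION
```

The design point the sketch captures is that AI is used on every case, but human cognitive labour is spent only where physician and machine conflict.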
There is a remedy available for many of our ailments: Psychopharmacology promises to alleviate unsatisfying memory, bad moods, and low self-esteem. Bioethicists have long discussed the ethical implications of enhancement interventions; however, they have not considered relevant evidence from psychology and economics. The growth of autonomy in many areas of life is publicized as progress for the individual. However, the broadening of areas at one’s disposal, together with the increasing individualization of value systems, leads to situations in which the range of options asks too much of the individual. I scrutinize whether increased self-determination and unbound possibilities are really in a person’s best interests. Evidence from psychology and economics challenges the assumption that unlimited autonomy is best in all cases. The responsibility for autonomous self-formation that comes with the possibilities provided by developments in neuro-enhancement can be a burden. To safeguard quality of life, I suggest a balance of beneficence, support, and respect for autonomy.
Addiction appears to be a deeply moralized concept. To understand the entwinement of addiction and morality, we briefly discuss the disease model and its alternatives in order to address the following question: Is the disease model the only path towards a ‘de-moralized’ discourse of addiction? While it is tempting to think that medical language surrounding addiction provides liberation from moralized language, evidence suggests that this is not necessarily the case. On the other hand, non-disease models of addiction may seem to resuscitate problematic forms of the moralization of addiction, including invoking blame, shame, and the wholesale rejection of addicts as people with deep character flaws, while ignoring the complex biological and social context of addiction. This, too, is not necessarily the case. We argue that a deficit in reasons-responsiveness, as a basis for the attribution of moral responsibility, can be realized by multiple different causes, disease being one; it also seems likely that alternative accounts of addiction, as developed by Flanagan, Lewis, and Levy, may involve mechanisms (psychological, social, and neurobiological) that can diminish reasons-responsiveness. It thus seems to us that non-disease models of addiction do not necessarily involve moralization. Hence, a non-stigmatizing approach to recovery can be realized in ways that are consistent with both the disease model and alternative models of addiction.
Robotic and artificially intelligent systems are becoming prevalent in our day-to-day lives. As human interaction is increasingly replaced by human–computer and human–robot interaction, we occasionally speak and act as though we are blaming or praising various technological devices. While such responses may arise naturally, they are still unusual. Indeed, for some authors, it is the programmers or users—and not the system itself—that we properly hold responsible in these cases. Furthermore, some argue that since directing blame or praise at technology itself is unfitting, designing systems in ways that encourage such practices can only exacerbate the problem. On the other hand, there may be good moral reasons to continue engaging in our natural practices, even in cases involving AI systems or robots. In particular, daily interactions with technology may stand to impact the development of our moral practices in human-to-human interactions. In this paper, we put forward an empirically grounded argument in favor of some technologies being designed for social responsiveness. Although our usual practices will likely undergo adjustments in response to innovative technologies, some systems which we encounter can be designed to accommodate our natural moral responses. In short, fostering HCI and HRI that sustains and promotes our natural moral practices calls for a co-developmental process with some AI and robotic technologies.
Fifteen years after its emergence, neuroethics is an international scientific field with enormous momentum. Within a few years, dedicated congresses, journals, research funding programmes, professional societies, and institutes have been founded. Nevertheless, there is considerable disagreement about the definition and scope of this new field. We argue here for a differentiated conception, according to which neuroethics comprises not only the reflection on ethical problems of neuroscience and its predominantly neurotechnological applications, but also the ethical reflection on neuroscientific research into morality. This does not include neuroscientific or neuropsychological studies of morality themselves, but it does include reflection on the significance of this research for ethics and law. We give an overview of the most important topics of neuroethics, which makes clear how relevant neuroethical questions are across different areas of society, including beyond medicine and healthcare. The potential of neuroethics as a new scientific field lies in finding new answers to pressing societal questions by connecting neurophilosophical and medical-ethical topics and through broad interdisciplinary networking.
The increased presence of medical AI in clinical use raises the ethical question of which standard of explainability is required for an acceptable and responsible implementation of AI-based applications in medical contexts. In this paper, we elaborate on the emerging debate surrounding the standards of explainability for medical AI. For this, we first distinguish several goods that explainability is usually considered to contribute to the use of AI in general, and medical AI in particular. Second, we propose to understand the value of explainability relative to other available norms of explainable decision-making. Third, pointing out that we usually accept heuristics and uses of bounded rationality in physicians’ medical decision-making, we argue that the explainability of medical decisions should be measured not against an idealized diagnostic process but according to practical considerations. Fourth, we conclude that the issue of explainability standards should be resolved by relocating it to the AI’s certifiability and interpretability.
We argue that brains generate predictions only within the constraints of the action repertoire. This makes the computational complexity tractable and fosters a step-by-step parallel development of sensory and motor systems. Hence, it is more of a benefit than a literal constraint and may serve as a universal normative principle to understand sensorimotor coupling and interactions with the world.
This paper explores the role and resolution of disagreements between physicians and their diagnostic AI-based decision support systems (DSS). With an ever-growing number of applications for these independently operating diagnostic tools, it becomes less and less clear what a physician ought to do when their diagnosis is in faultless conflict with the results of the DSS. The consequences of such uncertainty can ultimately be detrimental to the intended purpose of these machines, e.g. by shifting the burden of proof towards the physician. Thus, we require normative clarity for integrating these machines without affecting established, trusted, and relied-upon workflows. In reconstructing different causes of conflict between physicians and their AI-based tools, and the challenges of resolving them, we delineate normative conditions for “meaningful disagreements”, inspired by the approach of “meaningful human control” over autonomous systems. These conditions accommodate the potential of DSS to take on more tasks and outline how the moral responsibility of a physician can be preserved in an increasingly automated clinical work environment.