Policymakers, employers, insurance companies, researchers, and health care providers have developed an increasing interest in using principles from behavioral economics and psychology to persuade people to change their health-related behaviors, lifestyles, and habits. In this article, we examine how principles from behavioral economics and psychology are being used to nudge people (the public, patients, or health care providers) toward particular decisions or behaviors related to health or health care, and we identify the ethically relevant dimensions that should be considered in the use of each principle.
In bioethics, the predominant categorization of various types of influence has been a tripartite classification of rational persuasion (meaning influence by reason and argument), coercion (meaning influence by irresistible threats—or on a few accounts, offers), and manipulation (meaning everything in between). The standard ethical analysis in bioethics has been that rational persuasion is always permissible, and coercion is almost always impermissible save a few cases such as imminent threat to self or others. However, many forms of influence fall into the broad middle terrain—and this terrain is in desperate need of conceptual refining and ethical analysis in light of recent interest in using principles from behavioral science to influence health decisions and behaviors. This paper aims to address the neglected space between rational persuasion and coercion in bioethics. First, I argue for conceptual revisions that include removing the “manipulation” label and relabeling this space “nonargumentative influence,” with two subtypes: “reason-bypassing” and “reason-countering.” Second, I argue that bioethicists have made the mistake of relying heavily on the conceptual categories themselves for normative work and instead should assess the ethical permissibility of a particular instance of influence by asking several key ethical questions, which I elucidate, that relate to (1) the impact of the form of influence on autonomy and (2) the relationship between the influencer and the influenced.
Finally, I apply my analysis to two examples of nonargumentative influence in health care and health policy: (1) governmental agencies such as the Food and Drug Administration (FDA) trying to influence the public to be healthier using nonargumentative measures such as vivid images on cigarette packages to make more salient the negative effects of smoking, and (2) a physician framing a surgery in terms of survival rates instead of mortality rates to influence her patient to consent to the surgery.
Cognitive scientists have identified a wide range of biases and heuristics in human decision making over the past few decades. Only recently have bioethicists begun to think seriously about the implications of these findings for topics such as agency, autonomy, and consent. This article aims to provide an overview of biases and heuristics that have been identified and a framework for thinking comprehensively about their impact on the exercise of autonomous decision making. I analyze the impact that these biases and heuristics have on the following dimensions of autonomy: understanding, intentionality, absence of alienating or controlling influence, and match between formally autonomous preferences or decisions and actual choices or actions.
Bioethicists often draw sharp distinctions between hope and states like denial, self-deception, and unrealistic optimism. But what, exactly, is the difference between hope and its more suspect cousins? One common way of drawing the distinction focuses on accuracy of belief about the desired outcome: Hope, though perhaps sometimes misplaced, does not involve inaccuracy in the way that these other states do. Because inaccurate beliefs are thought to compromise informed decision making, bioethicists have considered these states to be ones where intervention is needed either to correct the person’s mental state or to persuade the person to behave differently, or even to deny the person certain options. In this article, we argue that it is difficult to determine whether a patient is really in denial, self-deceived, or unrealistically optimistic. Moreover, even when we are confident that beliefs are unrealistic, they are not always as harmful as critics contend. As a result, we need to be more permissive in our approach to patients who we believe are unrealistically optimistic, in denial, or self-deceived—that is, unless patients significantly misunderstand their situation and thus make decisions that are clearly bad for them, we should not intervene by trying to change their mental states or persuade them to behave differently, or by paternalistically denying them certain options.
When applied in the health sector, AI-based applications raise not only ethical but also legal and safety concerns: algorithms trained on data from majority populations can generate less accurate or reliable results for minorities and other disadvantaged groups.
The past four decades of research in the social sciences have shed light on two important phenomena. One is that human decision-making is full of predictable errors and biases that often lead individuals to make choices that defeat their own ends (i.e., the bad choice phenomenon), and the other is that individuals’ decisions and behaviors are powerfully shaped by their environment (i.e., the influence phenomenon). Some have argued that it is ethically defensible that the influence phenomenon be utilized to address the bad choice phenomenon. They propose that “choice architects” learn about the various ways in which choices can be influenced and directed by the environment, and then work to design environments, broadly construed, that influence individuals towards choices that make them better off. Those who advocate intentionally creating choice environments that lead people to better choices believe that doing so is ethically permissible because (1) it makes people better off, and (2) it does so in a way that is entirely compatible with individual liberty. The evaluation of these two claims is the main focus of this paper.
“Situationists” such as Gilbert Harman and John Doris have accused virtue ethicists of having an “empirically inadequate” theory, arguing that much of social science research suggests that people do not have robust character traits as traditionally thought. By far, the most common response to this challenge has been what I refer to as the “rarity response” or the “rarity thesis.” Rarity responders deny that situationism poses any sort of threat to virtue ethics since there is no reason to suppose that the moral virtues are typical or widespread. But, far from being its saving grace, I will argue, the rarity thesis forces virtue ethicists into positions that are incompatible with their theoretical foundations or render their theory normatively irrelevant. The more virtue ethicists modify their thesis to fit the empirical evidence and to be normatively relevant, the less they retain a virtue ethical theory. This is also the case for virtue epistemologists.
This paper deals with the ethics of using knowledge about a person’s particular psychological make-up, or about the psychology of judgment and decision-making in general, to shape that person’s decisions and behaviors. Various moral concerns emerge about this practice, but one of the more elusive and underdeveloped concerns is the charge of manipulation. It is this concern that is the focus of this paper. I argue that it is not the case that any of the practices traditionally labeled as “manipulation” are ipso facto morally wrong, nor is it even the case that any of these practices always has a single wrong-making feature (e.g., infringement on autonomy) that is always present but may be outweighed by other morally relevant factors and be all things considered ethically permissible or morally right. I argue that the moral status depends on the extent to which the instance of influence (1) threatens or promotes autonomy, (2) has good aims and virtuous overtones or bad ones, and (3) fulfills or fails to fulfill duties, obligations, and expectations that arise out of the relationship between the influencer and influenced. I will explain in detail the moral relevance of these factors, showing why each is necessary, offering criteria for evaluating each, and demonstrating how they work in specific cases.
In Death and the Afterlife, Samuel Scheffler argues that the assumption of a “collective afterlife” plays an essential role in our valuing much of what we do. If a collective afterlife did not exist, according to Scheffler, our value structures would be radically different: we would cease to value much of what we do. In Part I of the paper, I argue that there is something to Scheffler’s afterlife conjecture, but that Scheffler has misplaced the mattering of a collective afterlife. Its significance lies not in the realm of axiology but more importantly in coming to terms with the fact of death and in viewing our lives as having meaning. In Part II of the paper, I outline three views on the sort of collective afterlife that matters and argue in favor of the view that it must involve creatures that recognize our existence, reasons, values, and contributions and the view that it must involve creatures that value similar things to us, but argue against the view that it must necessarily be a human collective afterlife.
The introduction of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5) in May 2013 is being hailed as the biggest event in psychiatry in the last 10 years. In this paper I examine three important issues that arise from the new manual. Expanding nosology: Psychiatry has again broadened its nosology to include human experiences not previously under its purview. Consequence-based ethical concerns about this expansion are addressed, along with conceptual concerns about a confusion of “construct validity” and “conceptual validity” and a failure to distinguish between “disorder” and “nondisordered conditions for which we help people.” The role of claims about societal impact in changes in nosology: Several changes in the DSM-5 involved claims about societal impact in their rationales, due in part to a new online open comment period during DSM development. Examples include advancement of science, greater access to treatment, greater public awareness of a condition, loss of identity or harm to those whose disorders are removed, stigmatization, and offensiveness. I identify and evaluate four importantly distinct ways in which claims about societal impact might operate in DSM development. Categorical to spectrum nosology: The move to “degrees of severity” of mental disorders, a major change for DSM-5, raises concerns about conceptual clarity and uniformity concerning what it means to have a severe form of a disorder, and ethical concerns about communication.
In their article “The Concept of Voluntary Consent,” Robert Nelson and colleagues (2011) argue for two necessary and jointly sufficient conditions for voluntary action: intentionality, and substantial freedom from controlling influences. They propose an instrument to empirically measure voluntariness, the Decision Making Control Instrument. I argue that (1) their conceptual analysis of intentionality and controlling influences needs expansion in light of the growing use of behavioral economics principles to change individual and public health behaviors (growing in part due to the designation of “The Science of Behavior Change” as a new National Institutes of Health [NIH] Roadmap Activity); and (2) their measure of voluntariness, which relies on self-perceived intentionality and extent of control, is unreliable, given findings from behavioral economics and cognitive science showing that our perceptions about the intentionality and control of our own and others’ decisions and actions are remarkably skewed and uninsightful.
In our commentary we briefly review the work on the neurological differences between the rational ethical analysis used in professional contexts and the reflexive emotional responses of our daily moral reasoning, and discuss the implications for the claim that our normative arguments should not rely on the emotion of repugnance.
In their paper, “Behavioral Equipoise: A Way to Resolve Ethical Stalemates in Clinical Research,” Peter Ubel and Robert Silbergleit (2011) propose that we adopt another principle, the principle of behavioral equipoise, whereby RCTs are also morally justified in cases where they are expected to address the controversy, disagreement, or behavioral resistance surrounding a particular treatment. Adopting this ethical standard would allow for research to move forward and, as a result, for the resolution of stalemates between clinicians who hold opposing views. There are two points that I would like to make in terms of objections to Ubel and Silbergleit’s argument, and then I want to emphasize what I think is particularly valuable about their argument. First, I dispute the move from the claim that the principle of clinical equipoise creates (or does not resolve) stalemates to the conclusion that adopting the principle of behavioral equipoise would dissolve stalemates. My second objection concerns the distinctness of the concept of behavioral equipoise from the concept of clinical equipoise. That said, I do think that Ubel and Silbergleit make an important point, and that is that one cause of equipoise is the behavioral and psychological factors of those responding to data.
Bioethicists today are taking a greater role in the design and implementation of emerging technologies by "embedding" within the development teams and providing their direct guidance and recommendations. Ideally, these collaborations allow ethical considerations to be addressed in an active, iterative, and ongoing process through regular exchanges between ethicists and members of the technological development team. This article discusses a challenge to this embedded ethics approach—namely, that bioethical guidance, even if embraced by the development team in theory, is not easily actionable in situ. Many of the ethical problems at issue in emerging technologies are associated with preexisting structural, socioeconomic, and political factors, making compliance with ethical recommendations sometimes less a matter of choice and more a matter of feasibility. Moreover, incentive structures within these systemic factors maintain them against reform efforts. The authors recommend that embedded bioethicists utilize principles from behavioral science (such as behavioral economics) to better understand and account for these incentive structures so as to encourage the ethically responsible uptake of technological innovations.