I address Sinnott-Armstrong's argument that evidence of framing effects in moral psychology shows that moral intuitions are unreliable and therefore not noninferentially justified. I begin by discussing what it is to be epistemically unreliable and clarify how framing effects render moral intuitions unreliable. This analysis calls for a modification of Sinnott-Armstrong's argument if it is to remain valid. In particular, he must claim that framing is sufficiently likely to determine the content of moral intuitions. I then re-examine the evidence which is supposed to support this claim. In doing so, I provide a novel suggestion for how to analyze the reliability of intuitions in empirical studies. Analysis of the evidence suggests that moral intuitions subject to framing effects are in fact much more reliable than perhaps was thought, and that Sinnott-Armstrong has not succeeded in showing that noninferential justification has been defeated.
Consent governs innumerable everyday social interactions, including sex, medical exams, the use of property, and economic transactions. Yet little is known about how ordinary people reason about the validity of consent. Across the domains of sex, medicine, and police entry, Study 1 showed that when agents lack autonomous decision-making capacities, participants are less likely to view their consent as valid; however, failing to exercise this capacity and deciding in a nonautonomous way did not reduce consent judgments. Study 2 found that specific and concrete incapacities reduced judgments of valid consent, but failing to exercise these specific capacities did not, even when the consenter makes an irrational and inauthentic decision. Finally, Study 3 showed that the effect of autonomy on judgments of valid consent carries important downstream consequences for moral reasoning about the rights and obligations of third parties, even when the consented-to action is morally wrong. Overall, these findings suggest that laypeople embrace a normative, domain-general concept of valid consent that depends consistently on the possession of autonomous capacities, but not on the exercise of these capacities. Autonomous decisions and autonomous capacities thus play divergent roles in moral reasoning about consent interactions: while the former appears relevant for assessing the wrongfulness of consented-to acts, the latter plays a role in whether consent is regarded as authoritative and therefore as transforming moral rights.
Why do we find agents less blameworthy when they face mitigating circumstances, and what does this show about philosophical theories of moral responsibility? We present novel evidence that the tendency to mitigate the blameworthiness of agents is driven both by the perception that they are less normatively competent—in particular, less able to know that what they are doing is wrong—and by the perception that their behavior is less attributable to their deep selves. Consequently, we argue that philosophers cannot rely on the case strategy to support the Normative Competence theory of moral responsibility over the Deep Self theory. However, we also outline ways in which further empirical and philosophical work would shift the debate, by showing that there is a significant departure between ordinary concepts and corresponding philosophical concepts, or by focusing on a different type of coherence with ordinary judgments.
This entry summarizes an emerging subdiscipline of both empirical bioethics and experimental philosophy (“x-phi”) which has variously been referred to as experimental philosophical bioethics, experimental bioethics, or simply “bioxphi”. Like empirical bioethics, bioxphi uses data-driven research methods to capture what various stakeholders think (feel, judge, etc.) about moral issues of relevance to bioethics. However, like its other parent discipline of x-phi, bioxphi tends to favor experiment-based designs drawn from the cognitive sciences – including psychology, neuroscience, and behavioral economics – to tease out why and how stakeholders think as they do.
This chapter examines the relevance of the cognitive science of morality to moral epistemology, with special focus on the issue of the reliability of moral judgments. It argues that the kind of empirical evidence of most importance to moral epistemology is at the psychological rather than neural level. The main theories and debates that have dominated the cognitive science of morality are reviewed with an eye to their epistemic significance.
May assumes that if moral beliefs are counterfactually dependent on irrelevant factors, then those moral beliefs are based on defective belief-forming processes. This assumption is false. Whether influence by irrelevant factors is debunking depends on the mechanisms through which this influence occurs. This raises the empirical bar for debunkers and helps May avoid an objection to his Debunker’s Dilemma.
Background: Allocation of scarce organs for transplantation is ethically challenging. Artificial intelligence (AI) has been proposed to assist in liver allocation; however, the ethics of this remains unexplored and the views of the public unknown. The aim of this paper was to assess public attitudes on whether AI should be used in liver allocation and how it should be implemented. Methods: We first introduce some potential ethical issues concerning AI in liver allocation, before analysing a pilot survey including online responses from 172 UK laypeople, recruited through Prolific Academic. Findings: Most participants found AI in liver allocation acceptable (69.2%) and would not be less likely to donate their organs if AI was used in allocation (72.7%). Respondents thought AI was more likely to be consistent and less biased compared to humans, although they were concerned about the “dehumanisation of healthcare” and whether AI could consider important nuances in allocation decisions. Participants valued accuracy, impartiality, and consistency in a decision-maker more than interpretability and empathy. Respondents were split on whether AI should be trained on previous decisions or programmed with specific objectives. Whether allocation decisions were made by a transplant committee or by AI, participants valued consideration of urgency, survival likelihood, life years gained, age, future medication compliance, quality of life, and future and past alcohol use. On the other hand, the majority thought the following factors were not relevant to prioritisation: past crime, future crime, future societal contribution, social disadvantage, and gender. Conclusions: There are good reasons to use AI in liver allocation, and our sample of participants appeared to support its use. If confirmed, this support would give democratic legitimacy to the use of AI in this context and reduce the risk that donation rates could be affected negatively. Our findings on specific ethical concerns also identify potential expectations and reservations laypeople have regarding AI in this area, which can inform how AI in liver allocation could be best implemented.
The demands of morality can seem straightforward. Be kind to others. Do not lie. Do not murder. But moral life is not so simple. We are often confronted with difficult situations in which someone is going to get hurt no matter what we do, in which we cannot meet all of our obligations, in which loyalties come into conflict, in which we cannot help everyone who needs it, or in which we must compromise on important values. It is natural to describe such situations as moral dilemmas. This chapter is about the psychology of how we represent, process, and make decisions about what to do when moral life is difficult in this way. Our first aim is to provide some conceptual clarity on what exactly turns a choice situation into a moral dilemma. Here, we propose a normative account of moral dilemmas in terms of morally appropriate feelings of conflict in response to strongly conflicting reasons. Our second aim is to critically survey existing psychological work, providing an overview of some important findings, while raising questions for future research.