In this paper, we report on an experiment with The Walking Dead (TWD), a narrative-driven adventure game with morally charged decisions set in a post-apocalyptic world filled with zombies. This study aimed to identify physiological markers of moral and non-moral decisions using infrared thermal imaging (ITI). ITI is a non-invasive tool that captures thermal variations caused by changes in blood flow in specific body regions, which may reflect sympathetic activity. Results show that moral decisions seem to elicit a significant decrease in temperature in the chin region 20 seconds after participants are presented with a moral decision. However, given the small sample and the lack of significant effects in other regions, future studies are needed to confirm the results obtained in this work.
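To make the kind of analysis described above concrete, here is a minimal sketch of a region-of-interest (ROI) temperature comparison. The abstract does not specify the authors' pipeline, so the ROI coordinates, the simulated temperatures, the sample size, and the paired t-test are illustrative assumptions only.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Fake thermal video for one participant: 100 frames of 120x160 pixels (deg C).
frames = rng.normal(33.5, 0.2, size=(100, 120, 160))
chin_roi = (80, 100, 60, 100)  # (top, bottom, left, right); hypothetical bounds

def roi_mean_temp(frames, roi):
    """Mean temperature inside a rectangular ROI, one value per frame."""
    top, bottom, left, right = roi
    return frames[:, top:bottom, left:right].mean(axis=(1, 2))

trace = roi_mean_temp(frames, chin_roi)  # per-frame chin temperature trace

# Group-level test (values invented): one baseline mean and one mean taken
# 20 s after the decision prompt, per participant.
baseline = rng.normal(34.0, 0.3, size=14)
post = baseline - rng.normal(0.15, 0.05, size=14)
t, p = stats.ttest_rel(post, baseline)
print(f"paired t = {t:.2f}, p = {p:.4f}")
```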
This chapter examines the possibility of using AI technologies to improve human moral reasoning and decision-making, especially in the context of purchasing and consumer decisions. We characterize such AI technologies as artificial ethics assistants (AEAs). We focus on just one part of the AI-aided moral improvement question: the case of the individual who wants to improve their morality, where what constitutes an improvement is evaluated by the individual’s own values. We distinguish three broad areas in which an individual might think their own moral reasoning and decision-making could be improved: one’s actions, character, or other evaluable attributes fall short of one’s values and moral beliefs; one sometimes misjudges or is uncertain about what the right thing to do is in particular situations, given one’s values; one is uncertain about some fundamental moral questions or recognizes a possibility that some of one’s core moral beliefs and values are mistaken. We sketch why one might think that AI tools could be used to support moral improvement in those areas, and describe two types of assistance: preparatory assistance, including advice and training supplied in advance of moral deliberation; and on-the-spot assistance, including on-the-spot advice and facilitation of moral functioning over the course of moral deliberation. Then, we turn to some of the ethical issues that AEAs might raise, looking in particular at three under-appreciated problems posed by the use of AI for moral self-improvement: namely, reliance on sensitive moral data; the inescapability of outside influences on AEAs; and AEA usage prompting the user to adopt beliefs and make decisions without adequate reasons.
In this paper, we report on an experiment with The Walking Dead (TWD), a narrative-driven adventure game in which players have to survive in a post-apocalyptic world filled with zombies. We used OpenFace software to extract action unit (AU) intensities of facial expressions characteristic of decision-making processes and then implemented a simple convolutional neural network (CNN) to determine which AUs are predictive of decision-making. Our results provide evidence that pre-decision variations in action units 17 (chin raiser), 23 (lip tightener), and 25 (parting of lips) are predictive of decision-making processes. Furthermore, when combined, their predictive power increased to 0.81 accuracy on the test set; we offer speculations about why these three AUs in particular are connected to decision-making. Our results also suggest that machine learning methods, in combination with video games, may be used to accurately and automatically identify complex decision-making processes from AU intensity alone. Finally, our study offers a new method for testing specific hypotheses about the relationships between higher-order cognitive processes and behavior, one that relies on both narrative video games and easily accessible software such as OpenFace.
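As an illustration of the general approach, the sketch below shows a simple 1D CNN over windows of AU intensities of the kind OpenFace exports (intensity columns such as AU17_r, AU23_r, AU25_r). The architecture, window length, and tensor shapes are hypothetical stand-ins, not the model reported in the paper.

```python
import torch
import torch.nn as nn

class AUDecisionCNN(nn.Module):
    """Classify fixed-length windows of AU intensities as decision vs. control."""

    def __init__(self, n_aus=3, window=60):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(n_aus, 16, kernel_size=5, padding=2),  # temporal filters
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over the time axis
            nn.Flatten(),
            nn.Linear(16, 1),         # single logit: decision window or not
        )

    def forward(self, x):
        # x: (batch, n_aus, window) -- e.g. 3 AU traces over 60 video frames
        return self.net(x)

model = AUDecisionCNN()
dummy = torch.randn(8, 3, 60)  # 8 windows, 3 AUs, 60 frames of intensities
logits = model(dummy)          # shape (8, 1); apply sigmoid for probabilities
```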
Engineering an artificial intelligence to play an advisory role in morally charged decision-making will inevitably introduce meta-ethical positions into the design. Some of these positions, by informing the design and operation of the AI, will introduce risks. This paper offers an analysis of these potential risks along the realism/anti-realism dimension in metaethics and reveals that realism poses greater risks, while anti-realism undermines the motivation for engineering a moral AI in the first place.
It is not clear what the projects of creating an artificial intelligence (AI) that does ethics, is moral, or makes moral judgments amount to. In this paper we discuss some of the extant metaethical theories and debates in moral philosophy by which such projects should be informed, focusing specifically on the project of creating an AI that makes moral judgments. We argue that the scope and aims of that project depend a great deal on antecedent metaethical commitments. Metaethics, therefore, plays the role of an Archimedean fulcrum in this context, very much like the Archimedean role it is often taken to play in the context of normative ethics (Dworkin 1996; Dreier 2002; Fantl 2006; Ehrenberg 2008).
In this paper, I offer an account of the dependence relation between perception of change and the subjective flow of time that is consistent with some extant empirical evidence from priming by unconscious change. This view is inspired by the one offered by William James, but it is articulated in the framework of contemporary functionalist accounts of mental qualities and higher-order theories of consciousness. An additional advantage of this account of the relationship between perception of change and subjective time is that it makes sense of instances where we are not consciously aware of changes but still experience the flow of time.
Quality Space Theory is a holistic model of qualitative states. On this view, individual mental qualities are defined by their locations in a space of relations, which reflects a similar space of relations among perceptible properties. This paper offers an extension of Quality Space Theory to temporal perception. Unconscious segmentation of events, the involvement of early sensory areas, and asymmetries of dominance in multi-modal perception of time are presented as evidence for the view.
Artificial intelligence (AI) and systems that work with machine learning (ML) can support or replace many parts of the medical decision-making process. They could also help physicians deal with clinical moral dilemmas. AI/ML decisions can thus take the place of professional decisions. We argue that this has important consequences for the relationship between a patient and the medical profession as an institution, and that it will inevitably lead to an erosion of institutional trust in medicine.
In my dissertation I critically survey existing theories of time consciousness and draw on recent work in neuroscience and philosophy to develop an original theory. My view depends on a novel account of temporal perception based on the notion of temporal qualities, which are mental properties that are instantiated whenever we detect change in the environment. When we become aware of these temporal qualities in an appropriate way, our conscious experience will feature the distinct temporal phenomenology that is associated with the passing of time. The temporal qualities model of perception makes two predictions about the mechanisms of time perception: first, that time perception is modality-specific; second, that it can occur without awareness. My argument for this view partially depends on a number of psychophysical experiments that I designed and implemented, which investigate subjective time distortions caused by looming visual stimuli. These results show that the mechanisms of conscious experience of time are distinct from the mechanisms of time perception, as my theory of temporal qualities predicts.
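For readers unfamiliar with this paradigm, here is a minimal sketch of a single looming-stimulus trial written with PsychoPy. The dissertation abstract does not name the software or parameters used, so the choice of PsychoPy, the stimulus sizes, the standard duration, and the response keys are all invented for illustration.

```python
from psychopy import core, event, visual

win = visual.Window(size=(800, 600), color='black', units='pix')
disc = visual.Circle(win, radius=20, fillColor='white', lineColor='white')

standard = 0.5  # standard duration in seconds (hypothetical)
clock = core.Clock()
while clock.getTime() < standard:
    # Looming: the disc expands linearly over the presentation interval.
    disc.radius = 20 + 180 * (clock.getTime() / standard)
    disc.draw()
    win.flip()
win.flip()  # clear the screen

# Two-alternative judgment: did the stimulus seem longer or shorter than
# the standard? (key assignments are arbitrary choices)
keys = event.waitKeys(keyList=['j', 'k'])
win.close()
core.quit()
```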
Predictions about autonomous weapon systems (AWS) are typically thought to channel fears that drove all the myths about intelligence embodied in matter. One of these is the idea that the technology can get out of control and ultimately lead to horrific consequences, as is the case in Mary Shelley’s classic Frankenstein. Given this, predictions about AWS are sometimes dismissed as science-fiction fear-mongering. This paper considers several analogies between AWS and other weapon systems and ultimately argues that nuclear weapons, and their effect on the development of modern asymmetrical warfare, are the best analogy to the introduction of AWS. The final section focuses on this analogy and offers speculations about the likely consequences of AWS being hacked. These speculations tacitly draw on myths and tropes about technology and AI from popular fiction, such as Frankenstein, to project a convincing model of the risks and benefits of AWS deployment.
Programming computers to engage in moral reasoning is not a new idea (Anderson and Anderson 2011a). Work on the subject has yielded concrete examples of computable linguistic structures for a moral grammar (Mikhail 2007), the ethical governor architecture for autonomous weapon systems (Arkin 2009), rule-based systems that implement deontological principles (Anderson and Anderson 2011b), systems that implement utilitarian principles, and a hybrid approach to programming ethical machines (Wallach and Allen 2008). This chapter considers two philosophically informed strategies for engineering software that can engage in moral reasoning: algorithms based on philosophical moral theories, and analogical reasoning from standard cases. Based on the challenges presented to the algorithmic approach, I argue that a combination of these two strategies holds the most promise and show concrete examples of how such an architecture could be built using contemporary engineering techniques.
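To give a flavor of what such a hybrid architecture might look like, the sketch below combines a hard rule-based (deontological) filter with nearest-neighbor analogical reasoning over a small case base. The feature encoding, forbidden action types, and cases are invented for illustration; this is not the chapter's own implementation.

```python
from dataclasses import dataclass

@dataclass
class Case:
    features: tuple  # numeric encoding of morally relevant features (invented)
    verdict: str     # 'permissible' or 'impermissible'

# Hypothetical absolute constraints: actions of these types are always ruled out.
FORBIDDEN = {'deceive_patient', 'use_person_merely_as_means'}

# Hypothetical case base of standard cases with settled verdicts.
CASE_BASE = [
    Case((1.0, 0.0, 0.2), 'permissible'),
    Case((0.1, 0.9, 0.8), 'impermissible'),
]

def similarity(a, b):
    """Higher is more similar: negative squared Euclidean distance."""
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def evaluate(action_type, features):
    # Step 1: the rule-based component implements deontological constraints.
    if action_type in FORBIDDEN:
        return 'impermissible'
    # Step 2: analogical reasoning -- inherit the verdict of the most
    # similar standard case.
    nearest = max(CASE_BASE, key=lambda c: similarity(features, c.features))
    return nearest.verdict

print(evaluate('triage_allocation', (0.8, 0.1, 0.3)))  # -> 'permissible'
```

A real system would need a defensible feature encoding and a much larger, vetted case base; the point of the sketch is only the two-stage control flow, where rules veto first and analogy decides the rest.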
This paper provides an analysis of the way in which two foundational principles of medical ethics, the trusted doctor and patient autonomy, can be undermined by the use of machine learning (ML) algorithms, and addresses the legal significance of this problem. The paper can serve as a guide for health care providers and other stakeholders on how to anticipate, and in some cases mitigate, ethical conflicts caused by the use of ML in healthcare. It can also be read as a road map for what needs to be done to achieve an acceptable level of explainability in an ML algorithm when it is used in a healthcare context.
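As one concrete example of the kind of post-hoc explainability such a paper discusses, the sketch below computes model-agnostic permutation importances with scikit-learn on synthetic stand-in data. The technique choice and the data are illustrative assumptions, not a method prescribed by the paper.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in for clinical tabular data; real feature names would be
# clinically meaningful variables.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Post-hoc, model-agnostic explanation: how much does shuffling each
# feature degrade held-out accuracy?
result = permutation_importance(model, X_te, y_te, n_repeats=20, random_state=0)
for i, imp in enumerate(result.importances_mean):
    print(f"feature_{i}: {imp:.3f}")
```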