Autonomy has been central to moral and political philosophy for millennia, and has been positioned as a critical aspect of both justice and wellbeing. Research in psychology supports this position, providing empirical evidence that autonomy is critical to motivation, personal growth and psychological wellness. Responsible AI will require an understanding of, and ability to effectively design for, human autonomy (rather than just machine autonomy) if it is to genuinely benefit humanity. Yet the effects of digital experiences on human autonomy are neither straightforward nor consistent, and are complicated by commercial interests and tensions around compulsive overuse. This multi-layered reality requires an analysis that is itself multidimensional and that takes into account human experience at various levels of resolution. We borrow from HCI and psychological research to apply a model (“METUX”) that identifies six distinct spheres of technology experience. We demonstrate the value of the model for understanding human autonomy in a technology ethics context at multiple levels by applying it to the real-world case study of an AI-enhanced video recommender system. In the process we argue for the following three claims: 1) There are autonomy-related consequences to algorithms representing the interests of third parties; such algorithms are not impartial and rational extensions of the self, as is often perceived; 2) Designing for autonomy is an ethical imperative critical to the future design of responsible AI; and 3) Autonomy-support must be analysed from at least six spheres of experience in order to appropriately capture contradictory and downstream effects.
In 2019, the IEEE launched the P7000 standards projects intended to address ethical issues in the design of autonomous and intelligent systems. This move came amidst a growing public concern over the unintended consequences of artificial intelligence (AI), compounded by the lack of an anticipatory process for attending to ethical impact within professional practice. However, the difficulty in moving from principles to practice presents a significant challenge to the implementation of ethical guidelines. Herein, we describe two complementary frameworks for integrating ethical analysis into engineering practice to help address this challenge. We then provide the outcomes of an ethical analysis informed by these frameworks, conducted within the specific context of internet-delivered therapy in digital mental health. We hope both the frameworks and analysis can provide tools and insights, not only for the context of digital healthcare, but for data-enabled and intelligent technology development more broadly.
Humans and AI systems are usually portrayed as separate systems that we need to align in values and goals. However, there is a great deal of AI technology found in non-autonomous systems that are used as cognitive tools by humans. Under the extended mind thesis, the functional contributions of these tools become as essential to our cognition as our brains. But AI can take cognitive extension towards totally new capabilities, posing new philosophical, ethical and technical challenges. To analyse these challenges better, we define and place AI extenders in a continuum between fully-externalized systems, loosely coupled with humans, and fully-internalized processes, with operations ultimately performed by the brain, making the tool redundant. We dissect the landscape of cognitive capabilities that can foreseeably be extended by AI and examine their ethical implications. We suggest that cognitive extenders using AI be treated as distinct from other cognitive enhancers by all relevant stakeholders, including developers, policy makers, and human users.
Vehicle externalism maintains that the vehicles of our mental representations can be located outside of the head, that is, they need not be instantiated by neurons located inside the brain of the cogniser. But some disagree, insisting that ‘non-derived’, or ‘original’, content is the mark of the cognitive and that only biologically instantiated representational vehicles can have non-derived content, while the contents of all extra-neural representational vehicles are derived and thus lie outside the scope of the cognitive. In this paper we develop one aspect of Menary’s vehicle externalist theory of cognitive integration—the process of enculturation—to respond to this longstanding objection. We offer examples of how expert mathematicians introduce new symbols to represent new mathematical possibilities that are not yet understood, and we argue that these new symbols have genuine non-derived content, that is, content that is not dependent on an act of interpretation by a cognitive agent and that does not derive from conventional associations, as many linguistic representations do.
Many authors have proposed constraining the behaviour of intelligent systems with ‘machine ethics’ to ensure positive social outcomes from the development of such systems. This paper critically analyses the prospects for machine ethics, identifying several inherent limitations. While machine ethics may increase the probability of ethical behaviour in some situations, it cannot guarantee it due to the nature of ethics, the limitations of computational agents and the complexity of the world. In addition, machine ethics, even if it were to be ‘solved’ at a technical level, would be insufficient to ensure positive social outcomes from intelligent systems.
Andy Clark and David Chalmers (1998) argue that certain mental states and processes can be partially constituted by objects located beyond one’s brain and body: this is their extended mind thesis (EM). But they maintain that consciousness relies on processing that is too high in speed and bandwidth to be realized outside the body (see Chalmers, 2008, and Clark, 2009). I evaluate Clark’s and Chalmers’ reason for denying that consciousness extends while still supporting unconscious state extension. I argue that their reason is not well grounded and does not hold up against foreseeable advances in technology. I conclude that their current position needs re-evaluation. If their original parity argument works as a defence of EM, they have yet to identify a good reason why it does not also work as a defence of extended consciousness. I end by advancing a parity argument for extended consciousness and consider some possible replies.
Technological advances are bringing new light to privacy issues and changing the reasons why privacy is important. These advances have changed not only the kind of personal data that is available to be collected, but also how that personal data can be used by those who have access to it. We are particularly concerned with how information about personal attributes inferred from collected data (such as online behaviour), can be used to tailor messages and services to specific individuals or groups. This kind of ‘personalised targeting’ has the potential to influence individuals’ perceptions, attitudes, and choices in unprecedented ways. In this paper, we argue that because it is becoming easier for companies to use collected data for influence, threats to privacy are increasingly also threats to personal autonomy—an individual’s ability to reflect on and decide freely about their values, actions, and behaviour, and to act on those choices. While increasing attention is directed to the ethics of how personal data is collected, we make the case that a new ethics of privacy needs to also think more rigorously about how personal data may be used, and its potential impact on personal autonomy.
Legal theorists have characterized physical evidence of brain dysfunction as a double-edged sword, wherein the very quality that reduces the defendant’s responsibility for his transgression could simultaneously increase motivations to punish him by virtue of his apparently increased dangerousness. However, empirical evidence of this pattern has been elusive, perhaps owing to a heavy reliance on singular measures that fail to distinguish between plural, often competing internal motivations for punishment. The present study employed a test of the theorized double-edge pattern using a novel approach designed to separate such motivations. We asked a large sample of participants (N = 330) to render criminal sentencing judgments under varying conditions of the defendant’s mental health status (Healthy, Neurobiological Disorder, Psychological Disorder) and the disorder’s treatability (Treatable, Untreatable). As predicted, neurobiological evidence simultaneously elicited shorter prison sentences (i.e., mitigating) and longer terms of involuntary hospitalization (i.e., aggravating) than equivalent psychological evidence. However, these effects were not well explained by motivations to restore treatable defendants to health or to protect society from dangerous persons but instead by deontological motivations pertaining to the defendant’s level of deservingness and possible obligation to provide medical care. This is the first study of its kind to quantitatively demonstrate the paradoxical effect of neuroscientific trial evidence and raises implications for how such evidence is presented and evaluated.
The extended mind thesis maintains that while minds may be centrally located in one’s brain-and-body, they are sometimes partly constituted by tools in our environment. Critics argue that we have no reason to move from the claim that cognition is embedded in the environment to the stronger claim that cognition can be constituted by the environment. I will argue that there are normative reasons, both scientific and ethical, for preferring the extended account of the mind to the rival embedded account.
Objective: To examine the role of explainability in machine learning for healthcare (MLHC), and its necessity and significance with respect to effective and ethical MLHC application. Study Design and Setting: This commentary engages with the growing and dynamic corpus of literature on the use of MLHC and artificial intelligence (AI) in medicine, which provide the context for a focused narrative review of arguments presented in favour of and in opposition to explainability in MLHC. Results: We find that concerns regarding explainability are not limited to MLHC, but rather extend to numerous well-validated treatment interventions as well as to human clinical judgment itself. We examine the role of evidence-based medicine in evaluating unexplainable treatments and technologies, and highlight the analogy between the concept of explainability in MLHC and the related concept of mechanistic reasoning in evidence-based medicine. Conclusion: Ultimately, we conclude that the value of explainability in MLHC is not intrinsic, but is instead instrumental to achieving greater imperatives such as performance and trust. We caution against the uncompromising pursuit of explainability, and advocate instead for the development of robust empirical methods to successfully evaluate increasingly inexplicable algorithmic systems.
The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally ‘extend’ into the tools. Several extended mind theorists have argued that this ‘extended’ view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this chapter we suggest that using AI extenders, i.e., tightly coupled cognitive extenders that are imbued with machine learning and other ‘artificially intelligent’ tools, presents both new ethical challenges and opportunities for mental health. We focus on several mental health conditions that may develop differently when people with cognitive disorders use AI extenders, and then discuss some of the related opportunities and challenges.
Alan Turing, one of the fathers of computing, warned that Artificial Intelligence (AI) could one day pose an existential risk to humanity. Today, recent advancements in the field of AI have been accompanied by a renewed set of existential warnings. But what exactly constitutes an existential risk? And how exactly does AI pose such a threat? In this chapter we aim to answer these questions. In particular, we will critically explore three commonly cited reasons for thinking that AI poses an existential threat to humanity: the control problem, the possibility of global disruption from an AI race dynamic, and the weaponization of AI.
Despite there being little consensus on what intelligence is or how to measure it, the media and the public have become increasingly preoccupied with the concept owing to recent accomplishments in machine learning and research on artificial intelligence (AI). Governments and corporations are investing billions of dollars to fund researchers who are keen to produce an ever-expanding range of artificially intelligent systems. More than 30 countries have announced such research initiatives over the past three years [1]. For example, the EU Commission pledged to increase the investment in AI research to €1.5 billion by 2020 (from €500 million in 2017), while China has committed $2.1 billion towards an AI technology park in Beijing alone [1]. This global investment in AI is astonishing and prompts several questions: What are the true possibilities and limitations of AI? What do AI researchers and developers mean by “intelligence”? How does this compare to the everyday concept of intelligence and to how the term is used in other branches of cognitive science? And can machine learning produce anything that is truly “intelligent”?
Your smartphone is much more than just a phone. It can tell a more intimate story about you than your best friend. No other piece of hardware in history, not even your brain, contains the quality or quantity of information held on your phone: it ‘knows’ whom you speak to, when you speak to them, what you said, where you have been, your purchases, photos, biometric data, even your notes to yourself – and all this dating back years. In this piece I ask whether, given the role they play in our lives, our devices deserve the same legal protections as our brains and bodies.
We present nine facets for the analysis of the past and future evolution of AI. Each facet also has a set of edges that can summarise different trends and contours in AI. With them, we first conduct a quantitative analysis using the information from two decades of AAAI/IJCAI conferences and around 50 years of documents from AITopics, an official database from the AAAI, illustrated by several plots. We then perform a qualitative analysis using the facets and edges, locating AI systems in the intelligence landscape and the discipline as a whole. This analytical framework provides a more structured and systematic way of looking at the shape and boundaries of AI.
The extended mind thesis prompted philosophers to think about the different shapes our minds can take as they reach beyond our brains and stretch into new technologies. Some of us rely heavily on the environment to scaffold our cognition, reorganizing our homes into rich cognitive niches, for example, or using our smartphones as Swiss-army knives for cognition. But the thesis also prompts us to think about other varieties of minds and the unique forms they take. What are we to make of the exotic distributed nervous systems we see in octopuses, for example, or the complex collectives of bees? In this paper, I will argue for a robust version of the extended mind thesis that includes the possibility of extended consciousness. This thesis will open up new ways of understanding the different forms that conscious minds can take, whether human or nonhuman. The thesis will also challenge the popular belief that consciousness exists exclusively in the brain. Furthermore, despite the attention that the extended mind thesis has received, relatively little has been written about the possibility of extended consciousness. A number of prominent defenders of the extended mind thesis have even called the idea of extended consciousness implausible. I will argue, however, that extended consciousness is a viable theory and that it follows from the same ‘parity argument’ that Clark and Chalmers first advanced to support the extended mind thesis. What is more, it may even provide us with a valuable paradigm for how we understand some otherwise puzzling behaviors in certain neurologically abnormal patients as well as in some nonhuman animals.