This paper proposes an innovative ducted fan aerial manipulator, which is particularly suitable for tasks in confined environments that are inaccessible to traditional multirotors and helicopters. The dynamic model of the aerial manipulator is established through a combination of comprehensive mechanistic modeling and parametric frequency-domain identification. On this basis, a composite controller for the aerial platform is proposed. A static robust controller is designed via H-infinity synthesis to achieve baseline performance, and an adaptive auxiliary loop is designed to estimate and compensate for the effect exerted on the vehicle by the manipulator. Computer simulation analyses show good stability of the aerial vehicle during manipulator motion and good tracking performance of the manipulator's end effector, which verifies the feasibility of the proposed aerial manipulator design and the effectiveness of the proposed controller, indicating that the system can meet the requirements of high-precision operation tasks.
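Since the abstract only names the synthesis technique, the following is a minimal sketch of what a mixed-sensitivity H-infinity design step can look like using the python-control package (the optional slycot backend is required for hinfsyn). The plant G and the weights W1 and W2 are hypothetical stand-ins chosen purely for illustration; they are not the identified ducted-fan model, and the resulting controller is a dynamic output-feedback design rather than the static controller and adaptive auxiliary loop described in the paper.

```python
# Minimal mixed-sensitivity H-infinity synthesis sketch (python-control + slycot).
# The plant G is a hypothetical stable second-order channel used only for
# illustration; it is NOT the identified model from the paper.
import control as ct

G = ct.tf([1.0], [1.0, 0.8, 2.0])        # hypothetical nominal plant
W1 = ct.tf([0.5, 1.0], [1.0, 0.01])      # performance weight on the sensitivity S
W2 = ct.tf([1.0, 0.1], [0.01, 1.0])      # weight penalizing control effort K*S

# Build the generalized plant for the mixed-sensitivity problem and synthesize
# an H-infinity output-feedback controller (1 measurement, 1 control input).
P = ct.augw(G, w1=W1, w2=W2)
K, CL, gamma, rcond = ct.hinfsyn(P, 1, 1)

print("achieved closed-loop H-infinity norm bound:", gamma)
```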
Artificial intelligence (AI) research enjoyed an initial period of enthusiasm in the 1970s and 80s. But this enthusiasm was tempered by a long interlude of frustration when genuinely useful AI applications failed to be forthcoming. Today, we are experiencing once again a period of enthusiasm, fired above all by the successes of the technology of deep neural networks or deep machine learning. In this paper we draw attention to what we take to be serious problems underlying current views of artificial intelligence encouraged by these successes, especially in the domain of language processing. We then show an alternative approach to language-centric AI, in which we identify a role for philosophy.
What is the ethical impact of artificial intelligence assistants on human lives, and specifically how much do they threaten our individual autonomy? Recently, as part of forming an ethical framework for thinking about the impact of AI assistants on our lives, John Danaher claims that if the external automaticity generated by the use of AI assistants threatens our autonomy and is therefore ethically problematic, then the internal automaticity we already live with should be viewed in the same way. He takes advantage of this paradox of internal automaticity to downplay the threats of external automaticity to our autonomy. We respond in this paper by challenging the legitimacy of the paradox. While Danaher assumes that internal and external automaticity are roughly equivalent, we argue that there are reasons why we should accept a large degree of internal automaticity, that it is actually essential to our sense of autonomy, and as such it is ethically good; however, the same does not go for external automaticity. Therefore, the similarity between the two is not as powerful as the paradox presumes. In conclusion, we make practical recommendations for how to better manage the integration of AI assistants into society.
Using artificial intelligence (AI) to make decisions in human resource management (HRM) raises questions of how fair employees perceive these decisions to be and whether they experience respectful treatment (i.e., interactional justice). In this experimental survey study with open-ended qualitative questions, we examine decision making in six HRM functions and manipulate the decision maker (AI or human) and decision valence (positive or negative) to determine their impact on individuals' experiences of interactional justice, trust, dehumanization, and perceptions of decision-maker role appropriateness. In terms of decision makers, the use of human decision makers over AIs generally resulted in better perceptions of respectful treatment. In terms of decision valence, people who experienced positive rather than negative decisions generally reported better perceptions of respectful treatment. In instances where these factors conflict, on some indicators people preferred positive AI decisions over negative human decisions. Qualitative responses show how people identify justice concerns with both AI and human decision making. We outline implications for theory, practice, and future research.
One of the main difficulties in assessing artificial intelligence is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission's High-level Expert Group on AI have adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI. Trust is one of the most important and defining activities in human relationships, so proposing that AI should be trusted is a very serious claim. This paper will show that AI cannot be something that has the capacity to be trusted according to the most prevalent definitions of trust, because it does not possess emotive states and cannot be held responsible for its actions—requirements of the affective and normative accounts of trust. While AI meets all of the requirements of the rational account of trust, it will be shown that this is not actually a type of trust at all but is instead a form of reliance. Ultimately, even complex machines such as AI should not be viewed as trustworthy, as this undermines the value of interpersonal trust, anthropomorphises AI, and diverts responsibility from those developing and using them.
Does AI conform to humans, or will we conform to AI? An ethical evaluation of AI-intensive companies will allow investors to knowledgeably participate in the decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology's human-centering. When summed, the scores convert into objective investment guidance. The strategy of incorporating ethics into financial decisions will be recognizable to participants in environmental, social, and governance (ESG) investing; however, this paper argues that conventional ESG frameworks are inadequate for AI-intensive companies. To fully account for contemporary technology, the following categories of evaluation will be developed and featured as vital investing criteria: autonomy, dignity, privacy, and performance. With these priorities established, the larger goal is a model for humanitarian investing in AI-intensive companies that is intellectually robust, manageable for analysts, useful for portfolio managers, and credible for investors.
Fueled by ever-growing amounts of data and advances in artificial intelligence, decision-making in contemporary societies is increasingly delegated to automated processes. Drawing from social science theories and from the emerging body of research about algorithmic appreciation and algorithmic perceptions, the current study explores the extent to which personal characteristics can be linked to perceptions of automated decision-making by AI, and the boundary conditions of these perceptions, namely the extent to which such perceptions differ across media, health, and judicial contexts. Data from a scenario-based survey experiment with a national sample show that people are by and large concerned about risks and have mixed opinions about the fairness and usefulness of automated decision-making at a societal level, with general attitudes influenced by individual characteristics. Interestingly, decisions taken automatically by AI were often evaluated as on par with or even better than those of human experts for specific decisions. Theoretical and societal implications of these findings are discussed.
The rise of AI-based systems has been accompanied by the belief that these systems are impartial and do not suffer from the biases that humans and older technologies express. It becomes evident, however, that gender and racial biases exist in some AI algorithms. The question is where the bias is rooted—in the training dataset or in the algorithm? Is it a linguistic issue or a broader sociological current? Works in feminist philosophy of technology and behavioral economics reveal the gender bias in AI technologies as a multi-faceted phenomenon, and the linguistic explanation as too narrow. The next step moves from the linguistic aspects to the relational ones, with postphenomenology. One of the analytical tools of this theory is the "I-technology-world" formula that models our relations with technologies, and through them—with the world. Realizing that AI technologies give rise to new types of relations in which the technology has an "enhanced technological intentionality", a new formula is suggested: "I-algorithm-dataset." In the third part of the article, four types of solutions to the gender bias in AI are reviewed: ignoring any reference to gender, revealing the considerations that led the algorithm to decide, designing algorithms that are not biased, or lastly, involving humans in the process. In order to avoid gender bias, we can recall a basic feminist understanding—visibility matters. Users and developers should be aware of the possibility of gender and racial biases, and try to avoid them, bypass them, or eliminate them altogether.
Ethics has powerful teeth, but these are barely being used in the ethics of AI today – it is no wonder the ethics of AI is then blamed for having no teeth. This article argues that 'ethics' in the current AI ethics field is largely ineffective, trapped in an 'ethical principles' approach and as such particularly prone to manipulation, especially by industry actors. Using ethics as a substitute for law risks its abuse and misuse. This significantly limits what ethics can achieve and is a great loss to the AI field and its impacts on individuals and society. This article discusses these risks and then highlights the teeth of ethics and the essential value they can – and should – bring to AI ethics now.
An AI winter may be defined as the stage when technology, business, and the media come to terms with what AI can or cannot really do as a technology, without exaggeration. Through discussion of previous AI winters, this paper examines the hype cycle (which by turns characterises AI as a social panacea or a nightmare of apocalyptic proportions) and argues that AI should be treated as a normal technology, neither as a miracle nor as a plague, but rather as one of the many solutions that human ingenuity has managed to devise.
This paper explores the important role of critical science, and in particular of post-colonial and decolonial theories, in understanding and shaping the ongoing advances in artificial intelligence. Artificial intelligence is viewed as amongst the technological advances that will reshape modern societies and their relations. While the design and deployment of systems that continually adapt hold the promise of far-reaching positive change, they simultaneously pose significant risks, especially to already vulnerable peoples. Values and power are central to this discussion. Decolonial theories use historical hindsight to explain patterns of power that shape our intellectual, political, economic, and social world. By embedding a decolonial critical approach within its technical practice, AI communities can develop foresight and tactics that can better align research and technology development with established ethical principles, centring vulnerable peoples who continue to bear the brunt of negative impacts of innovation and scientific progress. We highlight problematic applications that are instances of coloniality, and, using a decolonial lens, submit three tactics that can form a decolonial field of artificial intelligence: creating a critical technical practice of AI, seeking reverse tutelage and reverse pedagogies, and the renewal of affective and political communities. The years ahead will usher in a wave of new scientific breakthroughs and technologies driven by AI research, making it incumbent upon AI communities to strengthen the social contract through ethical foresight and the multiplicity of intellectual perspectives available to us, ultimately supporting future technologies that enable greater well-being, with the goal of beneficence and justice for all.
The aim of this literature review was to compose a narrative review, supported by a systematic approach, to critically identify and examine concerns about accountability and the allocation of responsibility and legal liability as they apply to the clinician and the technologist in the use of opaque AI-powered systems (AIS) in clinical decision making. This review asks whether it is permissible for a clinician to use an opaque AI system in clinical decision making and, if a patient were harmed as a result of a clinician following an AIS's suggestion, how responsibility and legal liability would be allocated. Literature was systematically searched, retrieved, and reviewed from nine databases, which also included items from three clinical professional regulators, as well as relevant grey literature from governmental and non-governmental organisations. This literature was subjected to inclusion/exclusion criteria; those items found relevant to this review underwent data extraction. This review found that there are multiple concerns about opacity, accountability, responsibility and liability when considering the stakeholders of technologists and clinicians in the creation and use of AIS in clinical decision making. Accountability is challenged when the AIS used is opaque, and the allocation of responsibility is somewhat unclear. Legal analysis would help stakeholders to understand their obligations and prepare should an undesirable scenario of patient harm eventuate when an AIS is used.
In this article we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combating the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data- and computation-intensive AI systems. We assess the carbon footprint of AI research, and the factors that influence AI's greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant and highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based and effective governance to become a winning strategy. We conclude by identifying the European Union as being especially well-placed to play a leading role in this policy response and provide 13 recommendations that are designed to identify and harness the opportunities of AI for combating climate change, while reducing its impact on the environment.
By looking at the politics of classification within machine learning systems, this article demonstrates why the automated interpretation of images is an inherently social and political project. We begin by asking what work images do in computer vision systems, and what is meant by the claim that computers can "recognize" an image. Next, we look at the method for introducing images into computer systems and examine how taxonomies order the foundational concepts that will determine how a system interprets the world. Then we turn to the question of labeling: how humans tell computers which words will relate to a given image. What is at stake in the way AI systems use these labels to classify humans, including by race, gender, emotions, ability, sexuality, and personality? Finally, we turn to the purposes that computer vision is meant to serve in our society—the judgments, choices, and consequences of providing computers with these capacities. Methodologically, we call this an archeology of datasets: studying the material layers of training images and labels, cataloguing the principles and values by which taxonomies are constructed, and analyzing how these taxonomies create the parameters of intelligibility for an AI system. By doing this, we can critically engage with the underlying politics and values of a system, and analyze which normative patterns of life are assumed, supported, and reproduced.
Humans and AI systems are usually portrayed as separate systems that we need to align in values and goals. However, there is a great deal of AI technology found in non-autonomous systems that are used as cognitive tools by humans. Under the extended mind thesis, the functional contributions of these tools become as essential to our cognition as our brains. But AI can take cognitive extension towards totally new capabilities, posing new philosophical, ethical and technical challenges. To analyse these challenges better, we define and place AI extenders in a continuum between fully-externalized systems, loosely coupled with humans, and fully-internalized processes, with operations ultimately performed by the brain, making the tool redundant. We dissect the landscape of cognitive capabilities that can foreseeably be extended by AI and examine their ethical implications. We suggest that cognitive extenders using AI be treated as distinct from other cognitive enhancers by all relevant stakeholders, including developers, policy makers, and human users.
In 2017, Tom Gruber gave a TED talk in which he presented a vision of improving and enhancing humanity with AI technology. Specifically, Gruber suggested that an AI-improved personal memory (APM) would benefit people by improving their "mental gain", making us more creative, improving our "social grace", enabling us to do "science on our own data about what makes us feel good and stay healthy", and, for people suffering from dementia, it "could make a difference between a life of isolation and a life of dignity and connection". In this paper, Gruber's idea will be critically assessed. Firstly, it will be argued that most of his arguments in favour of the APM are questionable. Secondly, the APM will also be criticized for other reasons, including the risks and effects concerning the users' and others' privacy and the users' autonomy.
Two leaders in the field offer a compelling analysis of the current state of the art and reveal the steps we must take to achieve a truly robust artificial intelligence. Despite the hype surrounding AI, creating an intelligence that rivals or exceeds human levels is far more complicated than we have been led to believe. Professors Gary Marcus and Ernest Davis have spent their careers at the forefront of AI research and have witnessed some of the greatest milestones in the field, but they argue that a computer beating a human in Jeopardy! does not signal that we are on the doorstep of fully autonomous cars or superintelligent machines. The achievements in the field thus far have occurred in closed systems with fixed sets of rules, and these approaches are too narrow to achieve genuine intelligence. The real world, in contrast, is wildly complex and open-ended. How can we bridge this gap? What will the consequences be when we do? Taking inspiration from the human mind, Marcus and Davis explain what we need to advance AI to the next level, and suggest that if we are wise along the way, we won't need to worry about a future of machine overlords. If we focus on endowing machines with common sense and deep understanding, rather than simply focusing on statistical analysis and gathering ever larger collections of data, we will be able to create an AI we can trust--in our homes, our cars, and our doctors' offices. Rebooting AI provides a lucid, clear-eyed assessment of the current science and offers an inspiring vision of how a new generation of AI can make our lives better.
As artificial intelligence technologies become increasingly prominent in our daily lives, media coverage of the ethical considerations of these technologies has followed suit. Since previous research has shown that media coverage can drive public discourse about novel technologies, studying how the ethical issues of AI are portrayed in the media may lead to greater insight into the potential ramifications of this public discourse, particularly with regard to development and regulation of AI. This paper expands upon previous research by systematically analyzing and categorizing the media portrayal of the ethical issues of AI to better understand how media coverage of these issues may shape public debate about AI. Our results suggest that the media has a fairly realistic and practical focus in its coverage of the ethics of AI, but that the coverage is still shallow. A multifaceted approach to handling the social, ethical and policy issues of AI technology is needed, including increasing the accessibility of correct information to the public in the form of fact sheets and ethical value statements on trusted webpages, collaboration and inclusion of ethics and AI experts in both research and public debate, and consistent government policies or regulatory frameworks for AI technology.
Current advances in research, development and application of artificial intelligence systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed at harnessing the "disruptive" potentials of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, I also examine to what extent the respective ethical principles and values are implemented in the practice of research, development and application of AI systems—and how the demands of AI ethics can be made more effective.
With Artificial Intelligence entering our lives in novel ways—both known and unknown to us—there is both the enhancement of existing ethical issues associated with AI as well as the rise of new ethical issues. There is much focus on opening up the 'black box' of modern machine-learning algorithms to understand the reasoning behind their decisions—especially morally salient decisions. However, some applications of AI which are no doubt beneficial to society rely upon these black boxes. Rather than requiring algorithms to be transparent, we should focus on constraining AI and those machines powered by AI within microenvironments—both physical and virtual—which allow these machines to realize their function whilst preventing harm to humans. In the field of robotics this is called 'envelopment'. However, to put an 'envelope' around AI-powered machines we need to know some basic things about them which we are often in the dark about. The properties we need to know are the training data, inputs, functions, outputs, and boundaries. This knowledge is a necessary first step towards the envelopment of AI-powered machines. It is only with this knowledge that we can responsibly regulate, use, and live in a world populated by these machines.
The psychological contract refers to the implicit and subjective beliefs regarding a reciprocal exchange agreement, predominantly examined between employees and employers. While contemporary contract research is investigating a wider range of exchanges employees may hold, such as with team members and clients, it remains silent on a rapidly emerging form of workplace relationship: employees' increasing engagement with technically, socially, and emotionally sophisticated forms of artificially intelligent (AI) technologies. In this paper we examine social robots (also termed humanoid robots) as likely future psychological contract partners for human employees, given that these entities transform notions of workplace technology from being a tool to being an active partner. We first overview the increasing role of robots in the workplace, particularly through the advent of sociable AI, and synthesize the literature on human–robot interaction. We then develop an account of a human-social robot psychological contract and zoom in on the implications of this exchange for the enactment of reciprocity. Given the future-focused nature of our work, we utilize a thought experiment, a commonly used form of conceptual and mental model reasoning, to expand on our theorizing. We then outline potential implications of human-social robot psychological contracts and offer a range of pathways for future research.
An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks. And there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom's Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate about the existence of God. And while this analogy is interesting in its own right, what is more interesting are its potential implications. It has been repeatedly argued that sceptical theism has devastating effects on our beliefs and practices. Could it be that AI-doomsaying has similar effects? I argue that it could. Specifically, and somewhat paradoxically, I argue that it could amount to either a reductio of the doomsayers' position, or an important and additional reason to join their cause. I use this paradox to suggest that the modal standards for argument in the superintelligence debate need to be addressed.
Machine learning models and algorithms, the real engines of the artificial intelligence revolution, are nowadays embedded in many services and products around us. We argue that, as a society, it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs, in order to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and intervention. In this contribution, we will focus on selected ethical investigations around AI by proposing an incremental model of trust that can be applied to both human-human and human-AI interactions. Starting with a quick overview of the existing accounts of trust, with special attention to Taddeo's concept of "e-trust," we will discuss all the components of the proposed model and the reasons to trust in human-AI interactions in an example of relevance for business organizations. We end this contribution with an analysis of the epistemic and pragmatic reasons for trust in human-AI interactions and with a discussion of kinds of normativity in the trustworthiness of AIs.
A critically important ethical issue facing the AI research community is how AI research and AI products can be responsibly conceptualised and presented to the public. A good deal of fear and concern about uncontrollable AI is now being displayed in public discourse. Public understanding of AI is being shaped in a way that may ultimately impede AI research. The public discourse, as well as discourse among AI researchers, leads to at least two problems: a confusion about the notion of 'autonomy' that induces people to attribute to machines something comparable to human autonomy, and a 'sociotechnical blindness' that hides the essential role played by humans at every stage of the design and deployment of an AI system. Here our purpose is to develop and use a language with the aim of reframing the discourse in AI and shedding light on the real issues in the discipline.
In 2019, the IEEE launched the P7000 standards projects intended to address ethical issues in the design of autonomous and intelligent systems. This move came amidst a growing public concern over the unintended consequences of artificial intelligence (AI), compounded by the lack of an anticipatory process for attending to ethical impact within professional practice. However, the difficulty in moving from principles to practice presents a significant challenge to the implementation of ethical guidelines. Herein, we describe two complementary frameworks for integrating ethical analysis into engineering practice to help address this challenge. We then provide the outcomes of an ethical analysis informed by these frameworks, conducted within the specific context of internet-delivered therapy in digital mental health. We hope both the frameworks and analysis can provide tools and insights, not only for the context of digital healthcare, but for data-enabled and intelligent technology development more broadly.
The extended mind thesis maintains that the functional contributions of tools and artefacts can become so essential for our cognition that they can be constitutive parts of our minds. In other words, our tools can be on a par with our brains: our minds and cognitive processes can literally 'extend' into the tools. Several extended mind theorists have argued that this 'extended' view of the mind offers unique insights into how we understand, assess, and treat certain cognitive conditions. In this chapter we suggest that using AI extenders, i.e., tightly coupled cognitive extenders that are imbued with machine learning and other 'artificially intelligent' tools, presents both new ethical challenges and opportunities for mental health. We focus on several mental health conditions that may develop differently when people with cognitive disorders use AI extenders, and then discuss some of the related opportunities and challenges.
Artificial intelligence is set to transform healthcare. Key ethical issues to emerge with this transformation encompass the accountability and transparency of the decisions made by AI-based systems, the potential for group harms arising from algorithmic bias, and the professional roles and integrity of clinicians. These concerns must be balanced against the imperatives of generating public benefit through more efficient healthcare systems built on the vastly greater and more accurate computational power of AI. In weighing up these issues, this paper applies the deliberative balancing approach of the Ethics Framework for Big Data in Health and Research. The analysis applies relevant values identified from the framework to demonstrate how decision-makers can draw on them to develop and implement AI-assisted support systems into healthcare and clinical practice ethically and responsibly. Please refer to Xafis et al. in this special issue of the Asian Bioethics Review for more information on how this framework is to be used, including a full explanation of the key values involved and the balancing approach used in the case study at the end of this paper.
This paper focuses on the fact that AI is predominantly portrayed as white—in colour, ethnicity, or both. We first illustrate the prevalent Whiteness of real and imagined intelligent machines in four categories: humanoid robots, chatbots and virtual assistants, stock images of AI, and portrayals of AI in film and television. We then offer three interpretations of the Whiteness of AI, drawing on critical race theory, particularly the idea of the White racial frame. First, we examine the extent to which this Whiteness might simply reflect the predominantly White milieus from which these artefacts arise. Second, we argue that to imagine machines that are intelligent, professional, or powerful is to imagine White machines because the White racial frame ascribes these attributes predominantly to White people. Third, we argue that AI racialised as White allows for a full erasure of people of colour from the White utopian imaginary. Finally, we examine potential consequences of the racialisation of AI, arguing it could exacerbate bias and misdirect concern.
This paper reviews the history of AI & Law research from the perspective of argument schemes. It starts with the observation that logic, although very well applicable to legal reasoning when there is uncertainty, vagueness and disagreement, is too abstract to give a fully satisfactory classification of legal argument types. It therefore needs to be supplemented with an argument-scheme approach, which classifies arguments not according to their logical form but according to their content, in particular, according to the roles that the various elements of an argument can play. This approach is then applied to legal reasoning, to identify some of the main legal argument schemes. It is also argued that much AI & Law research in fact employs the argument-scheme approach, although it usually is not presented as such. Finally, it is argued that the argument-scheme approach and the way it has been employed in AI & Law respects some of the main lessons to be learnt from Toulmin's The Uses of Argument.
This chapter evaluates whether AI systems are or will be rights-holders, explaining the conditions under which people should recognize AI systems as rights-holders. It develops a skeptical stance toward the idea that current forms of artificial intelligence are holders of moral rights, beginning with an articulation of one of the most prominent and most plausible theories of moral rights: the Interest Theory of rights. On the Interest Theory, AI systems will be rights-holders only if they have interests or a well-being. Current AI systems are not bearers of well-being, and so fail to meet the necessary condition for being rights-holders. This argument is robust against a range of different objections. However, the chapter also shows why difficulties in assessing whether future AI systems might have interests or be bearers of well-being—and so be rights-holders—raise difficult ethical challenges for certain developments in AI.
The growing number of 'smart' instruments, those equipped with AI, has raised concerns because these instruments make autonomous decisions; that is, they act beyond the guidelines provided to them by programmers. Hence, the question the makers and users of smart instruments face is how to ensure that these instruments will not engage in unethical conduct. The article suggests that, to proceed, we need a new kind of AI program—the oversight program—that will monitor, audit, and hold operational AI programs accountable.
This paper examines an insoluble Cartesian problem for classical AI, namely, how linguistic understanding involves knowledge and awareness of an utterance u's meaning, a cognitive process that is irreducible to algorithms. As analyzed, Descartes' view about reason and intelligence has paradoxically encouraged certain classical AI researchers to suppose that linguistic understanding suffices for machine intelligence. Several advocates of the Turing Test, for example, assume that linguistic understanding only comprises computational processes which can be recursively decomposed into algorithmic mechanisms. Against this background, in the first section, I explain Descartes' view about language and mind. To show that Turing bites the bullet with his imitation game, in the second section I analyze this method of assessing intelligence. Then, in the third section, I elaborate on Schank and Abelson's Script Applier Mechanism (SAM hereafter), which supposedly casts doubt on Descartes' denial that machines can think. Finally, in the fourth section, I explore a challenge that any algorithmic decomposition of linguistic understanding faces. This challenge, I argue, is the core of the Cartesian problem: knowledge and awareness of meaning require a first-person viewpoint which is irreducible to the decomposition of algorithmic mechanisms.
The concept of agency as applied to technological artifacts has become an object of heated debate in the context of AI research because some AI researchers ascribe to programs the type of agency traditionally associated with humans. Confusion about agency is at the root of misconceptions about the possibilities for future AI. We introduce the concept of a triadic agency that includes the causal agency of artifacts and the intentional agency of humans to better describe what happens in AI as it functions in real-world contexts. We use the VW emission fraud case to explain triadic agency since, in this case, a technological artifact, namely software, was an essential part of the wrongdoing and might be said to have agency in it. We then extend the case to include futuristic AI, imagining AI that becomes more and more autonomous.
Apocalyptic AI, the hope that we might one day upload our minds into machines and live forever in cyberspace, has become commonplace. This view now affects robotics and AI funding, play in online games, and philosophical and theological conversations about morality and human dignity.
The applications of Artificial Intelligence lie all around us; in our homes, schools and offices, in our cinemas, in art galleries and - not least - on the Internet. The results of Artificial Intelligence have been invaluable to biologists, psychologists, and linguists in helping to understand the processes of memory, learning, and language from a fresh angle. As a concept, Artificial Intelligence has fuelled and sharpened the philosophical debates concerning the nature of the mind, intelligence, and the uniqueness of human beings. Margaret A. Boden reviews the philosophical and technological challenges raised by Artificial Intelligence, considering whether programs could ever be really intelligent, creative or even conscious, and shows how the pursuit of Artificial Intelligence has helped us to appreciate how human and animal minds are possible.
Personal AI assistants are now nearly ubiquitous. Every leading smartphone operating system comes with a personal AI assistant that promises to help you with basic cognitive tasks: searching, planning, messaging, scheduling and so on. Usage of such devices is effectively a form of algorithmic outsourcing: getting a smart algorithm to do something on your behalf. Many have expressed concerns about this algorithmic outsourcing. They claim that it is dehumanising, leads to cognitive degeneration, and robs us of our freedom and autonomy. Some people have a more subtle view, arguing that it is problematic in those cases where its use may degrade important interpersonal virtues. In this article, I assess these objections to the use of AI assistants. I will argue that the ethics of their use is complex. There are no quick fixes or knockdown objections to the practice, but there are some legitimate concerns. By carefully analysing and evaluating the objections that have been lodged to date, we can begin to articulate an ethics of personal AI use that navigates those concerns. In the process, we can locate some paradoxes in our thinking about outsourcing and technological dependence, and we can think more clearly about what it means to live a good life in the age of smart machines.
A defining aspect of our modern age is our tenacious belief in technology in all walks of life, not least in education. It could be argued that this infatuation with technology or 'techno-philia' in education has had a deep impact in the classroom, changing the relationship between teacher and student, as well as between students; that is, these relations have become increasingly more I–It than I–Thou based because the capacity to form bonds, the level of connectedness between teacher and students, and between students, has either decreased or become impaired by the increasing technologisation of education. Running parallel to this, and perhaps exacerbating the problem, is the so-called process of 'learnification', which understands that teachers are mere facilitators of the learning process, rather than someone with an expertise who has something to teach others. In this article, I first assess the current technologisation of education and the impact it has had on relations within the classroom; second, I characterise Buber's I–It and I–Thou relations and their implications for education; finally, I investigate through a thought experiment whether the development of AI could one day successfully replace human teachers in the classroom.
The idea of artificial intelligence for social good (AI4SG) is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are essential for future AI4SG initiatives. The analysis is supported by 27 case examples of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.
By mid-2019 there were more than 80 AI ethics guides available in the public domain. Despite this, 2020 saw numerous news stories break related to ethically questionable uses of AI. In part, this is because AI ethics theory remains highly abstract, and of limited practical applicability to those actually responsible for designing algorithms and AI systems. Our previous research sought to start closing this gap between the 'what' and the 'how' of AI ethics through the creation of a searchable typology of tools and methods designed to translate between the five most common AI ethics principles and implementable design practices. Whilst a useful starting point, that research rested on the assumption that all AI practitioners are aware of the ethical implications of AI, understand their importance, and are actively seeking to respond to them. In reality, it is unclear whether this is the case. It is this limitation that we seek to overcome here by conducting a mixed-methods qualitative analysis to answer the following four questions: what do AI practitioners understand about the need to translate ethical principles into practice? What motivates AI practitioners to embed ethical principles into design practices? What barriers do AI practitioners face when attempting to translate ethical principles into practice? And finally, what assistance do AI practitioners want and need when translating ethical principles into practice?
Implicit stochastic models, including both 'deep neural networks' (dNNs) and the more recent unsupervised foundational models, cannot be explained. That is, it cannot be determined how they work, because the interactions of the millions or billions of terms that are contained in their equations cannot be captured in the form of a causal model. Because users of stochastic AI systems would like to understand how they operate in order to be able to use them safely and reliably, there has emerged a new field called 'explainable AI' (XAI). When we examine the XAI literature, however, it becomes apparent that its protagonists have redefined the term 'explanation' to mean something else, namely: 'interpretation'. Interpretations are indeed sometimes possible, but we show that they give at best only a subjective understanding of how a model works. We propose an alternative to XAI, namely certified AI (CAI), and describe how an AI can be specified, realized, and tested in order to become certified. The resulting approach combines ontologies and formal logic with statistical learning to obtain reliable AI systems which can be safely used in technical applications.