1. WHAT IS ARTIFICIAL INTELLIGENCE? One of the fascinating aspects of the field of artificial intelligence (AI) is that the precise nature of its subject ...
In this book, the author examines the ethical implications of Artificial Intelligence systems as they integrate into and replace traditional social structures in new sociocognitive-technological environments. She discusses issues related to the integrity of researchers, technologists, and manufacturers as they design, construct, use, and manage artificially intelligent systems; formalisms for reasoning about moral decisions as part of the behavior of artificial autonomous systems such as agents and robots; and design methodologies for social agents based on societal, moral, and legal values. Throughout the book the author discusses related work, mindful of both classical philosophical treatments of ethical issues and their implications in modern algorithmic systems, and she combines regular references and footnotes with suggestions for further reading. This short overview is suitable for undergraduate students, in both technical and non-technical courses, and for interested and concerned researchers, practitioners, and citizens.
In two experiments (total N = 693) we explored whether people are willing to consider paintings made by AI-driven robots as art, and robots as artists. Across the two experiments, we manipulated three factors: (i) agent type (AI-driven robot v. human agent), (ii) behavior type (intentional creation of a painting v. accidental creation), and (iii) object type (abstract v. representational painting). We found that people judge robot paintings and human paintings as art to roughly the same extent. However, people are much less willing to consider robots as artists than humans, which is partially explained by the fact that they are less disposed to attribute artistic intentions to robots.
Artificial Intelligence and Scientific Method examines the remarkable advances made in the field of AI over the past twenty years, discussing their profound implications for philosophy. Taking a clear, non-technical approach, Donald Gillies shows how current views on scientific method are challenged by this recent research, and suggests a new framework for the study of logic. Finally, he draws on work by such seminal thinkers as Bacon, Gödel, Popper, Penrose, and Lucas, to address the hotly contested question of whether computers might become intellectually superior to human beings.
Presupposing no familiarity with the technical concepts of either philosophy or computing, this clear introduction reviews the progress made in AI since the inception of the field in 1956. Copeland goes on to analyze what those working in AI must achieve before they can claim to have built a thinking machine and appraises their prospects of succeeding. There are clear introductions to connectionism and to the language of thought hypothesis which weave together material from philosophy, artificial intelligence and neuroscience. John Searle's attacks on AI and cognitive science are countered and close attention is given to foundational issues, including the nature of computation, Turing Machines, the Church-Turing Thesis and the difference between classical symbol processing and parallel distributed processing. The book also explores the possibility of machines having free will and consciousness and concludes with a discussion of the sense in which the human brain may be a computer.
Over the coming decades, Artificial Intelligence will profoundly impact the way we live, work, wage war, play, seek a mate, educate our young, and care for our elderly. It is likely to greatly increase our aggregate wealth, but it will also upend our labor markets, reshuffle our social order, and strain our private and public institutions. Eventually it may alter how we see our place in the universe, as machines pursue goals independent of their creators and outperform us in domains previously believed to be the sole dominion of humans. Whether we regard them as conscious or unwitting, revere them as a new form of life or dismiss them as mere clever appliances, is beside the point. They are likely to play an increasingly critical and intimate role in many aspects of our lives. The emergence of systems capable of independent reasoning and action raises serious questions about just whose interests they are permitted to serve, and what limits our society should place on their creation and use. Deep ethical questions that have bedeviled philosophers for ages will suddenly arrive on the steps of our courthouses. Can a machine be held accountable for its actions? Should intelligent systems enjoy independent rights and responsibilities, or are they simply property? Who should be held responsible when a self-driving car kills a pedestrian? Can your personal robot hold your place in line, or be compelled to testify against you? If it turns out to be possible to upload your mind into a machine, is that still you? The answers may surprise you.
This interdisciplinary collection of classical and contemporary readings provides a clear and comprehensive guide to the many hotly debated philosophical issues at the heart of artificial intelligence.
In Logics for Artificial Intelligence, Raymond Turner leads us on a whirlwind tour of nonstandard logics and their general applications to AI and computer science.
In October 2016, the White House, the European Parliament, and the UK House of Commons each issued a report outlining their visions on how to prepare society for the widespread use of artificial intelligence. In this article, we provide a comparative assessment of these three reports in order to facilitate the design of policies favourable to the development of a ‘good AI society’. To do so, we examine how each report addresses the following three topics: the development of a ‘good AI society’; the role and responsibility of the government, the private sector, and the research community in pursuing such a development; and where the recommendations to support such a development may be in need of improvement. Our analysis concludes that the reports address various ethical, social, and economic topics adequately, but fall short of providing an overarching political vision and long-term strategy for the development of a ‘good AI society’. To help fill this gap, we conclude by suggesting a two-pronged approach.
The enduring innovations in artificial intelligence and robotics offer the promised capacity of computer consciousness, sentience and rationality. The development of these advanced technologies has been considered to merit rights; however, these can only be ascribed in the context of commensurate responsibilities and duties. This represents the discernible next step for evolution in this field. Addressing these needs requires attention to the philosophical perspectives on moral responsibility for artificial intelligence and robotics. A contrast to the moral status of animals may be considered. At a practical level, the attainment of responsibilities by artificial intelligence and robots can benefit from the established responsibilities and duties of human society, as their subsistence exists within this domain. These responsibilities can be further interpreted and crystallized through legal principles, many of which have been conserved from ancient Roman law. The ultimate and unified goal of stipulating these responsibilities resides in the advancement of mankind and the enduring preservation of the core tenets of humanity.
The highly sophisticated capabilities of artificial intelligence have caused its popularity to skyrocket across many industry sectors globally. The public sector is one of these. Many cities around the world are trying to position themselves as leaders of urban innovation through the development and deployment of AI systems. Likewise, increasing numbers of local government agencies are attempting to utilise AI technologies in their operations to deliver policy and generate efficiencies in highly uncertain and complex urban environments. While the popularity of AI is on the rise in urban policy circles, there is limited understanding and a lack of empirical studies on city managers' perceptions concerning urban AI systems. Bridging this gap is the rationale of this study. The methodological approach adopted in this study is twofold. First, the study collects data through semi-structured interviews with city managers from Australia and the US. Then, the study analyses the data using the summative content analysis technique with two data analysis software packages. The analysis identifies the following themes and generates insights into local government services: AI adoption areas, cautionary areas, challenges, effects, impacts, knowledge basis, plans, preparedness, roadblocks, technologies, deployment timeframes, and usefulness. The study findings inform city managers in their efforts to deploy AI in their local government operations, and offer directions for prospective research.
Since 2014, the volumes of the renowned Wiener Reihe have been published by De Gruyter. While the outward layout of the volumes has been modernized, in content and contributors the profile of the series, which has been appearing for more than two decades, is marked by continuity. Each volume is devoted to a current philosophical question. An international authorship and the publication of contributions in foreign languages are elements of the programme. The series aims to help break down dogmatic boundaries between philosophical schools and traditions.
This paper discusses the problem of responsibility attribution raised by the use of artificial intelligence technologies. It is assumed that only humans can be responsible agents; yet this alone already raises many issues, which are discussed starting from two Aristotelian conditions for responsibility. Next to the well-known problem of many hands, the issue of “many things” is identified and the temporal dimension is emphasized when it comes to the control condition. Special attention is given to the epistemic condition, which draws attention to the issues of transparency and explainability. In contrast to standard discussions, however, it is then argued that this knowledge problem regarding agents of responsibility is linked to the other side of the responsibility relation: the addressees or “patients” of responsibility, who may demand reasons for actions and decisions made by using AI. Inspired by a relational approach, responsibility as answerability thus offers an important additional, if not primary, justification for explainability based, not on agency, but on patiency.
Advanced AI systems are rapidly making their way into medical research and practice, and, arguably, it is only a matter of time before they will surpass human practitioners in terms of accuracy, reliability, and knowledge. If this is true, practitioners will have a prima facie epistemic and professional obligation to align their medical verdicts with those of advanced AI systems. However, in light of their complexity, these AI systems will often function as black boxes: the details of their contents, calculations, and procedures cannot be meaningfully understood by human practitioners. When AI systems reach this level of complexity, we can also speak of black-box medicine. In this paper, we want to argue that black-box medicine conflicts with core ideals of patient-centered medicine. In particular, we claim, black-box medicine is not conducive to supporting informed decision-making based on shared information, shared deliberation, and shared mind between practitioner and patient.
The purpose of this book, originally published in 1987, was to contribute to the advance of artificial intelligence by clarifying and removing the major sources of philosophical confusion at the time which continued to preoccupy scientists and thereby impede research. Unlike the vast majority of philosophical critiques of AI, however, each of the authors in this volume has made a serious attempt to come to terms with the scientific theories that have been developed, rather than attacking superficial 'straw men' which bear scant resemblance to the complex theories actually at issue. For each is convinced that the philosopher's responsibility is to contribute from his own special intellectual point of view to the progress of such an important field, rather than sitting in lofty judgement and dismissing the efforts of their scientific peers. The aim of this book is thus to correct some of the common misunderstandings of its subject. The technical term Artificial Intelligence has created considerable unnecessary confusion because of the ordinary meanings associated with it, and for that very reason the term is endlessly misused and abused. The essays collected here all aim to expound the true nature of AI, and to remove the ill-conceived philosophical discussions which seek answers to the wrong questions in the wrong ways. Philosophical discussions and decisions about the proper use of AI need to be based on a proper understanding of the manner in which AI scientists achieve their results; in particular, of their dependence on the initial planning input of human beings. The collection combines the Anglo-Saxon school of analytical philosophy with scientific and psychological methods of investigation. The distinguished authors in this volume represent a cross-section of philosophers, psychologists, and computer scientists from all over the world. The result is a fascinating study of the nature and future of AI, written in a style which is certain to appeal to and inform laymen and specialists alike.
Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to a US$34.8 billion net worth by 2025. The latest national cybersecurity and defence strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users’ trust in AI are emerging on a global scale. However, trust in AI (both machine learning and neural networks) to deliver cybersecurity tasks is a double-edged sword: it can substantially improve cybersecurity practices, but can also facilitate new forms of attacks on the AI applications themselves, which may pose severe security threats. We argue that trust in AI for cybersecurity is unwarranted and that, to reduce security risks, some form of control to ensure the deployment of ‘reliable AI’ for cybersecurity is necessary. To this end, we offer three recommendations focusing on the design, development and deployment of AI for cybersecurity.
Some artificial intelligence systems can display algorithmic bias, i.e. they may produce outputs that unfairly discriminate against people based on their social identity. Much research on this topic focuses on algorithmic bias that disadvantages people based on their gender or racial identity. The related ethical problems are significant and well known. Algorithmic bias against other aspects of people’s social identity, for instance, their political orientation, remains largely unexplored. This paper argues that algorithmic bias against people’s political orientation can arise in some of the same ways in which algorithmic gender and racial biases emerge. However, it differs importantly from them because there are strong social norms against gender and racial biases. This does not hold to the same extent for political biases. Political biases can thus more powerfully influence people, which increases the chances that these biases become embedded in algorithms and makes algorithmic political biases harder to detect and eradicate than gender and racial biases, even though they all can produce similar harm. Since some algorithms can now also easily identify people’s political orientations against their will, these problems are exacerbated. Algorithmic political bias thus raises substantial and distinctive risks that the AI community should be aware of and examine.
Artificial intelligence research and regulation seek to balance the benefits of innovation against any potential harms and disruption. However, one unintended consequence of the recent surge in AI research is the potential re-orientation of AI technologies to facilitate criminal acts, termed in this article AI-Crime (AIC). AIC is theoretically feasible thanks to published experiments in automating fraud targeted at social media users, as well as demonstrations of AI-driven manipulation of simulated markets. However, because AIC is still a relatively young and inherently interdisciplinary area—spanning socio-legal studies to formal science—there is little certainty of what an AIC future might look like. This article offers the first systematic, interdisciplinary literature analysis of the foreseeable threats of AIC, providing ethicists, policy-makers, and law enforcement organisations with a synthesis of the current problems, and a possible solution space.
Artificial intelligence, or AI, is a cross-disciplinary approach to understanding, modeling, and creating intelligence of various forms. It is a critical branch of cognitive science, and its influence is increasingly being felt in other areas, including the humanities. AI applications are transforming the way we interact with each other and with our environment, and work in artificially modeling intelligence is offering new insights into the human mind and revealing new forms that mentality can take. This volume of original essays presents the state of the art in AI, surveying the foundations of the discipline, major theories of mental architecture, the principal areas of research, and extensions of AI such as artificial life. With a focus on theory rather than technical and applied issues, the volume will be valuable not only to people working in AI, but also to those in other disciplines wanting an authoritative and up-to-date introduction to the field.
Semantic Engines: An Introduction to Mind Design, John C. Haugeland; Computer Science as Empirical Inquiry: Symbols and Search, Allen Newell and Herbert A. Simon; Complexity and the Study of Artificial and Human Intelligence, Zenon Pylyshyn; A Framework for Representing Knowledge, Marvin Minsky; Artificial Intelligence – A Personal View, David Marr; Artificial Intelligence Meets Natural Stupidity, Drew McDermott; From Micro-Worlds to Knowledge Representation: AI at an Impasse, Hubert L. Dreyfus; Reductionism and the Nature of Psychology, Hilary Putnam; Intentional Systems, Daniel C. Dennett; The Nature and Plausibility of Cognitivism, John C. Haugeland; Minds, Brains, and Programs, John R. Searle; Methodological Solipsism Considered as a Research Strategy in Cognitive Psychology, Jerry A. Fodor; The Material Mind, Donald Davidson.
This paper argues that the Value Sensitive Design (VSD) methodology provides a principled approach to embedding common values into AI systems both early and throughout the design process. To do so, it draws on an important case study: the evidence and final report of the UK Select Committee on Artificial Intelligence. This empirical investigation shows that the different and often disparate stakeholder groups that are implicated in AI design and use share some common values that can be used to further strengthen design coordination efforts. VSD is shown both to be able to distill these common values and to provide a framework for stakeholder coordination.
The increasing use of Artificial Intelligence for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from hopes of fully informed and objectively taken decisions to fears for the destruction of mankind. To prevent the negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on how transparency can secure high-quality, fair, and reliable decisions, far less attention has been devoted to the role of transparency when it comes to how the general public comes to perceive AI decision-making as legitimate and worthy of acceptance. Since relying on coercion is not only normatively problematic but also costly and highly inefficient, perceived legitimacy is fundamental to the democratic system. This paper discusses how transparency in and about AI decision-making can affect the public’s perception of the legitimacy of decisions and decision-makers, and produces a framework for analyzing these questions. We argue that a limited form of transparency that focuses on providing justifications for decisions has the potential to provide sufficient ground for perceived legitimacy without producing the harms full transparency would bring.
The development of artificial intelligence in medicine raises fundamental ethical issues. As one example, AI systems in the field of mental health successfully detect signs of mental disorders...
Today’s capitalist economy has forced the human person to seek work as a means of survival, by so doing stripping from work its value as a good intrinsically connected to the nature and dignity of the human person. Modern science and technology have been fundamental tools in the advancement and sustainability of this orientation of the capitalist economy. Hence, the advancement of research in Artificial Intelligence (AI) is not only redefining the meaning of work but, more so, questioning the metaphysical notion of the human person and the theological notion of work as an intrinsic part of the selfhood and dignity of the human person. This work aims at exposing the possible implications of the development of Artificial Intelligence for the selfhood and dignity of the human person in respect of the social teachings of the Catholic Church. This work shall be an interplay of the philosophy and theology of Artificial Intelligence.
For the philosopher, the most critical and fundamental question in the project of Artificial Intelligence is the question of intelligence or cognition in general. From the beginning of research into “thinking machines”, or Artificial Intelligence as the field later became known, the key question has been: what makes a thing intelligent, or what constitutes intelligence? Since intelligence is a fundamental activity of the mind, the question has been: is the mind a computer, or is the computer a mind? Many philosophers who have engaged, and are engaging, with these problematics do so from the perspective of modern and contemporary philosophy of mind, consciousness and language. The objective of this work is to interrogate the question of “intelligence” in Artificial Intelligence from the perspective of the Scholastics’ notion of Intellectus.
Two leaders in the field offer a compelling analysis of the current state of the art and reveal the steps we must take to achieve a truly robust artificial intelligence. Despite the hype surrounding AI, creating an intelligence that rivals or exceeds human levels is far more complicated than we have been led to believe. Professors Gary Marcus and Ernest Davis have spent their careers at the forefront of AI research and have witnessed some of the greatest milestones in the field, but they argue that a computer beating a human in Jeopardy! does not signal that we are on the doorstep of fully autonomous cars or superintelligent machines. The achievements in the field thus far have occurred in closed systems with fixed sets of rules, and these approaches are too narrow to achieve genuine intelligence. The real world, in contrast, is wildly complex and open-ended. How can we bridge this gap? What will the consequences be when we do? Taking inspiration from the human mind, Marcus and Davis explain what we need to advance AI to the next level, and suggest that if we are wise along the way, we won't need to worry about a future of machine overlords. If we focus on endowing machines with common sense and deep understanding, rather than simply focusing on statistical analysis and gathering ever larger collections of data, we will be able to create an AI we can trust: in our homes, our cars, and our doctors' offices. Rebooting AI provides a lucid, clear-eyed assessment of the current science and offers an inspiring vision of how a new generation of AI can make our lives better.
The moral enhancement of human beings is a constant theme in the history of humanity. Today, faced with the threats of a new, globalised world, concern over this matter is more pressing. For this reason, the use of biotechnology to make human beings more moral has been considered. However, this approach is dangerous and very controversial. The purpose of this article is to argue that the use of another new technology, AI, would be preferable to achieve this goal. Whilst several proposals have been made on how to use AI for moral enhancement, we present an alternative that we argue to be superior to other proposals that have been developed.
AI, especially in the case of Deepfakes, has the capacity to undermine our confidence in the original, genuine, authentic nature of what we see and hear. And yet digital technologies, in the form of databases and other detection tools, also make it easier to spot forgeries and to establish the authenticity of a work. Using the notion of ectypes, this paper discusses current conceptions of authenticity and reproduction and examines how, in the future, these might be adapted for use in the digital sphere.
An argument with roots in ancient Greek philosophy claims that only humans are capable of a certain class of thought termed conceptual, as opposed to perceptual thought, which is common to humans, the higher animals, and some machines. We outline the most detailed modern version of this argument due to Mortimer Adler, who in the 1960s argued for the uniqueness of the human power of conceptual thought. He also admitted that if conceptual thought were ever manifested by machines, such an achievement would contradict his conclusion. We revisit Adler’s criterion in the light of the past five decades of artificial-intelligence research, and refine it in view of the classical definitions of perceptual and conceptual thought. We then examine two well-publicized examples of creative works produced by AI systems and show that evidence for conceptual thought appears to be lacking in them. Although clearer evidence for conceptual thought on the part of AI systems may arise in the near future, especially if the global neuronal workspace theory of consciousness prevails over its rival, integrated information theory, the question of whether AI systems can engage in conceptual thought appears to be still open.
In July 2017, China’s State Council released the country’s strategy for developing artificial intelligence, entitled ‘New Generation Artificial Intelligence Development Plan’. This strategy outlined China’s aims to become the world leader in AI by 2030, to monetise AI into a trillion-yuan industry, and to emerge as the driving force in defining ethical norms and standards for AI. Several reports have analysed specific aspects of China’s AI policies or have assessed the country’s technical capabilities. Instead, in this article, we focus on the socio-political background and policy debates that are shaping China’s AI strategy. In particular, we analyse the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use. By focusing on the policy backdrop, we seek to provide a more comprehensive and critical understanding of China’s AI policy by bringing together debates and analyses of a wide array of policy documents.
Some recent developments in Artificial Intelligence—especially the use of machine learning systems, trained on big data sets and deployed in socially significant and ethically weighty contexts—have led to a number of calls for “transparency”. This paper explores the epistemological and ethical dimensions of that concept, as well as surveying and taxonomising the variety of ways in which it has been invoked in recent discussions. Whilst “outward” forms of transparency may be straightforwardly achieved, what I call “functional” transparency about the inner workings of a system is, in many cases, much harder to attain. In those situations, I argue that contestability may be a possible, acceptable, and useful alternative so that even if we cannot understand how a system came up with a particular output, we at least have the means to challenge it.
Purpose There is a significant amount of research into the ethical consequences of artificial intelligence (AI). This is reflected by many outputs across academia, policy and the media. Many of these outputs aim to provide guidance to particular stakeholder groups. It has recently been shown that there is a large degree of convergence in terms of the principles upon which these guidance documents are based. Despite this convergence, it is not always clear how these principles are to be translated into practice. The purpose of this paper is to clearly illustrate this convergence and the prescriptive recommendations that such documents entail. Design/methodology/approach In this paper, the authors move beyond the high-level ethical principles that are common across the AI ethics guidance literature and provide a description of the normative content that is covered by these principles. The outcome is a comprehensive compilation of normative requirements arising from existing guidance documents. This is not only required for a deeper theoretical understanding of AI ethics discussions but also for the creation of practical and implementable guidance for developers and users of AI. Findings In this paper, the authors provide a detailed explanation of the normative implications of existing AI ethics guidelines, directed towards developers and organisational users of AI. The authors believe that the paper provides the most comprehensive account of ethical requirements in AI currently available, which is of interest not only to the research and policy communities engaged in the topic but also to the user communities that require guidance when developing or deploying AI systems. Originality/value The authors believe that they have compiled the most comprehensive document collecting existing guidance, which can guide practical action and will hopefully also support the consolidation of the guidelines landscape. The authors’ findings should also be of academic interest and inspire philosophical research on the consistency and justification of the various normative statements that can be found in the literature.
This paper critically assesses the possibility of moral enhancement with ambient intelligence technologies and artificial intelligence presented in Savulescu and Maslen (2015). The main problem with their proposal is that it is not robust enough to play a normative role in users’ behavior. A more promising approach, and the one presented in the paper, relies on an artificial moral reasoning engine, which is designed to present its users with moral arguments grounded in first-order normative theories, such as Kantianism or utilitarianism, that reason-responsive people can be persuaded by. This proposal can play a normative role and it is also a more promising avenue towards moral enhancement. It is more promising because such a system can be designed to take advantage of the sometimes undue trust that people put in automated technologies. We could therefore expect a well-designed moral reasoner system to be able to persuade people that may not be persuaded by similar arguments from other people. So, all things considered, there is hope in artificial intelligence for moral enhancement, but not in artificial intelligence that relies solely on ambient intelligence technologies.
Insofar as artificial intelligence is to be used to guide automated systems in their interactions with humans, the dominant view is probably that it would be appropriate to programme them to maximize (expected) utility. According to utilitarianism, which is a characteristically western conception of moral reason, machines should be programmed to do whatever they could in a given circumstance to produce in the long run the highest net balance of what is good for human beings minus what is bad for them. In this essay, I appeal to values that are characteristically African – but that will resonate with those from a variety of moral-philosophical traditions, particularly in the Global South – to cast doubt on a utilitarian approach. Drawing on norms salient in sub-Saharan ethics, I provide four reasons for thinking it would be immoral for automated systems governed by artificial intelligence to maximize utility. In catchphrases, I argue that utilitarianism cannot make adequate sense of the ways that human dignity, group rights, family first, and (surprisingly) self-sacrifice should determine the behaviour of smart machines.
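To make the decision rule under dispute concrete, here is a minimal sketch of expected-utility maximization in Python. Every action, probability, and utility figure below is a hypothetical placeholder invented for illustration; nothing here is drawn from the essay itself.

```python
# Minimal sketch of the expected-utility decision rule discussed above.
# All actions, probabilities, and utilities are hypothetical placeholders.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Each candidate action maps to its possible outcomes: (probability, net utility).
actions = {
    "swerve":   [(0.9, -1.0), (0.1, -10.0)],
    "brake":    [(0.7,  0.0), (0.3,  -5.0)],
    "continue": [(0.5,  2.0), (0.5, -20.0)],
}

# A purely utilitarian controller picks whichever action maximizes
# expected utility; this is the rule the essay calls into question.
best = max(actions, key=lambda a: expected_utility(actions[a]))
print(best, expected_utility(actions[best]))  # brake -1.5
```

The essay's objection targets precisely the final maximization step: values such as dignity or group rights may forbid an action even when it scores highest on summed expected utility.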
Artificial intelligences and robots increasingly mimic human mental powers and intelligent behaviour. However, many authors claim that ascribing human mental powers to them is both conceptually mistaken and morally dangerous. This article defends the view that artificial intelligences can have human-like mental powers, by claiming that both human and artificial minds can be seen as extended minds – along the lines of Chalmers and Clark’s view of mind and cognition. The main idea of this article is that the Extended Mind Model is independently plausible and can easily be extended to artificial intelligences, providing a solid base for concluding that artificial intelligences possess minds. This may warrant viewing them as morally responsible agents. Keywords: Artificial Intelligence; Mind; Moral Responsibility; Extended Cognition.
This paper looks at philosophical questions that arise in the context of AI alignment. It defends three propositions. First, normative and technical aspects of the AI alignment problem are interrelated, creating space for productive engagement between people working in both domains. Second, it is important to be clear about the goal of alignment. There are significant differences between AI that aligns with instructions, intentions, revealed preferences, ideal preferences, interests and values. A principle-based approach to AI alignment, which combines these elements in a systematic way, has considerable advantages in this context. Third, the central challenge for theorists is not to identify ‘true’ moral principles for AI; rather, it is to identify fair principles for alignment that receive reflective endorsement despite widespread variation in people’s moral beliefs. The final part of the paper explores three ways in which fair principles for AI alignment could potentially be identified.
There is, in some quarters, concern about high-level machine intelligence and superintelligent AI coming up in a few decades, bringing with it significant risks for humanity. In other quarters, these issues are ignored or considered science fiction. We wanted to clarify what the distribution of opinions actually is, what probability the best experts currently assign to high-level machine intelligence coming up within a particular time-frame, which risks they see with that development, and how fast they see these developing. We thus designed a brief questionnaire and distributed it to four groups of experts in 2012/2013. The median estimate of respondents was for a one in two chance that high-level machine intelligence will be developed around 2040-2050, rising to a nine in ten chance by 2075. Experts expect that systems will move on to superintelligence in less than 30 years thereafter. They estimate the chance is about one in three that this development turns out to be ‘bad’ or ‘extremely bad’ for humanity.
In this article we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combating the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data- and computation-intensive AI systems. We assess the carbon footprint of AI research, and the factors that influence AI’s greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant, and highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based and effective governance to become a winning strategy. We conclude by identifying the European Union as being especially well placed to play a leading role in this policy response and provide 13 recommendations that are designed to identify and harness the opportunities of AI for combating climate change, while reducing its impact on the environment.
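Since the article's second set of problems turns on the GHG cost of training AI systems, a back-of-the-envelope version of the standard emissions estimate may help: emissions scale with hardware power draw, training time, datacentre overhead (PUE), and the carbon intensity of the electricity grid. The sketch below is a rough illustration only; all numeric values are hypothetical placeholders, not figures from the article.

```python
# Back-of-the-envelope CO2e estimate for a single training run.
# All input values are hypothetical placeholders, not figures from the article.

def training_emissions_kg(gpu_power_kw, n_gpus, hours, pue, grid_kgco2e_per_kwh):
    """kg CO2e = energy drawn (kWh) x datacentre overhead (PUE) x grid carbon intensity."""
    energy_kwh = gpu_power_kw * n_gpus * hours
    return energy_kwh * pue * grid_kgco2e_per_kwh

# Example: 8 GPUs drawing 0.3 kW each for two weeks, with a PUE of 1.5
# and a grid intensity of 0.4 kg CO2e per kWh.
print(training_emissions_kg(0.3, 8, 24 * 14, 1.5, 0.4))  # ~483.8 kg CO2e
```

The same arithmetic makes the article's trade-off visible: whether such a run is worth its emissions depends on the efficiency gains the resulting system delivers.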
Artificial intelligence (AI) and robotics are digital technologies that will have a significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. After the introduction to the field (§1), the main themes (§2) of this article are: ethical issues that arise with AI systems as objects, i.e., tools made and used by humans, including issues of privacy (§2.1) and manipulation (§2.2), opacity (§2.3) and bias (§2.4), human-robot interaction (§2.5), employment (§2.6), and the effects of autonomy (§2.7); then AI systems as subjects, i.e., ethics for the AI systems themselves in machine ethics (§2.8) and artificial moral agency (§2.9); and finally the problem of a possible future AI superintelligence leading to a “singularity” (§2.10). We close with a remark on the vision of AI (§3). For each section within these themes, we provide a general explanation of the ethical issues, outline existing positions and arguments, analyse how these play out with current technologies, and finally consider what policy consequences may be drawn.
An exploration of the important philosophical issues and concerns related to artificial intelligence. The book focuses on the philosophical, rather than the technical or technological, aspects of artificial intelligence.