Citations of: Machine Ethics. Cambridge Univ. Press (2011)

  • Information technology and moral values.John Sullins - forthcoming - Stanford Encyclopedia of Philosophy.
    An encyclopedia entry on the moral impacts that arise when information technologies are used to record, communicate, and organize information, including the moral challenges of information technology; specific moral and cultural challenges such as online games, virtual worlds, malware, and the technology transparency paradox; ethical issues in AI and robotics; and the acceleration of change in technologies. It concludes with a look at information technology as a model for moral change, moral systems, and moral agents.
  • Moralische Roboter: Humanistisch-philosophische Grundlagen und didaktische Anwendungen.André Schmiljun & Iga Maria Schmiljun - 2024 - transcript Verlag.
    Do robots need moral competence? The answer is yes. On the one hand, robots need moral competence to make sense of our world of rules, regulations, and values; on the other hand, they need it to be accepted by those around them. But how can moral competence be implemented in robots? What philosophical challenges should we expect? And how can we prepare ourselves and our children for robots that will one day possess moral competence? From a humanistic-philosophical perspective, André and Iga Maria Schmiljun sketch initial answers to these questions and develop (...)
  • Embedding Values in Artificial Intelligence (AI) Systems.Ibo van de Poel - 2020 - Minds and Machines 30 (3):385-409.
    Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody (...)
  • On the computational complexity of ethics: moral tractability for minds and machines.Jakob Stenseke - 2024 - Artificial Intelligence Review 57 (105):90.
    Why should moral philosophers, moral psychologists, and machine ethicists care about computational complexity? Debates on whether artificial intelligence (AI) can or should be used to solve problems in ethical domains have mainly been driven by what AI can or cannot do in terms of human capacities. In this paper, we tackle the problem from the other end by exploring what kind of moral machines are possible based on what computational systems can or cannot do. To do so, we analyze normative (...)
  • The Heart of an AI: Agency, Moral Sense, and Friendship.Evandro Barbosa & Thaís Alves Costa - 2024 - Unisinos Journal of Philosophy 25 (1):01-16.
    The article presents an analysis centered on the emotional lapses of artificial intelligence (AI) and the influence of these lapses on two critical aspects. Firstly, the article explores the ontological impact of emotional lapses, elucidating how they hinder AI’s capacity to develop a moral sense. The absence of a moral emotion, such as sympathy, creates a barrier for machines to grasp and ethically respond to specific situations. This raises fundamental questions about machines’ ability to act as moral agents in the (...)
  • On the Matter of Robot Minds.Brian P. McLaughlin & David Rose - forthcoming - Oxford Studies in Experimental Philosophy.
    The view that phenomenally conscious robots are on the horizon often rests on a certain philosophical view about consciousness, one we call “nomological behaviorism.” The view entails that, as a matter of nomological necessity, if a robot had exactly the same patterns of dispositions to peripheral behavior as a phenomenally conscious being, then the robot would be phenomenally conscious; indeed it would have all and only the states of phenomenal consciousness that the phenomenally conscious being in question has. We experimentally (...)
  • Why robots should not be treated like animals.Deborah G. Johnson & Mario Verdicchio - 2018 - Ethics and Information Technology 20 (4):291-301.
    Responsible Robotics is about developing robots in ways that take their social implications into account, which includes conceptually framing robots and their role in the world accurately. We are now in the process of incorporating robots into our world and we are trying to figure out what to make of them and where to put them in our conceptual, physical, economic, legal, emotional and moral world. How humans think about robots, especially humanoid social robots, which elicit complex and sometimes disconcerting (...)
  • Ethics of Artificial Intelligence and Robotics.Vincent C. Müller - 2012 - In Peter Adamson (ed.), Stanford Encyclopedia of Philosophy. pp. 1-70.
    Artificial intelligence (AI) and robotics are digital technologies that will have significant impact on the development of humanity in the near future. They have raised fundamental questions about what we should do with these systems, what the systems themselves should do, what risks they involve, and how we can control these. - After the Introduction to the field (§1), the main themes (§2) of this article are: Ethical issues that arise with AI systems as objects, i.e., tools made and used (...)
  • Nonhuman Moral Agency: A Practice-Focused Exploration of Moral Agency in Nonhuman Animals and Artificial Intelligence.Dorna Behdadi - 2023 - Dissertation, University of Gothenburg
    Can nonhuman animals and artificial intelligence (AI) entities be attributed moral agency? The general assumption in the philosophical literature is that moral agency applies exclusively to humans since they alone possess free will or capacities required for deliberate reflection. Consequently, only humans have been taken to be eligible for ascriptions of moral responsibility in terms of, for instance, blame or praise, moral criticism, or attributions of vice and virtue. Animals and machines may cause harm, but they cannot be appropriately ascribed (...)
  • A Conceptual and Computational Model of Moral Decision Making in Human and Artificial Agents.Wendell Wallach, Stan Franklin & Colin Allen - 2010 - Topics in Cognitive Science 2 (3):454-485.
    Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational (...)
  • How to feel about emotionalized artificial intelligence? When robot pets, holograms, and chatbots become affective partners.Eva Weber-Guskar - 2021 - Ethics and Information Technology 23 (4):601-610.
    Interactions between humans and machines that include artificial intelligence are increasingly common in nearly all areas of life. Meanwhile, AI-products are increasingly endowed with emotional characteristics. That is, they are designed and trained to elicit emotions in humans, to recognize human emotions and, sometimes, to simulate emotions. The introduction of such systems in our lives is met with some criticism. There is a rather strong intuition that there is something wrong about getting attached to a machine, about having certain emotions (...)
  • Robot minds and human ethics: the need for a comprehensive model of moral decision making. [REVIEW]Wendell Wallach - 2010 - Ethics and Information Technology 12 (3):243-250.
    Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, (...)
  • When Doctors and AI Interact: on Human Responsibility for Artificial Risks.Mario Verdicchio & Andrea Perin - 2022 - Philosophy and Technology 35 (1):1-28.
    A discussion concerning whether to conceive Artificial Intelligence systems as responsible moral entities, also known as “artificial moral agents”, has been going on for some time. In this regard, we argue that the notion of “moral agency” is to be attributed only to humans based on their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful control and due diligence and argue against fully automated systems in medicine. With (...)
  • Service robots, care ethics, and design.A. van Wynsberghe - 2016 - Ethics and Information Technology 18 (4):311-321.
    It should not be a surprise in the near future to encounter either a personal or a professional service robot in our homes and/or our work places: according to the International Federation for Robots, there will be approx 35 million service robots at work by 2018. Given that individuals will interact and even cooperate with these service robots, their design and development demand ethical attention. With this in mind I suggest the use of an approach for incorporating ethics into the (...)
  • Drones in humanitarian contexts, robot ethics, and the human–robot interaction.Aimee van Wynsberghe & Tina Comes - 2020 - Ethics and Information Technology 22 (1):43-53.
    There are two dominant trends in the humanitarian care of 2019: the ‘technologizing of care’ and the centrality of the humanitarian principles. The concern, however, is that these two trends may conflict with one another. Faced with the growing use of drones in the humanitarian space there is need for ethical reflection to understand if this technology undermines humanitarian care. In the humanitarian space, few agree over the value of drone deployment; one school of thought believes drones can provide a (...)
  • Critiquing the Reasons for Making Artificial Moral Agents.Aimee van Wynsberghe & Scott Robbins - 2018 - Science and Engineering Ethics:1-17.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)
  • Artificial wisdom: a philosophical framework.Cheng-Hung Tsai - 2020 - AI and Society:937-944.
    Human excellences such as intelligence, morality, and consciousness are investigated by philosophers as well as artificial intelligence researchers. One excellence that has not been widely discussed by AI researchers is practical wisdom, the highest human excellence, or the highest, seventh, stage in Dreyfus’s model of skill acquisition. In this paper, I explain why artificial wisdom matters and how artificial wisdom is possible (in principle and in practice) by responding to two philosophical challenges to building artificial wisdom systems. The result is (...)
  • Ethics and consciousness in artificial agents.Steve Torrance - 2008 - AI and Society 22 (4):495-521.
    In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, which preclude them from having full moral status. Computationally controlled systems, however advanced in their cognitive or informational capacities, (...)
  • Should autonomous robots be pacifists?Ryan Tonkens - 2013 - Ethics and Information Technology 15 (2):109-123.
    Currently, the central questions in the philosophical debate surrounding the ethics of automated warfare are (1) Is the development and use of autonomous lethal robotic systems for military purposes consistent with (existing) international laws of war and received just war theory?; and (2) does the creation and use of such machines improve the moral caliber of modern warfare? However, both of these approaches have significant problems, and thus we need to start exploring alternative approaches. In this paper, I ask whether (...)
  • A challenge for machine ethics.Ryan Tonkens - 2009 - Minds and Machines 19 (3):421-438.
    That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: To identify an ethical framework that is both (...)
  • Levels of Trust in the Context of Machine Ethics.Herman T. Tavani - 2015 - Philosophy and Technology 28 (1):75-90.
    Are trust relationships involving humans and artificial agents possible? This controversial question has become a hotly debated topic in the emerging field of machine ethics. Employing a model of trust advanced by Buechner and Tavani (2011: 39–51), I argue that the “short answer” to this question is yes. However, I also argue that a more complete and nuanced answer will require us to articulate the various levels of trust that are also possible in environments comprising both human agents and AAs. (...)
  • On the complexity of input/output logic.Xin Sun & Livio Robaldo - 2017 - Journal of Applied Logic 25:69-88.
  • To Each Technology Its Own Ethics: The Problem of Ethical Proliferation.Henrik Skaug Sætra & John Danaher - 2022 - Philosophy and Technology 35 (4):1-26.
    Ethics plays a key role in the normative analysis of the impacts of technology. We know that computers in general and the processing of data, the use of artificial intelligence, and the combination of computers and/or artificial intelligence with robotics are all associated with ethically relevant implications for individuals, groups, and society. In this article, we argue that while all technologies are ethically relevant, there is no need to create a separate ‘ethics of X’ or ‘X ethics’ for each and (...)
  • Why machines cannot be moral.Robert Sparrow - 2021 - AI and Society (3):685-693.
    The fact that real-world decisions made by artificial intelligences (AI) are often ethically loaded has led a number of authorities to advocate the development of “moral machines”. I argue that the project of building “ethics” “into” machines presupposes a flawed understanding of the nature of ethics. Drawing on the work of the Australian philosopher, Raimond Gaita, I argue that ethical dilemmas are problems for particular people and not (just) problems for everyone who faces a similar situation. Moreover, the force of (...)
  • ETHICA EX MACHINA. Exploring artificial moral agency or the possibility of computable ethics.Rodrigo Sanz - 2020 - Zeitschrift Für Ethik Und Moralphilosophie 3 (2):223-239.
    Since the automation revolution of our technological era, diverse machines or robots have gradually begun to reconfigure our lives. With this expansion, it seems that those machines are now faced with a new challenge: more autonomous decision-making involving life or death consequences. This paper explores the philosophical possibility of artificial moral agency through the following question: could a machine obtain the cognitive capacities needed to be a moral agent? In this regard, I propose to expose, under a normative-cognitive perspective, the (...)
  • The social impact of intelligent artefacts.Richard S. Rosenberg - 2008 - AI and Society 22 (3):367-383.
    The simplistic assumption that replacing humans by intelligent artifacts or introducing such artifacts, or robots, into all aspects of human society will necessarily benefit society at large must be continually re-evaluated. Clearly, contributing factors will involve concerns of efficiency, the role of work as a component in human self-worth, the distribution of wealth generated by advanced technologies, the potential for growing divisions in society resulting from gross inequities in income and from the loss of work as a central fact of (...)
  • On the problem of making autonomous vehicles conform to traffic law.Henry Prakken - 2017 - Artificial Intelligence and Law 25 (3):341-363.
    Autonomous vehicles are one of the most spectacular recent developments of Artificial Intelligence. Among the problems that still need to be solved before they can fully autonomously participate in traffic is the one of making their behaviour conform to the traffic laws. This paper discusses this problem by way of a case study of Dutch traffic law. First it is discussed to what extent Dutch traffic law exhibits features that are traditionally said to pose challenges for AI & Law models, (...)
  • Formal models of the scientific community and the value-ladenness of science.Vincenzo Politi - 2021 - European Journal for Philosophy of Science 11 (4):1-23.
    In the past few years, social epistemologists have developed several formal models of the social organisation of science. While their robustness and representational adequacy has been analysed at length, the function of these models has begun to be discussed in more general terms only recently. In this article, I will interpret many of the current formal models of the scientific community as representing the latest development of what I will call the ‘Kuhnian project’. These models share with Kuhn a number (...)
  • Synthetic Deliberation: Can Emulated Imagination Enhance Machine Ethics?Robert Pinka - 2020 - Minds and Machines 31 (1):121-136.
    Artificial intelligence is becoming increasingly entwined with our daily lives: AIs work as assistants through our phones, control our vehicles, and navigate our vacuums. As these objects become more complex and work within our societies in ways that affect our well-being, there is a growing demand for machine ethics—we want a guarantee that the various automata in our lives will behave in a way that minimizes the amount of harm they create. Though many technologies exist as moral artifacts, the development (...)
  • Formalizing preference utilitarianism in physical world models.Caspar Oesterheld - 2016 - Synthese 193 (9).
    Most ethical work is done at a low level of formality. This makes practical moral questions inaccessible to formal and natural sciences and can lead to misunderstandings in ethical discussion. In this paper, we use Bayesian inference to introduce a formalization of preference utilitarianism in physical world models, specifically cellular automata. Even though our formalization is not immediately applicable, it is a first step in providing ethics and ultimately the question of how to “make the world better” with a formal (...)
  • Artificial Intelligence as a Means to Moral Enhancement.Michał Klincewicz - 2016 - Studies in Logic, Grammar and Rhetoric 48 (1):171-187.
    This paper critically assesses the possibility of moral enhancement with ambient intelligence technologies and artificial intelligence presented in Savulescu and Maslen (2015). The main problem with their proposal is that it is not robust enough to play a normative role in users’ behavior. A more promising approach, and the one presented in the paper, relies on an artificial moral reasoning engine, which is designed to present its users with moral arguments grounded in first-order normative theories, such as Kantianism or utilitarianism, (...)
  • Automatisierte Ungleichheit: Ethik der Künstlichen Intelligenz in der biopolitischen Wende des Digitalen Kapitalismus.Rainer Mühlhoff - 2020 - Deutsche Zeitschrift für Philosophie 68 (6):867-890.
    This paper sets out the notion of a current “biopolitical turn of digital capitalism” resulting from the increasing deployment of AI and data analytics technologies in the public sector. With applications of AI-based automated decisions currently shifting from the domain of business to customer (B2C) relations to government to citizen (G2C) relations, a new form of governance arises that operates through “algorithmic social selection”. Moreover, the paper describes how the ethics of AI is at an impasse concerning these larger societal (...)
  • Computer Says I Don’t Know: An Empirical Approach to Capture Moral Uncertainty in Artificial Intelligence.Andreia Martinho, Maarten Kroesen & Caspar Chorus - 2021 - Minds and Machines 31 (2):215-237.
    As AI Systems become increasingly autonomous, they are expected to engage in decision-making processes that have moral implications. In this research we integrate theoretical and empirical lines of thought to address the matters of moral reasoning and moral uncertainty in AI Systems. We reconceptualize the metanormative framework for decision-making under moral uncertainty and we operationalize it through a latent class choice model. The core idea being that moral heterogeneity in society can be codified in terms of a small number of (...)
  • Integrating robot ethics and machine morality: the study and design of moral competence in robots.Bertram F. Malle - 2016 - Ethics and Information Technology 18 (4):243-256.
    Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like and discuss a number (...)
  • Where Bioethics Meets Machine Ethics.Anna C. F. Lewis - 2020 - American Journal of Bioethics 20 (11):22-24.
    Char et al. question the extent and degree to which machine learning applications should be treated as exceptional by ethicists. It is clear that of the suite of ethical issues raised by mac...
  • A Rawlsian algorithm for autonomous vehicles.Derek Leben - 2017 - Ethics and Information Technology 19 (2):107-115.
    Autonomous vehicles must be programmed with procedures for dealing with trolley-style dilemmas where actions result in harm to either pedestrians or passengers. This paper outlines a Rawlsian algorithm as an alternative to the Utilitarian solution. The algorithm will gather the vehicle’s estimation of probability of survival for each person in each action, then calculate which action a self-interested person would agree to if he or she were in an original bargaining position of fairness. I will employ Rawls’ assumption that the (...)
  • Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates?Francisco Lara - 2021 - Science and Engineering Ethics 27 (4):1-27.
    Can Artificial Intelligence be more effective than human instruction for the moral enhancement of people? The author argues that it only would be if the use of this technology were aimed at increasing the individual's capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these (...)
  • From a variety of ethics to the integrity and congruence of research on biodiversity conservation.Claire Lajaunie - 2018 - Asian Bioethics Review 10 (4):313-332.
    This article aims to find the elements that are required for a common ethical approach that is suitable for the different perspectives adopted in integrative biodiversity conservation research. A general reflection on the integrity of research is a priority worldwide, with a common aim to promote good research practice. Beyond the relationship between researcher and research subject, the integrity of research is considered in a broader perspective which entails scientific integrity towards society. In research involving a variety of disciplines and (...)
  • Phronetic Ethics in Social Robotics: A New Approach to Building Ethical Robots.Roman Krzanowski & Paweł Polak - 2020 - Studies in Logic, Grammar and Rhetoric 63 (1):165-183.
    Social robots are autonomous robots, or Artificial Moral Agents (AMAs), that will interact with, respect, and embody human ethical values. However, the conceptual and practical problems of building such systems have not yet been resolved, and they pose a significant challenge for computational modeling. It seems that the lack of success in constructing ethical robots is due, ceteris paribus, to the conceptual and algorithmic limitations of their current design. This paper proposes a new approach for developing ethical capacities in (...)
  • Technology with No Human Responsibility?Deborah G. Johnson - 2015 - Journal of Business Ethics 127 (4):707-715.
  • The machine’s role in human’s service automation and knowledge sharing.Mihály Héder - 2014 - AI and Society 29 (2):185-192.
    The possibility of interacting with remote services in natural language opens up new opportunities for sharing knowledge and for automating services. Easy-to-use, text-based interfaces might provide more democratic access to legal information, government services, and everyday knowledge as well. However, the methodology of engineering robust natural language interfaces is very diverse, and widely deployed solutions are still yet to come. The main contribution is a detailed problem analysis on the theoretical level, which reveals that a text-based interface is best understood (...)
  • The Ethics of AI Ethics: An Evaluation of Guidelines.Thilo Hagendorff - 2020 - Minds and Machines 30 (1):99-120.
    Current advances in research, development and application of artificial intelligence systems have yielded a far-reaching discourse on AI ethics. In consequence, a number of ethics guidelines have been released in recent years. These guidelines comprise normative principles and recommendations aimed to harness the “disruptive” potentials of new AI technologies. Designed as a semi-systematic evaluation, this paper analyzes and compares 22 guidelines, highlighting overlaps but also omissions. As a result, I give a detailed overview of the field of AI ethics. Finally, (...)
  • The other question: can and should robots have rights?David J. Gunkel - 2018 - Ethics and Information Technology 20 (2):87-99.
    This essay addresses the other side of the robot ethics debate, taking up and investigating the question “Can and should robots have rights?” The examination of this subject proceeds by way of three steps or movements. We begin by looking at and analyzing the form of the question itself. There is an important philosophical difference between the two modal verbs that organize the inquiry—can and should. This difference has considerable history behind it that influences what is asked about and how. (...)
  • Mind the gap: responsible robotics and the problem of responsibility.David J. Gunkel - 2020 - Ethics and Information Technology 22 (4):307-320.
    The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (...)
  • Introduction: Machine Ethics and the Ethics of Building Intelligent Machines. [REVIEW]Marcello Guarini - 2013 - Topoi 32 (2):213-215.
  • What we owe to decision-subjects: beyond transparency and explanation in automated decision-making.David Gray Grant, Jeff Behrends & John Basl - 2023 - Philosophical Studies 2003:1-31.
    The ongoing explosion of interest in artificial intelligence is fueled in part by recently developed techniques in machine learning. Those techniques allow automated systems to process huge amounts of data, utilizing mathematical methods that depart from traditional statistical approaches, and resulting in impressive advancements in our ability to make predictions and uncover correlations across a host of interesting domains. But as is now widely discussed, the way that those systems arrive at their outputs is often opaque, even to the experts (...)
  • What do we owe to intelligent robots?John-Stewart Gordon - 2020 - AI and Society 35 (1):209-223.
    Great technological advances in such areas as computer science, artificial intelligence, and robotics have brought the advent of artificially intelligent robots within our reach within the next century. Against this background, the interdisciplinary field of machine ethics is concerned with the vital issue of making robots “ethical” and examining the moral status of autonomous robots that are capable of moral reasoning and decision-making. The existence of such robots will deeply reshape our socio-political life. This paper focuses on whether such highly (...)
  • Review of Artificial Intelligence: Reflections in Philosophy, Theology and the Social Sciences by Benedikt P. Göcke and Astrid Rosenthal-von der Pütten. [REVIEW]John-Stewart Gordon - 2021 - AI and Society 36 (2):655-659.
  • Building Moral Robots: Ethical Pitfalls and Challenges.John-Stewart Gordon - 2020 - Science and Engineering Ethics 26 (1):141-157.
    This paper examines the ethical pitfalls and challenges that non-ethicists, such as researchers and programmers in the fields of computer science, artificial intelligence and robotics, face when building moral machines. Whether ethics is “computable” depends on how programmers understand ethics in the first place and on the adequacy of their understanding of the ethical problems and methodological challenges in these fields. Researchers and programmers face at least two types of problems due to their general lack of ethical knowledge or expertise. (...)
  • Artificial moral and legal personhood.John-Stewart Gordon - forthcoming - AI and Society:1-15.
    This paper considers the hotly debated issue of whether one should grant moral and legal personhood to intelligent robots once they have achieved a certain standard of sophistication based on such criteria as rationality, autonomy, and social relations. The starting point for the analysis is the European Parliament’s resolution on Civil Law Rules on Robotics and its recommendation that robots be granted legal status and electronic personhood. The resolution is discussed against the background of the so-called Robotics Open Letter, which (...)