  • A Vindication of the Rights of Machines. David J. Gunkel - 2014 - Philosophy and Technology 27 (1):113-132.
    This essay responds to the machine question in the affirmative, arguing that artifacts, like robots, AI, and other autonomous systems, can no longer be legitimately excluded from moral consideration. The demonstration of this thesis proceeds in four parts or movements. The first and second parts approach the subject by investigating the two constitutive components of the ethical relationship—moral agency and patiency. In the process, they each demonstrate failure. This occurs not because the machine is somehow unable to achieve what is (...)
  • Kantian Ethics, Dignity and Perfection. Paul Formosa - 2017 - Cambridge: Cambridge University Press.
    In this volume Paul Formosa sets out a novel approach to Kantian ethics as an ethics of dignity by focusing on the Formula of Humanity as a normative principle distinct from the Formula of Universal Law. By situating the Kantian conception of dignity within the wider literature on dignity, he develops an important distinction between status dignity, which all rational agents have, and achievement dignity, which all rational agents should aspire to. He then explores constructivist and realist views on the (...)
  • Robot minds and human ethics: the need for a comprehensive model of moral decision making. [REVIEW] Wendell Wallach - 2010 - Ethics and Information Technology 12 (3):243-250.
    Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, (...)
  • Robot Morals and Human Ethics. Wendell Wallach - 2010 - Teaching Ethics 11 (1):87-92.
    Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, (...)
  • Critiquing the Reasons for Making Artificial Moral Agents. Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)
  • Moral Deskilling and Upskilling in a New Machine Age: Reflections on the Ambiguous Future of Character. Shannon Vallor - 2015 - Philosophy and Technology 28 (1):107-124.
    This paper explores the ambiguous impact of new information and communications technologies on the cultivation of moral skills in human beings. Just as twentieth century advances in machine automation resulted in the economic devaluation of practical knowledge and skillsets historically cultivated by machinists, artisans, and other highly trained workers, while also driving the cultivation of new skills in a variety of engineering and white collar occupations, ICTs are also recognized as potential causes of a complex pattern of economic deskilling, (...)
  • Ethics and consciousness in artificial agents. Steve Torrance - 2008 - AI and Society 22 (4):495-521.
    In what ways should we include future humanoid robots, and other kinds of artificial agents, in our moral universe? We consider the Organic view, which maintains that artificial humanoid agents, based on current computational technologies, could not count as full-blooded moral agents, nor as appropriate targets of intrinsic moral concern. On this view, artificial humanoids lack certain key properties of biological organisms, which preclude them from having full moral status. Computationally controlled systems, however advanced in their cognitive or informational capacities, (...)
  • Out of character: on the creation of virtuous machines. [REVIEW] Ryan Tonkens - 2012 - Ethics and Information Technology 14 (2):137-149.
    The emerging discipline of Machine Ethics is concerned with creating autonomous artificial moral agents that perform ethically significant actions out in the world. Recently, Wallach and Allen (Moral machines: teaching robots right from wrong, Oxford University Press, Oxford, 2009) and others have argued that a virtue-based moral framework is a promising tool for meeting this end. However, even if we could program autonomous machines to follow a virtue-based moral framework, there are certain pressing ethical issues that need to be taken (...)
  • A challenge for machine ethics. Ryan Tonkens - 2009 - Minds and Machines 19 (3):421-438.
    That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: To identify an ethical framework that is both (...)
  • Morality Play: A Model for Developing Games of Moral Expertise. Dan Staines, Paul Formosa & Malcolm Ryan - 2019 - Games and Culture 14 (4):410-429.
    According to cognitive psychologists, moral decision-making is a dual-process phenomenon involving two types of cognitive processes: explicit reasoning and implicit intuition. Moral development involves training and integrating both types of cognitive processes through a mix of instruction, practice, and reflection. Serious games are an ideal platform for this kind of moral training, as they provide safe spaces for exploring difficult moral problems and practicing the skills necessary to resolve them. In this article, we present Morality Play, a model for the (...)
  • Robots and Respect: Assessing the Case Against Autonomous Weapon Systems. Robert Sparrow - 2016 - Ethics and International Affairs 30 (1):93-116.
    There is increasing speculation within military and policy circles that the future of armed conflict is likely to include extensive deployment of robots designed to identify targets and destroy them without the direct oversight of a human operator. My aim in this paper is twofold. First, I will argue that the ethical case for allowing autonomous targeting, at least in specific restricted domains, is stronger than critics have acknowledged. Second, I will attempt to uncover, explicate, and defend the intuition that (...)
  • “Trust but Verify”: The Difficulty of Trusting Autonomous Weapons Systems. Heather M. Roff & David Danks - 2018 - Journal of Military Ethics 17 (1):2-20.
    Autonomous weapons systems pose many challenges in complex battlefield environments. Previous discussions of them have largely focused on technological or policy issues. In contrast, we focus here on the challenge of trust in an AWS. One type of human trust depends only on judgments about the predictability or reliability of the trustee, and so is suitable for all manner of artifacts. However, AWSs that are worthy of the descriptor “autonomous” will not exhibit the required strong predictability in the complex, changing (...)
  • AI and the path to envelopment: knowledge as a first step towards the responsible regulation and use of AI-powered machines. Scott Robbins - 2020 - AI and Society 35 (2):391-400.
    With Artificial Intelligence entering our lives in novel ways—both known and unknown to us—there is both the enhancement of existing ethical issues associated with AI as well as the rise of new ethical issues. There is much focus on opening up the ‘black box’ of modern machine-learning algorithms to understand the reasoning behind their decisions—especially morally salient decisions. However, some applications of AI which are no doubt beneficial to society rely upon these black boxes. Rather than requiring algorithms to be (...)
  • Constructions of Reason: Explorations of Kant's Practical Philosophy. Onora O'Neill - 1989 - New York: Cambridge University Press.
    Two centuries after they were published, Kant's ethical writings are as much admired and imitated as they have ever been, yet serious and long-standing accusations of internal incoherence remain unresolved. Onora O'Neill traces the alleged incoherences to attempts to assimilate Kant's ethical writings to modern conceptions of rationality, action and rights. When the temptation to assimilate is resisted, a strikingly different and more cohesive account of reason and morality emerges. Kant offers a "constructivist" vindication of reason and a moral vision (...)
  • The ethics of crashes with self‐driving cars: A roadmap, I. Sven Nyholm - 2018 - Philosophy Compass 13 (7):e12507.
    Self‐driving cars hold out the promise of being much safer than regular cars. Yet they cannot be 100% safe. Accordingly, they need to be programmed for how to deal with crash scenarios. Should cars be programmed to always prioritize their owners, to minimize harm, or to respond to crashes on the basis of some other type of principle? The article first discusses whether everyone should have the same “ethics settings.” Next, the oft‐made analogy with the trolley problem is examined. Then (...)
  • This “Ethical Trap” Is for Roboticists, Not Robots: On the Issue of Artificial Agent Ethical Decision-Making. Keith W. Miller, Marty J. Wolf & Frances Grodzinsky - 2017 - Science and Engineering Ethics 23 (2):389-401.
    In this paper we address the question of when a researcher is justified in describing his or her artificial agent as demonstrating ethical decision-making. The paper is motivated by the amount of research being done that attempts to imbue artificial agents with expertise in ethical decision-making. It seems clear that computing systems make decisions, in that they make choices between different options; and there is scholarship in philosophy that addresses the distinction between ethical decision-making and general decision-making. Essentially, the qualitative (...)
  • AI armageddon and the three laws of robotics. Lee McCauley - 2007 - Ethics and Information Technology 9 (2):153-164.
    After 50 years, the fields of artificial intelligence and robotics capture the imagination of the general public while, at the same time, engendering a great deal of fear and skepticism. Isaac Asimov recognized this deep-seated misconception of technology and created the Three Laws of Robotics. The first part of this paper examines the underlying fear of intelligent robots, revisits Asimov’s response, and reports on some current opinions on the use of the Three Laws by practitioners. Finally, an argument against robotic (...)
  • Artificial agents among us: Should we recognize them as agents proper? Migle Laukyte - 2017 - Ethics and Information Technology 19 (1):1-17.
    In this paper, I discuss whether in a society where the use of artificial agents is pervasive, these agents should be recognized as having rights like those we accord to group agents. This kind of recognition I understand to be at once social and legal, and I argue that in order for an artificial agent to be so recognized, it will need to meet the same basic conditions in light of which group agents are granted such recognition. I then explore (...)
  • Never Mind the Trolley: The Ethics of Autonomous Vehicles in Mundane Situations. Johannes Himmelreich - 2018 - Ethical Theory and Moral Practice 21 (3):669-684.
    Trolley cases are widely considered central to the ethics of autonomous vehicles. We caution against this by identifying four problems. Trolley cases, given technical limitations, rest on assumptions that are in tension with one another. Furthermore, trolley cases illuminate only a limited range of ethical issues insofar as they cohere with a certain design framework. Furthermore, trolley cases seem to demand a moral answer when a political answer is called for. Finally, trolley cases might be epistemically problematic in several ways. (...)
  • Artificial agency, consciousness, and the criteria for moral agency: What properties must an artificial agent have to be a moral agent? [REVIEW] Kenneth Einar Himma - 2009 - Ethics and Information Technology 11 (1):19-29.
    In this essay, I describe and explain the standard accounts of agency, natural agency, artificial agency, and moral agency, as well as articulate what are widely taken to be the criteria for moral agency, supporting the contention that this is the standard account with citations from such widely used and respected professional resources as the Stanford Encyclopedia of Philosophy, Routledge Encyclopedia of Philosophy, and the Internet Encyclopedia of Philosophy. I then flesh out the implications of some of these well-settled theories (...)
  • Responsibility for Crashes of Autonomous Vehicles: An Ethical Analysis. Alexander Hevelke & Julian Nida-Rümelin - 2015 - Science and Engineering Ethics 21 (3):619-630.
    A number of companies including Google and BMW are currently working on the development of autonomous cars. But if fully autonomous cars are going to drive on our roads, it must be decided who is to be held responsible in case of accidents. This involves not only legal questions, but also moral ones. The first question discussed is whether we should try to design the tort liability for car manufacturers in a way that will help along the development and improvement (...)
  • Mind the gap: responsible robotics and the problem of responsibility. David J. Gunkel - 2020 - Ethics and Information Technology 22 (4):307-320.
    The task of this essay is to respond to the question concerning robots and responsibility—to answer for the way that we understand, debate, and decide who or what is able to answer for decisions and actions undertaken by increasingly interactive, autonomous, and sociable mechanisms. The analysis proceeds through three steps or movements. It begins by critically examining the instrumental theory of technology, which determines the way one typically deals with and responds to the question of responsibility when it involves technology. (...)
  • Autonomous Cars: In Favor of a Mandatory Ethics Setting. Jan Gogoll & Julian F. Müller - 2017 - Science and Engineering Ethics 23 (3):681-700.
    The recent progress in the development of autonomous cars has seen ethical questions come to the forefront. In particular, life and death decisions regarding the behavior of self-driving cars in trolley dilemma situations are attracting widespread interest in the recent debate. In this essay we want to ask whether we should implement a mandatory ethics setting for the whole of society or, whether every driver should have the choice to select his own personal ethics setting. While the consensus view seems (...)
  • On the morality of artificial agents. Luciano Floridi & J. W. Sanders - 2004 - Minds and Machines 14 (3):349-379.
    Artificial agents (AAs), particularly but not only those in Cyberspace, extend the class of entities that can be involved in moral situations. For they can be conceived of as moral patients (as entities that can be acted upon for good or evil) and also as moral agents (as entities that can perform actions, again for good or evil). In this paper, we clarify the concept of agent and go on to separate the concerns of morality and responsibility of agents (most (...)
  • AI assisted ethics. Amitai Etzioni & Oren Etzioni - 2016 - Ethics and Information Technology 18 (2):149-156.
    The growing number of ‘smart’ instruments, those equipped with AI, has raised concerns because these instruments make autonomous decisions; that is, they act beyond the guidelines provided them by programmers. Hence, the question the makers and users of smart instruments face is how to ensure that these instruments will not engage in unethical conduct. The article suggests that to proceed we need a new kind of AI program—oversight programs—that will monitor, audit, and hold operational AI programs accountable.
  • Robots, Law and the Retribution Gap. John Danaher - 2016 - Ethics and Information Technology 18 (4):299-309.
    We are living through an era of increased robotisation. Some authors have already begun to explore the impact of this robotisation on legal rules and practice. In doing so, many highlight potential liability gaps that might arise through robot misbehaviour. Although these gaps are interesting and socially significant, they do not exhaust the possible gaps that might be created by increased robotisation. In this article, I make the case for one of those alternative gaps: the retribution gap. This gap arises (...)
  • Patiency is not a virtue: the design of intelligent systems and systems of ethics. Joanna J. Bryson - 2018 - Ethics and Information Technology 20 (1):15-26.
    The question of whether AI systems such as robots can or should be afforded moral agency or patiency is not one amenable either to discovery or simple reasoning, because we as societies constantly reconstruct our artefacts, including our ethical systems. Consequently, the place of AI systems in society is a matter of normative, not descriptive ethics. Here I start from a functionalist assumption, that ethics is the set of behaviour that maintains a society. This assumption allows me to exploit the (...)
  • Embedded ethics: some technical and ethical challenges. Vincent Bonnemains, Claire Saurel & Catherine Tessier - 2018 - Ethics and Information Technology 20 (1):41-58.
    This paper pertains to research works aiming at linking ethics and automated reasoning in autonomous machines. It focuses on a formal approach that is intended to be the basis of an artificial agent’s reasoning that could be considered by a human observer as an ethical reasoning. The approach includes some formal tools to describe a situation and models of ethical principles that are designed to automatically compute a judgement on possible decisions that can be made in a given situation and (...)
  • Artificial morality: Top-down, bottom-up, and hybrid approaches. [REVIEW] Colin Allen, Iva Smit & Wendell Wallach - 2005 - Ethics and Information Technology 7 (3):149-155.
    A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies for implementing (...)
  • Computational Modeling in Cognitive Science: A Manifesto for Change. Caspar Addyman & Robert M. French - 2012 - Topics in Cognitive Science 4 (3):332-341.
    Computational modeling has long been one of the traditional pillars of cognitive science. Unfortunately, the computer models of cognition being developed today have not kept up with the enormous changes that have taken place in computer technology and, especially, in human-computer interfaces. For all intents and purposes, modeling is still done today as it was 25, or even 35, years ago. Everyone still programs in his or her own favorite programming language, source code is rarely made available, accessibility of models (...)
  • Superintelligence: paths, dangers, strategies. Nick Bostrom - 2014 - Oxford University Press.
    The human brain has some capabilities that the brains of other animals lack. It is to these distinctive capabilities that our species owes its dominant position. Other animals have stronger muscles or sharper claws, but we have cleverer brains. If machine brains one day come to surpass human brains in general intelligence, then this new superintelligence could become very powerful. As the fate of the gorillas now depends more on us humans than on the gorillas themselves, so the fate of (...)
  • Machine Ethics. Michael Anderson & Susan Leigh Anderson (eds.) - 2011 - Cambridge University Press.
    The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ...
  • How Machines Can Advance Ethics. Susan Leigh Anderson & Michael Anderson - 2009 - Philosophy Now 72:17-19.
  • Four Kinds of Ethical Robots. James Moor - 2009 - Philosophy Now 72:12-14.
  • What should we want from a robot ethic? Peter M. Asaro - 2006 - International Review of Information Ethics 6 (12):9-16.
    There are at least three things we might mean by "ethics in robotics": the ethical systems built into robots, the ethics of people who design and use robots, and the ethics of how people treat robots. This paper argues that the best approach to robot ethics is one which addresses all three of these, and to do this it ought to consider robots as socio-technical systems. By so doing, it is possible to think of a continuum of agency that lies (...)
  • Designing People to Serve. Steve Petersen - 2011 - In Patrick Lin, George Bekey & Keith Abney (eds.), Robot Ethics. MIT Press.
    I argue that, contrary to intuition, it would be both possible and permissible to design people - whether artificial or organic - who by their nature desire to do tasks we find unpleasant.
  • The singularity: A philosophical analysis. David J. Chalmers - 2010 - Journal of Consciousness Studies 17 (9-10).
    What happens when machines become more intelligent than humans? One view is that this event will be followed by an explosion to ever-greater levels of intelligence, as each generation of machines creates more intelligent machines in turn. This intelligence explosion is now often known as the “singularity”. The basic argument here was set out by the statistician I.J. Good in his 1965 article “Speculations Concerning the First Ultraintelligent Machine”: Let an ultraintelligent machine be defined as a machine that can far (...)
  • Can machines be people? Reflections on the Turing triage test. Robert Sparrow - 2012 - In Patrick Lin, Keith Abney & George Bekey (eds.), Robot Ethics: The Ethical and Social Implications of Robotics. MIT Press. pp. 301-315.
    In, “The Turing Triage Test”, published in Ethics and Information Technology, I described a hypothetical scenario, modelled on the famous Turing Test for machine intelligence, which might serve as means of testing whether or not machines had achieved the moral standing of people. In this paper, I: (1) explain why the Turing Triage Test is of vital interest in the context of contemporary debates about the ethics of AI; (2) address some issues that complexify the application of this test; and, (...)
  • The Nature, Importance, and Difficulty of Machine Ethics. James Moor - 2006 - IEEE Intelligent Systems 21:18-21.
  • When AI meets PC: exploring the implications of workplace social robots and a human-robot psychological contract. Sarah Bankins & Paul Formosa - 2019 - European Journal of Work and Organizational Psychology.
    The psychological contract refers to the implicit and subjective beliefs regarding a reciprocal exchange agreement, predominantly examined between employees and employers. While contemporary contract research is investigating a wider range of exchanges employees may hold, such as with team members and clients, it remains silent on a rapidly emerging form of workplace relationship: employees’ increasing engagement with technically, socially, and emotionally sophisticated forms of artificially intelligent (AI) technologies. In this paper we examine social robots (also termed humanoid robots) as likely (...)
  • Prospects for a Kantian machine. Thomas M. Powers - 2006 - IEEE Intelligent Systems 21 (4):46-51.
    This paper is reprinted in the book Machine Ethics, eds. M. Anderson and S. Anderson, Cambridge University Press, 2011.