  • The Case for Ethical Autonomy in Unmanned Systems. Ronald C. Arkin - 2010 - Journal of Military Ethics 9 (4):332-341.
    The underlying thesis of the research in ethical autonomy for lethal autonomous unmanned systems is that they will potentially be capable of performing more ethically on the battlefield than are human soldiers. In this article, this hypothesis is supported by ongoing and foreseen technological advances and, perhaps equally important, by an assessment of the fundamental ability of human warfighters in today's battlespace. If this goal of better-than-human performance is achieved, even if still imperfect, it can result in a reduction in (...)
  • Artificial morality: Top-down, bottom-up, and hybrid approaches. [REVIEW] Colin Allen, Iva Smit & Wendell Wallach - 2005 - Ethics and Information Technology 7 (3):149-155.
    A principal goal of the discipline of artificial morality is to design artificial agents to act as if they are moral agents. Intermediate goals of artificial morality are directed at building into AI systems sensitivity to the values, ethics, and legality of activities. The development of an effective foundation for the field of artificial morality involves exploring the technological and philosophical issues involved in making computers into explicit moral reasoners. The goal of this paper is to discuss strategies for implementing (...)
  • Killer robots. Robert Sparrow - 2007 - Journal of Applied Philosophy 24 (1):62-77.
    The United States Army’s Future Combat Systems Project, which aims to manufacture a “robot army” to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of a decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally (...)
  • Learning robots interacting with humans: from epistemic risk to responsibility. [REVIEW] Matteo Santoro, Dante Marino & Guglielmo Tamburrini - 2008 - AI and Society 22 (3):301-314.
    The import of computational learning theories and techniques on the ethics of human-robot interaction is explored in the context of recent developments of personal robotics. An epistemological reflection enables one to isolate a variety of background hypotheses that are needed to achieve successful learning from experience in autonomous personal robots. The conjectural character of these background hypotheses brings out theoretical and practical limitations in our ability to predict and control the behaviour of learning robots in their interactions with humans. Responsibility (...)
  • Ethical regulations on robotics in Europe. Michael Nagenborg, Rafael Capurro, Jutta Weber & Christoph Pingel - 2008 - AI and Society 22 (3):349-366.
    There are only a few ethical regulations that deal explicitly with robots, in contrast to the vast number of regulations that may be applied. We will focus on ethical issues with regard to “responsibility and autonomous robots”, “machines as a replacement for humans”, and “tele-presence”. Furthermore, we will examine examples from special fields of application (medicine and healthcare, armed forces, and entertainment). We do not claim to present a complete list of ethical issues nor of regulations in the field of (...)
  • The responsibility gap: Ascribing responsibility for the actions of learning automata. [REVIEW] Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation in which the manufacturer/operator is in principle no longer capable of predicting the future behaviour of the machine, and thus cannot be held morally responsible or liable for it. Society must decide between not using this kind of machine any more (which is not a (...)
  • Computer systems: Moral entities but not moral agents. [REVIEW] Deborah G. Johnson - 2006 - Ethics and Information Technology 8 (4):195-204.
    After discussing the distinction between artifacts and natural entities, and the distinction between artifacts and technology, the conditions of the traditional account of moral agency are identified. While computer system behavior meets four of the five conditions, it does not and cannot meet a key condition. Computer systems do not have mental states, and even if they could be construed as having mental states, they do not have intendings to act, which arise from an agent’s freedom. On the other hand, (...)
  • On the moral responsibility of military robots. Thomas Hellström - 2013 - Ethics and Information Technology 15 (2):99-107.
    This article discusses mechanisms and principles for the assignment of moral responsibility to intelligent robots, with special focus on military robots. We introduce the new concept of autonomous power and use it to identify the type of robots that call for moral consideration. It is furthermore argued that autonomous power, and in particular the ability to learn, is decisive for the assignment of moral responsibility to robots. As technological development will lead to robots with increasing autonomous power, we should be (...)
  • The Social Construction of Technological Systems: New Directions in Sociology and History of Technology (25th Anniversary Edition with new preface). Wiebe E. Bijker, Thomas P. Hughes & Trevor Pinch (eds.) - 1987 - MIT Press.
  • The Ethics of Information Technology and Business. Richard T. De George - 2002 - Malden, MA: Wiley-Blackwell.
    This is the first study of business ethics to take into consideration the plethora of issues raised by the Information Age. Explores a wide range of topics including marketing, privacy, and the protection of personal information; employees and communication privacy; intellectual property issues; the ethical issues of e-business; Internet-related business ethics problems; and the ethical dimension of information technology on society. Uncovers (...)
  • The Social Shaping of Technology. Donald A. MacKenzie & Judy Wajcman - 1999 - Guilford Press.
    Technological change is often seen as something that follows its own logic -- something we may welcome, or about which we may protest, but which we are unable to alter fundamentally. This reader challenges that assumption and its distinguished contributors demonstrate that technology is affected at a fundamental level by the social context in which it develops. General arguments are introduced about the relation of technology to society and different types of technology are examined: the technology of production: domestic and (...)
  • Machine Ethics. Michael Anderson & Susan Leigh Anderson (eds.) - 2011 - Cambridge University Press.
    The essays in this volume represent the first steps by philosophers and artificial intelligence researchers toward explaining why it is necessary to add an ...
  • When is a robot a moral agent? John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that it is not necessary for a robot to have personhood in order to be a moral agent. Three requirements for a robot to be seen as a moral agent are detailed. The first is achieved when the robot is significantly autonomous from any programmers or operators of the machine. The second is when (...)
  • The ethics of robot servitude. Stephen Petersen - 2007 - Journal of Experimental and Theoretical Artificial Intelligence 19 (1):43-54.
    Assume we could someday create artificial creatures with intelligence comparable to our own. Could it be ethical to use them as unpaid labor? There is very little philosophical literature on this topic, but the consensus so far has been that such robot servitude would merely be a new form of slavery. Against this consensus I defend the permissibility of robot servitude, and in particular the controversial case of designing robots so that they want to serve human ends. A typical objection to (...)
  • Machine Metaethics. Susan Leigh Anderson - 2011 - In M. Anderson & S. Anderson (eds.), Machine Ethics. Cambridge University Press.
    Once people understand that machine ethics is concerned with how intelligent machines should behave, they often maintain that Isaac Asimov has already given us an ideal set of rules for such machines. They have in mind Asimov's three laws of robotics: 1. A robot may not injure a human being, or, through inaction, allow a human (...)