  • Robots: Ethical by Design. Gordana Dodig Crnkovic & Baran Çürüklü - 2012 - Ethics and Information Technology 14 (1):61-71.
    Among ethicists and engineers within robotics there is an ongoing discussion as to whether ethical robots are possible or even desirable. We answer both of these questions in the positive, based on an extensive literature study of existing arguments. Our contribution consists in bringing together and reinterpreting pieces of information from a variety of sources. One of the conclusions drawn is that artifactual morality must come in degrees and depend on the level of agency, autonomy and intelligence of the machine. (...)
  • Embedding Values in Artificial Intelligence (AI) Systems. Ibo van de Poel - 2020 - Minds and Machines 30 (3):385-409.
    Organizations such as the EU High-Level Expert Group on AI and the IEEE have recently formulated ethical principles and (moral) values that should be adhered to in the design and deployment of artificial intelligence (AI). These include respect for autonomy, non-maleficence, fairness, transparency, explainability, and accountability. But how can we ensure and verify that an AI system actually respects these values? To help answer this question, I propose an account for determining when an AI system can be said to embody (...)
  • ISPs & Rowdy Web Sites Before the Law: Should We Change Today’s Safe Harbour Clauses? Ugo Pagallo - 2011 - Philosophy and Technology 24 (4):419-436.
    The paper examines today’s debate on the new responsibilities of Internet service providers in connection with legal problems concerning jurisdiction, data processing, people’s privacy and education. The focus is foremost on the default rules and safe harbour clauses for ISPs liability, set up by the US and European legal systems. This framework is deepened in light of the different functions of the services provided on the Internet so as to highlight multiple levels of control over information and, correspondingly, different types (...)
  • Robots of Just War: A Legal Perspective. Ugo Pagallo - 2011 - Philosophy and Technology 24 (3):307-323.
    In order to present a hopefully comprehensive framework of what is the stake of the growing use of robot soldiers, the paper focuses on: the different impact of robots on legal systems, e.g., contractual obligations and tort liability; how robots affect crucial notions as causality, predictability and human culpability in criminal law and, finally, specific hypotheses of robots employed in “just wars.” By using the traditional distinction between causes that make wars just and conduct admissible on the battlefield, the aim (...)
  • Do Others Mind? Moral Agents Without Mental States. Fabio Tollon - 2021 - South African Journal of Philosophy 40 (2):182-194.
    As technology advances and artificial agents (AAs) become increasingly autonomous, start to embody morally relevant values and act on those values, there arises the issue of whether these entities should be considered artificial moral agents (AMAs). There are two main ways in which one could argue for AMA: using intentional criteria or using functional criteria. In this article, I provide an exposition and critique of “intentional” accounts of AMA. These accounts claim that moral agency should only be accorded to entities (...)
  • Moral Judgments in the Age of Artificial Intelligence. Yulia W. Sullivan & Samuel Fosso Wamba - 2022 - Journal of Business Ethics 178 (4):917-943.
    The current research aims to answer the following question: “who will be held responsible for harm involving an artificial intelligence system?” Drawing upon the literature on moral judgments, we assert that when people perceive an AI system’s action as causing harm to others, they will assign blame to different entity groups involved in an AI’s life cycle, including the company, the developer team, and even the AI system itself, especially when such harm is perceived to be intentional. Drawing upon the (...)
  • From Machine Ethics to Computational Ethics. Samuel T. Segun - 2021 - AI and Society 36 (1):263-276.
    Research into the ethics of artificial intelligence is often categorized into two subareas—robot ethics and machine ethics. Many of the definitions and classifications of the subject matter of these subfields, as found in the literature, are conflated, which I seek to rectify. In this essay, I infer that using the term ‘machine ethics’ is too broad and glosses over issues that the term computational ethics best describes. I show that the subject of inquiry of computational ethics is of great value (...)
  • Autonomous Weapons and Distributed Responsibility. Marcus Schulzke - 2013 - Philosophy and Technology 26 (2):203-219.
    The possibility that autonomous weapons will be deployed on the battlefields of the future raises the challenge of determining who can be held responsible for how these weapons act. Robert Sparrow has argued that it would be impossible to attribute responsibility for autonomous robots' actions to their creators, their commanders, or the robots themselves. This essay reaches a much different conclusion. It argues that the problem of determining responsibility for autonomous robots can be solved by addressing it within the context (...)
  • Human Goals Are Constitutive of Agency in Artificial Intelligence. Elena Popa - 2021 - Philosophy and Technology 34 (4):1731-1750.
    The question whether AI systems have agency is gaining increasing importance in discussions of responsibility for AI behavior. This paper argues that an approach to artificial agency needs to be teleological, and consider the role of human goals in particular if it is to adequately address the issue of responsibility. I will defend the view that while AI systems can be viewed as autonomous in the sense of identifying or pursuing goals, they rely on human goals and other values incorporated (...)
  • Cracking Down on Autonomy: Three Challenges to Design in IT Law. [REVIEW] U. Pagallo - 2012 - Ethics and Information Technology 14 (4):319-328.
    The paper examines how technology challenges conventional borders of national legal systems, as shown by cases that scholars address as a part of their everyday work in the fields of information technology (IT)-Law, i.e., computer crimes, data protection, digital copyright, and so forth. Information on the internet has in fact a ubiquitous nature that transcends political borders and questions the notion of the law as made of commands enforced through physical sanctions. Whereas many of today’s impasses on jurisdiction, international conflicts (...)
  • Negotiating Autonomy and Responsibility in Military Robots. Merel Noorman & Deborah G. Johnson - 2014 - Ethics and Information Technology 16 (1):51-62.
    Central to the ethical concerns raised by the prospect of increasingly autonomous military robots are issues of responsibility. In this paper we examine different conceptions of autonomy within the discourse on these robots to bring into focus what is at stake when it comes to the autonomous nature of military robots. We argue that due to the metaphorical use of the concept of autonomy, the autonomy of robots is often treated as a black box in discussions about autonomous military robots. (...)
  • Moral Difference Between Humans and Robots: Paternalism and Human-Relative Reason. Tsung-Hsing Ho - 2022 - AI and Society 37 (4):1533-1543.
    According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call it the _equivalence thesis_). However, such moral differences exist: based on Strawson’s account of participant reactive attitude and Scanlon’s relational account of blame, I argue that a distinct (...)
  • Artificial Moral Agents Are Infeasible with Foreseeable Technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and remain an unlikely prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behaviour and humans will retain full responsibility.
  • Moral Responsibility for Computing Artifacts: The Rules and Issues of Trust. F. S. Grodzinsky, K. Miller & M. J. Wolf - 2012 - ACM SIGCAS Computers and Society 42 (2):15-25.
    "The Rules" are found in a collaborative document that states principles for responsibility when a computer artifact is designed, developed and deployed into a sociotechnical system. At this writing, over 50 people from nine countries have signed onto The Rules. Unlike codes of ethics, The Rules are not tied to any organization, and computer users as well as computing professionals are invited to sign onto The Rules. The emphasis in The Rules is that both users and professionals have responsibilities in (...)
  • Developing Artificial Agents Worthy of Trust: “Would You Buy a Used Car from This Artificial Agent?” [REVIEW] F. S. Grodzinsky, K. W. Miller & M. J. Wolf - 2011 - Ethics and Information Technology 13 (1):17-27.
    There is a growing literature on the concept of e-trust and on the feasibility and advisability of “trusting” artificial agents. In this paper we present an object-oriented model for thinking about trust in both face-to-face and digitally mediated environments. We review important recent contributions to this literature regarding e-trust in conjunction with presenting our model. We identify three important types of trust interactions and examine trust from the perspective of a software developer. Too often, the primary focus of research in (...)
  • Artificial Moral Agents: Moral Mentors or Sensible Tools? Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
    The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to do this I take (...)
  • Liability for Robots: Sidestepping the Gaps. Bartek Chomanski - 2021 - Philosophy and Technology 34 (4):1013-1032.
    In this paper, I outline a proposal for assigning liability for autonomous machines modeled on the doctrine of respondeat superior. I argue that the machines’ users’ or designers’ liability should be determined by the manner in which the machines are created, which, in turn, should be responsive to considerations of the machines’ welfare interests. This approach has the twin virtues of promoting socially beneficial design of machines, and of taking their potential moral patiency seriously. I then argue for abandoning the (...)
  • A Normative Approach to Artificial Moral Agency. Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
    This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and (...)
  • Skepticism and Information. Eric T. Kerr & Duncan Pritchard - 2012 - In Hilmi Demir (ed.), Philosophy of Engineering and Technology Volume 8. Springer.
    Philosophers of information, according to Luciano Floridi (The philosophy of information. Oxford University Press, Oxford, 2010, p 32), study how information should be “adequately created, processed, managed, and used.” A small number of epistemologists have employed the concept of information as a cornerstone of their theoretical framework. How this concept can be used to make sense of seemingly intractable epistemological problems, however, has not been widely explored. This paper examines Fred Dretske’s information-based epistemology, in particular his response to radical epistemological (...)
  • Rethinking the Moral Anthropomorphism of Robots (反思機器人的道德擬人主義). Tsung-Hsing Ho - 2020 - EurAmerica 50 (2):179-205.
    If robots are to develop as science fiction imagines and work automatically without human supervision, we must ensure that they do not act in morally wrong ways. On a behaviourist conception of moral agency, if a robot's outward behaviour is morally on a par with a human's, the robot can be regarded as a moral agent. From this, the moral anthropomorphism of robots follows naturally: whatever moral rules apply to humans apply to robots. I argue against moral anthropomorphism. Drawing on Strawson's insights into interpersonal relationships and reactive attitudes, and taking paternalistic action as my example, I argue that because robots lack personhood and cannot participate in interpersonal relationships, they should be subject to stricter constraints than humans where paternalistic action is concerned.