  • The Kant-Inspired Indirect Argument for Non-Sentient Robot Rights. Tobias Flattery - 2023 - AI and Ethics.
    Some argue that robots could never be sentient, and thus could never have intrinsic moral status. Others disagree, believing that robots indeed will be sentient and thus will have moral status. But a third group thinks that, even if robots could never have moral status, we still have a strong moral reason to treat some robots as if they do. Drawing on a Kantian argument for indirect animal rights, a number of technology ethicists contend that our treatment of anthropomorphic or (...)
  • Artificial agents and the expanding ethical circle. Steve Torrance - 2013 - AI and Society 28 (4):399-414.
    I discuss the realizability and the ethical ramifications of Machine Ethics, from a number of different perspectives: I label these the anthropocentric, infocentric, biocentric and ecocentric perspectives. Each of these approaches takes a characteristic view of the position of humanity relative to other aspects of the designed and the natural worlds—or relative to the possibilities of ‘extra-human’ extensions to the ethical community. In the course of the discussion, a number of key issues emerge concerning the relation between technology and ethics, (...)
  • The Morality of Artificial Friends in Ishiguro’s Klara and the Sun. Jakob Stenseke - 2022 - Journal of Science Fiction and Philosophy 5.
    Can artificial entities be worthy of moral considerations? Can they be artificial moral agents (AMAs), capable of telling the difference between good and evil? In this essay, I explore both questions—i.e., whether and to what extent artificial entities can have a moral status (“the machine question”) and moral agency (“the AMA question”)—in light of Kazuo Ishiguro’s 2021 novel Klara and the Sun. I do so by juxtaposing two prominent approaches to machine morality that are central to the novel: the (1) (...)
  • A Vindication of the Rights of Machines. David J. Gunkel - 2014 - Philosophy and Technology 27 (1):113-132.
    This essay responds to the machine question in the affirmative, arguing that artifacts, like robots, AI, and other autonomous systems, can no longer be legitimately excluded from moral consideration. The demonstration of this thesis proceeds in four parts or movements. The first and second parts approach the subject by investigating the two constitutive components of the ethical relationship—moral agency and patiency. In the process, they each demonstrate failure. This occurs not because the machine is somehow unable to achieve what is (...)
  • Socially robotic: making useless machines. Ceyda Yolgormez & Joseph Thibodeau - 2022 - AI and Society 37 (2):565-578.
    As robots increasingly become part of our everyday lives, questions arise with regards to how to approach them and how to understand them in social contexts. The Western history of human–robot relations revolves around competition and control, which restricts our ability to relate to machines in other ways. In this study, we take a relational approach to explore different manners of socializing with robots, especially those that exceed an instrumental approach. The nonhuman subjects of this study are built to explore (...)
  • Robot minds and human ethics: the need for a comprehensive model of moral decision making. [REVIEW] Wendell Wallach - 2010 - Ethics and Information Technology 12 (3):243-250.
    Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, or a moral grammar, (...)
  • Consciousness and ethics: Artificially conscious moral agents. Wendell Wallach, Colin Allen & Stan Franklin - 2011 - International Journal of Machine Consciousness 3 (1):177-192.
  • Ethical aspects of AI robots for agri-food; a relational approach based on four case studies. Simone van der Burg, Else Giesbers, Marc-Jeroen Bogaardt, Wijbrand Ouweltjes & Kees Lokhorst - forthcoming - AI and Society:1-15.
    In recent years, the development of AI robots for agriculture, livestock farming and the food processing industry has been increasing rapidly. These robots are expected to help produce and deliver food more efficiently for a growing human population, but they also raise societal and ethical questions. As the questions raised by these AI robots in society have rarely been empirically explored, we engaged in four case studies focussing on four types of AI robots for agri-food ‘in the making’: manure collectors, (...)
  • A challenge for machine ethics. Ryan Tonkens - 2009 - Minds and Machines 19 (3):421-438.
    That the successful development of fully autonomous artificial moral agents (AMAs) is imminent is becoming the received view within artificial intelligence research and robotics. The discipline of Machine Ethics, whose mandate is to create such ethical robots, is consequently gaining momentum. Although it is often asked whether a given moral framework can be implemented into machines, it is never asked whether it should be. This paper articulates a pressing challenge for Machine Ethics: To identify an ethical framework that is both (...)
  • The artificial view: toward a non-anthropocentric account of moral patiency. Fabio Tollon - 2020 - Ethics and Information Technology 23 (2):147-155.
    In this paper I provide an exposition and critique of the Organic View of Ethical Status, as outlined by Torrance (2008). A key presupposition of this view is that only moral patients can be moral agents. It is claimed that because artificial agents lack sentience, they cannot be proper subjects of moral concern (i.e. moral patients). This account of moral standing in principle excludes machines from participating in our moral universe. I will argue that the Organic View operationalises anthropocentric intuitions (...)
  • On and beyond artifacts in moral relations: accounting for power and violence in Coeckelbergh’s social relationism. Fabio Tollon & Kiasha Naidoo - 2023 - AI and Society 38 (6):2609-2618.
    The ubiquity of technology in our lives and its culmination in artificial intelligence raises questions about its role in our moral considerations. In this paper, we address a moral concern in relation to technological systems given their deep integration in our lives. Coeckelbergh develops a social-relational account, suggesting that it can point us toward a dynamic, historicised evaluation of moral concern. While agreeing with Coeckelbergh’s move away from grounding moral concern in the ontological properties of entities, we suggest that it (...)
  • AI ethics and the banality of evil. Payman Tajalli - 2021 - Ethics and Information Technology 23 (3):447-454.
    In this paper, I draw on Hannah Arendt’s notion of ‘banality of evil’ to argue that as long as AI systems are designed to follow codes of ethics or particular normative ethical theories chosen by us and programmed in them, they are Eichmanns destined to commit evil. Since intelligence alone is not sufficient for ethical decision making, rather than strive to program AI to determine the right ethical decision based on some ethical theory or criteria, AI should be concerned with (...)
  • Moral Judgments in the Age of Artificial Intelligence. Yulia W. Sullivan & Samuel Fosso Wamba - 2022 - Journal of Business Ethics 178 (4):917-943.
    The current research aims to answer the following question: “who will be held responsible for harm involving an artificial intelligence system?” Drawing upon the literature on moral judgments, we assert that when people perceive an AI system’s action as causing harm to others, they will assign blame to different entity groups involved in an AI’s life cycle, including the company, the developer team, and even the AI system itself, especially when such harm is perceived to be intentional. Drawing upon the (...)
  • Conscious machines: Memory, melody and muscular imagination. [REVIEW] Susan A. J. Stuart - 2010 - Phenomenology and the Cognitive Sciences 9 (1):37-51.
    A great deal of effort has been, and continues to be, devoted to developing consciousness artificially (a small selection of the many authors writing in this area includes: Cotterill (J Conscious Stud 2:290–311, 1995, 1998), Haikonen (2003), Aleksander and Dunmall (J Conscious Stud 10:7–18, 2003), Sloman (2004, 2005), Aleksander (2005), Holland and Knight (2006), and Chella and Manzotti (2007)), and yet a similar amount of effort has (...)
  • A neo-aristotelian perspective on the need for artificial moral agents (AMAs). Alejo José G. Sison & Dulce M. Redín - 2023 - AI and Society 38 (1):47-65.
    We examine Van Wynsberghe and Robbins’ (JAMA 25:719-735, 2019) critique of the need for Artificial Moral Agents (AMAs) and its rebuttal by Formosa and Ryan (JAMA 10.1007/s00146-020-01089-6, 2020), set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins’ (JAMA 25:719-735, 2019) essay nor Formosa and Ryan’s (JAMA 10.1007/s00146-020-01089-6, 2020) is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of “both empirical and intuitive support” (Van Wynsberghe and Robbins 2019, p. 721) for (...)
  • Sentience, Vulcans, and Zombies: The Value of Phenomenal Consciousness. Joshua Shepherd - forthcoming - AI and Society:1-11.
    Many think that a specific aspect of phenomenal consciousness – valenced or affective experience – is essential to consciousness’s moral significance (valence sentientism). They hold that valenced experience is necessary for well-being, or moral status, or psychological intrinsic value (or all three). Some think that phenomenal consciousness generally is necessary for non-derivative moral significance (broad sentientism). Few think that consciousness is unnecessary for moral significance (non-necessitarianism). In this paper I consider the prospects for these views. I first consider the prospects (...)
  • From machine ethics to computational ethics. Samuel T. Segun - 2021 - AI and Society 36 (1):263-276.
    Research into the ethics of artificial intelligence is often categorized into two subareas—robot ethics and machine ethics. Many of the definitions and classifications of the subject matter of these subfields, as found in the literature, are conflated, which I seek to rectify. In this essay, I infer that using the term ‘machine ethics’ is too broad and glosses over issues that the term computational ethics best describes. I show that the subject of inquiry of computational ethics is of great value (...)
  • The hard limit on human nonanthropocentrism. Michael R. Scheessele - 2022 - AI and Society 37 (1):49-65.
    There may be a limit on our capacity to suppress anthropocentric tendencies toward non-human others. Normally, we do not reach this limit in our dealings with animals, the environment, etc. Thus, continued striving to overcome anthropocentrism when confronted with these non-human others may be justified. Anticipation of super artificial intelligence may force us to face this limit, denying us the ability to free ourselves completely of anthropocentrism. This could be for our own good.
  • Could you hate a robot? And does it matter if you could? Helen Ryland - 2021 - AI and Society 36 (2):637-649.
    This article defends two claims. First, humans could be in relationships characterised by hate with some robots. Second, it matters that humans could hate robots, as this hate could wrong the robots (by leaving them at risk of mistreatment, exploitation, etc.). In defending this second claim, I will thus be accepting that morally considerable robots either currently exist, or will exist in the near future, and so it can matter (morally speaking) how we treat these robots. The arguments presented in (...)
  • A theoretical perspective on social agency. Alessandro Pollini - 2009 - AI and Society 24 (2):165-171.
    In interacting with artificial social agents, novel forms of sociality between humans and machines emerge. The theme of Social Agency between humans and robots is of emerging importance. In this paper key theoretical issues are discussed in a preliminary exploration of the concept. We try to understand what Social Agency is and how it is created by, negotiated with, and attributed to artificial agents. This is done in particular considering socially situated robots and by exploring how people recognize and accept (...)
  • What makes any agent a moral agent? Reflections on machine consciousness and moral agency. Joel Parthemore & Blay Whitby - 2013 - International Journal of Machine Consciousness 5 (2):105-129.
    In this paper, we take moral agency to be that context in which a particular agent can, appropriately, be held responsible for her actions and their consequences. In order to understand moral agency, we will discuss what it would take for an artifact to be a moral agent. For reasons that will become clear over the course of the paper, we take the artifactual question to be a useful way into discussion but ultimately misleading. We set out a number of (...)
  • Moral Status for Malware! The Difficulty of Defining Advanced Artificial Intelligence. Miranda Mowbray - 2021 - Cambridge Quarterly of Healthcare Ethics 30 (3):517-528.
    The suggestion has been made that future advanced artificial intelligence (AI) that passes some consciousness-related criteria should be treated as having moral status, and therefore, humans would have an ethical obligation to consider its well-being. In this paper, the author discusses the extent to which software and robots already pass proposed criteria for consciousness; and argues against the moral status for AI on the grounds that human malware authors may design malware to fake consciousness. In fact, the article warns that (...)
  • Robotic Bodies and the Kairos of Humanoid Theologies. James McBride - 2019 - Sophia 58 (4):663-676.
    In the not-too-distant future, robots will populate the walks of everyday life, from the manufacturing floor to corporate offices, and from battlefields to the home. While most work on the social implications of robotics focuses on such moral issues as the economic impact on human workers or the ethics of lethal machines, scant attention is paid to the effect of the advent of the robotic age on religion. Robots will likely become commonplace in the home by the end of the (...)
  • The autonomy-safety-paradox of service robotics in Europe and Japan: a comparative analysis. Hironori Matsuzaki & Gesa Lindemann - 2016 - AI and Society 31 (4):501-517.
  • Computationally rational agents can be moral agents. Bongani Andy Mabaso - 2020 - Ethics and Information Technology 23 (2):137-145.
    In this article, a concise argument for computational rationality as a basis for artificial moral agency is advanced. Some ethicists have long argued that rational agents can become artificial moral agents. However, most of their views have come from purely philosophical perspectives, thus making it difficult to transfer their arguments to a scientific and analytical frame of reference. The result has been a disintegrated approach to the conceptualisation and design of artificial moral agents. In this article, I make the argument (...)
  • Artificial Moral Agents Within an Ethos of AI4SG. Bongani Andy Mabaso - 2020 - Philosophy and Technology 34 (1):7-21.
    As artificial intelligence (AI) continues to proliferate into every area of modern life, there is no doubt that society has to think deeply about the potential impact, whether negative or positive, that it will have. Whilst scholars recognise that AI can usher in a new era of personal, social and economic prosperity, they also warn of the potential for it to be misused towards the detriment of society. Deliberate strategies are therefore required to ensure that AI can be safely integrated (...)
  • From responsible robotics towards a human rights regime oriented to the challenges of robotics and artificial intelligence. Hin-Yan Liu & Karolina Zawieska - 2020 - Ethics and Information Technology 22 (4):321-333.
    As the aim of the responsible robotics initiative is to ensure that responsible practices are inculcated within each stage of design, development and use, this impetus is undergirded by the alignment of ethical and legal considerations towards socially beneficial ends. While every effort should be expended to ensure that issues of responsibility are addressed at each stage of technological progression, irresponsibility is inherent within the nature of robotics technologies from a theoretical perspective that threatens to thwart the endeavour. This is (...)
  • Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates? Francisco Lara - 2021 - Science and Engineering Ethics 27 (4):1-27.
    Can Artificial Intelligence be more effective than human instruction for the moral enhancement of people? The author argues that it would be only if the use of this technology were aimed at increasing the individual's capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these (...)
  • Relationalism through Social Robotics. Raya A. Jones - 2013 - Journal for the Theory of Social Behaviour 43 (4):405-424.
    Social robotics is a rapidly developing industry-oriented area of research, intent on making robots in social roles commonplace in the near future. This has led to rising interest in the dynamics as well as ethics of human-robot relationships, described here as a nascent relational turn. A contrast is drawn with the 1990s’ paradigm shift associated with relational-self themes in social psychology. Constructions of the human-robot relationship reproduce the “I-You-Me” dominant model of theorising about the self with biases that (as in (...)
  • Hybrids and the Boundaries of Moral Considerability or Revisiting the Idea of Non-Instrumental Value. Magdalena Holy-Luczaj & Vincent Blok - 2019 - Philosophy and Technology 34 (2):223-242.
    The transgressive ontological character of hybrids—entities crossing the ontological binarism of naturalness and artificiality, e.g., biomimetic projects—calls for pondering the question of their ethical status, since metaphysical and moral ideas are often inextricably linked. The example of it is the concept of “moral considerability” and related to it the idea of “intrinsic value” understood as a non-instrumentality of a being. Such an approach excludes hybrids from moral considerations due to their instrumental character. In the paper, we revisit the boundaries of (...)
  • Is the machine question the same question as the animal question? Katharyn Hogan - 2017 - Ethics and Information Technology 19 (1):29-38.
  • Artificial moral agents are infeasible with foreseeable technologies. Patrick Chisan Hew - 2014 - Ethics and Information Technology 16 (3):197-206.
    For an artificial agent to be morally praiseworthy, its rules for behaviour and the mechanisms for supplying those rules must not be supplied entirely by external humans. Such systems are a substantial departure from current technologies and theory, and remain a remote prospect. With foreseeable technologies, an artificial agent will carry zero responsibility for its behavior and humans will retain full responsibility.
  • The Moral Consideration of Artificial Entities: A Literature Review. Jamie Harris & Jacy Reese Anthis - 2021 - Science and Engineering Ethics 27 (4):1-95.
    Ethicists, policy-makers, and the general public have questioned whether artificial entities such as robots warrant rights or other forms of moral consideration. There is little synthesis of the research on this topic so far. We identify 294 relevant research or discussion items in our literature review of this topic. There is widespread agreement among scholars that some artificial entities could warrant moral consideration in the future, if not also the present. The reasoning varies, such as concern for the effects on (...)
  • Building Moral Robots: Ethical Pitfalls and Challenges. John-Stewart Gordon - 2020 - Science and Engineering Ethics 26 (1):141-157.
    This paper examines the ethical pitfalls and challenges that non-ethicists, such as researchers and programmers in the fields of computer science, artificial intelligence and robotics, face when building moral machines. Whether ethics is “computable” depends on how programmers understand ethics in the first place and on the adequacy of their understanding of the ethical problems and methodological challenges in these fields. Researchers and programmers face at least two types of problems due to their general lack of ethical knowledge or expertise. (...)
  • Making moral machines: why we need artificial moral agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis (...)
  • I, Volkswagen. Stephanie Collins - 2022 - Philosophical Quarterly 72 (2):283-304.
    Philosophers increasingly argue that collective agents can be blameworthy for wrongdoing. Advocates tend to endorse functionalism, on which collectives are analogous to complicated robots. This is puzzling: we don’t hold robots blameworthy. I argue we don’t hold robots blameworthy because blameworthiness presupposes the capacity for a mental state I call ‘moral self-awareness’. This raises a new problem for collective blameworthiness: collectives seem to lack the capacity for moral self-awareness. I solve the problem by giving an account of how collectives have (...)
  • You, robot: on the linguistic construction of artificial others. [REVIEW] Mark Coeckelbergh - 2011 - AI and Society 26 (1):61-69.
    How can we make sense of the idea of ‘personal’ or ‘social’ relations with robots? Starting from a social and phenomenological approach to human–robot relations, this paper explores how we can better understand and evaluate these relations by attending to the ways our conscious experience of the robot and the human–robot relation is mediated by language. It is argued that our talk about and to robots is not a mere representation of an objective robotic or social-interactive reality, but rather interprets (...)
  • Robot rights? Towards a social-relational justification of moral consideration. Mark Coeckelbergh - 2010 - Ethics and Information Technology 12 (3):209-221.
    Should we grant rights to artificially intelligent robots? Most current and near-future robots do not meet the hard criteria set by deontological and utilitarian theory. Virtue ethics can avoid this problem with its indirect approach. However, both direct and indirect arguments for moral consideration rest on ontological features of entities, an approach which incurs several problems. In response to these difficulties, this paper taps into a different conceptual resource in order to be able to grant some degree of moral consideration (...)
  • Artificial agents, good care, and modernity. Mark Coeckelbergh - 2015 - Theoretical Medicine and Bioethics 36 (4):265-277.
    When is it ethically acceptable to use artificial agents in health care? This article articulates some criteria for good care and then discusses whether machines as artificial agents that take over care tasks meet these criteria. Particular attention is paid to intuitions about the meaning of ‘care’, ‘agency’, and ‘taking over’, but also to the care process as a labour process in a modern organizational and financial-economic context. It is argued that while there is in principle no objection to using (...)
  • Sympathy for Dolores: Moral Consideration for Robots Based on Virtue and Recognition. Massimiliano L. Cappuccio, Anco Peeters & William McDonald - 2019 - Philosophy and Technology 33 (1):9-31.
    This paper motivates the idea that social robots should be credited as moral patients, building on an argumentative approach that combines virtue ethics and social recognition theory. Our proposal answers the call for a nuanced ethical evaluation of human-robot interaction that does justice to both the robustness of the social responses solicited in humans by robots and the fact that robots are designed to be used as instruments. On the one hand, we acknowledge that the instrumental nature of robots and (...)
  • The ethics of artificial intelligence: superintelligence, life 3.0 and robot rights. Kati Tusinski Berg - 2018 - Journal of Media Ethics 33 (3):151-153.
  • A Normative Approach to Artificial Moral Agency. Dorna Behdadi & Christian Munthe - 2020 - Minds and Machines 30 (2):195-218.
    This paper proposes a methodological redirection of the philosophical debate on artificial moral agency in view of increasingly pressing practical needs due to technological development. This “normative approach” suggests abandoning theoretical discussions about what conditions may hold for moral agency and to what extent these may be met by artificial entities such as AI systems and robots. Instead, the debate should focus on how and to what extent such entities should be included in human practices normally assuming moral agency and (...)
  • Empathic responses and moral status for social robots: an argument in favor of robot patienthood based on K. E. Løgstrup. Simon N. Balle - 2022 - AI and Society 37 (2):535-548.
    Empirical research on human–robot interaction has demonstrated how humans tend to react to social robots with empathic responses and moral behavior. How should we ethically evaluate such responses to robots? Are people wrong to treat non-sentient artefacts as moral patients, since this rests on anthropomorphism and ‘over-identification’—or correct, since spontaneous moral intuition and behavior toward nonhumans is indicative of moral patienthood, such that social robots become our ‘Others’? In this research paper, I weave extant HRI studies that demonstrate empathic (...)
  • Foundations of an Ethical Framework for AI Entities: the Ethics of Systems. Andrej Dameski - 2020 - Dissertation, University of Luxembourg.
    The field of AI ethics during the current and previous decade is receiving an increasing amount of attention from all involved stakeholders: the public, science, philosophy, religious organizations, enterprises, governments, and various organizations. However, this field currently lacks consensus on scope, ethico-philosophical foundations, or common methodology. This thesis aims to contribute towards filling this gap by providing an answer to the two main research questions: first, what theory can explain moral scenarios in which AI entities are participants?; and second, what (...)
  • The Implementation of Ethical Decision Procedures in Autonomous Systems: the Case of the Autonomous Vehicle. Katherine Evans - 2021 - Dissertation, Sorbonne Université.
    The ethics of emerging forms of artificial intelligence has become a prolific subject in both academic and public spheres. Many of these concerns flow from the need to ensure that these technologies do not cause harm—physical, emotional or otherwise—to the human agents with which they will interact. In the literature, this challenge has been met with the creation of artificial moral agents: embodied or virtual forms of artificial intelligence whose decision procedures are constrained by explicit normative principles, requiring (...)
  • The Problem of Evil in Virtual Worlds. Brendan Shea - 2017 - In Mark Silcox (ed.), Experience Machines: The Philosophy of Virtual Worlds. Lanham, MD: Rowman & Littlefield. pp. 137-155.
    In its original form, Nozick’s experience machine serves as a potent counterexample to a simplistic form of hedonism. The pleasurable life offered by the experience machine, it seems safe to say, lacks the requisite depth that many of us find necessary to lead a genuinely worthwhile life. Among other things, the experience machine offers no opportunities to establish meaningful relationships, or to engage in long-term artistic, intellectual, or political projects that survive one’s death. This intuitive objection finds some support in (...)
  • Designing People to Serve. Steve Petersen - 2011 - In Patrick Lin, George Bekey & Keith Abney (eds.), Robot Ethics. MIT Press.
    I argue that, contrary to intuition, it would be both possible and permissible to design people - whether artificial or organic - who by their nature desire to do tasks we find unpleasant.
  • A Framework for Grounding the Moral Status of Intelligent Machines. Michael Scheessele - 2018 - AIES '18, February 2–3, 2018, New Orleans, LA, USA.
    I propose a framework, derived from moral theory, for assessing the moral status of intelligent machines. Using this framework, I claim that some current and foreseeable intelligent machines have approximately as much moral status as plants, trees, and other environmental entities. This claim raises the question: what obligations could a moral agent (e.g., a normal adult human) have toward an intelligent machine? I propose that the threshold for any moral obligation should be the "functional morality" of Wallach and Allen [20], (...)
  • Autonomous Systems in Society and War: Philosophical Inquiries. Linda Johansson - 2013 - Dissertation, Royal Institute of Technology, Stockholm.
    The overall aim of this thesis is to look at some philosophical issues surrounding autonomous systems in society and war. These issues can be divided into three main categories. The first, discussed in papers I and II, concerns ethical issues surrounding the use of autonomous systems – where the focus in this thesis is on military robots. The second issue, discussed in paper III, concerns how to make sure that advanced robots behave in an ethically adequate way. The third issue, discussed in papers (...)