Results for 'moral robots'

987 found
  1. Toward safe AI. Andres Morales-Forero, Samuel Bassetto & Eric Coatanea - 2023 - AI and Society 38 (2):685-696.
    Since some AI algorithms with high predictive power have impacted human integrity, safety has become a crucial challenge in adopting and deploying AI. Although it is impossible to prevent an algorithm from failing in complex tasks, it is crucial to ensure that it fails safely, especially if it is a critical system. Moreover, due to AI’s unbridled development, it is imperative to minimize the methodological gaps in these systems’ engineering. This paper uses the well-known Box-Jenkins method for statistical modeling as (...)
    1 citation
  2. An explanation space to align user studies with the technical development of Explainable AI. Garrick Cabour, Andrés Morales-Forero, Élise Ledoux & Samuel Bassetto - 2023 - AI and Society 38 (2):869-887.
    Providing meaningful and actionable explanations for end-users is a situated problem requiring the intersection of multiple disciplines to address social, operational, and technical challenges. However, the explainable artificial intelligence community has not commonly adopted or created tangible design tools that allow interdisciplinary work to develop reliable AI-powered solutions. This paper proposes a formative architecture that defines the explanation space from a user-inspired perspective. The architecture comprises five intertwined components to outline explanation requirements for a task: (1) the end-users’ mental models, (...)
  3. Building Moral Robots: Ethical Pitfalls and Challenges. John-Stewart Gordon - 2020 - Science and Engineering Ethics 26 (1):141-157.
    This paper examines the ethical pitfalls and challenges that non-ethicists, such as researchers and programmers in the fields of computer science, artificial intelligence and robotics, face when building moral machines. Whether ethics is “computable” depends on how programmers understand ethics in the first place and on the adequacy of their understanding of the ethical problems and methodological challenges in these fields. Researchers and programmers face at least two types of problems due to their general lack of ethical knowledge or (...)
    5 citations
  4. Welcoming Robots into the Moral Circle: A Defence of Ethical Behaviourism. John Danaher - 2020 - Science and Engineering Ethics 26 (4):2023-2049.
    Can robots have significant moral status? This is an emerging topic of debate among roboticists and ethicists. This paper makes three contributions to this debate. First, it presents a theory – ‘ethical behaviourism’ – which holds that robots can have significant moral status if they are roughly performatively equivalent to other entities that have significant moral status. This theory is then defended from seven objections. Second, taking this theoretical position onboard, it is argued that the (...)
    68 citations
  5. Moral Machines: Teaching Robots Right From Wrong. Wendell Wallach & Colin Allen - 2008 - New York, US: Oxford University Press.
    Computers are already approving financial transactions, controlling electrical supplies, and driving trains. Soon, service robots will be taking care of the elderly in their homes, and military robots will have their own targeting and firing protocols. Colin Allen and Wendell Wallach argue that as robots take on more and more responsibility, they must be programmed with moral decision-making abilities, for our own safety. Taking a fast paced tour through the latest thinking about philosophical ethics and artificial (...)
    179 citations
  6. Robot rights? Towards a social-relational justification of moral consideration. Mark Coeckelbergh - 2010 - Ethics and Information Technology 12 (3):209-221.
    Should we grant rights to artificially intelligent robots? Most current and near-future robots do not meet the hard criteria set by deontological and utilitarian theory. Virtue ethics can avoid this problem with its indirect approach. However, both direct and indirect arguments for moral consideration rest on ontological features of entities, an approach which incurs several problems. In response to these difficulties, this paper taps into a different conceptual resource in order to be able to grant some degree (...)
    93 citations
  7. Moral difference between humans and robots: paternalism and human-relative reason. Tsung-Hsing Ho - 2022 - AI and Society 37 (4):1533-1543.
    According to some philosophers, if moral agency is understood in behaviourist terms, robots could become moral agents that are as good as or even better than humans. Given the behaviourist conception, it is natural to think that there is no interesting moral difference between robots and humans in terms of moral agency (call it the _equivalence thesis_). However, such moral differences exist: based on Strawson’s account of participant reactive attitude and Scanlon’s relational account (...)
  8. Robotic Nudges for Moral Improvement through Stoic Practice. Michał Klincewicz - 2019 - Techné: Research in Philosophy and Technology 23 (3):425-455.
    This paper offers a theoretical framework that can be used to derive viable engineering strategies for the design and development of robots that can nudge people towards moral improvement. The framework relies on research in developmental psychology and insights from Stoic ethics. Stoicism recommends contemplative practices that over time help one develop dispositions to behave in ways that improve the functioning of mechanisms that are constitutive of moral cognition. Robots can nudge individuals towards these practices and (...)
    8 citations
  9. The Moral Status of Social Robots: A Pragmatic Approach. Paul Showler - 2024 - Philosophy and Technology 37 (2):1-22.
    Debates about the moral status of social robots (SRs) currently face a second-order, or metatheoretical impasse. On the one hand, moral individualists argue that the moral status of SRs depends on their possession of morally relevant properties. On the other hand, moral relationalists deny that we ought to attribute moral status on the basis of the properties that SRs instantiate, opting instead for other modes of reflection and critique. This paper develops and defends a (...)
  10. Integrating robot ethics and machine morality: the study and design of moral competence in robots. Bertram F. Malle - 2016 - Ethics and Information Technology 18 (4):243-256.
    Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like and discuss (...)
    19 citations
  11. Artificial Morality: Virtuous Robots for Virtual Games. Peter Danielson - 1992 - London: Routledge.
    This book explores the role of artificial intelligence in the development of a claim that morality is person-made and rational. Professor Danielson builds moral robots that do better than amoral competitors in a tournament of games like the Prisoners Dilemma and Chicken. The book thus engages in current controversies over the adequacy of the received theory of rational choice. It sides with Gauthier and McClennan, who extend the devices of rational choice to include moral constraint. Artificial Morality (...)
    30 citations
  12. The Moral Standing of Social Robots: Untapped Insights from Africa. Nancy S. Jecker, Caesar A. Atiure & Martin Odei Ajei - 2022 - Philosophy and Technology 35 (2):1-22.
    This paper presents an African relational view of social robots’ moral standing which draws on the philosophy of ubuntu. The introduction places the question of moral standing in historical and cultural contexts. Section 2 demonstrates an ubuntu framework by applying it to the fictional case of a social robot named Klara, taken from Ishiguro’s novel, Klara and the Sun. We argue that an ubuntu ethic assigns moral standing to Klara, based on her relational qualities and pro-social (...)
    9 citations
  13. How to do robots with words: a performative view of the moral status of humans and nonhumans. Mark Coeckelbergh - 2023 - Ethics and Information Technology 25 (3):1-9.
    Moral status arguments are typically formulated as descriptive statements that tell us something about the world. But philosophy of language teaches us that language can also be used performatively: we do things with words and use words to try to get others to do things. Does and should this theory extend to what we say about moral status, and what does it mean? Drawing on Austin, Searle, and Butler and further developing relational views of moral status, this (...)
    1 citation
  14. Artificial Morality: Virtuous Robots for Virtual Games. Peter Danielson - 1992 - Routledge.
    This book explores the role of artificial intelligence in the development of a claim that morality is person-made and rational. Professor Danielson builds moral robots that do better than amoral competitors in a tournament of games like the Prisoners Dilemma and Chicken. The book thus engages in current controversies over the adequacy of the received theory of rational choice. It sides with Gauthier and McClennan, who extend the devices of rational choice to include moral constraint. _Artificial Morality_ (...)
    20 citations
  15. Moral Responsibility of Robots and Hybrid Agents. Raul Hakli & Pekka Mäkelä - 2019 - The Monist 102 (2):259-275.
    We study whether robots can satisfy the conditions of an agent fit to be held morally responsible, with a focus on autonomy and self-control. An analogy between robots and human groups enables us to modify arguments concerning collective responsibility for studying questions of robot responsibility. We employ Mele’s history-sensitive account of autonomy and responsibility to argue that even if robots were to have all the capacities required of moral agency, their history would deprive them from autonomy (...)
    28 citations
  16. Moral appearances: emotions, robots, and human morality. [REVIEW] Mark Coeckelbergh - 2010 - Ethics and Information Technology 12 (3):235-241.
    Can we build ‘moral robots’? If morality depends on emotions, the answer seems negative. Current robots do not meet standard necessary conditions for having emotions: they lack consciousness, mental states, and feelings. Moreover, it is not even clear how we might ever establish whether robots satisfy these conditions. Thus, at most, robots could be programmed to follow rules, but it would seem that such ‘psychopathic’ robots would be dangerous since they would lack full moral agency. However, I will argue that in the future we might nevertheless be able to build quasi-moral robots that can learn to create the appearance of emotions and the appearance of being fully moral. I will also argue that this way of drawing robots into our social-moral world is less problematic than it might first seem, since human morality also relies on such appearances.
    46 citations
  17. The morality of autonomous robots. Aaron M. Johnson & Sidney Axinn - 2013 - Journal of Military Ethics 12 (2):129-141.
    While there are many issues to be raised in using lethal autonomous robotic weapons (beyond those of remotely operated drones), we argue that the most important question is: should the decision to take a human life be relinquished to a machine? This question is often overlooked in favor of technical questions of sensor capability, operational questions of chain of command, or legal questions of sovereign borders. We further argue that the answer must be ‘no’ and offer several reasons for banning (...)
    22 citations
  18. Robot Lies in Health Care: When Is Deception Morally Permissible? Andreas Matthias - 2015 - Kennedy Institute of Ethics Journal 25 (2):169-162.
    Autonomous robots are increasingly interacting with users who have limited knowledge of robotics and are likely to have an erroneous mental model of the robot’s workings, capabilities, and internal structure. The robot’s real capabilities may diverge from this mental model to the extent that one might accuse the robot’s manufacturer of deceiving the user, especially in cases where the user naturally tends to ascribe exaggerated capabilities to the machine (e.g. conversational systems in elder-care contexts, or toy robots in (...)
    9 citations
  19. Robots as Malevolent Moral Agents: Harmful Behavior Results in Dehumanization, Not Anthropomorphism. Aleksandra Swiderska & Dennis Küster - 2020 - Cognitive Science 44 (7):e12872.
    A robot's decision to harm a person is sometimes considered to be the ultimate proof of it gaining a human‐like mind. Here, we contrasted predictions about attribution of mental capacities from moral typecasting theory, with the denial of agency from dehumanization literature. Experiments 1 and 2 investigated mind perception for intentionally and accidentally harmful robotic agents based on text and image vignettes. Experiment 3 disambiguated agent intention (malevolent and benevolent), and additionally varied the type of agent (robotic and human) (...)
    1 citation
  20. The rise of the robots and the crisis of moral patiency. John Danaher - 2019 - AI and Society 34 (1):129-136.
    This paper adds another argument to the rising tide of panic about robots and AI. The argument is intended to have broad civilization-level significance, but to involve less fanciful speculation about the likely future intelligence of machines than is common among many AI-doomsayers. The argument claims that the rise of the robots will create a crisis of moral patiency. That is to say, it will reduce the ability and willingness of humans to act in the world as (...)
    28 citations
  21. Children-Robot Friendship, Moral Agency, and Aristotelian Virtue Development. Mihaela Constantinescu, Radu Uszkai, Constantin Vica & Cristina Voinea - 2022 - Frontiers in Robotics and AI 9.
    Social robots are increasingly developed for the companionship of children. In this article we explore the moral implications of children-robot friendships using the Aristotelian framework of virtue ethics. We adopt a moderate position and argue that, although robots cannot be virtue friends, they can nonetheless enable children to exercise ethical and intellectual virtues. The Aristotelian requirements for true friendship apply only partly to children: unlike adults, children relate to friendship as an educational play of exploration, which is (...)
  22. When Morals Ain’t Enough: Robots, Ethics, and the Rules of the Law. Ugo Pagallo - 2017 - Minds and Machines 27 (4):625-638.
    No single moral theory can instruct us as to whether and to what extent we are confronted with legal loopholes, e.g. whether or not new legal rules should be added to the system in the criminal law field. This question on the primary rules of the law appears crucial for today’s debate on roboethics and still, goes beyond the expertise of robo-ethicists. On the other hand, attention should be drawn to the secondary rules of the law: The unpredictability of (...)
    5 citations
  23. Robot Morals and Human Ethics. Wendell Wallach - 2010 - Teaching Ethics 11 (1):87-92.
    Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, (...)
    25 citations
  24. When is a robot a moral agent. John P. Sullins - 2006 - International Review of Information Ethics 6 (12):23-30.
    In this paper Sullins argues that in certain circumstances robots can be seen as real moral agents. A distinction is made between persons and moral agents such that, it is not necessary for a robot to have personhood in order to be a moral agent. I detail three requirements for a robot to be seen as a moral agent. The first is achieved when the robot is significantly autonomous from any programmers or operators of the (...)
    70 citations
  25. Moral Status and Intelligent Robots. John-Stewart Gordon & David J. Gunkel - 2021 - Southern Journal of Philosophy 60 (1):88-117.
    7 citations
  26. Implementing moral decision making faculties in computers and robots. Wendell Wallach - 2008 - AI and Society 22 (4):463-475.
    The challenge of designing computer systems and robots with the ability to make moral judgments is stepping out of science fiction and moving into the laboratory. Engineers and scholars, anticipating practical necessities, are writing articles, participating in conference workshops, and initiating a few experiments directed at substantiating rudimentary moral reasoning in hardware and software. The subject has been designated by several names, including machine ethics, machine morality, artificial morality, or computational morality. Most references to the challenge elucidate (...)
    10 citations
  27. Sharing Moral Responsibility with Robots: A Pragmatic Approach. Gordana Dodig Crnkovic & Daniel Persson - 2008 - In Holst, Per Kreuger & Peter Funk (eds.), Frontiers in Artificial Intelligence and Applications Volume 173. IOS Press Books.
    Roboethics is a recently developed field of applied ethics which deals with the ethical aspects of technologies such as robots, ambient intelligence, direct neural interfaces and invasive nano-devices and intelligent soft bots. In this article we look specifically at the issue of (moral) responsibility in artificial intelligent systems. We argue for a pragmatic approach, where responsibility is seen as a social regulatory mechanism. We claim that having a system which takes care of certain tasks intelligently, learning from experience (...)
    5 citations
  28. Not robots: children's perspectives on authenticity, moral agency and stimulant drug treatments. Ilina Singh - 2013 - Journal of Medical Ethics 39 (6):359-366.
    In this article, I examine children's reported experiences with stimulant drug treatments for attention deficit hyperactivity disorder in light of bioethical arguments about the potential threats of psychotropic drugs to authenticity and moral agency. Drawing on a study that involved over 150 families in the USA and the UK, I show that children are able to report threats to authenticity, but that the majority of children are not concerned with such threats. On balance, children report that stimulants improve their (...)
    17 citations
  29. Robots with Moral Status? David DeGrazia - 2022 - Perspectives in Biology and Medicine 65 (1):73-88.
  30. Sympathy for Dolores: Moral Consideration for Robots Based on Virtue and Recognition. Massimiliano L. Cappuccio, Anco Peeters & William McDonald - 2019 - Philosophy and Technology 33 (1):9-31.
    This paper motivates the idea that social robots should be credited as moral patients, building on an argumentative approach that combines virtue ethics and social recognition theory. Our proposal answers the call for a nuanced ethical evaluation of human-robot interaction that does justice to both the robustness of the social responses solicited in humans by robots and the fact that robots are designed to be used as instruments. On the one hand, we acknowledge that the instrumental (...)
    13 citations
  31. On the moral responsibility of military robots. Thomas Hellström - 2013 - Ethics and Information Technology 15 (2):99-107.
    This article discusses mechanisms and principles for assignment of moral responsibility to intelligent robots, with special focus on military robots. We introduce the concept autonomous power as a new concept, and use it to identify the type of robots that call for moral considerations. It is furthermore argued that autonomous power, and in particular the ability to learn, is decisive for assignment of moral responsibility to robots. As technological development will lead to robots with increasing autonomous power, we should be prepared for a future when people blame robots for their actions. It is important to, already today, investigate the mechanisms that control human behavior in this respect. The results may be used when designing future military robots, to control unwanted tendencies to assign responsibility to the robots. Independent of the responsibility issue, the moral quality of robots’ behavior should be seen as one of many performance measures by which we evaluate robots. How to design ethics based control systems should be carefully investigated already now. From a consequentialist view, it would indeed be highly immoral to develop robots capable of performing acts involving life and death, without including some kind of moral framework.
    33 citations
  32. Robots as moral environments. Tomislav Furlanis, Takayuki Kanda & Dražen Brščić - forthcoming - AI and Society:1-19.
    In this philosophical exploration, we investigate the concept of robotic moral environment interaction. The common view understands moral interaction to occur between agents endowed with ethical and interactive capacities. However, recent developments in moral philosophy argue that moral interaction also occurs in relation to the environment. Here conditions and situations of the environment contribute to human moral cognition and the formation of our moral experiences. Based on this philosophical position, we imagine robots interacting (...)
  33. Correction: On the moral status of social robots: considering the consciousness criterion. Kestutis Mosakas - forthcoming - AI and Society:1-1.
  34. Ethical Behaviourism and the Moral Status of AI Robots. 김상득 - 2023 - Journal of Korean Philosophical Society 167:59-81.
    As a first step toward grounding the possibility of moral machines, this paper aims to critically examine ethical behaviourism, which actively defends the moral status of machines. To this end, I first explicate the functionalist position, and in particular John Danaher's ethical behaviourism, as an alternative to the standard view that denies machines moral status. Ethical behaviourism holds that moral status can be inferred through comparison and analogical reasoning on the basis of equivalence in observable behaviour, rather than equivalence in the properties that ground moral status. While this approach has the merit of being post-metaphysical and of avoiding anthropocentric bias, inferring moral status from the equivalence of behavioural patterns (...)
  35. Is it time for robot rights? Moral status in artificial entities. Vincent C. Müller - 2021 - Ethics and Information Technology 23 (3):579–587.
    Some authors have recently suggested that it is time to consider rights for robots. These suggestions are based on the claim that the question of robot rights should not depend on a standard set of conditions for ‘moral status’; but instead, the question is to be framed in a new way, by rejecting the is/ought distinction, making a relational turn, or assuming a methodological behaviourism. We try to clarify these suggestions and to show their highly problematic consequences. While (...)
    18 citations
  36. On the moral status of social robots: considering the consciousness criterion. Kestutis Mosakas - 2021 - AI and Society 36 (2):429-443.
    While philosophers have been debating for decades on whether different entities—including severely disabled human beings, embryos, animals, objects of nature, and even works of art—can legitimately be considered as having moral status, this question has gained a new dimension in the wake of artificial intelligence (AI). One of the more imminent concerns in the context of AI is that of the moral rights and status of social robots, such as robotic caregivers and artificial companions, that are built (...)
    17 citations
  37. Moral Machines. Teaching Robots Right from Wrong. A book review. Dawid Lubiszewski - 2011 - Avant: Trends in Interdisciplinary Studies 2 (1).
  38. Robots and people with dementia: Unintended consequences and moral hazard. Fiachra O’Brolcháin - 2019 - Nursing Ethics 26 (4):962-972.
    3 citations
  39. Social Robotics as Moral Education? Fighting Discrimination Through the Design of Social Robots. Fabio Fossa - 2022 - In Pekka Mäkelä, Raul Hakli & Joanna Seibt (eds.), Social Robots in Social Institutions. Proceedings of Robophilosophy’22. Amsterdam: IOS Press. pp. 184-193.
    Recent research in the field of social robotics has shed light on the considerable role played by biases in the design of social robots. Cues that trigger widespread biased expectations are implemented in the design of social robots to increase their familiarity and boost interaction quality. Ethical discussion has focused on the question concerning the permissibility of leveraging social biases to meet the design goals of social robotics. As a result, integrating ethically problematic social biases in the design (...)
  40. The Moral Status of AGI-enabled Robots: A Functionality-Based Analysis. Mubarak Hussain - 2023 - Symposion: Theoretical and Applied Inquiries in Philosophy and Social Sciences 10 (1):105-127.
  41. Robot minds and human ethics: the need for a comprehensive model of moral decision making. [REVIEW] Wendell Wallach - 2010 - Ethics and Information Technology 12 (3):243-250.
    Building artificial moral agents (AMAs) underscores the fragmentary character of presently available models of human ethical behavior. It is a distinctly different enterprise from either the attempt by moral philosophers to illuminate the “ought” of ethics or the research by cognitive scientists directed at revealing the mechanisms that influence moral psychology, and yet it draws on both. Philosophers and cognitive scientists have tended to stress the importance of particular cognitive mechanisms, e.g., reasoning, moral sentiments, heuristics, intuitions, (...)
    24 citations
  42. Can robots be moral? Laszlo Versenyi - 1974 - Ethics 84 (3):248-259.
  43. Robots and people with dementia: Unintended consequences and moral hazard. Fiachra O’Brolcháin - 2017 - Nursing Ethics:096973301774296.
    3 citations
  44. Empathic responses and moral status for social robots: an argument in favor of robot patienthood based on K. E. Løgstrup. Simon N. Balle - 2022 - AI and Society 37 (2):535-548.
    Empirical research on human–robot interaction has demonstrated how humans tend to react to social robots with empathic responses and moral behavior. How should we ethically evaluate such responses to robots? Are people wrong to treat non-sentient artefacts as moral patients since this rests on anthropomorphism and ‘over-identification’ —or correct since spontaneous moral intuition and behavior toward nonhumans is indicative for moral patienthood, such that social robots become our ‘Others’?. In this research paper, I (...)
    2 citations
  45. Ethics for Robots: how to design a moral algorithm. Derek Leben - 2018 - Routledge.
    Ethics for Robots describes and defends a method for designing and evaluating ethics algorithms for autonomous machines, such as self-driving cars and search and rescue drones. Derek Leben argues that such algorithms should be evaluated by how effectively they accomplish the problem of cooperation among self-interested organisms, and therefore, rather than simulating the psychological systems that have evolved to solve this problem, engineers should be tackling the problem itself, taking relevant lessons from our moral psychology. Leben draws on (...)
    9 citations
  46. Can humanoid robots be moral? Sanjit Chakraborty - 2018 - Ethics in Science and Environmental Politics 18:49-60.
    The concept of morality underpins the moral responsibility that not only depends on the outward practices (or ‘output,’ in the case of humanoid robots) of the agents but on the internal attitudes (‘input’) that rational and responsible intentioned beings generate. The primary question that has initiated the extensive debate, i.e., ‘Can humanoid robots be moral?’, stems from the normative outlook where morality includes human conscience and socio-linguistic background. This paper advances the thesis that the conceptions of (...)
    3 citations
  47. Can humanoid robots be moral? Sanjit Chakraborty - 2018 - Ethics in Science and Environmental Politics 18:49-60.
    The concept of morality underpins the moral responsibility that not only depends on the outward practices (or ‘output’, in the case of humanoid robots) of the agents but on the internal attitudes (‘input’) that rational and responsible intentioned beings generate. The primary question that has initiated extensive debate, i.e. ‘Can humanoid robots be moral?’, stems from the normative outlook where morality includes human conscience and socio-linguistic background. This paper advances the thesis that the conceptions of morality (...)
    3 citations
  48. Can Humanoid Robots be Moral? Sanjit Chakraborty - 2018 - Ethics in Science, Environment and Politics 18:49-60.
    The concept of morality underpins the moral responsibility that not only depends on the outward practices (or ‘output’, in the case of humanoid robots) of the agents but on the internal attitudes (‘input’) that rational and responsible intentioned beings generate. The primary question that has initiated extensive debate, i.e. ‘Can humanoid robots be moral?’, stems from the normative outlook where morality includes human conscience and socio-linguistic background. This paper advances the thesis that the conceptions of morality (...)
    1 citation
  49. Robots as moral agents? Catrin Misselhorn - 2013 - In Frank Rövekamp & Friederike Bosse (eds.), Ethics in Science and Society: German and Japanese Views. IUDICIUM Verlag.
    4 citations
  50. On the moral permissibility of robot apologies. Makoto Kureha - forthcoming - AI and Society:1-11.
    Robots that incorporate the function of apologizing have emerged in recent years. This paper examines the moral permissibility of making robots apologize. First, I characterize the nature of apology based on analyses conducted in multiple scholarly domains. Next, I present a prima facie argument that robot apologies are not permissible because they may harm human societies by inducing the misattribution of responsibility. Subsequently, I respond to a possible response to the prima facie objection based on the interpretation (...)
1 — 50 / 987