  • Moralische Roboter: Humanistisch-philosophische Grundlagen und didaktische Anwendungen.André Schmiljun & Iga Maria Schmiljun - 2024 - transcript Verlag.
    Do robots need moral competence? The answer is yes. On the one hand, robots need moral competence in order to understand our world of rules, regulations, and values; on the other hand, they need it to be accepted by their environment. But how can moral competence be implemented in robots? What philosophical challenges should we expect? And how can we prepare ourselves and our children for robots that will one day possess moral competence? From a humanistic-philosophical perspective, André and Iga Maria Schmiljun sketch initial answers to these questions and develop (...)
  • Manufacturing Morality: A general theory of moral agency grounding computational implementations: the ACTWith model.Jeffrey White - 2013 - In Computational Intelligence. Nova Publications. pp. 1-65.
    The ultimate goal of research into computational intelligence is the construction of a fully embodied and fully autonomous artificial agent. This ultimate artificial agent must not only be able to act, but it must be able to act morally. In order to realize this goal, a number of challenges must be met, and a number of questions must be answered, the upshot being that, in doing so, the form of agency to which we must aim in developing artificial agents comes (...)
  • Machine Ethics in Care: Could a Moral Avatar Enhance the Autonomy of Care-Dependent Persons?Catrin Misselhorn - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-14.
    It is a common view that artificial systems could play an important role in dealing with the shortage of caregivers due to demographic change. One argument to show that this is also in the interest of care-dependent persons is that artificial systems might significantly enhance user autonomy since they might stay longer in their homes. This argument presupposes that the artificial systems in question do not require permanent supervision and control by human caregivers. For this reason, they need the capacity (...)
  • Ethics and risks between human and robotic interaction.Lin Yu & Shejiao Ding - 2019 - Interaction Studies 20 (1):134-147.
    Robots play an increasingly important role in human society. Low-contact machine standards mostly concern industrial robots, while close contact is in growing demand for service robots and similar applications. Advances in robotics hardware and artificial intelligence (AI) make close interaction with human beings possible, but such contact raises many new issues of ethics and risk. Regarding interaction, the relevant techniques of perception, cognition, and interaction are briefly introduced. Regarding ethics, rules should be given to robot designers (...)
  • A Prospective Framework for the Design of Ideal Artificial Moral Agents: Insights from the Science of Heroism in Humans.Travis J. Wiltshire - 2015 - Minds and Machines 25 (1):57-71.
    The growing field of machine morality has become increasingly concerned with how to develop artificial moral agents. However, there is little consensus on what constitutes an ideal moral agent, let alone an artificial one. Leveraging a recent account of heroism in humans, the aim of this paper is to provide a prospective framework for conceptualizing, and in turn designing, ideal artificial moral agents, namely those that would be considered heroic robots. First, an overview of what it means to be an (...)
  • Autonomous reboot: Aristotle, autonomy and the ends of machine ethics.Jeffrey White - 2022 - AI and Society 37 (2):647-659.
    Tonkens has issued a seemingly impossible challenge: to articulate a comprehensive ethical framework within which artificial moral agents satisfy a Kantian-inspired recipe—"rational" and "free"—while also satisfying perceived prerogatives of machine ethicists to facilitate the creation of AMAs that are perfectly and not merely reliably ethical. Challenges for machine ethicists have also been presented by Anthony Beavers and Wendell Wallach. Beavers pushes for the reinvention of traditional ethics to avoid "ethical nihilism" due to the reduction of morality to mechanical causation. (...)
  • Autonomous Reboot: Kant, the categorical imperative, and contemporary challenges for machine ethicists.Jeffrey White - 2022 - AI and Society 37 (2):661-673.
    Ryan Tonkens has issued a seemingly impossible challenge: to articulate a comprehensive ethical framework within which artificial moral agents satisfy a Kantian-inspired recipe—"rational" and "free"—while also satisfying perceived prerogatives of machine ethicists to facilitate the creation of AMAs that are perfectly and not merely reliably ethical. This series of papers meets this challenge by landscaping traditional moral theory in resolution of a comprehensive account of moral agency. The first paper established the challenge and set out autonomy in Aristotelian terms. (...)
  • Machine morality: bottom-up and top-down approaches for modelling human moral faculties. [REVIEW]Wendell Wallach, Colin Allen & Iva Smit - 2008 - AI and Society 22 (4):565-582.
    The implementation of moral decision making abilities in artificial intelligence (AI) is a natural and necessary extension to the social mechanisms of autonomous software agents and robots. Engineers exploring design strategies for systems sensitive to moral considerations in their choices and actions will need to determine what role ethical theory should play in defining control architectures for such systems. The architectures for morally intelligent agents fall within two broad approaches: the top-down imposition of ethical theories, and the bottom-up building of (...)
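    The Wallach, Allen & Smit entry above contrasts the top-down imposition of ethical theories with the bottom-up building of moral capacities. The following minimal Python sketch is purely illustrative and not drawn from the paper; the Action class, the harm scores, and the threshold learner are hypothetical stand-ins meant only to show how a hand-coded rule set and a disposition induced from labelled examples can reach different verdicts on the same action.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple


@dataclass
class Action:
    description: str
    expected_harm: float   # assumed supplied by some world model: 0.0 (none) to 1.0 (severe)
    is_deceptive: bool


# Top-down: explicit constraints derived from an ethical theory.
def top_down_permissible(action: Action) -> bool:
    """Check hand-coded rules; transparent, but brittle where rules conflict or run out."""
    rules: List[Callable[[Action], bool]] = [
        lambda a: a.expected_harm < 0.3,   # rough non-maleficence threshold
        lambda a: not a.is_deceptive,      # blanket prohibition on deception
    ]
    return all(rule(action) for rule in rules)


# Bottom-up: a disposition induced from labelled examples.
def train_bottom_up(examples: List[Tuple[Action, bool]]) -> Callable[[Action], bool]:
    """Fit a trivial harm threshold from approved/rejected cases; adaptive, but opaque."""
    approved = [a.expected_harm for a, ok in examples if ok]
    threshold = max(approved) if approved else 0.0
    return lambda a: a.expected_harm <= threshold


if __name__ == "__main__":
    history = [
        (Action("fetch medication", 0.05, False), True),
        (Action("tell a white lie to calm a patient", 0.25, True), True),
        (Action("physically restrain a patient", 0.60, False), False),
    ]
    judge = train_bottom_up(history)
    candidate = Action("withhold a painful diagnosis", expected_harm=0.20, is_deceptive=True)

    # The two approaches can disagree: the rule set forbids any deception,
    # while the learned threshold tolerates it because similar cases were approved.
    print("top-down:", top_down_permissible(candidate))   # False
    print("bottom-up:", judge(candidate))                 # True
```

    In practice, hybrid architectures that combine explicit constraints with learned dispositions are often proposed; the sketch only makes the structural difference between the two approaches concrete.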
  • Safe/moral autopoiesis and consciousness.Mark R. Waser - 2013 - International Journal of Machine Consciousness 5 (1):59-74.
  • Implementing moral decision making faculties in computers and robots.Wendell Wallach - 2008 - AI and Society 22 (4):463-475.
    The challenge of designing computer systems and robots with the ability to make moral judgments is stepping out of science fiction and moving into the laboratory. Engineers and scholars, anticipating practical necessities, are writing articles, participating in conference workshops, and initiating a few experiments directed at substantiating rudimentary moral reasoning in hardware and software. The subject has been designated by several names, including machine ethics, machine morality, artificial morality, or computational morality. Most references to the challenge elucidate one facet or (...)
  • Consciousness and ethics: Artificially conscious moral agents.Wendell Wallach, Colin Allen & Stan Franklin - 2011 - International Journal of Machine Consciousness 3 (1):177-192.
  • Service robots, care ethics, and design.A. van Wynsberghe - 2016 - Ethics and Information Technology 18 (4):311-321.
    It should not be a surprise in the near future to encounter either a personal or a professional service robot in our homes and/or our workplaces: according to the International Federation for Robots, there will be approximately 35 million service robots at work by 2018. Given that individuals will interact and even cooperate with these service robots, their design and development demand ethical attention. With this in mind I suggest the use of an approach for incorporating ethics into the (...)
  • Critiquing the Reasons for Making Artificial Moral Agents.Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)
  • Can we program or train robots to be good?Amanda Sharkey - 2020 - Ethics and Information Technology 22 (4):283-295.
    As robots are deployed in a widening range of situations, it is necessary to develop a clearer position about whether or not they can be trusted to make good moral decisions. In this paper, we take a realistic look at recent attempts to program and to train robots to develop some form of moral competence. Examples of implemented robot behaviours that have been described as 'ethical', or 'minimally ethical' are considered, although they are found to only operate in quite constrained (...)
  • Integrating robot ethics and machine morality: the study and design of moral competence in robots.Bertram F. Malle - 2016 - Ethics and Information Technology 18 (4):243-256.
    Robot ethics encompasses ethical questions about how humans should design, deploy, and treat robots; machine morality encompasses questions about what moral capacities a robot should have and how these capacities could be computationally implemented. Publications on both of these topics have doubled twice in the past 10 years but have often remained separate from one another. In an attempt to better integrate the two, I offer a framework for what a morally competent robot would look like and discuss a number (...)
  • Why a Virtual Assistant for Moral Enhancement When We Could have a Socrates?Francisco Lara - 2021 - Science and Engineering Ethics 27 (4):1-27.
    Can Artificial Intelligence be more effective than human instruction for the moral enhancement of people? The author argues that it only would be if the use of this technology were aimed at increasing the individual's capacity to reflectively decide for themselves, rather than at directly influencing behaviour. To support this, it is shown how a disregard for personal autonomy, in particular, invalidates the main proposals for applying new technologies, both biomedical and AI-based, to moral enhancement. As an alternative to these (...)
  • Is the machine question the same question as the animal question?Katharyn Hogan - 2017 - Ethics and Information Technology 19 (1):29-38.
  • Moral judgment as information processing: an integrative review.Steve Guglielmo - 2015 - Frontiers in Psychology 6.
  • What ethics can say on artificial intelligence: Insights from a systematic literature review.Francesco Vincenzo Giarmoleo, Ignacio Ferrero, Marta Rocchi & Massimiliano Matteo Pellegrini - forthcoming - Business and Society Review.
    The abundance of literature on ethical concerns regarding artificial intelligence (AI) highlights the need to systematize, integrate, and categorize existing efforts through a systematic literature review. The article aims to investigate prevalent concerns, proposed solutions, and prominent ethical approaches within the field. Considering 309 articles from the beginning of the publications in this field up until December 2021, this systematic literature review clarifies what the ethical concerns regarding AI are, and it charts them into two groups: (i) ethical concerns that (...)
  • Artificial Moral Agents: Moral Mentors or Sensible Tools?Fabio Fossa - 2018 - Ethics and Information Technology (2):1-12.
    The aim of this paper is to offer an analysis of the notion of artificial moral agent (AMA) and of its impact on human beings’ self-understanding as moral agents. Firstly, I introduce the topic by presenting what I call the Continuity Approach. Its main claim holds that AMAs and human moral agents exhibit no significant qualitative difference and, therefore, should be considered homogeneous entities. Secondly, I focus on the consequences this approach leads to. In order to do this I take (...)
  • Making moral machines: why we need artificial moral agents.Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis (...)
  • Cryptocurrency with a Conscience: Using Artificial Intelligence to Develop Money that Advances Human Ethical Values.Matthew E. Gladden - 2015 - Annales. Ethics in Economic Life 18 (4):85-98.
    Cryptocurrencies like Bitcoin are offering new avenues for economic empowerment to individuals around the world. However, they also provide a powerful tool that facilitates criminal activities such as human trafficking and illegal weapons sales that cause great harm to individuals and communities. Cryptocurrency advocates have argued that the ethical dimensions of cryptocurrency are not qualitatively new, insofar as money has always been understood as a passive instrument that lacks ethical values and can be used for good or ill purposes. In (...)
  • Artificial Moral Agents: A Survey of the Current Status. [REVIEW]José-Antonio Cervantes, Sonia López, Luis-Felipe Rodríguez, Salvador Cervantes, Francisco Cervantes & Félix Ramos - 2020 - Science and Engineering Ethics 26 (2):501-532.
    One of the objectives in the field of artificial intelligence for some decades has been the development of artificial agents capable of coexisting in harmony with people and other systems. The computing research community has made efforts to design artificial agents capable of doing tasks the way people do, tasks requiring cognitive mechanisms such as planning, decision-making, and learning. The application domains of such software agents are evident nowadays. Humans are experiencing the inclusion of artificial agents in their environment as (...)