  • A Study of Technological Intentionality in C++ and Generative Adversarial Model: Phenomenological and Postphenomenological Perspectives. Dmytro Mykhailov & Nicola Liberati - forthcoming - Foundations of Science:1-17.
    This paper aims to highlight the life of computer technologies to understand what kind of ‘technological intentionality’ is present in computers based upon the phenomenological elements constituting the objects in general. Such a study can better explain the effects of new digital technologies on our society and highlight the role of digital technologies by focusing on their activities. Even if Husserlian phenomenology rarely talks about technologies, some of its aspects can be used to address the actions performed by the digital (...)
  • Disengagement with Ethics in Robotics as a Tacit Form of Dehumanisation. Karolina Zawieska - 2020 - AI and Society 35 (4):869-883.
    Over the past two decades, ethical challenges related to robotics technologies have gained increasing interest among different research and non-academic communities, in particular through the field of roboethics. While the reasons to address roboethics are clear, why not to engage with ethics needs to be better understood. This paper focuses on a limited or lacking engagement with ethics that takes place within some parts of the robotics community and its implications for the conceptualisation of the human being. The underlying assumption (...)
  • A Neo-Aristotelian Perspective on the Need for Artificial Moral Agents. Alejo José G. Sison & Dulce M. Redín - forthcoming - AI and Society:1-19.
    We examine Van Wynsberghe and Robbins' critique of the need for Artificial Moral Agents and its rebuttal by Formosa and Ryan, set against a neo-Aristotelian ethical background. Neither Van Wynsberghe and Robbins' essay nor Formosa and Ryan's is explicitly framed within the teachings of a specific ethical school. The former appeals to the lack of "both empirical and intuitive support" for AMAs, and the latter opts for "argumentative breadth over depth", meaning to provide "the essential groundwork for making an all (...)
  • Making Moral Machines: Why We Need Artificial Moral Agents. Paul Formosa & Malcolm Ryan - forthcoming - AI and Society.
    As robots and Artificial Intelligences become more enmeshed in rich social contexts, it seems inevitable that we will have to make them into moral machines equipped with moral skills. Apart from the technical difficulties of how we could achieve this goal, we can also ask the ethical question of whether we should seek to create such Artificial Moral Agents (AMAs). Recently, several papers have argued that we have strong reasons not to develop AMAs. In response, we develop a comprehensive analysis (...)
  • Can We Program or Train Robots to Be Good? Amanda Sharkey - 2020 - Ethics and Information Technology 22 (4):283-295.
    As robots are deployed in a widening range of situations, it is necessary to develop a clearer position about whether or not they can be trusted to make good moral decisions. In this paper, we take a realistic look at recent attempts to program and to train robots to develop some form of moral competence. Examples of implemented robot behaviours that have been described as 'ethical', or 'minimally ethical' are considered, although they are found to only operate in quite constrained (...)
  • Critiquing the Reasons for Making Artificial Moral Agents. Aimee van Wynsberghe & Scott Robbins - 2019 - Science and Engineering Ethics 25 (3):719-735.
    Many industry leaders and academics from the field of machine ethics would have us believe that the inevitability of robots coming to have a larger role in our lives demands that robots be endowed with moral reasoning capabilities. Robots endowed in this way may be referred to as artificial moral agents. Reasons often given for developing AMAs are: the prevention of harm, the necessity for public trust, the prevention of immoral use, such machines are better moral reasoners than humans, and (...)