4 found
  1.  96
    The responsibility gap: Ascribing responsibility for the actions of learning automata. Andreas Matthias - 2004 - Ethics and Information Technology 6 (3): 175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation in which the manufacturer/operator of the machine is in principle no longer capable of predicting the machine’s future behaviour, and thus cannot be held morally responsible or liable for it. Society must decide between no longer using this kind of machine (which is not a (...)
    172 citations
  2.  78
    Robot Lies in Health Care: When Is Deception Morally Permissible? Andreas Matthias - 2015 - Kennedy Institute of Ethics Journal 25 (2): 169-162.
    Autonomous robots are increasingly interacting with users who have limited knowledge of robotics and are likely to have an erroneous mental model of the robot’s workings, capabilities, and internal structure. The robot’s real capabilities may diverge from this mental model to the extent that one might accuse the robot’s manufacturer of deceiving the user, especially in cases where the user naturally tends to ascribe exaggerated capabilities to the machine (e.g. conversational systems in elder-care contexts, or toy robots in child care). (...)
    9 citations
  3. The responsibility gap: Ascribing responsibility for the actions of learning automata. [REVIEW] Andreas Matthias - 2004 - Ethics and Information Technology 6 (3): 175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation in which the manufacturer/operator of the machine is in principle no longer capable of predicting the machine’s future behaviour, and thus cannot be held morally responsible or liable for it. Society must decide between no longer using this kind of machine (which is not a (...)
    165 citations
  4.  23
    Dignity and Dissent in Humans and Non-humans. Andreas Matthias - 2020 - Science and Engineering Ethics 26 (5): 2497-2510.
    Is there a difference between human beings and those based on artificial intelligence that would affect their ability to be subjects of dignity? This paper first examines the philosophical notion of dignity as Immanuel Kant derives it from the moral autonomy of the individual. It then asks whether animals and AI systems can claim Kantian dignity or whether there is a sharp divide between human beings, animals and AI systems regarding their ability to be subjects of dignity. How this question (...)
    1 citation