3 found
Order:
  1. The responsibility gap: Ascribing responsibility for the actions of learning automata. Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle not capable of predicting the future machine behaviour any more, and thus cannot be held morally responsible or liable for it. The society must decide between not using this kind of machine any more (which is not a (...)
    151 citations
  2. Robot Lies in Health Care: When Is Deception Morally Permissible? Andreas Matthias - 2015 - Kennedy Institute of Ethics Journal 25 (2):169-192.
    From the very beginnings of Artificial Intelligence, the users’ misjudgment of a machine’s capabilities has been one of the recurrent topoi. Weizenbaum reports the surprising reaction of users to the crude conversational capabilities of the now-famous “Eliza” program: I was startled to see how quickly and how deeply people conversing with “Doctor” became emotionally involved with the computer and how unequivocally they anthropomorphized it. Once my secretary, who had watched me work on the program for many months and therefore surely (...)
    7 citations
  3. Dignity and Dissent in Humans and Non-humans. Andreas Matthias - 2020 - Science and Engineering Ethics 26 (5):2497-2510.
    Is there a difference between human beings and those based on artificial intelligence that would affect their ability to be subjects of dignity? This paper first examines the philosophical notion of dignity as Immanuel Kant derives it from the moral autonomy of the individual. It then asks whether animals and AI systems can claim Kantian dignity or whether there is a sharp divide between human beings, animals and AI systems regarding their ability to be subjects of dignity. How this question (...)
    1 citation