Order:
  1. Who is controlling whom? Reframing “meaningful human control” of AI systems in security. Pascal Vörös, Serhiy Kandul, Thomas Burri & Markus Christen - 2023 - Ethics and Information Technology 25 (1):1-7.
    Decisions in security contexts, including armed conflict, law enforcement, and disaster relief, often must be made under conditions of limited information, stress, and time pressure. Because AI systems can provide a certain amount of relief in such contexts, they will become increasingly important, whether as decision-support or decision-making systems. However, given that human life may be at stake in such situations, moral responsibility for these decisions should remain with humans. Hence the idea of “meaningful human (...)
  2. An Evaluation Schema for the Ethical Use of Autonomous Robotic Systems in Security Applications.Markus Christen, Thomas Burri, Joseph O. Chapa, Raphael Salvi, Filippo Santoni de Sio & John P. Sullins - 2017 - University of Zurich Digital Society Initiative White Paper Series, No. 1.
    We propose a multi-step evaluation schema designed to help procurement agencies and others to examine the ethical dimensions of autonomous systems to be applied in the security sector, including autonomous weapons systems.
  3. Human control redressed: comparing AI and human predictability in a real-effort task. Serhiy Kandul, Vincent Micheli, Juliane Beck, Thomas Burri, François Fleuret, Markus Kneer & Markus Christen - forthcoming.
    Predictability is a prerequisite for effective human control of artificial intelligence (AI). The inability to predict malfunctioning of AI, for example, impedes timely human intervention. In this paper, we empirically investigate how AI’s predictability compares to the predictability of humans in a real-effort task. We show that humans are worse at predicting AI performance than at predicting human performance. Importantly, participants are not aware of the differences in relative predictability of AI and overestimate their prediction skills. These results raise doubts (...)