  1. Vulnerabilities and responsibilities: dealing with monsters in computer security. W. Pieters & L. Consoli - 2009 - Journal of Information, Communication and Ethics in Society 7 (4):243-257.
    Purpose: The purpose of this paper is to analyze information security assessment in terms of cultural categories and virtue ethics, in order to explain the cultural origin of certain types of security vulnerabilities, as well as to enable a proactive attitude towards preventing such vulnerabilities. Design/methodology/approach: Vulnerabilities in information security are compared to the concept of “monster” introduced by Martijntje Smits in the philosophy of technology. The applicability of different strategies for dealing with monsters to information security is discussed, and the strategies are linked (...)
  2. Security-by-Experiment: Lessons from Responsible Deployment in Cyberspace. Wolter Pieters, Dina Hadžiosmanović & Francien Dechesne - 2016 - Science and Engineering Ethics 22 (3):831-850.
    Conceiving new technologies as social experiments is a means to discuss responsible deployment of technologies that may have unknown and potentially harmful side-effects. Thus far, the uncertain outcomes addressed in the paradigm of new technologies as social experiments have been mostly safety-related, meaning that potential harm is caused by the design plus accidental events in the environment. In some domains, such as cyberspace, adversarial agents may be at least as important when it comes to undesirable effects of deployed technologies. In (...)
  3. Explanation and trust: what to tell the user in security and AI? [REVIEW] Wolter Pieters - 2011 - Ethics and Information Technology 13 (1):53-64.
    There is a common problem in artificial intelligence (AI) and information security. In AI, an expert system needs to be able to justify and explain a decision to the user. In information security, experts need to be able to explain to the public why a system is secure. In both cases, an important goal of explanation is to acquire or maintain the users’ trust. In this paper, I investigate the relation between explanation and trust in the context of computing science. (...)