  • On the Opacity of Deep Neural Networks. Anders Søgaard - forthcoming - Canadian Journal of Philosophy:1-16.
    Deep neural networks are said to be opaque, impeding the development of safe and trustworthy artificial intelligence, but where this opacity stems from is less clear. What are the sufficient properties for neural network opacity? Here, I discuss five common properties of deep neural networks and two different kinds of opacity. Which of these properties are sufficient for what type of opacity? I show how each kind of opacity stems from only one of these five properties, and then discuss to (...)
  • AI and the need for justification (to the patient). Anantharaman Muralidharan, Julian Savulescu & G. Owen Schaefer - 2024 - Ethics and Information Technology 26 (1):1-12.
    This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient’s values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided (...)
  • Scientific Exploration and Explainable Artificial Intelligence. Carlos Zednik & Hannes Boelsen - 2022 - Minds and Machines 32 (1):219-239.
    Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future (...)
  • Model Organisms as Scientific Representations. Lorenzo Sartori - forthcoming - British Journal for the Philosophy of Science.
  • Understanding via exemplification in XAI: how explaining image classification benefits from exemplars. Sara Mann - forthcoming - AI and Society:1-16.
    Artificial intelligence (AI) systems that perform image classification tasks are being used to great success in many application contexts. However, many of these systems are opaque, even to experts. This lack of understanding can be problematic for ethical, legal, or practical reasons. The research field Explainable AI (XAI) has therefore developed several approaches to explain image classifiers. The hope is to bring about understanding, e.g., regarding why certain images are classified as belonging to a particular target class. Most of these (...)
  • Expert judgment in climate science: How it is used and how it can be justified. Mason Majszak & Julie Jebeile - 2023 - Studies in History and Philosophy of Science 100 (C):32-38.
    Like any science marked by high uncertainty, climate science is characterized by a widespread use of expert judgment. In this paper, we first show that, in climate science, expert judgment is used to overcome uncertainty, thus playing a crucial role in the domain and even at times supplanting models. One is left to wonder to what extent it is legitimate to grant expert judgment such epistemic superiority in the climate context, especially as the production of expert (...)
  • Traveling with TARDIS. Parameterization and transferability in molecular modeling and simulation. Johannes Lenhard & Hans Hasse - 2023 - Synthese 201 (4):1-18.
    The English language has adopted the word Tardis for something that looks simple from the outside but is much more complicated when inspected from the inside. The word comes from a BBC science fiction series, in which the Tardis is a machine for traveling in time and space that looks like a phone booth from the outside. This paper claims that simulation models are a Tardis in a way that calls into question their transferability. The argument is developed taking Molecular (...)
  • The Non-theory-driven Character of Computer Simulations and Their Role as Exploratory Strategies. Juan M. Durán - 2023 - Minds and Machines 33 (3):487-505.
    In this article, I focus on the role of computer simulations as exploratory strategies. I begin by establishing the non-theory-driven nature of simulations. This refers to their ability to characterize phenomena without relying on a predefined conceptual framework that is provided by an implemented mathematical model. Drawing on Steinle’s notion of exploratory experimentation and Gelfert’s work on exploratory models, I present three exploratory strategies for computer simulations: (1) starting points and continuation of scientific inquiry, (2) varying the parameters, and (3) (...)
  • Philosophy of science at sea: Clarifying the interpretability of machine learning. Claus Beisbart & Tim Räz - 2022 - Philosophy Compass 17 (6):e12830.
    Philosophy Compass, Volume 17, Issue 6, June 2022.