  • Reliability in Machine Learning. Thomas Grote, Konstantin Genin & Emily Sullivan - forthcoming - Philosophy Compass.
    Issues of reliability are claiming center stage in the epistemology of machine learning. This paper unifies different branches of the literature and points to promising research directions, while also providing an accessible introduction to key concepts in statistics and machine learning, insofar as they bear on reliability.
  • The explanation game: a formal framework for interpretable machine learning. David S. Watson & Luciano Floridi - 2021 - Synthese 198 (10):9211-9242.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players to explore overlapping causal patterns of (...) (See the illustrative Pareto-frontier sketch after this list.)
  • On the Philosophy of Unsupervised Learning. David S. Watson - 2023 - Philosophy and Technology 36 (2):1-26.
    Unsupervised learning algorithms are widely used for many important statistical tasks with numerous applications in science and industry. Yet despite their prevalence, they have attracted remarkably little philosophical scrutiny to date. This stands in stark contrast to supervised and reinforcement learning algorithms, which have been widely studied and critically evaluated, often with an emphasis on ethical concerns. In this article, I analyze three canonical unsupervised learning problems: clustering, abstraction, and generative modeling. I argue that these methods raise unique epistemological and (...)
  • Varieties of Justification in Machine Learning. David Corfield - 2010 - Minds and Machines 20 (2):291-301.
    Forms of justification for inductive machine learning techniques are discussed and classified into four types. This is done with a view to bringing some of these techniques and their justificatory guarantees to the attention of philosophers, and to initiating a discussion as to whether they must be treated separately or can instead be viewed consistently within a single framework.
  • A Falsificationist Account of Artificial Neural Networks. Oliver Buchholz & Eric Raidl - forthcoming - The British Journal for the Philosophy of Science.
    Machine learning operates at the intersection of statistics and computer science. This raises the question of its underlying methodology. While much emphasis has been put on the close link between the process of learning from data and induction, the falsificationist component of machine learning has received little attention. In this paper, we argue that the idea of falsification is central to the methodology of machine learning. It is commonly thought that machine learning algorithms infer general prediction rules from past (...)
  • Simple Models in Complex Worlds: Occam’s Razor and Statistical Learning Theory. Falco J. Bargagli Stoffi, Gustavo Cevolani & Giorgio Gnecco - 2022 - Minds and Machines 32 (1):13-42.
    The idea that “simplicity is a sign of truth”, and the related “Occam’s razor” principle, stating that, all other things being equal, simpler models should be preferred to more complex ones, have long been discussed in philosophy and science. We explore these ideas in the context of supervised machine learning, namely the branch of artificial intelligence that studies algorithms which balance simplicity and accuracy in order to effectively learn about the features of the underlying domain. Focusing on statistical learning theory, (...) (See the simplicity-accuracy sketch after this list.)
  • Falsifiable implies Learnable. David Balduzzi - manuscript.
    The paper demonstrates that falsifiability is fundamental to learning. We prove the following theorem for statistical learning and sequential prediction: If a theory is falsifiable then it is learnable -- i.e. admits a strategy that predicts optimally. An analogous result is shown for universal induction.
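
A brief gloss on the Watson & Floridi entry above: the sketch below is an illustration only, not code or data from the paper. The explanation names and (accuracy, simplicity, relevance) scores are hypothetical; the snippet simply keeps the candidate explanations that no other candidate beats on all three criteria at once, which is the basic idea behind a three-dimensional Pareto frontier.

```python
# Illustrative sketch only: a three-criterion Pareto frontier over candidate
# explanations. Names and scores below are hypothetical; the paper's actual
# explanation game is far richer than this filter.

def dominates(a, b):
    """True if a scores at least as well as b on every criterion and strictly
    better on at least one (higher is better on all criteria)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_frontier(candidates):
    """Keep every candidate that no other candidate dominates."""
    return {
        name: scores
        for name, scores in candidates.items()
        if not any(
            dominates(other_scores, scores)
            for other_name, other_scores in candidates.items()
            if other_name != name
        )
    }

# Hypothetical explanations of a single algorithmic prediction, scored as
# (accuracy, simplicity, relevance), each in [0, 1].
candidates = {
    "full feature attribution": (0.95, 0.20, 0.60),
    "three-feature rule":       (0.85, 0.80, 0.70),
    "single counterfactual":    (0.70, 0.95, 0.90),
    "constant baseline":        (0.40, 0.95, 0.10),
}

print(pareto_frontier(candidates))
# The constant baseline is dominated by the single counterfactual and drops out;
# the other three survive because each wins on at least one criterion. That
# surviving set is the trade-off surface the abstract calls a Pareto frontier.
```

Which surviving explanation to prefer is then a question of how much weight the inquirer puts on each of the three criteria, which the filter itself does not decide.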
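Similarly, the simplicity-accuracy balance discussed by Bargagli Stoffi, Cevolani & Gnecco can be made concrete with a small overfitting demonstration. The sketch below is a generic illustration, not code from the paper: polynomial degree stands in for model complexity, and held-out error shows why, all else being equal, the simpler adequate model is preferred.

```python
# Illustrative sketch only, not from the paper: polynomial degree as a stand-in
# for model complexity, with held-out error showing the simplicity-accuracy
# balance behind Occam's razor in supervised learning.
import numpy as np

rng = np.random.default_rng(0)

def target(x):
    """Smooth 'true' regression function the learner is trying to recover."""
    return np.sin(2 * np.pi * x)

# Small noisy training set and a larger held-out test set.
x_train = rng.uniform(0.0, 1.0, 30)
y_train = target(x_train) + rng.normal(0.0, 0.2, x_train.size)
x_test = rng.uniform(0.0, 1.0, 200)
y_test = target(x_test) + rng.normal(0.0, 0.2, x_test.size)

for degree in (1, 3, 9):
    coeffs = np.polyfit(x_train, y_train, degree)  # fit on the training data only
    train_mse = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x_test) - y_test) ** 2)
    print(f"degree {degree}: train MSE {train_mse:.3f}, test MSE {test_mse:.3f}")

# Training error cannot increase as the degree grows, but held-out error
# typically bottoms out at a moderate degree: among models that fit the data
# comparably well, the simpler one tends to generalise better, which is the
# statistical-learning reading of "prefer the simpler model, all else equal".
```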