Results for 'Algorithmic explainability, Explanation game, Interpretable machine learning, Pareto frontier, Relevance'

1000+ found
  1. The explanation game: a formal framework for interpretable machine learning. David S. Watson & Luciano Floridi - 2020 - Synthese 198 (10):1-32.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, (...)
    16 citations
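The three-way trade-off described in this abstract is a standard multi-objective optimisation idea, and Pareto dominance is easy to make concrete in code. Below is a minimal sketch, not the authors' formalism: the candidate explanations and their numeric scores for accuracy, simplicity, and relevance are hypothetical stand-ins.

```python
from typing import NamedTuple

class Explanation(NamedTuple):
    label: str
    accuracy: float    # fidelity to the model's prediction
    simplicity: float  # e.g., inverse of rule length
    relevance: float   # fit to the inquirer's question

def dominates(a: Explanation, b: Explanation) -> bool:
    """a dominates b if a is at least as good on every criterion
    and strictly better on at least one."""
    ge = (a.accuracy >= b.accuracy and a.simplicity >= b.simplicity
          and a.relevance >= b.relevance)
    gt = (a.accuracy > b.accuracy or a.simplicity > b.simplicity
          or a.relevance > b.relevance)
    return ge and gt

def pareto_frontier(candidates: list[Explanation]) -> list[Explanation]:
    """Keep the candidates that no other candidate dominates."""
    return [c for c in candidates
            if not any(dominates(o, c) for o in candidates)]

candidates = [
    Explanation("full decision tree", accuracy=0.98, simplicity=0.20, relevance=0.6),
    Explanation("two-rule summary", accuracy=0.85, simplicity=0.90, relevance=0.7),
    Explanation("single counterfactual", accuracy=0.80, simplicity=0.95, relevance=0.9),
    Explanation("random features", accuracy=0.40, simplicity=0.90, relevance=0.1),
]
print(pareto_frontier(candidates))  # the last candidate is dominated and drops out
```

An explanation survives onto the frontier unless some rival is at least as good on all three criteria and strictly better on one, which is all that "optimal trade-off" means in this setting.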
  2. The explanation game: a formal framework for interpretable machine learning. David S. Watson & Luciano Floridi - 2021 - Synthese 198 (10):9211-9242.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation(s) for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, allowing the players (...)
    14 citations
  3. The Explanation Game: A Formal Framework for Interpretable Machine Learning. David S. Watson & Luciano Floridi - 2021 - In Josh Cowls & Jessica Morley (eds.), The 2020 Yearbook of the Digital Ethics Lab. Springer Verlag. pp. 109-143.
    We propose a formal framework for interpretable machine learning. Combining elements from statistical learning, causal interventionism, and decision theory, we design an idealised explanation game in which players collaborate to find the best explanation for a given algorithmic prediction. Through an iterative procedure of questions and answers, the players establish a three-dimensional Pareto frontier that describes the optimal trade-offs between explanatory accuracy, simplicity, and relevance. Multiple rounds are played at different levels of abstraction, (...)
    10 citations
  4. Conceptual challenges for interpretable machine learning. David S. Watson - 2022 - Synthese 200 (2):1-33.
    As machine learning has gradually entered into ever more sectors of public and private life, there has been a growing demand for algorithmic explainability. How can we make the predictions of complex statistical models more intelligible to end users? A subdiscipline of computer science known as interpretable machine learning (IML) has emerged to address this urgent question. Numerous influential methods have been proposed, from local linear approximations to rule lists and counterfactuals. In this article, I highlight (...)
    7 citations
  5. Explainable machine learning practices: opening another black box for reliable medical AI. Emanuele Ratti & Mark Graves - 2022 - AI and Ethics:1-14.
    In the past few years, machine learning (ML) tools have been implemented with success in the medical context. However, several practitioners have raised concerns about the lack of transparency—at the algorithmic level—of many of these tools; and solutions from the field of explainable AI (XAI) have been seen as a way to open the ‘black box’ and make the tools more trustworthy. Recently, Alex London has argued that in the medical context we do not need machine learning (...)
    5 citations
  6. Machine learning in medicine: should the pursuit of enhanced interpretability be abandoned? Chang Ho Yoon, Robert Torrance & Naomi Scheinerman - 2022 - Journal of Medical Ethics 48 (9):581-585.
    We argue that interpretability should have primacy alongside empiricism for several reasons: first, if machine learning models are beginning to make some high-risk healthcare decisions in place of clinicians, these models pose a novel medicolegal and ethical frontier that is incompletely addressed by current methods of appraising medical interventions like pharmacological therapies; second, a number of judicial precedents underpinning medical liability and negligence are compromised when ‘autonomous’ ML recommendations are considered to be on a par with human instruction in (...)
    6 citations
  7. Justificatory explanations in machine learning: for increased transparency through documenting how key concepts drive and underpin design and engineering decisions. David Casacuberta, Ariel Guersenzvaig & Cristian Moyano-Fernández - 2024 - AI and Society 39 (1):279-293.
    Given the pervasiveness of AI systems and their potential negative effects on people’s lives (especially among already marginalised groups), it becomes imperative to understand what goes on when an AI system generates a result, and on what grounds that result is reached. There are sustained technical efforts to make systems more “explainable” by reducing their opaqueness and increasing their interpretability and explainability. In this paper, we explore an alternative non-technical approach towards explainability that complements existing ones. Leaving aside technical, statistical, (...)
    1 citation
  8. Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability? Paul B. de Laat - 2018 - Philosophy and Technology 31 (4):525-541.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves (“gaming the system” in particular), the potential loss of companies’ competitive edge, and the limited gains in answerability to be (...)
    26 citations
  9. Decoding Intracranial EEG With Machine Learning: A Systematic Review. Nykan Mirchi, Nebras M. Warsi, Frederick Zhang, Simeon M. Wong, Hrishikesh Suresh, Karim Mithani, Lauren Erdman & George M. Ibrahim - 2022 - Frontiers in Human Neuroscience 16.
    Advances in intracranial electroencephalography and neurophysiology have enabled the study of previously inaccessible brain regions with high fidelity temporal and spatial resolution. Studies of iEEG have revealed a rich neural code subserving healthy brain function and which fails in disease states. Machine learning, a form of artificial intelligence, is a modern tool that may be able to better decode complex neural signals and enhance interpretation of these data. To date, a number of publications have applied ML to iEEG, but (...)
  10. Clinical applications of machine learning algorithms: beyond the black box. David S. Watson, Jenny Krutzinna, Ian N. Bruce, Christopher E. M. Griffiths, Iain B. McInnes, Michael R. Barnes & Luciano Floridi - 2019 - British Medical Journal 364:I886.
    Machine learning algorithms may radically improve our ability to diagnose and treat disease. For moral, legal, and scientific reasons, it is essential that doctors and patients be able to understand and explain the predictions of these models. Scalable, customisable, and ethical solutions can be achieved by working together with relevant stakeholders, including patients, data scientists, and policy makers.
    16 citations
  11. Explaining Machine Learning Decisions. John Zerilli - 2022 - Philosophy of Science 89 (1):1-19.
    The operations of deep networks are widely acknowledged to be inscrutable. The growing field of Explainable AI has emerged in direct response to this problem. However, owing to the nature of the opacity in question, XAI has been forced to prioritise interpretability at the expense of completeness, and even realism, so that its explanations are frequently interpretable without being underpinned by more comprehensive explanations faithful to the way a network computes its predictions. While this has been taken to be (...)
    10 citations
  12. Transparency as design publicity: explaining and justifying inscrutable algorithms. Michele Loi, Andrea Ferrario & Eleonora Viganò - 2020 - Ethics and Information Technology 23 (3):253-263.
    In this paper we argue that transparency of machine learning algorithms, just as explanation, can be defined at different levels of abstraction. We criticize recent attempts to identify the explanation of black box algorithms with making their decisions (post-hoc) interpretable, focusing our discussion on counterfactual explanations. These approaches to explanation simplify the real nature of the black boxes and risk misleading the public about the normative features of a model. We propose a new form of (...)
    12 citations
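For context on the approach criticised in this abstract: a post-hoc counterfactual explanation reports a small change to an input that would have flipped the model's decision. The sketch below is a deliberately naive illustration, not any cited author's method: it assumes a scikit-learn-style classifier and greedily perturbs one feature at a time (published methods, such as Wachter et al.'s, cast this as an optimisation problem instead).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy stand-in for a black-box decision model (features are hypothetical).
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X.sum(axis=1) > 0).astype(int)  # true boundary: x0 + x1 = 0
model = LogisticRegression().fit(X, y)

def counterfactual(model, x, step=0.1, max_steps=50):
    """Nudge one feature at a time until the predicted class flips.
    Returns the modified input, or None if no flip is found."""
    original = model.predict(x.reshape(1, -1))[0]
    for i in range(x.size):
        for direction in (1.0, -1.0):
            candidate = x.astype(float)  # fresh copy per search direction
            for _ in range(max_steps):
                candidate[i] += direction * step
                if model.predict(candidate.reshape(1, -1))[0] != original:
                    return candidate
    return None

x = np.array([-0.5, -0.2])        # a 'rejected' input
print(counterfactual(model, x))   # e.g. raising feature 0 flips the decision
```

The output is read as "had feature 0 been roughly 0.8 higher, the decision would have differed", which is precisely the kind of post-hoc report the authors argue can simplify the black box and mislead about a model's normative features.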
  13. Explaining Explanations in AI. Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus (...)
    41 citations
  14. The Thick Machine: Anthropological AI between explanation and explication. Mathieu Jacomy, Asger Gehrt Olesen & Anders Kristian Munk - 2022 - Big Data and Society 9 (1).
    According to Clifford Geertz, the purpose of anthropology is not to explain culture but to explicate it. That should cause us to rethink our relationship with machine learning. It is, we contend, perfectly possible that machine learning algorithms, which are unable to explain, and could even be unexplainable themselves, can still be of critical use in a process of explication. Thus, we report on an experiment with anthropological AI. From a dataset of 175K Facebook comments, we trained a (...)
    2 citations
  15. What Machine Learning Can Tell Us About the Role of Language Dominance in the Diagnostic Accuracy of German LITMUS Non-word and Sentence Repetition Tasks. Lina Abed Ibrahim & István Fekete - 2019 - Frontiers in Psychology 9.
    This study investigates the performance of 21 monolingual and 56 bilingual children aged 5;6-9;0 on German-LITMUS-sentence-repetition (SRT; Hamann et al., 2013) and nonword-repetition-tasks (NWRT; Grimm et al., 2014), which were constructed according to the LITMUS-principles (Language Impairment Testing in Multilingual Settings; Armon-Lotem et al., 2015). Both tasks incorporate complex structures shown to be cross-linguistically challenging for children with Specific Language Impairment (SLI) and aim at minimizing bias against bilingual children while still being indicative of the presence of language impairment across (...)
    1 citation
  16. Machine Learning Against Terrorism: How Big Data Collection and Analysis Influences the Privacy-Security Dilemma. H. M. Verhelst, A. W. Stannat & G. Mecacci - 2020 - Science and Engineering Ethics 26 (6):2975-2984.
    Rapid advancements in machine learning techniques allow mass surveillance to be applied on larger scales and utilize more and more personal data. These developments demand reconsideration of the privacy-security dilemma, which describes the tradeoffs between national security interests and individual privacy concerns. By investigating mass surveillance techniques that use bulk data collection and machine learning algorithms, we show why these methods are unlikely to pinpoint terrorists in order to prevent attacks. The diverse characteristics of terrorist attacks—especially when considering (...)
  17. Machines Learn Better with Better Data Ontology: Lessons from Philosophy of Induction and Machine Learning Practice. Dan Li - 2023 - Minds and Machines 33 (3):429-450.
    As scientists start to adopt machine learning (ML) as a research tool, the security of ML and the knowledge generated become a concern. In this paper, I explain how supervised ML can be improved with better data ontology, or the way we make categories and turn information into data. More specifically, we should design data ontology in such a way that it is consistent with the knowledge that we have about the target phenomenon, so that such ontology can help us (...)
  18. AI, Explainability and Public Reason: The Argument from the Limitations of the Human Mind. Jocelyn Maclure - 2021 - Minds and Machines 31 (3):421-438.
    Machine learning-based AI algorithms lack transparency. In this article, I offer an interpretation of AI’s explainability problem and highlight its ethical saliency. I try to make the case for the legal enforcement of a strong explainability requirement: human organizations which decide to automate decision-making should be legally obliged to demonstrate the capacity to explain and justify the algorithmic decisions that have an impact on the wellbeing, rights, and opportunities of those affected by the decisions. This legal duty can (...)
    7 citations
  19. Machine Learning Classifiers to Evaluate Data From Gait Analysis With Depth Cameras in Patients With Parkinson’s Disease. Beatriz Muñoz-Ospina, Daniela Alvarez-Garcia, Hugo Juan Camilo Clavijo-Moran, Jaime Andrés Valderrama-Chaparro, Melisa García-Peña, Carlos Alfonso Herrán, Christian Camilo Urcuqui, Andrés Navarro-Cadavid & Jorge Orozco - 2022 - Frontiers in Human Neuroscience 16.
    Introduction: The assessments of the motor symptoms in Parkinson’s disease are usually limited to clinical rating scales and depend on the clinician’s experience. This study proposes a machine learning algorithm that uses variables from the upper and lower limbs to classify people with PD from healthy people, using data from a portable low-cost device; it can be used to support the diagnosis and follow-up of patients in developing countries and remote areas. Methods: We used the Kinect® eMotion system to capture the (...)
  20. Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability? Paul B. de Laat - 2018 - Philosophy and Technology 31 (4):525-541.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of companies’ competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms usually (...)
    28 citations
  21. Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability? Massimo Durante & Marcello D'Agostino - 2018 - Philosophy and Technology 31 (4):525-541.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of companies’ competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms usually (...)
    26 citations
  22. Narrative and Explanation: Explaining Anna Karenina in the Light of Its Epigraph. Marina Ludwigs - 2004 - Contagion: Journal of Violence, Mimesis, and Culture 11 (1):124-145.
    In lieu of an abstract, here is a brief excerpt of the content: NARRATIVE AND EXPLANATION: EXPLAINING ANNA KARENINA IN THE LIGHT OF ITS EPIGRAPH. Marina Ludwigs, University of California, Irvine. In this paper, I will be examining the relation of explanation to narrative, looking briefly at the theoretical side of the problematic and in more detail at specific explanatory issues that arise in Tolstoy's novel Anna Karenina. Although the use itself of the term "explanation" is not as visible in the humanities (...)
  23. The Use and Misuse of Counterfactuals in Ethical Machine Learning. Atoosa Kasirzadeh & Andrew Smart - 2021 - In ACM Conference on Fairness, Accountability, and Transparency (FAccT 21).
    The use of counterfactuals for considerations of algorithmic fairness and explainability is gaining prominence within the machine learning community and industry. This paper argues for more caution with the use of counterfactuals when the facts to be considered are social categories such as race or gender. We review a broad body of papers from philosophy and social sciences on social ontology and the semantics of counterfactuals, and we conclude that the counterfactual approach in machine learning fairness and (...)
    3 citations
  24. Predicting Student Performance Using Machine Learning in fNIRS Data. Amanda Yumi Ambriola Oku & João Ricardo Sato - 2021 - Frontiers in Human Neuroscience 15.
    Increasing student involvement in classes has always been a challenge for teachers and school managers. In online learning, some interactivity mechanisms like quizzes are increasingly used to engage students during classes and tasks. However, there is a high demand for tools that evaluate the efficiency of these mechanisms. In order to distinguish between high and low levels of engagement in tasks, it is possible to monitor brain activity through functional near-infrared spectroscopy. The main advantages of this technique are portability, low (...)
  25. Linking Human And Machine Behavior: A New Approach to Evaluate Training Data Quality for Beneficial Machine Learning. Thilo Hagendorff - 2021 - Minds and Machines 31 (4):563-593.
    Machine behavior that is based on learning algorithms can be significantly influenced by the exposure to data of different qualities. Up to now, those qualities are solely measured in technical terms, but not in ethical ones, despite the significant role of training and annotation data in supervised machine learning. This is the first study to fill this gap by describing new dimensions of data quality for supervised machine learning applications. Based on the rationale that different social and (...)
    2 citations
  26. The Pragmatic Turn in Explainable Artificial Intelligence. Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory (...)
    29 citations
  27. Automatic Detection of Focal Cortical Dysplasia Type II in MRI: Is the Application of Surface-Based Morphometry and Machine Learning Promising? Zohreh Ganji, Mohsen Aghaee Hakak, Seyed Amir Zamanpour & Hoda Zare - 2021 - Frontiers in Human Neuroscience 15.
    Background and Objectives: Focal cortical dysplasia (FCD) is a type of malformation of cortical development and one of the leading causes of drug-resistant epilepsy. Postoperative results improve the diagnosis of lesions on structural MRIs. Advances in quantitative algorithms have increased the identification of FCD lesions. However, due to significant differences in the size, shape, and location of the lesion across patients, the considerable time needed for an objective diagnosis of the lesion, and the dependence on individual interpretation, sensitive approaches (...)
  28. Identifying Alcohol Use Disorder With Resting State Functional Magnetic Resonance Imaging Data: A Comparison Among Machine Learning Classifiers. Victor M. Vergara, Flor A. Espinoza & Vince D. Calhoun - 2022 - Frontiers in Psychology 13.
    Alcohol use disorder (AUD) is a burden to society, creating social and health problems. AUD and its effects on the brain are difficult to detect and assess. This problem is compounded by the comorbid use of other substances such as nicotine, which has been present in previous studies. Recent machine learning algorithms have attracted the attention of researchers as a useful tool for studying and detecting AUD. This work uses AUD and control samples free of any other substance use to (...)
  29. Machine Learning and the Future of Scientific Explanation. Florian J. Boge & Michael Poznic - 2021 - Journal for General Philosophy of Science / Zeitschrift für Allgemeine Wissenschaftstheorie 52 (1):171-176.
    The workshop “Machine Learning: Prediction Without Explanation?” brought together philosophers of science and scholars from various fields who study and employ Machine Learning (ML) techniques, in order to discuss the changing face of science in the light of ML's constantly growing use. One major focus of the workshop was on the impact of ML on the concept and value of scientific explanation. One may speculate whether ML’s increased use in science exemplifies a paradigmatic turn towards mere (...)
    4 citations
  30. Yield Response of Different Rice Ecotypes to Meteorological, Agro-Chemical, and Soil Physiographic Factors for Interpretable Precision Agriculture Using Extreme Gradient Boosting and Support Vector Regression. Md Sabbir Ahmed, Md Tasin Tazwar, Haseen Khan, Swadhin Roy, Junaed Iqbal, Md Golam Rabiul Alam, Md Rafiul Hassan & Mohammad Mehedi Hassan - 2022 - Complexity 2022:1-20.
    The food security of more than half of the world’s population depends on rice production which is one of the key objectives of precision agriculture. The traditional rice almanac used astronomical and climate factors to estimate yield response. However, this research integrated meteorological, agro-chemical, and soil physiographic factors for yield response prediction. Besides, the impact of those factors on the production of three major rice ecotypes has also been studied in this research. Moreover, this study found a different set of (...)
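The pairing in this title follows a common pattern: a gradient-boosted tree ensemble supplies built-in feature importances (the "interpretable" part), with support vector regression as a non-linear baseline. Below is a minimal sketch on synthetic data, with scikit-learn's GradientBoostingRegressor standing in for XGBoost; the feature names and data are illustrative only, not the paper's dataset.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.svm import SVR
from sklearn.model_selection import train_test_split

# Synthetic stand-ins for yield drivers: rainfall, temperature,
# fertiliser dose, soil pH (names are hypothetical).
rng = np.random.default_rng(42)
X = rng.uniform(size=(500, 4))
y = 3.0 * X[:, 0] + 1.5 * X[:, 2] + rng.normal(scale=0.1, size=500)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gbr = GradientBoostingRegressor().fit(X_tr, y_tr)
svr = SVR().fit(X_tr, y_tr)
print("boosting R2:", gbr.score(X_te, y_te))
print("SVR R2:     ", svr.score(X_te, y_te))

# The tree ensemble exposes per-feature importances, which is what
# makes this pairing attractive for "interpretable" yield modelling.
for name, imp in zip(["rainfall", "temperature", "fertiliser", "pH"],
                     gbr.feature_importances_):
    print(f"{name}: {imp:.2f}")
```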
  31. On the Juridical Relevance of the Phenomenological Notion of Person in Max Scheler and Edith Stein. Francesco Galofaro - 2022 - International Journal for the Semiotics of Law - Revue Internationale de Sémiotique Juridique 35 (4):1317-1331.
    The paper presents a semiotic interpretation of the phenomenological debate on the notion of person, focusing in particular on Edmund Husserl, Max Scheler, and Edith Stein. The semiotic interpretation lets us identify the categories that orient the debate: collective/individual and subject/object. As we will see, the phenomenological analysis of the relation between person and social units such as the community, the association, and the mass shows similarities to contemporary socio-semiotic models. The difference between community, association, and mass provides an (...) for the establishment of legal systems. The notion of person we inherit from phenomenology can also be useful in facing juridical problems raised by the use of non-human decision-makers such as machine learning algorithms and artificial intelligence applications.
  32. CortexVR: Immersive analysis and training of cognitive executive functions of soccer players using virtual reality and machine learning. Christian Krupitzer, Jens Naber, Jan-Philipp Stauffert, Jan Mayer, Jan Spielmann, Paul Ehmann, Noel Boci, Maurice Bürkle, André Ho, Clemens Komorek, Felix Heinickel, Samuel Kounev, Christian Becker & Marc Erich Latoschik - 2022 - Frontiers in Psychology 13.
    Goal: This paper presents an immersive Virtual Reality system to analyze and train Executive Functions (EFs) of soccer players. EFs are important cognitive functions for athletes; they are a relevant quality that distinguishes amateurs from professionals. Method: The system is based on immersive technology; hence, the user interacts naturally and experiences a training session in a virtual world. The proposed system has a modular design supporting the extension of various so-called game modes. Game modes combine selected game mechanics with specific simulation content to target (...)
  33. Machine learning and social theory: Collective machine behaviour in algorithmic trading. Christian Borch - 2022 - European Journal of Social Theory 25 (4):503-520.
    This article examines what the rise in machine learning systems might mean for social theory. Focusing on financial markets, in which algorithmic securities trading founded on ML-based decision-making is gaining traction, I discuss the extent to which established sociological notions remain relevant or demand a reconsideration when applied to an ML context. I argue that ML systems have some capacity for agency and for engaging in forms of collective machine behaviour, in which ML systems interact with other (...)
  34. Believing in Black Boxes: Must Machine Learning in Healthcare be Explainable to be Evidence-Based? Liam McCoy, Connor Brenna, Stacy Chen, Karina Vold & Sunit Das - forthcoming - Journal of Clinical Epidemiology.
    Objective: To examine the role of explainability in machine learning for healthcare (MLHC), and its necessity and significance with respect to effective and ethical MLHC application. Study Design and Setting: This commentary engages with the growing and dynamic corpus of literature on the use of MLHC and artificial intelligence (AI) in medicine, which provides the context for a focused narrative review of arguments presented in favour of and in opposition to explainability in MLHC. Results: We find that concerns regarding explainability (...)
    2 citations
  35. ANNs and Unifying Explanations: Reply to Erasmus, Brunet, and Fisher. Yunus Prasetya - 2022 - Philosophy and Technology 35 (2):1-9.
    In a recent article, Erasmus, Brunet, and Fisher (2021) argue that Artificial Neural Networks (ANNs) are explainable. They survey four influential accounts of explanation: the Deductive-Nomological model, the Inductive-Statistical model, the Causal-Mechanical model, and the New-Mechanist model. They argue that, on each of these accounts, the features that make something an explanation are invariant with regard to the complexity of the explanans and the explanandum. Therefore, they conclude, the complexity of ANNs (and other Machine Learning models) does (...)
    1 citation
  36. Predicting and explaining with machine learning models: Social science as a touchstone. Oliver Buchholz & Thomas Grote - 2023 - Studies in History and Philosophy of Science Part A 102 (C):60-69.
    Machine learning (ML) models recently led to major breakthroughs in predictive tasks in the natural sciences. Yet their benefits for the social sciences are less evident, as even high-profile studies on the prediction of life trajectories have proven largely unsuccessful – at least when measured by traditional criteria of scientific success. This paper tries to shed light on this remarkable performance gap. Comparing two social science case studies to a paradigm example from the natural sciences, we argue (...)
  37. Causal scientific explanations from machine learning. Stefan Buijsman - 2023 - Synthese 202 (6):1-16.
    Machine learning is used more and more in scientific contexts, from the recent breakthroughs with AlphaFold2 in protein fold prediction to the use of ML in parametrization for large climate/astronomy models. Yet it is unclear whether we can obtain scientific explanations from such models. I argue that when machine learning is used to conduct causal inference we can give a new positive answer to this question. However, these ML models are purpose-built models and there are technical results showing (...)
  38. On Explainable AI and Abductive Inference. Kyrylo Medianovskyi & Ahti-Veikko Pietarinen - 2022 - Philosophies 7 (2):35.
    Modern explainable AI methods remain far from providing human-like answers to ‘why’ questions, let alone those that satisfactorily agree with human-level understanding. Instead, the results that such methods provide boil down to sets of causal attributions. Currently, the choice of accepted attributions rests largely, if not solely, on the explainee’s understanding of the quality of explanations. The paper argues that such decisions may be transferred from a human to an XAI agent, provided that its machine-learning algorithms perform genuinely abductive (...)
    1 citation
  39. Why a Right to an Explanation of Algorithmic Decision-Making Should Exist: A Trust-Based Approach. Tae Wan Kim & Bryan R. Routledge - 2022 - Business Ethics Quarterly 32 (1):75-102.
    Businesses increasingly rely on algorithms that are data-trained sets of decision rules (i.e., the output of the processes often called “machine learning”) and implement decisions with little or no human intermediation. In this article, we provide a philosophical foundation for the claim that algorithmic decision-making gives rise to a “right to explanation.” It is often said that, in the digital era, informed consent is dead. This negative view originates from a rigid understanding that presumes informed consent is (...)
    7 citations
  40. Tree-based machine learning algorithms in the Internet of Things environment for multivariate flood status prediction. Salama A. Mostafa, Bashar Ahmed Khalaf, Ahmed Mahmood Khudhur, Ali Noori Kareem & Firas Mohammed Aswad - 2021 - Journal of Intelligent Systems 31 (1):1-14.
    Floods are one of the most common natural disasters in the world, affecting all aspects of life, including human beings, agriculture, industry, and education. Research on developing models for flood prediction has been ongoing for the past few years. These models are proposed and built to support risk reduction and policy-making and to limit the loss of human lives and the property damage associated with floods. However, flood status prediction is a complex process and demands extensive analyses of the factors leading to the occurrence (...)
  41. The Pragmatic Turn in Explainable Artificial Intelligence (XAI). Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory (...)
    30 citations
  42. The Unbearable Shallow Understanding of Deep Learning. Alessio Plebe & Giorgio Grasso - 2019 - Minds and Machines 29 (4):515-553.
    This paper analyzes the rapid and unexpected rise of deep learning within Artificial Intelligence and its applications. It tackles the possible reasons for this remarkable success, providing candidate paths towards a satisfactory explanation of why it works so well, at least in some domains. A historical account is given for the ups and downs, which have characterized neural networks research and its evolution from “shallow” to “deep” learning architectures. A precise account of “success” is given, in order to sieve (...)
    4 citations
  43. The virtue of simplicity: On machine learning models in algorithmic trading. Kristian Bondo Hansen - 2020 - Big Data and Society 7 (1).
    Machine learning models are becoming increasingly prevalent in algorithmic trading and investment management. The spread of machine learning in finance challenges existing practices of modelling and model use and creates a demand for practical solutions for how to manage the complexity pertaining to these techniques. Drawing on interviews with quants applying machine learning techniques to financial problems, the article examines how these people manage model complexity in the process of devising machine learning-powered trading algorithms. The (...)
    3 citations
  44. Against Interpretability: a Critical Examination of the Interpretability Problem in Machine Learning. Maya Krishnan - 2020 - Philosophy and Technology 33 (3):487-502.
    The usefulness of machine learning algorithms has led to their widespread adoption prior to the development of a conceptual framework for making sense of them. One common response to this situation is to say that machine learning suffers from a “black box problem.” That is, machine learning algorithms are “opaque” to human users, failing to be “interpretable” or “explicable” in terms that would render categorization procedures “understandable.” The purpose of this paper is to challenge the widespread (...)
    30 citations
  45. Fairer machine learning in the real world: Mitigating discrimination without collecting sensitive data. Reuben Binns & Michael Veale - 2017 - Big Data and Society 4 (2).
    Decisions based on algorithmic, machine learning models can be unfair, reproducing biases in historical data used to train them. While computational techniques are emerging to address aspects of these concerns through communities such as discrimination-aware data mining and fairness, accountability and transparency machine learning, their practical implementation faces real-world challenges. For legal, institutional or commercial reasons, organisations might not hold the data on sensitive attributes such as gender, ethnicity, sexuality or disability needed to diagnose and mitigate emergent (...)
    16 citations
  46. Deep Learning Opacity, and the Ethical Accountability of AI Systems. A New Perspective. Gianfranco Basti & Giuseppe Vitiello - 2023 - In Raffaela Giovagnoli & Robert Lowe (eds.), The Logic of Social Practices II. Springer Nature Switzerland. pp. 21-73.
    In this paper we analyse the conditions for attributing to AI autonomous systems the ontological status of “artificial moral agents”, in the context of the “distributed responsibility” between humans and machines in Machine Ethics (ME). In order to address the fundamental issue in ME of the unavoidable “opacity” of their decisions with ethical/legal relevance, we start from the neuroethical evidence in cognitive science. In humans, the “transparency” and then the “ethical accountability” of their actions as responsible moral agents (...)
  47. Machine learning in tutorials – Universal applicability, underinformed application, and other misconceptions. Andreas Breiter, Juliane Jarke & Hendrik Heuer - 2021 - Big Data and Society 8 (1).
    Machine learning has become a key component of contemporary information systems. Unlike prior information systems explicitly programmed in formal languages, ML systems infer rules from data. This paper shows what this difference means for the critical analysis of socio-technical systems based on machine learning. To provide a foundation for future critical analysis of machine learning-based systems, we engage with how the term is framed and constructed in self-education resources. For this, we analyze machine learning tutorials, an (...)
    1 citation
  48. Legal requirements on explainability in machine learning. Adrien Bibal, Michael Lognoul, Alexandre de Streel & Benoît Frénay - 2020 - Artificial Intelligence and Law 29 (2):149-169.
    Deep learning and other black-box models are becoming more and more popular today. Despite their high performance, they may not be accepted ethically or legally because of their lack of explainability. This paper presents the increasing number of legal requirements on machine learning model interpretability and explainability in the context of private and public decision making. It then explains how those legal requirements can be implemented into machine-learning models and concludes with a call for more inter-disciplinary research on (...)
    7 citations
  49. The End of Vagueness: Technological Epistemicism, Surveillance Capitalism, and Explainable Artificial Intelligence. Alison Duncan Kerr & Kevin Scharp - 2022 - Minds and Machines 32 (3):585-611.
    Artificial Intelligence (AI) pervades humanity in 2022, and it is notoriously difficult to understand how certain aspects of it work. There is a movement—Explainable Artificial Intelligence (XAI)—to develop new methods for explaining the behaviours of AI systems. We aim to highlight one important philosophical significance of XAI—it has a role to play in the elimination of vagueness. To show this, consider that the use of AI in what has been labeled surveillance capitalism has resulted in humans quickly gaining the capability (...)
    1 citation
  50. Understanding the Impact of Machine Learning on Labor and Education: A Time-Dependent Turing Test. Joseph Ganem - 2023 - Springer Nature Switzerland.
    This book provides a novel framework for understanding and revising labor markets and education policies in an era of machine learning. It posits that while learning and knowing both require thinking, learning is fundamentally different than knowing because it results in cognitive processes that change over time. Learning, in contrast to knowing, requires time and agency. Therefore, “learning algorithms”—that enable machines to modify their actions based on real-world experiences—are a fundamentally new form of artificial intelligence that have potential to (...)
Results 1-50 of 1000+