Results for 'Explainable artificial intelligence (XAI)'

40 found
  1. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions. Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together (...)
  2. Explainable Artificial Intelligence (XAI) to Enhance Trust Management in Intrusion Detection Systems Using Decision Tree Model. Basim Mahbooba, Mohan Timilsina, Radhya Sahal & Martin Serrano - 2021 - Complexity 2021:1-11.
    Despite the growing popularity of machine learning models in cyber-security applications, most of these models are perceived as a black box. eXplainable Artificial Intelligence (XAI) has become increasingly important for interpreting machine learning models and enhancing trust management by allowing human experts to understand the underlying data evidence and causal reasoning. In intrusion detection systems (IDS), the critical role of trust management is to understand the impact of malicious data in order to detect any intrusion in the system. (...)
    1 citation
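As an aside for readers new to the technique in entry 2: the sketch below illustrates, in broad strokes, how a decision tree yields directly inspectable rules. It is a minimal illustration under assumed synthetic data; the feature names packet_rate and payload_size are invented placeholders, not the paper's dataset or code.

```python
# Minimal sketch of an interpretable decision-tree detector (illustrative
# only; synthetic data and invented feature names, not the paper's code).
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for labeled network traffic (1 = intrusion).
X, y = make_classification(n_samples=500, n_features=2, n_informative=2,
                           n_redundant=0, random_state=0)

# A shallow tree keeps the rule set short enough for a human analyst to audit.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)

# export_text prints the learned rules as plain if/then conditions, which is
# what lets experts check the data evidence behind each alert.
print(export_text(tree, feature_names=["packet_rate", "payload_size"]))
```

The trade-off the abstract gestures at is visible here: capping max_depth may cost some accuracy, but it keeps every decision path readable end to end.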
  3. The Pragmatic Turn in Explainable Artificial Intelligence (XAI). Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies (...)
    30 citations
  4. What do we want from Explainable Artificial Intelligence (XAI)? – A stakeholder perspective on XAI and a conceptual model guiding interdisciplinary XAI research. Markus Langer, Daniel Oster, Timo Speith, Lena Kästner, Kevin Baum, Holger Hermanns, Eva Schmidt & Andreas Sesing - 2021 - Artificial Intelligence 296 (C):103473.
    Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast and spread across multiple, largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of (...)
    17 citations
  5. Is explainable artificial intelligence intrinsically valuable? Nathan Colaner - 2022 - AI and Society 37 (1):231-238.
    There is general consensus that explainable artificial intelligence is valuable, but there is significant divergence when we try to articulate why, exactly, it is desirable. This question must be distinguished from two other kinds of questions in the XAI literature that are sometimes asked and addressed simultaneously. The first and most obvious is the ‘how’ question—some version of: ‘how do we develop technical strategies to achieve XAI?’ Another question is specifying what kind of explanation is worth (...)
    7 citations
  6. Explainable Artificial Intelligence in Data Science. Joaquín Borrego-Díaz & Juan Galán-Páez - 2022 - Minds and Machines 32 (3):485-531.
    A widespread need to explain the behavior and outcomes of AI-based systems has emerged due to their ubiquitous presence, providing renewed momentum to the relatively new research area of eXplainable AI (XAI). Nowadays, the importance of XAI lies in the fact that the increasing transfer of control to this kind of system for decision making (or, at least, its use for assisting executive stakeholders) already affects many sensitive realms (as in politics, social sciences, or law). The decision-making power handover (...)
    1 citation
  7. A Means-End Account of Explainable Artificial Intelligence. Oliver Buchholz - 2023 - Synthese 202 (33):1-23.
    Explainable artificial intelligence (XAI) seeks to produce explanations for those machine learning methods which are deemed opaque. However, there is considerable disagreement about what this means and how to achieve it. Authors disagree on what should be explained (topic), to whom something should be explained (stakeholder), how something should be explained (instrument), and why something should be explained (goal). In this paper, I employ insights from means-end epistemology to structure the field. According to means-end epistemology, different means (...)
    3 citations
  8. From Responsibility to Reason-Giving Explainable Artificial Intelligence. Kevin Baum, Susanne Mantel, Timo Speith & Eva Schmidt - 2022 - Philosophy and Technology 35 (1):1-30.
    We argue that explainable artificial intelligence (XAI), specifically reason-giving XAI, often constitutes the most suitable way of ensuring that someone can properly be held responsible for decisions that are based on the outputs of artificial intelligence (AI) systems. We first show that, to close moral responsibility gaps (Matthias 2004), often a human in the loop is needed who is directly responsible for particular AI-supported decisions. Second, we appeal to the epistemic condition on moral responsibility to argue (...)
    11 citations
  9. The Pragmatic Turn in Explainable Artificial Intelligence. Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies (...)
    30 citations
  10. Subjectivity of Explainable Artificial Intelligence. Александр Николаевич Райков - 2022 - Russian Journal of Philosophical Sciences 65 (1):72-90.
    The article addresses the problem of identifying methods to develop the ability of artificial intelligence (AI) systems to provide explanations for their findings. This issue is not new, but, nowadays, the increasing complexity of AI systems is forcing scientists to intensify research in this direction. Modern neural networks contain hundreds of layers of neurons. The number of parameters of these networks reaches trillions, genetic algorithms generate thousands of generations of solutions, and the semantics of AI models become more (...)
    1 citation
  11. The End of Vagueness: Technological Epistemicism, Surveillance Capitalism, and Explainable Artificial Intelligence. Alison Duncan Kerr & Kevin Scharp - 2022 - Minds and Machines 32 (3):585-611.
    Artificial Intelligence (AI) pervades humanity in 2022, and it is notoriously difficult to understand how certain aspects of it work. There is a movement—Explainable Artificial Intelligence (XAI)—to develop new methods for explaining the behaviours of AI systems. We aim to highlight one important philosophical significance of XAI—it has a role to play in the elimination of vagueness. To show this, consider that the use of AI in what has been labeled surveillance capitalism has resulted in humans (...)
    1 citation
  12. Defining Explanation and Explanatory Depth in XAI. Stefan Buijsman - 2022 - Minds and Machines 32 (3):563-584.
    Explainable artificial intelligence (XAI) aims to help people understand black box algorithms, particularly their outputs. But what are these explanations and when is one explanation better than another? The manipulationist definition of explanation from the philosophy of science offers good answers to these questions, holding that an explanation consists of a generalization that shows what happens in counterfactual cases. Furthermore, when it comes to explanatory depth this account holds that a generalization that has more abstract variables, (...)
    7 citations
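To make the manipulationist idea in entry 12 concrete, here is a small, hedged sketch: an "explanation" of one prediction given as a generalization over counterfactual inputs. The model, the data, and the choice to sweep feature 0 are illustrative assumptions, not Buijsman's own formalism.

```python
# Minimal sketch: a manipulationist-style explanation as a generalization
# over counterfactual inputs -- "had feature 0 been v, the prediction
# would have been p" (toy model and data; illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

instance = np.array([0.2, -1.0])  # the case to be explained (illustrative)

# Sweep feature 0 while holding everything else fixed: the resulting table
# is a small counterfactual generalization linking that variable to the output.
for v in np.linspace(-2, 2, 5):
    cf = instance.copy()
    cf[0] = v
    p = model.predict_proba(cf.reshape(1, -1))[0, 1]
    print(f"if feature_0 = {v:+.1f}, P(class 1) = {p:.2f}")
```

The printed table is the generalization itself: it states what the classifier would have output had the manipulated variable taken other values.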
  13. Understanding via exemplification in XAI: how explaining image classification benefits from exemplars. Sara Mann - forthcoming - AI and Society:1-16.
    Artificial intelligence (AI) systems that perform image classification tasks are being used with great success in many application contexts. However, many of these systems are opaque, even to experts. This lack of understanding can be problematic for ethical, legal, or practical reasons. The research field Explainable AI (XAI) has therefore developed several approaches to explain image classifiers. The hope is to bring about understanding, e.g., regarding why certain images are classified as belonging to a particular target class. Most (...)
  14. Local explanations via necessity and sufficiency: unifying theory and practice. David Watson, Limor Gultchin, Ankur Taly & Luciano Floridi - 2022 - Minds and Machines 32 (1):185-218.
    Necessity and sufficiency are the building blocks of all successful explanations. Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence (XAI), a fast-growing research area that is so far lacking in firm theoretical foundations. Building on work in logic, probability, and causality, we establish the central role of necessity and sufficiency in XAI, unifying seemingly disparate methods in a single formal framework. We provide a sound and complete algorithm for (...)
    1 citation
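For readers who want the flavor of entry 14's framework, the sketch below estimates, in a deliberately simplified way, how sufficient and how necessary a feature condition is for a model's positive prediction. The toy model, the condition feature_0 > 0, and the intervention-by-sign-flipping are assumptions made for illustration; the paper's actual algorithm is more general.

```python
# Minimal sketch (a simplified empirical reading, not the authors' full
# algorithm): estimating sufficiency and necessity of a feature condition
# for a model's positive prediction, by intervening on background samples.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(1000, 2))
y = (X[:, 0] > 0).astype(int)          # feature 0 fully drives the label
model = LogisticRegression().fit(X, y)

background = rng.normal(size=(1000, 2))

def force(Xb, positive):
    """Force the candidate condition 'feature_0 > 0' to hold (or not)."""
    Z = Xb.copy()
    Z[:, 0] = np.abs(Z[:, 0]) if positive else -np.abs(Z[:, 0])
    return Z

# Sufficiency: how often does imposing the condition yield a positive output?
suff = model.predict(force(background, True)).mean()
# Necessity: how often does removing the condition yield a negative output?
nec = 1.0 - model.predict(force(background, False)).mean()
print(f"sufficiency ~ {suff:.2f}, necessity ~ {nec:.2f}")
```

Because the toy label depends only on feature 0, both estimates come out near 1.0, which is the sanity check that the condition is the (locally) complete explanation.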
  15. Causal Explanations and XAI. Sander Beckers - 2022 - Proceedings of the 1st Conference on Causal Learning and Reasoning (PMLR).
    Although standard Machine Learning models are optimized for making predictions about observations, more and more they are used for making predictions about the results of actions. An important goal of Explainable Artificial Intelligence (XAI) is to compensate for this mismatch by offering explanations about the predictions of an ML-model which ensure that they are reliably action-guiding. As action-guiding explanations are causal explanations, the literature on this topic is starting to embrace insights from the literature on causal models. (...)
    1 citation
  16. Is Explainable AI Responsible AI? Isaac Taylor - forthcoming - AI and Society.
    When artificial intelligence (AI) is used to make high-stakes decisions, some worry that this will create a morally troubling responsibility gap—that is, a situation in which nobody is morally responsible for the actions and outcomes that result. Since the responsibility gap might be thought to result from individuals lacking knowledge of the future behavior of AI systems, it can be and has been suggested that deploying explainable artificial intelligence (XAI) techniques will help us to avoid (...)
  17. Explanatory pragmatism: a context-sensitive framework for explainable medical AI. Diana Robinson & Rune Nyrup - 2022 - Ethics and Information Technology 24 (1).
    Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This (...)
    6 citations
  18. Explaining black-box classifiers using post-hoc explanations-by-example: The effect of explanations and error-rates in XAI user studies. Eoin M. Kenny, Courtney Ford, Molly Quinn & Mark T. Keane - 2021 - Artificial Intelligence 294 (C):103459.
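Entry 18 studies post-hoc explanations-by-example; a minimal sketch of that general technique follows, assuming a synthetic dataset and a random-forest stand-in for the black box (neither comes from the study itself).

```python
# Minimal sketch of a post-hoc explanation-by-example: justify a black-box
# prediction by retrieving the most similar training case with the same
# label (illustrative assumptions throughout; not the paper's pipeline).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(2)
X_train = rng.normal(size=(300, 4))
y_train = (X_train.sum(axis=1) > 0).astype(int)

black_box = RandomForestClassifier(random_state=0).fit(X_train, y_train)

query = rng.normal(size=(1, 4))
pred = black_box.predict(query)[0]

# Restrict the search to training cases bearing the same label, then present
# the nearest one: "you got this outcome because your case most resembles
# this precedent".
same_class = X_train[y_train == pred]
index = NearestNeighbors(n_neighbors=1).fit(same_class)
dist, nbr = index.kneighbors(query)
print(f"prediction: {pred}; nearest precedent: {same_class[nbr[0, 0]].round(2)}"
      f" (distance {dist[0, 0]:.2f})")
```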
  19. Explainable AI in the military domain. Nathan Gabriel Wood - 2024 - Ethics and Information Technology 26 (2):1-13.
    Artificial intelligence (AI) has become nearly ubiquitous in modern society, from components of mobile applications to medical support systems, and everything in between. In societally impactful systems imbued with AI, there has been increasing concern related to opaque AI, that is, artificial intelligence where it is unclear how or why certain decisions are reached. This has led to a recent boom in research on “explainable AI” (XAI), or approaches to making AI more explainable and (...)
  20. Ameliorating Algorithmic Bias, or Why Explainable AI Needs Feminist Philosophy. Linus Ta-Lun Huang, Hsiang-Yun Chen, Ying-Tung Lin, Tsung-Ren Huang & Tzu-Wei Hung - 2022 - Feminist Philosophy Quarterly 8 (3).
    Artificial intelligence (AI) systems are increasingly adopted to make decisions in domains such as business, education, health care, and criminal justice. However, such algorithmic decision systems can have prevalent biases against marginalized social groups and undermine social justice. Explainable artificial intelligence (XAI) is a recent development aiming to make an AI system’s decision processes less opaque and to expose its problematic biases. This paper argues against technical XAI, according to which the detection and interpretation of (...)
    2 citations
  21. Unjustified Sample Sizes and Generalizations in Explainable AI Research: Principles for More Inclusive User Studies. Uwe Peters & Mary Carman - forthcoming - IEEE Intelligent Systems.
    Many ethical frameworks require artificial intelligence (AI) systems to be explainable. Explainable AI (XAI) models are frequently tested for their adequacy in user studies. Since different people may have different explanatory needs, it is important that participant samples in user studies are large enough to represent the target population to enable generalizations. However, it is unclear to what extent XAI researchers reflect on and justify their sample sizes or avoid broad generalizations across people. We analyzed XAI (...)
    1 citation
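Entry 21's complaint is quantitative, so a one-line power calculation may help fix ideas. This is a hedged sketch: the two-group design, medium effect size d = 0.5, alpha = 0.05, and power = 0.8 are textbook defaults assumed for illustration, not figures from the paper.

```python
# Minimal sketch: an a priori power analysis of the kind the authors argue
# XAI user studies should report (assumed design and effect size; not the
# paper's own numbers).
from statsmodels.stats.power import TTestIndPower

# Two independent groups (e.g., interface with vs. without explanations),
# medium effect size d = 0.5, 5% significance, 80% power.
n_per_group = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05,
                                          power=0.8, alternative="two-sided")
print(f"participants needed per group: {n_per_group:.0f}")  # about 64
```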
  22. Explanation–Question–Response dialogue: An argumentative tool for explainable AI. Federico Castagna, Peter McBurney & Simon Parsons - forthcoming - Argument and Computation:1-23.
    Advancements and deployments of AI-based systems, especially Deep Learning-driven generative language models, have accomplished impressive results over the past few years. Nevertheless, these remarkable achievements are intertwined with a related fear that such technologies might lead to a general relinquishing of control over our lives to AIs. This concern, which also motivates the increasing interest in the eXplainable Artificial Intelligence (XAI) research field, is mostly caused by the opacity of the output of deep learning systems and the way (...)
  23. Understanding, Idealization, and Explainable AI. Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
    8 citations
  24. Creating meaningful work in the age of AI: explainable AI, explainability, and why it matters to organizational designers. Kristin Wulff & Hanne Finnestrand - forthcoming - AI and Society:1-14.
    In this paper, we contribute to research on enterprise artificial intelligence (AI), specifically to how organizations improve customer experiences and internal processes by using the type of AI called machine learning (ML). Many organizations are struggling to get enough value from their AI efforts, and part of this is related to the area of explainability. The need for explainability is especially high in what are called black-box ML models, where decisions are made without anyone understanding how an (...)
    3 citations
  25. Explaining Go: Challenges in Achieving Explainability in AI Go Programs. Zack Garrett - 2023 - Journal of Go Studies 17 (2):29-60.
    There has been a push in recent years to provide better explanations for how AIs make their decisions. Most of this push has come from the ethical concerns that go hand in hand with AIs making decisions that affect humans. Outside of the strictly ethical concerns that have prompted the study of explainable AIs (XAIs), there has been research interest in the mere possibility of creating XAIs in various domains. In general, the more accurate we make our models the (...)
  26. Black is the new orange: how to determine AI liability. Paulo Henrique Padovan, Clarice Marinho Martins & Chris Reed - 2023 - Artificial Intelligence and Law 31 (1):133-167.
    Autonomous artificial intelligence (AI) systems can lead to unpredictable behavior causing loss or damage to individuals. Intricate questions must be resolved to establish how courts determine liability. Until recently, understanding the inner workings of “black boxes” has been exceedingly difficult; however, the use of Explainable Artificial Intelligence (XAI) would help simplify the complex problems that can occur with autonomous AI systems. In this context, this article seeks to provide technical explanations that can be given by (...)
    1 citation
  27. Explainable AI and Causal Understanding: Counterfactual Approaches Considered. Sam Baron - 2023 - Minds and Machines 33 (2):347-377.
    The counterfactual approach to explainable AI (XAI) seeks to provide understanding of AI systems through the provision of counterfactual explanations. In a recent systematic review, Chou et al. (Inform Fus 81:59–83, 2022) argue that the counterfactual approach does not clearly provide causal understanding. They diagnose the problem in terms of the underlying framework within which the counterfactual approach has been developed. To date, the counterfactual approach has not been developed in concert with the approach for specifying causes developed by (...)
    2 citations
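Since entry 27 turns on what counterfactual explanations deliver, a minimal sketch of one common way to generate them may be useful: random search for the nearest input that flips the model's decision. The data, model, and search strategy are assumptions for illustration; they are not the specific methods Baron or Chou et al. analyze.

```python
# Minimal sketch: a counterfactual explanation found by minimally perturbing
# an input until the model's decision flips (toy model; illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)
X = rng.normal(size=(400, 2))
y = (X[:, 0] - X[:, 1] > 0).astype(int)
model = LogisticRegression().fit(X, y)

x = np.array([-0.3, 0.4])              # predicted class 0 (illustrative)
candidates = x + rng.normal(scale=0.5, size=(5000, 2))
flipped = candidates[model.predict(candidates)
                     != model.predict(x.reshape(1, -1))[0]]

# The closest decision-flipping point is the counterfactual explanation:
# "had the input been here instead, the decision would have differed".
cf = flipped[np.argmin(np.linalg.norm(flipped - x, axis=1))]
print(f"original: {x}, counterfactual: {cf.round(2)}")
```

Whether such a point conveys genuine causal understanding, rather than a mere decision-boundary fact, is exactly the question the entry addresses.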
  28. Cultural Bias in Explainable AI Research. Uwe Peters & Mary Carman - forthcoming - Journal of Artificial Intelligence Research.
    For synergistic interactions between humans and artificial intelligence (AI) systems, AI outputs often need to be explainable to people. Explainable AI (XAI) systems are commonly tested in human user studies. However, whether XAI researchers consider potential cultural differences in human explanatory needs remains unexplored. We highlight psychological research that found significant differences in human explanations between many people from Western, commonly individualist countries and people from non-Western, often collectivist countries. We argue that XAI research currently overlooks (...)
  29. Local Explanations via Necessity and Sufficiency: Unifying Theory and Practice. David S. Watson, Limor Gultchin, Ankur Taly & Luciano Floridi - 2022 - Minds and Machines 32 (1):185-218.
    Necessity and sufficiency are the building blocks of all successful explanations. Yet despite their importance, these notions have been conceptually underdeveloped and inconsistently applied in explainable artificial intelligence, a fast-growing research area that is so far lacking in firm theoretical foundations. In this article, an expanded version of a paper originally presented at the 37th Conference on Uncertainty in Artificial Intelligence, we attempt to fill this gap. Building on work in logic, probability, and causality, we (...)
    3 citations
  30. Exploring explainable AI in the tax domain. Łukasz Górski, Błażej Kuźniacki, Marco Almada, Kamil Tyliński, Madalena Calvo, Pablo Matias Asnaghi, Luciano Almada, Hilario Iñiguez, Fernando Rubianes, Octavio Pera & Juan Ignacio Nigrelli - forthcoming - Artificial Intelligence and Law:1-29.
    This paper analyses whether current explainable AI (XAI) techniques can help to address taxpayer concerns about the use of AI in taxation. As tax authorities around the world increase their use of AI-based techniques, taxpayers are increasingly at a loss about whether and how the ensuing decisions follow the procedures required by law and respect their substantive rights. The use of XAI has been proposed as a response to this issue, but it is still an open question whether current (...)
  31. Allure of Simplicity. Thomas Grote - 2023 - Philosophy of Medicine 4 (1).
    This paper develops an account of the opacity problem in medical machine learning (ML). Guided by pragmatist assumptions, I argue that opacity in ML models is problematic insofar as it potentially undermines the achievement of two key purposes: ensuring generalizability and optimizing clinician–machine decision-making. Three opacity amelioration strategies are examined, with explainable artificial intelligence (XAI) as the predominant approach, challenged by two revisionary strategies in the form of reliabilism and interpretability by design. Comparing the three strategies, (...)
  32. Backtracking Counterfactuals. Julius von Kügelgen, Abdirisak Mohamed & Sander Beckers - forthcoming - Proceedings of the 2nd Conference on Causal Learning and Reasoning.
    Counterfactual reasoning -- envisioning hypothetical scenarios, or possible worlds, where some circumstances are different from what (f)actually occurred (counter-to-fact) -- is ubiquitous in human cognition. Conventionally, counterfactually-altered circumstances have been treated as "small miracles" that locally violate the laws of nature while sharing the same initial conditions. In Pearl's structural causal model (SCM) framework this is made mathematically rigorous via interventions that modify the causal laws while the values of exogenous variables are shared. In recent years, however, this purely interventionist (...)
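A toy contrast may help readers place entry 32: in the three-line structural model below (an invented example, not the paper's formalism), an interventionist counterfactual severs A from its cause U, while a backtracking counterfactual revises U itself and lets the unmodified laws propagate.

```python
# Minimal sketch: interventionist vs. backtracking counterfactuals in a tiny
# structural causal model with one exogenous variable U (toy example only).
def scm(U, A=None):
    """Laws: A := U unless overridden by an intervention; C := U directly."""
    A = U if A is None else A   # an intervention severs A from its cause
    C = U
    return A, C

U_actual = 1
print("actual:         A=%d, C=%d" % scm(U_actual))

# Interventionist ("small miracle"): keep U as it actually was, force A=0.
# C is untouched because it depends on U, not on A.
print("interventional: A=%d, C=%d" % scm(U_actual, A=0))

# Backtracking: instead ask which exogenous state would have produced A=0,
# then re-run the unmodified laws. Here A=0 traces back to U=0, so C changes.
U_backtracked = 0
print("backtracking:   A=%d, C=%d" % scm(U_backtracked))
```

The two semantics disagree about C (1 vs. 0), which is the divergence the paper exploits.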
  33. Neobjašnjiv objašnjiv AI [Unexplainable Explainable AI]. Hyeongjoo Kim - 2023 - Synthesis Philosophica 38 (2):275-295.
    This paper critically investigates the explainable artificial intelligence (XAI) project. I analyze the word “explain” in XAI and the theory of explanation and identify the discrepancy between the meaning of the explanation claimed to be necessary and that which is actually presented. After summarizing the history of AI related to explainability, I argue that American philosophy in the 1900s operated in the background of said history. I then extract the meaning of explanation in view of XAI, to (...)
  34. Kognitive Optimierung durch KI? [Cognitive Optimization through AI?] Sabine Ammon - 2023 - Philosophisches Jahrbuch 130 (2):92-107.
    Recent developments in artificial intelligence (AI) promise cognitive optimization in many areas of our lives, ranging from automated decision-making to superintelligence. In a predominant narrative, the black-box of machine learning systems is identified as one of the biggest obstacles from an epistemic point of view. The problem is expected to be solved by algorithmic counteractions emerging from the field of explainable artificial intelligence (XAI). However, deeper questions about a meaningful cognitive division of labor between AI (...)
  35. Symbolic Deep Networks: A Psychologically Inspired Lightweight and Efficient Approach to Deep Learning. Vladislav D. Veksler, Blaine E. Hoffman & Norbou Buchler - 2022 - Topics in Cognitive Science 14 (4):702-717.
    The last two decades have produced unprecedented successes in the fields of artificial intelligence and machine learning (ML), due almost entirely to advances in deep neural networks (DNNs). Deep hierarchical memory networks are not a novel concept in cognitive science and can be traced back more than a half century to Simon's early work on discrimination nets for simulating human expertise. The major difference between DNNs and the deep memory nets meant for explaining human cognition is that the (...)
  36. Why a Right to an Explanation of Algorithmic Decision-Making Should Exist: A Trust-Based Approach. Tae Wan Kim & Bryan R. Routledge - 2022 - Business Ethics Quarterly 32 (1):75-102.
    Businesses increasingly rely on algorithms that are data-trained sets of decision rules (i.e., the output of the processes often called “machine learning”) and implement decisions with little or no human intermediation. In this article, we provide a philosophical foundation for the claim that algorithmic decision-making gives rise to a “right to explanation.” It is often said that, in the digital era, informed consent is dead. This negative view originates from a rigid understanding that presumes informed consent is a static and (...)
    7 citations
  37. What is Interpretability? Adrian Erasmus, Tyler D. P. Brunet & Eyal Fisher - 2021 - Philosophy and Technology 34:833–862.
    We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: Are networks explainable, and if so, what does it mean to explain the output of a network? And what does it mean for a network to be interpretable? We argue that accounts of “explanation” tailored specifically to neural networks (...)
    16 citations
  38. ANNs and Unifying Explanations: Reply to Erasmus, Brunet, and Fisher. Yunus Prasetya - 2022 - Philosophy and Technology 35 (2):1-9.
    In a recent article, Erasmus, Brunet, and Fisher (2021) argue that Artificial Neural Networks (ANNs) are explainable. They survey four influential accounts of explanation: the Deductive-Nomological model, the Inductive-Statistical model, the Causal-Mechanical model, and the New-Mechanist model. They argue that, on each of these accounts, the features that make something an explanation are invariant with regard to the complexity of the explanans and the explanandum. Therefore, they conclude, the complexity of ANNs (and other Machine Learning models) does not (...)
    1 citation
  39. A Genealogical Approach to Algorithmic Bias. Marta Ziosi, David Watson & Luciano Floridi - 2024 - Minds and Machines 34 (2):1-17.
    The Fairness, Accountability, and Transparency (FAccT) literature tends to focus on bias as a problem that requires ex post solutions (e.g. fairness metrics), rather than addressing the underlying social and technical conditions that (re)produce it. In this article, we propose a complementary strategy that uses genealogy as a constructive, epistemic critique to explain algorithmic bias in terms of the conditions that enable it. We focus on XAI feature attributions (Shapley values) and counterfactual approaches as potential tools to gauge these conditions (...)
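Entry 39 takes Shapley-value feature attributions as one of its objects of critique; for orientation, here is a hedged sketch of an exact Shapley computation over three features. The toy linear model and the mean-imputation value function are simplifying assumptions (a common practical choice), not the genealogical method of the paper.

```python
# Minimal sketch: exact Shapley-value feature attributions for one
# prediction, enumerating all coalitions of 3 features, with absent
# features mean-imputed from a background sample (illustrative only).
from itertools import combinations
from math import factorial
import numpy as np

def model(X):
    """Toy model to be explained (invented for illustration)."""
    return 2.0 * X[:, 0] + 1.0 * X[:, 1] - 0.5 * X[:, 2]

rng = np.random.default_rng(4)
background = rng.normal(size=(500, 3))
x = np.array([1.0, -1.0, 2.0])          # the instance to explain

def value(S):
    """v(S): model output with features in S fixed to x, rest mean-imputed."""
    z = background.mean(axis=0).copy()
    z[list(S)] = x[list(S)]
    return model(z.reshape(1, -1))[0]

n = 3
for j in range(n):
    others = [k for k in range(n) if k != j]
    phi = 0.0
    for size in range(n):
        for S in combinations(others, size):
            # Standard Shapley weight |S|! (n-|S|-1)! / n! on the marginal
            # contribution of feature j to coalition S.
            w = factorial(size) * factorial(n - size - 1) / factorial(n)
            phi += w * (value(S + (j,)) - value(S))
    print(f"feature {j}: Shapley value = {phi:+.3f}")
```

For this linear toy model the attribution reduces to the coefficient times the feature's deviation from the background mean, so the printout doubles as a sanity check (roughly +2.0, -1.0, -1.0).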
  40. Symbolic Deep Networks: A Psychologically Inspired Lightweight and Efficient Approach to Deep Learning. Vladislav D. Veksler, Blaine E. Hoffman & Norbou Buchler - 2022 - Topics in Cognitive Science 14 (4):702-717.
    Deep Neural Networks (DNNs) are popular for classifying large noisy analogue data. However, DNNs suffer from several known issues, including explainability, efficiency, catastrophic interference, and a need for high‐end computational resources. Our simulations reveal that psychologically‐inspired symbolic deep networks (SDNs) achieve similar accuracy and robustness to noise as DNNs on common ML problem sets, while addressing these issues.