  • Scientific Exploration and Explainable Artificial Intelligence. Carlos Zednik & Hannes Boelsen - 2022 - Minds and Machines 32 (1):219-239.
    Models developed using machine learning are increasingly prevalent in scientific research. At the same time, these models are notoriously opaque. Explainable AI aims to mitigate the impact of opacity by rendering opaque models transparent. More than being just the solution to a problem, however, Explainable AI can also play an invaluable role in scientific exploration. This paper describes how post-hoc analytic techniques from Explainable AI can be used to refine target phenomena in medical science, to identify starting points for future (...)
  • On the Philosophy of Unsupervised Learning. David S. Watson - 2023 - Philosophy and Technology 36 (2):1-26.
    Unsupervised learning algorithms are widely used for many important statistical tasks with numerous applications in science and industry. Yet despite their prevalence, they have attracted remarkably little philosophical scrutiny to date. This stands in stark contrast to supervised and reinforcement learning algorithms, which have been widely studied and critically evaluated, often with an emphasis on ethical concerns. In this article, I analyze three canonical unsupervised learning problems: clustering, abstraction, and generative modeling. I argue that these methods raise unique epistemological and (...)
  • Conceptual challenges for interpretable machine learning. David S. Watson - 2022 - Synthese 200 (2):1-33.
    As machine learning has gradually entered into ever more sectors of public and private life, there has been a growing demand for algorithmic explainability. How can we make the predictions of complex statistical models more intelligible to end users? A subdiscipline of computer science known as interpretable machine learning (IML) has emerged to address this urgent question. Numerous influential methods have been proposed, from local linear approximations to rule lists and counterfactuals. In this article, I highlight three conceptual challenges that (...)
  • Assembled Bias: Beyond Transparent Algorithmic Bias. Robyn Repko Waller & Russell L. Waller - 2022 - Minds and Machines 32 (3):533-562.
    In this paper we make the case for the emergence of a novel kind of bias with the use of algorithmic decision-making systems. We argue that the distinctive generative process of feature creation, characteristic of machine learning (ML), contorts feature parameters in ways that can lead to emerging feature spaces that encode novel algorithmic bias involving already marginalized groups. We term this bias assembled bias. Moreover, assembled biases are distinct from the much-discussed algorithmic bias, both in source (training data versus feature (...)
  • Defining the undefinable: the black box problem in healthcare artificial intelligence. Jordan Joseph Wadden - 2022 - Journal of Medical Ethics 48 (10):764-768.
    The ‘black box problem’ is a long-standing talking point in debates about artificial intelligence. This is a significant point of tension between ethicists, programmers, clinicians and anyone else working on developing AI for healthcare applications. However, the precise definition of these systems is often left undefined, vague, or unclear, or is assumed to be standardised within AI circles. This leads to situations where individuals working on AI talk over each other and has been invoked in numerous debates between opaque and explainable (...)
  • Transparency and the Black Box Problem: Why We Do Not Trust AI. Warren J. von Eschenbach - 2021 - Philosophy and Technology 34 (4):1607-1622.
    With automation of routine decisions coupled with more intricate and complex information architecture operating this automation, concerns are increasing about the trustworthiness of these systems. These concerns are exacerbated by a class of artificial intelligence that uses deep learning, an algorithmic system of deep neural networks, which on the whole remain opaque or hidden from human comprehension. This situation is commonly referred to as the black box problem in AI. Without understanding how AI reaches its conclusions, it is an open (...)
  • The Automated Laplacean Demon: How ML Challenges Our Views on Prediction and Explanation. Sanja Srećković, Andrea Berber & Nenad Filipović - 2021 - Minds and Machines 32 (1):159-183.
    Certain characteristics make machine learning a powerful tool for processing large amounts of data, and also particularly unsuitable for explanatory purposes. There are worries that its increasing use in science may sideline the explanatory goals of research. We analyze the key characteristics of ML that might have implications for the future directions in scientific research: epistemic opacity and ‘theory-agnostic’ modeling. These characteristics are further analyzed in a comparison of ML with traditional statistical methods, in order to demonstrate what (...)
  • The Importance of Understanding Deep Learning. Tim Räz & Claus Beisbart - forthcoming - Erkenntnis:1-18.
    Some machine learning models, in particular deep neural networks, are not very well understood; nevertheless, they are frequently used in science. Does this lack of understanding pose a problem for using DNNs to understand empirical phenomena? Emily Sullivan has recently argued that understanding with DNNs is not limited by our lack of understanding of DNNs themselves. In the present paper, we will argue, contra Sullivan, that our current lack of understanding of DNNs does limit our ability to understand with DNNs. (...)
  • Connecting ethics and epistemology of AI. Federica Russo, Eric Schliesser & Jean Wagemans - forthcoming - AI and Society:1-19.
    The need for fair and just AI is often related to the possibility of understanding AI itself, in other words, of turning an opaque box into a glass box, as inspectable as possible. Transparency and explainability, however, pertain to the technical domain and to philosophy of science, thus leaving the ethics and epistemology of AI largely disconnected. To remedy this, we propose an integrated approach premised on the idea that a glass-box epistemology should explicitly consider how to incorporate values and (...)
  • Explanatory pragmatism: a context-sensitive framework for explainable medical AI. Diana Robinson & Rune Nyrup - 2022 - Ethics and Information Technology 24 (1).
    Explainable artificial intelligence (XAI) is an emerging, multidisciplinary field of research that seeks to develop methods and tools for making AI systems more explainable or interpretable. XAI researchers increasingly recognise explainability as a context-, audience- and purpose-sensitive phenomenon, rather than a single well-defined property that can be directly measured and optimised. However, since there is currently no overarching definition of explainability, this poses a risk of miscommunication between the many different researchers within this multidisciplinary space. This is the problem we (...)
  • Sources of Understanding in Supervised Machine Learning Models. Paulo Pirozelli - 2022 - Philosophy and Technology 35 (2):1-19.
    In the last decades, supervised machine learning has seen the widespread growth of highly complex, non-interpretable models, of which deep neural networks are the most typical representative. Due to their complexity, these models have shown outstanding performance in a series of tasks, such as image recognition and machine translation. Recently, though, there has been an important discussion over whether those non-interpretable models are able to provide any sort of understanding whatsoever. For some scholars, only interpretable models can provide understanding. (...)
  • How to Make AlphaGo’s Children Explainable. Woosuk Park - 2022 - Philosophies 7 (3):55.
    Under the rubric of understanding the problem of explainability of AI in terms of abductive cognition, I propose to review the lessons from AlphaGo and her more powerful successors. As AI players in Baduk have arrived at a superhuman level, there seems to be no hope for understanding the secret of their breathtakingly brilliant moves. Without making AI players explainable in some ways, both human and AI players would be less-than-omniscient, if not ignorant, epistemic agents. Are we bound to have (...)
  • Justice and the Normative Standards of Explainability in Healthcare. Saskia K. Nagel, Nils Freyer & Hendrik Kempt - 2022 - Philosophy and Technology 35 (4):1-19.
    Providing healthcare services frequently involves cognitively demanding tasks, including diagnoses and analyses as well as complex decisions about treatments and therapy. From a global perspective, ethically significant inequalities exist between regions where the expert knowledge required for these tasks is scarce or abundant. One possible strategy to diminish such inequalities and increase healthcare opportunities in expert-scarce settings is to provide healthcare solutions involving digital technologies that do not necessarily require the presence of a human expert, e.g., in the form of (...)
  • The State Space of Artificial Intelligence. Holger Lyre - 2020 - Minds and Machines 30 (3):325-347.
    The goal of the paper is to develop and propose a general model of the state space of AI. Given the breathtaking progress in AI research and technologies in recent years, such conceptual work is of substantial theoretical interest. The present AI hype is mainly driven by the triumph of deep learning neural networks. As the distinguishing feature of such networks is the ability to self-learn, self-learning is identified as one important dimension of the AI state space. Another dimension is (...)
  • Artificial intelligence, transparency, and public decision-making. Karl de Fine Licht & Jenny de Fine Licht - 2020 - AI and Society 35 (4):917-926.
    The increasing use of Artificial Intelligence for making decisions in public affairs has sparked a lively debate on the benefits and potential harms of self-learning technologies, ranging from the hopes of fully informed and objectively taken decisions to fear for the destruction of mankind. To prevent the negative outcomes and to achieve accountable systems, many have argued that we need to open up the “black box” of AI decision-making and make it more transparent. Whereas this debate has primarily focused on (...)
  • Explanations in AI as Claims of Tacit Knowledge. Nardi Lam - 2022 - Minds and Machines 32 (1):135-158.
    As AI systems become increasingly complex it may become unclear, even to the designer of a system, why exactly a system does what it does. This leads to a lack of trust in AI systems. To solve this, the field of explainable AI has been working on ways to produce explanations of these systems’ behaviors. Many methods in explainable AI, such as LIME, offer only a statistical argument for the validity of their explanations. However, some methods instead study the internal (...)
  • Health Digital Twins, Legal Liability, and Medical Practice. Andreas Kuersten - 2023 - American Journal of Bioethics 23 (9):66-69.
    Digital twins for health care have the potential to significantly impact the provision of medical services. In addition to possible use in care, this technology could serve as a conduit by which no...
  • Beyond English: Considering Language and Culture in Psychological Text Analysis. Dalibor Kučera & Matthias R. Mehl - 2022 - Frontiers in Psychology 13.
    The paper discusses the role of language and culture in the context of quantitative text analysis in psychological research. It reviews current automatic text analysis methods and approaches from the perspective of the unique challenges that can arise when going beyond the default English language. Special attention is paid to closed-vocabulary approaches and related methods, both from the perspective of cross-cultural research where the analytic process inherently consists of comparing phenomena across cultures and languages and the perspective of generalizability beyond (...)
  • Values and inductive risk in machine learning modelling: the case of binary classification models. Koray Karaca - 2021 - European Journal for Philosophy of Science 11 (4):1-27.
    I examine the construction and evaluation of machine learning binary classification models. These models are increasingly used for societal applications such as classifying patients into two categories according to the presence or absence of a certain disease like cancer and heart disease. I argue that the construction of ML classification models involves an optimisation process aiming at the minimization of the inductive risk associated with the intended uses of these models. I also argue that the construction of these models is (...)
  • The virtues of interpretable medical artificial intelligence. Joshua Hatherley, Robert Sparrow & Mark Howard - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-10.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are 'black boxes'. The initial response in the literature was a demand for 'explainable AI'. However, recently, several authors have suggested that making AI more explainable or 'interpretable' is likely to be at the cost of the accuracy of these systems and that prioritising interpretability in medical AI may constitute a 'lethal prejudice'. In this paper, we defend the value of interpretability (...)
  • Uncertainty, Evidence, and the Integration of Machine Learning into Medical Practice. Thomas Grote & Philipp Berens - 2023 - Journal of Medicine and Philosophy 48 (1):84-97.
    In light of recent advances in machine learning for medical applications, the automation of medical diagnostics is imminent. That said, before machine learning algorithms find their way into clinical practice, various problems at the epistemic level need to be overcome. In this paper, we discuss different sources of uncertainty arising for clinicians trying to evaluate the trustworthiness of algorithmic evidence when making diagnostic judgments. Thereby, we examine many of the limitations of current machine learning algorithms (with deep learning in particular) (...)
  • Analogue Models and Universal Machines. Paradigms of Epistemic Transparency in Artificial Intelligence. Hajo Greif - 2022 - Minds and Machines 32 (1):111-133.
    The problem of epistemic opacity in Artificial Intelligence is often characterised as a problem of intransparent algorithms that give rise to intransparent models. However, the degrees of transparency of an AI model should not be taken as an absolute measure of the properties of its algorithms but of the model’s degree of intelligibility to human users. Its epistemically relevant elements are to be specified on various levels above and beyond the computational one. In order to elucidate this claim, I first (...)
  • Epistemic Value of Digital Simulacra for Patients. Eleanor Gilmore-Szott - 2023 - American Journal of Bioethics 23 (9):63-66.
    Artificial Intelligence and Machine Learning (AI/ML) models introduce unique considerations when determining their epistemic value. Fortunately, existing work on the epistemic features of AI/ML can...
  • The Deception of Certainty: how Non-Interpretable Machine Learning Outcomes Challenge the Epistemic Authority of Physicians. A deliberative-relational Approach. Florian Funer - 2022 - Medicine, Health Care and Philosophy 25 (2):167-178.
    Developments in Machine Learning (ML) have attracted attention in a wide range of healthcare fields to improve medical practice and the benefit of patients. Particularly, this should be achieved by providing more or less automated decision recommendations to the treating physician. However, some hopes placed in ML for healthcare seem to be disappointed, at least in part, by a lack of transparency or traceability. Skepticism exists primarily in the fact that the physician, as the person responsible for diagnosis, therapy, and (...)
  • Accuracy and Interpretability: Struggling with the Epistemic Foundations of Machine Learning-Generated Medical Information and Their Practical Implications for the Doctor-Patient Relationship. Florian Funer - 2022 - Philosophy and Technology 35 (1):1-20.
    The initial successes in recent years in harnessing machine learning technologies to improve medical practice and benefit patients have attracted attention in a wide range of healthcare fields. In particular, this is to be achieved by providing automated decision recommendations to the treating clinician. Some hopes placed in such ML-based systems for healthcare, however, seem to be unwarranted, at least partially because of their inherent lack of transparency, although their results seem convincing in accuracy and reliability. Skepticism arises when the physician as (...)
  • Understanding, Idealization, and Explainable AI. Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
  • What is Interpretability? Adrian Erasmus, Tyler D. P. Brunet & Eyal Fisher - 2021 - Philosophy and Technology 34:833–862.
    We argue that artificial networks are explainable and offer a novel theory of interpretability. Two sets of conceptual questions are prominent in theoretical engagements with artificial neural networks, especially in the context of medical artificial intelligence: Are networks explainable, and if so, what does it mean to explain the output of a network? And what does it mean for a network to be interpretable? We argue that accounts of “explanation” tailored specifically to neural networks have ineffectively reinvented the wheel. In (...)
  • Explainability, Public Reason, and Medical Artificial Intelligence. Michael Da Silva - 2023 - Ethical Theory and Moral Practice 26 (5):743-762.
    The contention that medical artificial intelligence (AI) should be ‘explainable’ is widespread in contemporary philosophy and in legal and best practice documents. Yet critics argue that ‘explainability’ is not a stable concept; non-explainable AI is often more accurate; mechanisms intended to improve explainability do not improve understanding and introduce new epistemic concerns; and explainability requirements are ad hoc where human medical decision-making is often opaque. A recent ‘political response’ to these issues contends that AI used in high-stakes scenarios, including medical (...)
  • The Algorithmic Leviathan: Arbitrariness, Fairness, and Opportunity in Algorithmic Decision-Making Systems. Kathleen Creel & Deborah Hellman - 2022 - Canadian Journal of Philosophy 52 (1):26-43.
    This article examines the complaint that arbitrary algorithmic decisions wrong those whom they affect. It makes three contributions. First, it provides an analysis of what arbitrariness means in this context. Second, it argues that arbitrariness is not of moral concern except when special circumstances apply. However, when the same algorithm or different algorithms based on the same data are used in multiple contexts, a person may be arbitrarily excluded from a broad range of opportunities. The third contribution is to explain (...)
  • Prediction versus understanding in computationally enhanced neuroscience. Mazviita Chirimuuta - 2020 - Synthese 199 (1-2):767-790.
    The use of machine learning instead of traditional models in neuroscience raises significant questions about the epistemic benefits of the newer methods. I draw on the literature on model intelligibility in the philosophy of science to offer some benchmarks for the interpretability of artificial neural networks used as a predictive tool in neuroscience. Following two case studies on the use of ANNs to model motor cortex and the visual system, I argue that the benefit of providing the scientist with understanding (...)
  • Empiricism in the foundations of cognition. Timothy Childers, Juraj Hvorecký & Ondrej Majer - 2023 - AI and Society 38 (1):67-87.
    This paper traces the empiricist program from early debates between nativism and behaviorism within philosophy, through debates about early connectionist approaches within the cognitive sciences, and up to their recent iterations within the domain of deep learning. We demonstrate how current debates on the nature of cognition via deep network architecture echo some of the core issues from the Chomsky/Quine debate and investigate the strength of support offered by these various lines of research to the empiricist standpoint. Referencing literature from (...)
  • Justificatory explanations in machine learning: for increased transparency through documenting how key concepts drive and underpin design and engineering decisions. David Casacuberta, Ariel Guersenzvaig & Cristian Moyano-Fernández - 2024 - AI and Society 39 (1):279-293.
    Given the pervasiveness of AI systems and their potential negative effects on people’s lives (especially among already marginalised groups), it becomes imperative to comprehend what goes on when an AI system generates a result, and based on what reasons it is achieved. There are consistent technical efforts for making systems more “explainable” by reducing their opaqueness and increasing their interpretability and explainability. In this paper, we explore an alternative non-technical approach towards explainability that complements existing ones. Leaving aside technical, statistical, (...)
  • Black Boxes or Unflattering Mirrors? Comparative Bias in the Science of Machine Behaviour. Cameron Buckner - 2023 - British Journal for the Philosophy of Science 74 (3):681-712.
    The last 5 years have seen a series of remarkable achievements in deep-neural-network-based artificial intelligence research, and some modellers have argued that their performance compares favourably to human cognition. Critics, however, have argued that processing in deep neural networks is unlike human cognition for four reasons: they are (i) data-hungry, (ii) brittle, and (iii) inscrutable black boxes that merely (iv) reward-hack rather than learn real solutions to problems. This article rebuts these criticisms by exposing comparative bias within them, in the (...)
  • A Means-End Account of Explainable Artificial Intelligence. Oliver Buchholz - 2023 - Synthese 202 (33):1-23.
    Explainable artificial intelligence (XAI) seeks to produce explanations for those machine learning methods which are deemed opaque. However, there is considerable disagreement about what this means and how to achieve it. Authors disagree on what should be explained (topic), to whom something should be explained (stakeholder), how something should be explained (instrument), and why something should be explained (goal). In this paper, I employ insights from means-end epistemology to structure the field. According to means-end epistemology, different means ought to be (...)
  • The black box problem revisited. Real and imaginary challenges for automated legal decision making. Bartosz Brożek, Michał Furman, Marek Jakubiec & Bartłomiej Kucharzyk - forthcoming - Artificial Intelligence and Law:1-14.
    This paper addresses the black-box problem in artificial intelligence (AI), and the related problem of explainability of AI in the legal context. We argue, first, that the black box problem is, in fact, a superficial one as it results from an overlap of four different – albeit interconnected – issues: the opacity problem, the strangeness problem, the unpredictability problem, and the justification problem. Thus, we propose a framework for discussing both the black box problem and the explainability of AI. We (...)
  • Putting explainable AI in context: institutional explanations for medical AI. Jacob Browning & Mark Theunissen - 2022 - Ethics and Information Technology 24 (2).
    There is a current debate about whether, and in what sense, machine learning systems used in the medical context need to be explainable. Those arguing in favor contend these systems require post hoc explanations for each individual decision to increase trust and ensure accurate diagnoses. Those arguing against suggest the high accuracy and reliability of the systems are sufficient for providing epistemically justified beliefs without the need for explaining each individual decision. But, as we show, both solutions have limitations—and it (...)
  • Can Robots Do Epidemiology? Machine Learning, Causal Inference, and Predicting the Outcomes of Public Health Interventions. Alex Broadbent & Thomas Grote - 2022 - Philosophy and Technology 35 (1):1-22.
    This paper argues that machine learning and epidemiology are on collision course over causation. The discipline of epidemiology lays great emphasis on causation, while ML research does not. Some epidemiologists have proposed imposing what amounts to a causal constraint on ML in epidemiology, requiring it either to engage in causal inference or restrict itself to mere projection. We whittle down the issues to the question of whether causal knowledge is necessary for underwriting predictions about the outcomes of public health interventions. (...)
  • Philosophy of science at sea: Clarifying the interpretability of machine learning. Claus Beisbart & Tim Räz - 2022 - Philosophy Compass 17 (6):e12830.
  • Varieties of transparency: exploring agency within AI systems. Gloria Andrada, Robert William Clowes & Paul Smart - 2023 - AI and Society 38 (4):1321-1331.
    AI systems play an increasingly important role in shaping and regulating the lives of millions of human beings across the world. Calls for greater transparency from such systems have been widespread. However, there is considerable ambiguity concerning what “transparency” actually means, and therefore, what greater transparency might entail. While, according to some debates, transparency requires seeing through the artefact or device, widespread calls for transparency imply seeing into different aspects of AI systems. These two notions are in apparent tension with (...)
  • What Kind of Artificial Intelligence Should We Want for Use in Healthcare Decision-Making Applications? Jordan Joseph Wadden - 2021 - Canadian Journal of Bioethics / Revue canadienne de bioéthique 4 (1).
    The prospect of including artificial intelligence in clinical decision-making is an exciting next step for some areas of healthcare. This article provides an analysis of the available kinds of AI systems, focusing on macro-level characteristics. This includes examining the strengths and weaknesses of opaque systems and fully explainable systems. Ultimately, the article argues that “grey box” systems, which include some combination of opacity and transparency, ought to be used in healthcare settings.