  • Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? John Zerilli, Alistair Knott, James Maclaurin & Colin Gavaghan - 2019 - Philosophy and Technology 32 (4):661-683.
    We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that (...)
  • Functional Explaining: A New Approach to the Philosophy of Explanation. Daniel A. Wilkenfeld - 2014 - Synthese 191 (14):3367-3391.
    In this paper, I argue that explanations just ARE those sorts of things that, under the right circumstances and in the right sort of way, bring about understanding. This raises the question of why such a seemingly simple account of explanation, if correct, would not have been identified and agreed upon decades ago. The answer is that only recently has it been made possible to analyze explanation in terms of understanding without the risk of collapsing both to merely phenomenological states. (...)
  • Depth and Deference: When and Why We Attribute Understanding. Daniel A. Wilkenfeld, Dillon Plunkett & Tania Lombrozo - 2016 - Philosophical Studies 173 (2):373-393.
    Four experiments investigate the folk concept of “understanding,” in particular when and why it is deployed differently from the concept of knowledge. We argue for the positions that people have higher demands with respect to explanatory depth when it comes to attributing understanding, and that this is true, in part, because understanding attributions play a functional role in identifying experts who should be heeded with respect to the general field in question. These claims are supported by our findings that people (...)
  • Transparency You Can Trust: Transparency Requirements for Artificial Intelligence Between Legal Norms and Contextual Concerns. Aurelia Tamò-Larrieux, Christoph Lutz, Eduard Fosch Villaronga & Heike Felzmann - 2019 - Big Data and Society 6 (1).
    Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the ratio legis of the transparency requirement in the General Data Protection Regulation and its ethical underpinnings, showing its focus on the provision of information and explanation. We then discuss the pitfalls with respect (...)
  • XPLAIN: A System for Creating and Explaining Expert Consulting Programs. William R. Swartout - 1983 - Artificial Intelligence 21 (3):285-325.
  • Peeking Inside the Black Box: A New Kind of Scientific Visualization. Michael T. Stuart & Nancy J. Nersessian - 2018 - Minds and Machines 29 (1):87-107.
    Computational systems biologists create and manipulate computational models of biological systems, but they do not always have straightforward epistemic access to the content and behavioural profile of such models because of their length, coding idiosyncrasies, and formal complexity. This creates difficulties both for modellers in their research groups and for their bioscience collaborators who rely on these models. In this paper we introduce a new kind of visualization that was developed to address just this sort of epistemic opacity. The visualization (...)
  • The Misunderstood Limits of Folk Science: An Illusion of Explanatory Depth. Leonid Rozenblit & Frank Keil - 2002 - Cognitive Science 26 (5):521-562.
  • A Misdirected Principle with a Catch: Explicability for AI. Scott Robbins - 2019 - Minds and Machines 29 (4):495-514.
    There is widespread agreement that there should be a principle requiring that artificial intelligence be ‘explicable’. Microsoft, Google, the World Economic Forum, the draft AI ethics guidelines for the EU commission, etc. all include a principle for AI that falls under the umbrella of ‘explicability’. Roughly, the principle states that “for AI to promote and not constrain human autonomy, our ‘decision about who should decide’ must be informed by knowledge of how AI would act instead of us” (Floridi et al. 2018). There (...)
  • Explanation and Trust: What to Tell the User in Security and AI? [REVIEW] Wolter Pieters - 2011 - Ethics and Information Technology 13 (1):53-64.
    There is a common problem in artificial intelligence (AI) and information security. In AI, an expert system needs to be able to justify and explain a decision to the user. In information security, experts need to be able to explain to the public why a system is secure. In both cases, an important goal of explanation is to acquire or maintain the users’ trust. In this paper, I investigate the relation between explanation and trust in the context of computing science. (...)
  • The Pragmatic Turn in Explainable Artificial Intelligence (XAI). Andrés Páez - 2019 - Minds and Machines 29 (3):441-459.
    In this paper I argue that the search for explainable models and interpretable decisions in AI must be reformulated in terms of the broader project of offering a pragmatic and naturalistic account of understanding in AI. Intuitively, the purpose of providing an explanation of a model or a decision is to make it understandable to its stakeholders. But without a previous grasp of what it means to say that an agent understands a model or a decision, the explanatory strategies will (...)
  • Explanation in Artificial Intelligence: Insights From the Social Sciences. Tim Miller - 2019 - Artificial Intelligence 267 (C):1-38.
  • The Responsibility Gap: Ascribing Responsibility for the Actions of Learning Automata. [REVIEW] Andreas Matthias - 2004 - Ethics and Information Technology 6 (3):175-183.
    Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle not capable of predicting the future machine behaviour any more, and thus cannot be held morally responsible or liable for it. The society must decide between not using this kind of machine any more (which is not a (...)
  • On Understanding and Testimony. Federica Isabella Malfatti - 2021 - Erkenntnis 86 (6):1345-1365.
    Testimony spreads information. It is also commonly agreed that it can transfer knowledge. Whether it can work as an epistemic source of understanding is a matter of dispute. However, testimony certainly plays a pivotal role in the proliferation of understanding in the epistemic community. But how exactly do we learn, and how do we make advancements in understanding on the basis of one another’s words? And what can we do to maximize the probability that the process of acquiring understanding from (...)
  • The German Ethics Code for Automated and Connected Driving. Christoph Luetge - 2017 - Philosophy and Technology 30 (4):547-558.
    The ethics of autonomous cars and automated driving have been a subject of discussion in research for a number of years. As levels of automation progress, with partially automated driving already becoming standard in new cars from a number of manufacturers, the question of ethical and legal standards becomes virulent. For example, while automated and autonomous cars, being equipped with appropriate detection sensors, processors, and intelligent mapping material, have a chance of being much safer than human-driven cars in (...)
  • The Instrumental Value of Explanations. Tania Lombrozo - 2011 - Philosophy Compass 6 (8):539-551.
    Scientific and ‘intuitive’ or ‘folk’ theories are typically characterized as serving three critical functions: prediction, explanation, and control. While prediction and control have clear instrumental value, the value of explanation is less transparent. This paper reviews an emerging body of research from the cognitive sciences suggesting that the process of seeking, generating, and evaluating explanations in fact contributes to future prediction and control, albeit indirectly by facilitating the discovery and confirmation of instrumentally valuable theories. Theoretical and empirical considerations also suggest (...)
  • Functional Explanation and the Function of Explanation. Tania Lombrozo & Susan Carey - 2006 - Cognition 99 (2):167-204.
    Teleological explanations (TEs) account for the existence or properties of an entity in terms of a function: we have hearts because they pump blood, and telephones for communication. While many teleological explanations seem appropriate, others are clearly not warranted-for example, that rain exists for plants to grow. Five experiments explore the theoretical commitments that underlie teleological explanations. With the analysis of [Wright, L. (1976). Teleological Explanations. Berkeley, CA: University of California Press] from philosophy as a point of departure, we examine (...)
  • Fair, Transparent, and Accountable Algorithmic Decision-Making Processes: The Premise, the Proposed Solutions, and the Open Challenges. Bruno Lepri, Nuria Oliver, Emmanuel Letouzé, Alex Pentland & Patrick Vinck - 2018 - Philosophy and Technology 31 (4):611-627.
    The combination of increased availability of large amounts of fine-grained human behavioral data and advances in machine learning is presiding over a growing reliance on algorithms to address complex societal problems. Algorithmic decision-making processes might lead to more objective and thus potentially fairer decisions than those made by humans who may be influenced by greed, prejudice, fatigue, or hunger. However, algorithmic decision-making has been criticized for its potential to enhance discrimination, information and power asymmetry, and opacity. In this paper, we (...)
  • Understanding Phenomena. Christoph Kelp - 2015 - Synthese 192 (12):3799-3816.
    The literature on the nature of understanding can be divided into two broad camps. Explanationists believe that it is knowledge of explanations that is key to understanding. In contrast, their manipulationist rivals maintain that understanding essentially involves an ability to manipulate certain representations. The aim of this paper is to provide a novel knowledge based account of understanding. More specifically, it proposes an account of maximal understanding of a given phenomenon in terms of fully comprehensive and maximally well-connected knowledge of (...)
  • Virtual, Visible, and Actionable: Data Assemblages and the Sightlines of Justice. Sheila Jasanoff - 2017 - Big Data and Society 4 (2).
    This paper explores the politics of representing events in the world in the form of data points, data sets, or data associations. Data collection involves an act of seeing and recording something that was previously hidden and possibly unnamed. The incidences included in a data set are not random or unrelated but stand for coherent, classifiable phenomena in the world. Moreover, for data to have an impact on law and policy, such information must be seen as actionable, that is, the (...)
  • How Biased is the Sample? Reverse Engineering the Ranking Algorithm of Facebook’s Graph Application Programming Interface. Justin Chun-Ting Ho - 2020 - Big Data and Society 7 (1).
    Facebook research has proliferated during recent years. However, since November 2017, Facebook has introduced a new limitation on the maximum amount of page posts retrievable through their Graph application programming interface, while there is limited documentation on how these posts are selected. This paper compares two datasets of the same Facebook page, a full dataset obtained before the introduction of the limitation and a partial dataset obtained after, and employs bootstrapping technique to assess the bias caused by the new limitation. (...)
  • AI4People—an Ethical Framework for a Good AI Society: Opportunities, Risks, Principles, and Recommendations. Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke & Effy Vayena - 2018 - Minds and Machines 28 (4):689-707.
    This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other (...)
  • Explanations in Software Engineering: The Pragmatic Point of View. [REVIEW] Jan De Winter - 2010 - Minds and Machines 20 (2):277-289.
    This article reveals that explanatory practice in software engineering is in accordance with pragmatic explanatory pluralism, which states that explanations should at least partially be evaluated by their practical use. More specifically, I offer a defense of the idea that several explanation-types are legitimate in software engineering, and that the appropriateness of an explanation-type depends on (a) the engineer’s interests, and (b) the format of the explanation-seeking question he asks, with this format depending on his interests. This idea is defended (...)
  • Algorithmic Decision-Making Based on Machine Learning From Big Data: Can Transparency Restore Accountability? Paul B. de Laat - 2018 - Philosophy and Technology 31 (4):525-541.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of companies’ competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms usually are (...)
  • Appraising Black-Boxed Technology: The Positive Prospects. E. S. Dahl - 2018 - Philosophy and Technology 31 (4):571-591.
    One staple of living in our information society is having access to the web. Web-connected devices interpret our queries and retrieve information from the web in response. Today’s web devices even purport to answer our queries directly without requiring us to comb through search results in order to find the information we want. How do we know whether a web device is trustworthy? One way to know is to learn why the device is trustworthy by inspecting its inner workings (...)
  • The Epistemology of a Rule-Based Expert System—A Framework for Explanation. William J. Clancey - 1983 - Artificial Intelligence 20 (3):215-251.
  • Eliciting Self-Explanations Improves Understanding. Michelene T. H. Chi, Nicholas De Leeuw, Mei-Hung Chiu & Christian Lavancher - 1994 - Cognitive Science 18 (3):439-477.
  • How the Machine ‘Thinks’: Understanding Opacity in Machine Learning Algorithms. Jenna Burrell - 2016 - Big Data and Society 3 (1).
    This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: opacity as intentional corporate or state (...)
  • Fairer Machine Learning in the Real World: Mitigating Discrimination Without Collecting Sensitive Data. Reuben Binns & Michael Veale - 2017 - Big Data and Society 4 (2).
    Decisions based on algorithmic, machine learning models can be unfair, reproducing biases in historical data used to train them. While computational techniques are emerging to address aspects of these concerns through communities such as discrimination-aware data mining and fairness, accountability and transparency machine learning, their practical implementation faces real-world challenges. For legal, institutional or commercial reasons, organisations might not hold the data on sensitive attributes such as gender, ethnicity, sexuality or disability needed to diagnose and mitigate emergent indirect discrimination-by-proxy, such (...)
  • Types of Understanding: Their Nature and Their Relation to Knowledge. Christoph Baumberger - 2014 - Conceptus: Zeitschrift für Philosophie 40 (98):67-88.
    What does it mean to understand something? I approach this question by comparing understanding with knowledge. Like knowledge, understanding comes, at least prima facie, in three varieties: propositional, interrogative and objectual. I argue that explanatory understanding (this being the most important form of interrogative understanding) and objectual understanding are not reducible to one another and are neither identical with, nor even a form of, the corresponding type of knowledge (nor any other type of knowledge). My discussion suggests that definitions of (...)
  • The Ethics of Algorithms: Mapping the Debate. Brent Mittelstadt, Patrick Allo, Mariarosaria Taddeo, Sandra Wachter & Luciano Floridi - 2016 - Big Data and Society 3 (2).
    In information societies, operations, decisions and choices previously left to humans are increasingly delegated to algorithms, which may advise, if not decide, about how data should be interpreted and what actions should be taken as a result. More and more often, algorithms mediate social processes, business transactions, governmental decisions, and how we perceive, understand, and interact among ourselves and with the environment. Gaps between the design and operation of algorithms and our understanding of their ethical implications can have severe consequences (...)
  • Explaining Explanations in AI. Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus on (...)