Results for 'Algorithmic Opacity'

993 found
  1. How the machine ‘thinks’: Understanding opacity in machine learning algorithms. Jenna Burrell - 2016 - Big Data and Society 3 (1):205395171562251.
    This article considers the issue of opacity as a problem for socially consequential mechanisms of classification and ranking, such as spam filters, credit card fraud detection, search engines, news trends, market segmentation and advertising, insurance or loan qualification, and credit scoring. These mechanisms of classification all frequently rely on computational algorithms, and in many cases on machine learning algorithms to do this work. In this article, I draw a distinction between three forms of opacity: opacity as intentional (...)
    185 citations
  2. AI, Opacity, and Personal Autonomy. Bram Vaassen - 2022 - Philosophy and Technology 35 (4):1-20.
    Advancements in machine learning have fuelled the popularity of using AI decision algorithms in procedures such as bail hearings, medical diagnoses and recruitment. Academic articles, policy texts, and popularizing books alike warn that such algorithms tend to be opaque: they do not provide explanations for their outcomes. Building on a causal account of transparency and opacity as well as recent work on the value of causal explanation, I formulate a moral concern for opaque algorithms that is yet to receive (...)
    3 citations
  3. Algorithmic and human decision making: for a double standard of transparency. Mario Günther & Atoosa Kasirzadeh - 2022 - AI and Society 37 (1):375-381.
    Should decision-making algorithms be held to higher standards of transparency than human beings? The way we answer this question directly impacts what we demand from explainable algorithms, how we govern them via regulatory proposals, and how explainable algorithms may help resolve the social problems associated with decision making supported by artificial intelligence. Some argue that algorithms and humans should be held to the same standards of transparency and that a double standard of transparency is hardly justified. We give two arguments (...)
    16 citations
  4. Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability? Paul B. de Laat - 2018 - Philosophy and Technology 31 (4):525-541.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of companies’ competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms usually (...)
    29 citations
  5. Managing Algorithmic Accountability: Balancing Reputational Concerns, Engagement Strategies, and the Potential of Rational Discourse. Alexander Buhmann, Johannes Paßmann & Christian Fieseler - 2020 - Journal of Business Ethics 163 (2):265-280.
    While organizations today make extensive use of complex algorithms, the notion of algorithmic accountability remains an elusive ideal due to the opacity and fluidity of algorithms. In this article, we develop a framework for managing algorithmic accountability that highlights three interrelated dimensions: reputational concerns, engagement strategies, and discourse principles. The framework clarifies that accountability processes for algorithms are driven by reputational concerns about the epistemic setup, opacity, and outcomes of algorithms; that the way in which organizations (...)
    24 citations
  6. Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Christian Katzenbach, Reuben Binns & Robert Gorwa - 2020 - Big Data and Society 7 (1):1-15.
    As government pressure on major technology companies builds, both firms and legislators are searching for technical solutions to difficult platform governance puzzles such as hate speech and misinformation. Automated hash-matching and predictive machine learning tools – what we define here as algorithmic moderation systems – are increasingly being deployed to conduct content moderation at scale by major platforms for user-generated content such as Facebook, YouTube and Twitter. This article provides an accessible technical primer on how algorithmic moderation works; (...)
    15 citations
  7. On the ethics of algorithmic decision-making in healthcare. Thomas Grote & Philipp Berens - 2020 - Journal of Medical Ethics 46 (3):205-211.
    In recent years, a plethora of high-profile scientific publications has been reporting about machine learning algorithms outperforming clinicians in medical diagnosis or treatment recommendations. This has spiked interest in deploying relevant algorithms with the aim of enhancing decision-making in healthcare. In this paper, we argue that instead of straightforwardly enhancing the decision-making capabilities of clinicians and healthcare institutions, deploying machine learning algorithms entails trade-offs at the epistemic and the normative level. Whereas involving machine learning might improve the accuracy of medical (...)
    54 citations
  8. Algorithmic Accountability In the Making. Deborah G. Johnson - 2021 - Social Philosophy and Policy 38 (2):111-127.
    Algorithms are now routinely used in decision-making; they are potent components in decisions that affect the lives of individuals and the activities of public and private institutions. Although use of algorithms has many benefits, a number of problems have been identified with their use in certain domains, most notably in domains where safety and fairness are important. Awareness of these problems has generated public discourse calling for algorithmic accountability. However, the current discourse focuses largely on algorithms and their (...). I argue that this reflects a narrow and inadequate understanding of accountability. I sketch an account of accountability that takes accountability to be a social practice constituted by actors, forums, shared beliefs and norms, performativity, and sanctions, and aimed at putting constraints on the exercise of power. On this account, algorithmic accountability is not yet constituted; it is in the making. The account brings to light a set of questions that must be addressed to establish it.
    1 citation
  9. Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability? Massimo Durante & Marcello D'Agostino - 2018 - Philosophy and Technology 31 (4):525-541.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of companies’ competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms usually (...)
    27 citations
  10. Algorithmic Decision-Making Based on Machine Learning from Big Data: Can Transparency Restore Accountability? Paul Laat - 2018 - Philosophy and Technology 31 (4):525-541.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves (“gaming the system” in particular), the potential loss of companies’ competitive edge, and the limited gains in answerability to be (...)
    27 citations
  11. Fair, Transparent, and Accountable Algorithmic Decision-making Processes: The Premise, the Proposed Solutions, and the Open Challenges. Bruno Lepri, Nuria Oliver, Emmanuel Letouzé, Alex Pentland & Patrick Vinck - 2018 - Philosophy and Technology 31 (4):611-627.
    The combination of increased availability of large amounts of fine-grained human behavioral data and advances in machine learning is presiding over a growing reliance on algorithms to address complex societal problems. Algorithmic decision-making processes might lead to more objective and thus potentially fairer decisions than those made by humans who may be influenced by greed, prejudice, fatigue, or hunger. However, algorithmic decision-making has been criticized for its potential to enhance discrimination, information and power asymmetry, and opacity. In (...)
    48 citations
  12. Public Trust, Institutional Legitimacy, and the Use of Algorithms in Criminal Justice. Duncan Purves & Jeremy Davis - 2022 - Public Affairs Quarterly 36 (2):136-162.
    A common criticism of the use of algorithms in criminal justice is that algorithms and their determinations are in some sense ‘opaque’—that is, difficult or impossible to understand, whether because of their complexity or because of intellectual property protections. Scholars have noted some key problems with opacity, including that opacity can mask unfair treatment and threaten public accountability. In this paper, we explore a different but related concern with algorithmic opacity, which centers on the role of (...)
    2 citations
  13. Algorithmic management in a work context. Will Sutherland, Eliscia Kinder, Christine T. Wolf, Min Kyung Lee, Gemma Newlands & Mohammad Hossein Jarrahi - 2021 - Big Data and Society 8 (2).
    The rapid development of machine-learning algorithms, which underpin contemporary artificial intelligence systems, has created new opportunities for the automation of work processes and management functions. While algorithmic management has been observed primarily within the platform-mediated gig economy, its transformative reach and consequences are also spreading to more standard work settings. Exploring algorithmic management as a sociotechnical concept, which reflects both technological infrastructures and organizational choices, we discuss how algorithmic management may influence existing power and social structures within (...)
    8 citations
  14. Deep Learning Opacity, and the Ethical Accountability of AI Systems. A New Perspective. Gianfranco Basti & Giuseppe Vitiello - 2023 - In Raffaela Giovagnoli & Robert Lowe (eds.), The Logic of Social Practices II. Springer Nature Switzerland. pp. 21-73.
    In this paper we analyse the conditions for attributing to AI autonomous systems the ontological status of “artificial moral agents”, in the context of the “distributed responsibility” between humans and machines in Machine Ethics (ME). In order to address the fundamental issue in ME of the unavoidable “opacity” of their decisions with ethical/legal relevance, we start from the neuroethical evidence in cognitive science. In humans, the “transparency” and then the “ethical accountability” of their actions as responsible moral agents is (...)
  15. Models, Algorithms, and the Subjects of Transparency. Hajo Greif - 2022 - In Vincent C. Müller (ed.), Philosophy and Theory of Artificial Intelligence 2021. Berlin: Springer. pp. 27-37.
    Concerns over epistemic opacity abound in contemporary debates on Artificial Intelligence (AI). However, it is not always clear to what extent these concerns refer to the same set of problems. We can observe, first, that the terms 'transparency' and 'opacity' are used either in reference to the computational elements of an AI model or to the models to which they pertain. Second, opacity and transparency might either be understood to refer to the properties of AI systems or (...)
  16. The ethnographer and the algorithm: beyond the black box. Angèle Christin - 2020 - Theory and Society 49 (5-6):897-918.
    A common theme in social science studies of algorithms is that they are profoundly opaque and function as “black boxes.” Scholars have developed several methodological approaches in order to address algorithmic opacity. Here I argue that we can explicitly enroll algorithms in ethnographic research, which can shed light on unexpected aspects of algorithmic systems—including their opacity. I delineate three meso-level strategies for algorithmic ethnography. The first, algorithmic refraction, examines the reconfigurations that take place when (...)
    5 citations
  17. Black box algorithms in mental health apps: An ethical reflection. Tania Manríquez Roa & Nikola Biller-Andorno - 2023 - Bioethics 37 (8):790-797.
    Mental health apps bring unprecedented benefits and risks to individual and public health. A thorough evaluation of these apps involves considering two aspects that are often neglected: the algorithms they deploy and the functions they perform. We focus on mental health apps based on black box algorithms, explore their forms of opacity, discuss the implications derived from their opacity, and propose how to use their outcomes in mental healthcare, self‐care practices, and research. We argue that there is a (...)
  18. Author’s Response: Opacity and Complexity of Learning Black Boxes. Elena Esposito - 2021 - Constructivist Foundations 16 (3):377-380.
    Non-transparent machine learning algorithms can be described as non-trivial machines that do not have to be understood, but controlled as communication partners. From the perspective of (...)
  19. Transparency in Algorithmic and Human Decision-Making: Is There a Double Standard? John Zerilli, Alistair Knott, James Maclaurin & Colin Gavaghan - 2018 - Philosophy and Technology 32 (4):661-683.
    We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better (...)
    71 citations
  20. Big data and algorithmic decision-making. Paul B. de Laat - 2017 - ACM SIGCAS Computers and Society 47 (3):39-53.
    Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Can transparency contribute to restoring accountability for such systems? Several objections are examined: the loss of privacy when data sets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms are inherently opaque. It is concluded that transparency (...)
    1 citation
  21. Phenomenology of Emotions and Algorithms in Cases of Early Rehospitalizations. Susi Ferrarello - 2023 - In Elodie Boublil & Susi Ferrarello (eds.), The Vulnerability of the Human World: Well-being, Health, Technology and the Environment. Springer Verlag. pp. 199-210.
    This paper is going to focus on the problem of emotions in technology, in particular in reference to the case of algorithms developed to track early rehospitalizations. In this paper I am going to discuss how phenomenology can support the integration of emotions in technology and how this integration can improve our chances for that “decent survival” that the founder of bioethics, Potter, has envisioned as the main goal of this discipline (Natl Cancer Inst Monogr 13:111–116, 1964; Bioethics, bridge to (...)
  22. Design publicity of black box algorithms: a support to the epistemic and ethical justifications of medical AI systems. Andrea Ferrario - 2022 - Journal of Medical Ethics 48 (7):492-494.
    In their article ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’, Durán and Jongsma discuss the epistemic and ethical challenges raised by black box algorithms in medical practice. The opacity of black box algorithms is an obstacle to the trustworthiness of their outcomes. Moreover, the use of opaque algorithms is not normatively justified in medical practice. The authors introduce a formalism, called computational reliabilism, which allows generating justified beliefs on (...)
    2 citations
  23. Beyond opening up the black box: Investigating the role of algorithmic systems in Wikipedian organizational culture. R. Stuart Geiger - 2017 - Big Data and Society 4 (2).
    Scholars and practitioners across domains are increasingly concerned with algorithmic transparency and opacity, interrogating the values and assumptions embedded in automated, black-boxed systems, particularly in user-generated content platforms. I report from an ethnography of infrastructure in Wikipedia to discuss an often understudied aspect of this topic: the local, contextual, learned expertise involved in participating in a highly automated social–technical environment. Today, the organizational culture of Wikipedia is deeply intertwined with various data-driven algorithmic systems, which Wikipedians rely on (...)
    6 citations
  24. What is morally at stake when using algorithms to make medical diagnoses? Expanding the discussion beyond risks and harms. Bas de Boer & Olya Kudina - 2021 - Theoretical Medicine and Bioethics 42 (5):245-266.
    In this paper, we examine the qualitative moral impact of machine learning-based clinical decision support systems in the process of medical diagnosis. To date, discussions about machine learning in this context have focused on problems that can be measured and assessed quantitatively, such as by estimating the extent of potential harm or calculating incurred risks. We maintain that such discussions neglect the qualitative moral impact of these technologies. Drawing on the philosophical approaches of technomoral change and technological mediation theory, which (...)
    3 citations
  25. Challenges in enabling user control over algorithm-based services. Pascal D. König - 2024 - AI and Society 39 (1):195-205.
    Algorithmic systems that provide services to people by supporting or replacing human decision-making promise greater convenience in various areas. The opacity of these applications, however, means that it is not clear how much they truly serve their users. A promising way to address the issue of possible undesired biases consists in giving users control by letting them configure a system and aligning its performance with users’ own preferences. However, as the present paper argues, this form of control over (...)
  26. Referential Opacity and Modal Logic. Dagfinn Føllesdal - 1998 - In J. H. Fetzer & P. Humphreys (eds.), The New Theory of Reference: Kripke, Marcus, and its Origins. Kluwer Academic Publishers. pp. 270-181.
  27. Opacity. François Recanati - 2000 - In A. Orenstein & Petr Kotatko (eds.), Knowledge, Language and Logic: Questions for Quine. Kluwer Academic Print on Demand. pp. 210-367.
  28. Freedom at Work: Understanding, Alienation, and the AI-Driven Workplace. Kate Vredenburgh - 2022 - Canadian Journal of Philosophy 52 (1):78-92.
    This paper explores a neglected normative dimension of algorithmic opacity in the workplace and the labor market. It argues that explanations of algorithms and algorithmic decisions are of noninstrumental value. That is because explanations of the structure and function of parts of the social world form the basis for reflective clarification of our practical orientation toward the institutions that play a central role in our life. Using this account of the noninstrumental value of explanations, the paper diagnoses (...)
    5 citations
  29. A phenomenology and epistemology of large language models: Transparency, trust, and trustworthiness. Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - forthcoming - Ethics and Information Technology.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. These chatbots are underpinned by large language models (LLMs), generative AI (Artificial Intelligence) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users anthropomorphise them. (...)
  30. Legitimacy, Authority, and the Political Value of Explanations. Seth Lazar - manuscript
    Here is my thesis (and the outline of this paper). Increasingly secret, complex and inscrutable computational systems are being used to intensify existing power relations, and to create new ones (Section II). To be all-things-considered morally permissible, new, or newly intense, power relations must in general meet standards of procedural legitimacy and proper authority (Section III). Legitimacy and authority constitutively depend, in turn, on a publicity requirement: reasonably competent members of the political community in which power is being exercised must (...)
    2 citations
  31. Rethinking democratizing potential of digital technology. Luyue Ma - 2020 - Journal of Information, Communication and Ethics in Society 18 (1):140-156.
    Purpose: The purpose of this paper is to examine how the shifting conceptualization of the democratizing potential of digital technology can be more comprehensively understood by bringing in science and technology studies perspectives to communication scholarship. The synthesis and discussion aim at providing an interdisciplinary theoretical framework for comprehensively understanding the democratizing potential of digital technology, and at urging researchers to be conscious of the assumptions underpinning the epistemological positions they take when examining this issue. Design/methodology/approach: The paper is (...)
  32. Explainable AI lacks regulative reasons: why AI and human decision-making are not equally opaque. Uwe Peters - forthcoming - AI and Ethics.
    Many artificial intelligence (AI) systems currently used for decision-making are opaque, i.e., the internal factors that determine their decisions are not fully known to people due to the systems’ computational complexity. In response to this problem, several researchers have argued that human decision-making is equally opaque and since simplifying, reason-giving explanations (rather than exhaustive causal accounts) of a decision are typically viewed as sufficient in the human case, the same should hold for algorithmic decision-making. Here, I contend that this (...)
    4 citations
  33. Peeking Inside the Black Box: A New Kind of Scientific Visualization. Michael T. Stuart & Nancy J. Nersessian - 2018 - Minds and Machines 29 (1):87-107.
    Computational systems biologists create and manipulate computational models of biological systems, but they do not always have straightforward epistemic access to the content and behavioural profile of such models because of their length, coding idiosyncrasies, and formal complexity. This creates difficulties both for modellers in their research groups and for their bioscience collaborators who rely on these models. In this paper we introduce a new kind of visualization that was developed to address just this sort of epistemic opacity. The (...)
    7 citations
  34. A pluralist hybrid model for moral AIs. Fei Song & Shing Hay Felix Yeung - forthcoming - AI and Society:1-10.
    With the increasing degrees A.I.s and machines are applied across different social contexts, the need for implementing ethics in A.I.s is pressing. In this paper, we argue for a pluralist hybrid model for the implementation of moral A.I.s. We first survey current approaches to moral A.I.s and their inherent limitations. Then we propose the pluralist hybrid approach and show how these limitations of moral A.I.s can be partly alleviated by the pluralist hybrid approach. The core ethical decision-making capacity of an (...)
    1 citation
  35. When Something Goes Wrong: Who is Responsible for Errors in ML Decision-making? Andrea Berber & Sanja Srećković - 2023 - AI and Society 38 (2):1-13.
    Because of its practical advantages, machine learning (ML) is increasingly used for decision-making in numerous sectors. This paper demonstrates that the integral characteristics of ML, such as semi-autonomy, complexity, and non-deterministic modeling have important ethical implications. In particular, these characteristics lead to a lack of insight and lack of comprehensibility, and ultimately to the loss of human control over decision-making. Errors, which are bound to occur in any decision-making process, may lead to great harm and human rights violations. It is (...)
    2 citations
  36. The Boundaries of Meaning: A Case Study in Neural Machine Translation. Yuri Balashov - 2022 - Inquiry: An Interdisciplinary Journal of Philosophy 66.
    The success of deep learning in natural language processing raises intriguing questions about the nature of linguistic meaning and ways in which it can be processed by natural and artificial systems. One such question has to do with subword segmentation algorithms widely employed in language modeling, machine translation, and other tasks since 2016. These algorithms often cut words into semantically opaque pieces, such as ‘period’, ‘on’, ‘t’, and ‘ist’ in ‘period|on|t|ist’. The system then represents the resulting segments in a dense (...)
  37. Analogue Models and Universal Machines. Paradigms of Epistemic Transparency in Artificial Intelligence. Hajo Greif - 2022 - Minds and Machines 32 (1):111-133.
    The problem of epistemic opacity in Artificial Intelligence is often characterised as a problem of intransparent algorithms that give rise to intransparent models. However, the degrees of transparency of an AI model should not be taken as an absolute measure of the properties of its algorithms but of the model’s degree of intelligibility to human users. Its epistemically relevant elements are to be specified on various levels above and beyond the computational one. In order to elucidate this claim, I (...)
  38. Explainable machine learning practices: opening another black box for reliable medical AI. Emanuele Ratti & Mark Graves - 2022 - AI and Ethics:1-14.
    In the past few years, machine learning (ML) tools have been implemented with success in the medical context. However, several practitioners have raised concerns about the lack of transparency—at the algorithmic level—of many of these tools; and solutions from the field of explainable AI (XAI) have been seen as a way to open the ‘black box’ and make the tools more trustworthy. Recently, Alex London has argued that in the medical context we do not need machine learning tools to (...)
    5 citations
  39. Discrimination in the age of artificial intelligence. Bert Heinrichs - 2022 - AI and Society 37 (1):143-154.
    In this paper, I examine whether the use of artificial intelligence (AI) and automated decision-making (ADM) aggravates issues of discrimination as has been argued by several authors. For this purpose, I first take up the lively philosophical debate on discrimination and present my own definition of the concept. Equipped with this account, I subsequently review some of the recent literature on the use of AI/ADM and discrimination. I explain how my account of discrimination helps to understand that the general claim in (...)
    6 citations
  40. “The revolution will not be supervised”: Consent and open secrets in data science. Abibat Rahman-Davies, Madison W. Green & Coleen Carrigan - 2021 - Big Data and Society 8 (2).
    The social impacts of computer technology are often glorified in public discourse, but there is growing concern about its actual effects on society. In this article, we ask: how does “consent” as an analytical framework make visible the social dynamics and power relations in the capture, extraction, and labor of data science knowledge production? We hypothesize that a form of boundary violation in data science workplaces—gender harassment—may correlate with the ways humans’ lived experiences are extracted to produce Big Data. The (...)
    Bookmark   1 citation  
  41.
    On Artificial Intelligence and Manipulation.Marcello Ienca - 2023 - Topoi 42 (3):833-842.
    The increasing diffusion of novel digital and online sociotechnical systems for arational behavioral influence based on Artificial Intelligence (AI), such as social media, microtargeting advertising, and personalized search algorithms, has brought about new ways of engaging with users, collecting their data and potentially influencing their behavior. However, these technologies and techniques have also raised concerns about the potential for manipulation, as they offer unprecedented capabilities for targeting and influencing individuals on a large scale and in a more subtle, automated and (...)
    Bookmark   1 citation  
  42.
    Using artificial intelligence to enhance patient autonomy in healthcare decision-making.Jose Luis Guerrero Quiñones - forthcoming - AI and Society:1-10.
    The use of artificial intelligence in healthcare contexts is highly controversial for the (bio)ethical conundrums it creates. One of the main problems arising from its implementation is the lack of transparency of machine learning algorithms, which is thought to impede the patient’s autonomous choice regarding their medical decisions. If the patient is unable to clearly understand why and how an AI algorithm reached a certain medical decision, their autonomy is undermined. However, there are alternatives to prevent the negative impact of (...)
  43.
    Data Science as Machinic Neoplatonism.Dan McQuillan - 2018 - Philosophy and Technology 31 (2):253-272.
    Data science is not simply a method but an organising idea. Commitment to the new paradigm overrides concerns caused by collateral damage, and only a counterculture can constitute an effective critique. Understanding data science requires an appreciation of what algorithms actually do; in particular, how machine learning learns. The resulting ‘insight through opacity’ drives the observable problems of algorithmic discrimination and the evasion of due process. But attempts to stem the tide have not grasped the nature of data (...)
    Bookmark   3 citations  
  44.
    Uncertainty, Evidence, and the Integration of Machine Learning into Medical Practice.Thomas Grote & Philipp Berens - 2023 - Journal of Medicine and Philosophy 48 (1):84-97.
    In light of recent advances in machine learning for medical applications, the automation of medical diagnostics is imminent. That said, before machine learning algorithms find their way into clinical practice, various problems at the epistemic level need to be overcome. In this paper, we discuss different sources of uncertainty arising for clinicians trying to evaluate the trustworthiness of algorithmic evidence when making diagnostic judgments. In doing so, we examine many of the limitations of current machine learning algorithms (with deep learning in (...)
    Bookmark   1 citation  
  45.
    The paradoxical transparency of opaque machine learning.Felix Tun Han Lo - forthcoming - AI and Society:1-13.
    This paper examines the paradoxical transparency involved in training machine-learning models. Existing literature typically critiques the opacity of machine-learning models such as neural networks or collaborative filtering, a type of critique that parallels the black-box critique in technology studies. Accordingly, people in power may leverage the models’ opacity to justify a biased result without subjecting the technical operations to public scrutiny, in what Dan McQuillan metaphorically depicts as an “algorithmic state of exception”. This paper attempts to differentiate (...)
    Bookmark   1 citation  
  46.
    Staying with the Darkness: Peter Sloterdijk’s Anthropotechnics for the Digital Age.Andrea Capra - 2021 - Angelaki 26 (1):124-141.
    This essay discusses Sloterdijk’s anthropotechnical framework as it relates to recent contributions that deal with the inherent opacities of digital technology and processes of blackboxing. I argue that Sloterdijk’s philosophy is a precious case of affirmative, non-nihilistic technophilic thinking that espouses the technogenic provenance of mankind, and leaves space for technologically engendered incomprehensibility while tracing a horizon for human beings’ resoluteness. In the first section of my essay I tackle Sloterdijk’s reflections on the philosophical transition from wonder to horror in (...)
  47.
    Transparent AI: reliabilist and proud.Abhishek Mishra - forthcoming - Journal of Medical Ethics.
    Durán et al. argue in ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’ that traditionally proposed solutions to make black box machine learning models in medicine less opaque and more transparent are, though necessary, ultimately not sufficient to establish their overall trustworthiness. This is because transparency procedures currently employed, such as the use of an interpretable predictor, cannot fully overcome the opacity of such models. Computational reliabilism, an alternate approach (...)
  48.
    Trading spaces: A promissory note to solve relational mapping problems.Karl Haberlandt - 1997 - Behavioral and Brain Sciences 20 (1):74-74.
    Clark & Thornton have demonstrated the paradox between the opacity of the transformations that underlie relational mappings and the ease with which people learn such mappings. However, C&T's trading-spaces proposal resolves the paradox only in the broadest outline. The general-purpose algorithm promised by C&T remains to be developed. The strategy of doing so is to analyze and formulate computational mechanisms for known cases of recoding.
  49.
    Automating anticorruption?María Carolina Jiménez & Emanuela Ceva - 2022 - Ethics and Information Technology 24 (4):1-14.
    The paper explores some normative challenges concerning the integration of Machine Learning (ML) algorithms into anticorruption in public institutions. The challenges emerge from the tensions between an approach treating ML algorithms as allies to an exclusively legalistic conception of anticorruption and an approach seeing them within an institutional ethics of office accountability. We explore two main challenges. One concerns the variable opacity of some ML algorithms, which may affect public officeholders’ capacity to account for institutional processes relying upon ML (...)
  50. On algorithmic fairness in medical practice.Thomas Grote & Geoff Keeling - 2022 - Cambridge Quarterly of Healthcare Ethics 31 (1):83-94.
    The application of machine-learning technologies to medical practice promises to enhance the capabilities of healthcare professionals in the assessment, diagnosis, and treatment of medical conditions. However, there is growing concern that algorithmic bias may perpetuate or exacerbate existing health inequalities. Hence, it matters that we make precise the different respects in which algorithmic bias can arise in medicine, and also make clear the normative relevance of these different kinds of algorithmic bias for broader questions about justice and (...)
    Bookmark   2 citations  
1 — 50 / 993