Results for 'AI-Box'

996 found
  1. Black-box assisted medical decisions: AI power vs. ethical physician care. Berman Chan - 2023 - Medicine, Health Care and Philosophy 26 (3):285-292.
    Without doctors being able to explain medical decisions to patients, I argue their use of black box AIs would erode the effective and respectful care they provide patients. In addition, I argue that physicians should use AI black boxes only for patients in dire straits, or when physicians use AI as a “co-pilot” (analogous to a spellchecker) but can independently confirm its accuracy. I respond to A.J. London’s objection that physicians already prescribe some drugs without knowing why they work.
  2. AI’s black box and the supremacy of standards. Murilo Karasinski & Kleber Bez Birolo Candiotto - 2024 - Filosofia Unisinos 25 (1):1-13.
    This article investigates the metaphor of the “black box” in artificial intelligence, a representation that often suggests that AI is an unfathomable power, politically uncontrollable and shrouded in an aura of opacity. While the concept of the “black box” is legitimate and applicable in deep neural networks due to the inherent complexity of the process, it has also become a generic pretext for the perception, which we seek to critically analyze, that AI systems are inscrutable and out of control, (...)
  3. Black Boxes and Bias in AI Challenge Autonomy. Craig M. Klugman - 2021 - American Journal of Bioethics 21 (7):33-35.
    In “Artificial Intelligence, Social Media and Depression: A New Concept of Health-Related Digital Autonomy,” Laacke and colleagues posit a revised model of autonomy when using digital algori...
  4. Black Boxes that Curtail Human Flourishing are no Longer Available for Use in Artificial Intelligence (AI) Design. John W. Murphy & Carlos Largacha-Martinez - 2024 - Filosofija. Sociologija 35 (1).
    AI is considered to be very abstract to a range of critics. In this regard, algorithms are referred to regularly as black boxes and divorced from human intervention. A particular philosophical maneuver supports this outcome. The aim of this article is to (1) bring the philosophy to the surface that has contributed to this distance between AI and people and (2) offer an alternative philosophical position that can bring this technology closer to individuals and communities. The overall goal of the (...)
  5. Black-Box Expertise and AI Discourse. Kenneth Boyd - 2023 - The Prindle Post.
  6. Thinking Inside the Box: Controlling and Using an Oracle AI. Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...)
  7. Transparency and the Black Box Problem: Why We Do Not Trust AI. Warren J. von Eschenbach - 2021 - Philosophy and Technology 34 (4):1607-1622.
    With automation of routine decisions coupled with more intricate and complex information architecture operating this automation, concerns are increasing about the trustworthiness of these systems. These concerns are exacerbated by a class of artificial intelligence that uses deep learning, an algorithmic system of deep neural networks, which on the whole remain opaque or hidden from human comprehension. This situation is commonly referred to as the black box problem in AI. Without understanding how AI reaches its conclusions, it is an open (...)
  8. AI and Ethics: Shedding Light on the Black Box. Katrina Ingram - 2020 - International Review of Information Ethics 28.
    Artificial Intelligence is playing an increasingly prevalent role in our lives. Whether it's landing a job interview, getting a bank loan or accessing a government program, organizations are using automated systems informed by AI enabled technologies in ways that have significant consequences for people. At the same time, there is a lack of transparency around how AI technologies work and whether they are ethical, fair or accurate. This paper examines a body of literature related to the ethical considerations surrounding the (...)
  9. Opening the black boxes of the black carpet in the era of risk society: a sociological analysis of AI, algorithms and big data at work through the case study of the Greek postal services. Christos Kouroutzas & Venetia Palamari - forthcoming - AI and Society:1-14.
    This article draws on contributions from the Sociology of Science and Technology and Science and Technology Studies, the Sociology of Risk and Uncertainty, and the Sociology of Work, focusing on the transformations of employment regarding expanded automation, robotization and informatization. The new work patterns emerging due to the introduction of software and hardware technologies, which are based on artificial intelligence, algorithms, big data gathering and robotic systems are examined closely. This article attempts to “open the black boxes” of the “black (...)
  10. Design publicity of black box algorithms: a support to the epistemic and ethical justifications of medical AI systems. Andrea Ferrario - 2022 - Journal of Medical Ethics 48 (7):492-494.
    In their article ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’, Durán and Jongsma discuss the epistemic and ethical challenges raised by black box algorithms in medical practice. The opacity of black box algorithms is an obstacle to the trustworthiness of their outcomes. Moreover, the use of opaque algorithms is not normatively justified in medical practice. The authors introduce a formalism, called computational reliabilism, which allows generating justified beliefs on the (...)
  11. Thinking inside the box: Using and controlling an oracle AI. Stuart Armstrong, Anders Sandberg & Nick Bostrom - forthcoming - Minds and Machines.
  12. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI. Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (5).
    The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining that (...)
  13. Responsibility Gaps and Black Box Healthcare AI: Shared Responsibilization as a Solution. Benjamin H. Lang, Sven Nyholm & Jennifer Blumenthal-Barby - 2023 - Digital Society 2 (3):52.
    As sophisticated artificial intelligence software becomes more ubiquitously and more intimately integrated within domains of traditionally human endeavor, many are raising questions over how responsibility (be it moral, legal, or causal) can be understood for an AI’s actions or influence on an outcome. So called “responsibility gaps” occur whenever there exists an apparent chasm in the ordinary attribution of moral blame or responsibility when an AI automates physical or cognitive labor otherwise performed by human beings and commits an error. Healthcare (...)
  14. Explainable machine learning practices: opening another black box for reliable medical AI. Emanuele Ratti & Mark Graves - 2022 - AI and Ethics:1-14.
    In the past few years, machine learning (ML) tools have been implemented with success in the medical context. However, several practitioners have raised concerns about the lack of transparency—at the algorithmic level—of many of these tools; and solutions from the field of explainable AI (XAI) have been seen as a way to open the ‘black box’ and make the tools more trustworthy. Recently, Alex London has argued that in the medical context we do not need machine learning tools to be (...)
  15. Coming to Terms with the Black Box Problem: How to Justify AI Systems in Health Care. Ryan Marshall Felder - 2021 - Hastings Center Report 51 (4):38-45.
    The use of opaque, uninterpretable artificial intelligence systems in health care can be medically beneficial, but it is often viewed as potentially morally problematic on account of this opacity—because the systems are black boxes. Alex John London has recently argued that opacity is not generally problematic, given that many standard therapies are explanatorily opaque and that we can rely on statistical validation of the systems in deciding whether to implement them. But is statistical validation sufficient to justify implementation of these (...)
  16. Testing the Black Box: Institutional Investors, Risk Disclosure, and Ethical AI. Trooper Sanders - 2020 - Philosophy and Technology 34 (1):105-109.
    The integration of artificial intelligence throughout the economy makes the ethical risks it poses a mainstream concern beyond technology circles. Building on their growing role bringing greater transparency to climate risk, institutional investors can play a constructive role in advancing the responsible evolution of AI by demanding more rigorous analysis and disclosure of ethical risks.
  17. Black-box artificial intelligence: an epistemological and critical analysis. Manuel Carabantes - 2020 - AI and Society 35 (2):309-317.
    The artificial intelligence models with machine learning that exhibit the best predictive accuracy, and therefore, the most powerful ones, are, paradoxically, those with the most opaque black-box architectures. At the same time, the unstoppable computerization of advanced industrial societies demands the use of these machines in a growing number of domains. The conjunction of both phenomena gives rise to a control problem on AI that in this paper we analyze by dividing the issue into two. First, we carry out an (...)
  18. GLocalX - From Local to Global Explanations of Black Box AI Models. Mattia Setzu, Riccardo Guidotti, Anna Monreale, Franco Turini, Dino Pedreschi & Fosca Giannotti - 2021 - Artificial Intelligence 294 (C):103457.
  19. What’s wrong with medical black box AI? Bert Gordijn & Henk ten Have - 2023 - Medicine, Health Care and Philosophy 26 (3):283-284.
  20. Black boxes, not green: Mythologizing artificial intelligence and omitting the environment. Benedetta Brevini - 2020 - Big Data and Society 7 (2).
    We are repeatedly told that AI will help us to solve some of the world's biggest challenges, from treating chronic diseases and reducing fatality rates in traffic accidents to fighting climate change and anticipating cybersecurity threats. However, the article contends that public discourse on AI systematically avoids considering AI’s environmental costs. Artificial Intelligence, Brevini argues, runs on technology, machines, and infrastructures that deplete scarce resources in their production, consumption, and disposal, thus increasing the amounts of energy in their use, and (...)
  21. AI and the path to envelopment: knowledge as a first step towards the responsible regulation and use of AI-powered machines. Scott Robbins - 2020 - AI and Society 35 (2):391-400.
    With Artificial Intelligence entering our lives in novel ways—both known and unknown to us—there is both the enhancement of existing ethical issues associated with AI as well as the rise of new ethical issues. There is much focus on opening up the ‘black box’ of modern machine-learning algorithms to understand the reasoning behind their decisions—especially morally salient decisions. However, some applications of AI which are no doubt beneficial to society rely upon these black boxes. Rather than requiring algorithms to be (...)
  22. Indexical AI. Leif Weatherby & Brian Justie - 2022 - Critical Inquiry 48 (2):381-415.
    This article argues that the algorithms known as neural nets underlie a new form of artificial intelligence that we call indexical AI. Contrasting with the once dominant symbolic AI, large-scale learning systems have become a semiotic infrastructure underlying global capitalism. Their achievements are based on a digital version of the sign-function index, which points rather than describes. As these algorithms spread to parse the increasingly heavy data volumes on platforms, it becomes harder to remain skeptical of their results. We call (...)
  23. Understanding, Idealization, and Explainable AI. Will Fleisher - 2022 - Episteme 19 (4):534-560.
    Many AI systems that make important decisions are black boxes: how they function is opaque even to their developers. This is due to their high complexity and to the fact that they are trained rather than programmed. Efforts to alleviate the opacity of black box systems are typically discussed in terms of transparency, interpretability, and explainability. However, there is little agreement about what these key concepts mean, which makes it difficult to adjudicate the success or promise of opacity alleviation methods. (...)
  24. AI transparency: a matter of reconciling design with critique. Tomasz Hollanek - forthcoming - AI and Society.
    In the late 2010s, various international committees, expert groups, and national strategy boards have voiced the demand to ‘open’ the algorithmic black box, to audit, expound, and demystify artificial intelligence. The opening of the algorithmic black box, however, cannot be seen only as an engineering challenge. In this article, I argue that only the sort of transparency that arises from critique—a method of theoretical examination that, by revealing pre-existing power structures, aims to challenge them—can help us produce technological systems that (...)
  25. Designing AI for Explainability and Verifiability: A Value Sensitive Design Approach to Avoid Artificial Stupidity in Autonomous Vehicles. Steven Umbrello & Roman Yampolskiy - 2022 - International Journal of Social Robotics 14 (2):313-322.
    One of the primary, if not most critical, difficulties in the design and implementation of autonomous systems is the black-boxed nature of the decision-making structures and logical pathways. How human values are embodied and actualised in situ may ultimately prove to be harmful if not outright recalcitrant. For this reason, the values of stakeholders become of particular significance given the risks posed by opaque structures of intelligent agents (IAs). This paper explores how decision matrix algorithms, via the belief-desire-intention model for (...)
  26. AI and the need for justification (to the patient). Anantharaman Muralidharan, Julian Savulescu & G. Owen Schaefer - 2024 - Ethics and Information Technology 26 (1):1-12.
    This paper argues that one problem that besets black-box AI is that it lacks algorithmic justifiability. We argue that the norm of shared decision making in medical care presupposes that treatment decisions ought to be justifiable to the patient. Medical decisions are justifiable to the patient only if they are compatible with the patient’s values and preferences and the patient is able to see that this is so. Patient-directed justifiability is threatened by black-box AIs because the lack of rationale provided (...)
  27. Explaining Explanations in AI. Brent Mittelstadt - forthcoming - FAT* 2019 Proceedings 1.
    Recent work on interpretability in machine learning and AI has focused on the building of simplified models that approximate the true criteria used to make decisions. These models are a useful pedagogical device for teaching trained professionals how to predict what decisions will be made by the complex system, and most importantly how the system might break. However, when considering any such model it’s important to remember Box’s maxim that "All models are wrong but some are useful." We focus on (...)
  28. The black box problem revisited. Real and imaginary challenges for automated legal decision making. Bartosz Brożek, Michał Furman, Marek Jakubiec & Bartłomiej Kucharzyk - forthcoming - Artificial Intelligence and Law:1-14.
    This paper addresses the black-box problem in artificial intelligence (AI), and the related problem of explainability of AI in the legal context. We argue, first, that the black box problem is, in fact, a superficial one as it results from an overlap of four different – albeit interconnected – issues: the opacity problem, the strangeness problem, the unpredictability problem, and the justification problem. Thus, we propose a framework for discussing both the black box problem and the explainability of AI. We (...)
  29. Towards Knowledge-driven Distillation and Explanation of Black-box Models. Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Toquard, Oliver Kutz & Daniele Porello - 2021 - In Roberto Confalonieri, Guendalina Righetti, Pietro Galliani, Nicolas Toquard, Oliver Kutz & Daniele Porello (eds.), Proceedings of the Workshop on Data meets Applied Ontologies in Explainable AI (DAO-XAI 2021), part of Bratislava Knowledge September (BAKS 2021), Bratislava, Slovakia, September 18th to 19th, 2021. CEUR 2998.
    We introduce and discuss a knowledge-driven distillation approach to explaining black-box models by means of two kinds of interpretable models. The first is perceptron (or threshold) connectives, which enrich knowledge representation languages such as Description Logics with linear operators that serve as a bridge between statistical learning and logical reasoning. The second is Trepan Reloaded, an approach that builds post-hoc explanations of black-box classifiers in the form of decision trees enhanced by domain knowledge. Our aim is, firstly, to target (...)
  30. The virtues of interpretable medical AI. Joshua Hatherley, Robert Sparrow & Mark Howard - forthcoming - Cambridge Quarterly of Healthcare Ethics.
    Artificial intelligence (AI) systems have demonstrated impressive performance across a variety of clinical tasks. However, notoriously, sometimes these systems are “black boxes.” The initial response in the literature was a demand for “explainable AI.” However, recently, several authors have suggested that making AI more explainable or “interpretable” is likely to be at the cost of the accuracy of these systems and that prioritizing interpretability in medical AI may constitute a “lethal prejudice.” In this paper, we defend the value of interpretability (...)
  31. Believing in Black Boxes: Must Machine Learning in Healthcare be Explainable to be Evidence-Based? Liam McCoy, Connor Brenna, Stacy Chen, Karina Vold & Sunit Das - forthcoming - Journal of Clinical Epidemiology.
    Objective: To examine the role of explainability in machine learning for healthcare (MLHC), and its necessity and significance with respect to effective and ethical MLHC application. Study Design and Setting: This commentary engages with the growing and dynamic corpus of literature on the use of MLHC and artificial intelligence (AI) in medicine, which provides the context for a focused narrative review of arguments presented in favour of and in opposition to explainability in MLHC. Results: We find that concerns regarding explainability are (...)
  32. Transparent AI: reliabilist and proud. Abhishek Mishra - forthcoming - Journal of Medical Ethics.
    Durán et al argue in ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI’ that traditionally proposed solutions to make black box machine learning models in medicine less opaque and more transparent are, though necessary, ultimately not sufficient to establish their overall trustworthiness. This is because transparency procedures currently employed, such as the use of an interpretable predictor, cannot fully overcome the opacity of such models. Computational reliabilism, an alternate approach to (...)
  33. Toward safe AI. Andres Morales-Forero, Samuel Bassetto & Eric Coatanea - 2023 - AI and Society 38 (2):685-696.
    Since some AI algorithms with high predictive power have impacted human integrity, safety has become a crucial challenge in adopting and deploying AI. Although it is impossible to prevent an algorithm from failing in complex tasks, it is crucial to ensure that it fails safely, especially if it is a critical system. Moreover, due to AI’s unbridled development, it is imperative to minimize the methodological gaps in these systems’ engineering. This paper uses the well-known Box-Jenkins method for statistical modeling as (...)
  34. Defining the undefinable: the black box problem in healthcare artificial intelligence. Jordan Joseph Wadden - 2022 - Journal of Medical Ethics 48 (10):764-768.
    The ‘black box problem’ is a long-standing talking point in debates about artificial intelligence. This is a significant point of tension between ethicists, programmers, clinicians and anyone else working on developing AI for healthcare applications. However, the precise definition of these systems is often left undefined, vague, or unclear, or is assumed to be standardised within AI circles. This leads to situations where individuals working on AI talk over each other and has been invoked in numerous debates between opaque and explainable (...)
  35. Percentages and reasons: AI explainability and ultimate human responsibility within the medical field. Eva Winkler, Andreas Wabro & Markus Herrmann - 2024 - Ethics and Information Technology 26 (2).
    With regard to current debates on the ethical implementation of AI, especially two demands are linked: the call for explainability and for ultimate human responsibility. In the medical field, both are condensed into the role of one person: It is the physician to whom AI output should be explainable and who should thus bear ultimate responsibility for diagnostic or treatment decisions that are based on such AI output. In this article, we argue that a black box AI indeed creates a (...)
  36. Making the black box society transparent. Daniel Innerarity - forthcoming - AI and Society:1-7.
    The growing presence of smart devices in our lives turns all of society into something largely unknown to us. The strategy of demanding transparency stems from the desire to reduce the ignorance to which this automated society seems to condemn us. An evaluation of this strategy first requires that we distinguish the different types of non-transparency. Once we reveal the limits of the transparency needed to confront these devices, the article examines the alternative strategy of explainable artificial intelligence and concludes (...)
  37. We might be afraid of black-box algorithms. Carissa Veliz, Milo Phillips-Brown, Carina Prunkl & Ted Lechterman - 2021 - Journal of Medical Ethics 47.
    Fears of black-box algorithms are multiplying. Black-box algorithms are said to prevent accountability, make it harder to detect bias and so on. Some fears concern the epistemology of black-box algorithms in medicine and the ethical implications of that epistemology. In ‘Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI,' Durán and Jongsma seek to allay such fears. While some of their arguments are compelling, we still see reasons for fear.
  38. Artificial intelligence ethics has a black box problem. Jean-Christophe Bélisle-Pipon, Erica Monteferrante, Marie-Christine Roy & Vincent Couture - 2023 - AI and Society 38 (4):1507-1522.
    It has become a truism that the ethics of artificial intelligence (AI) is necessary and must help guide technological developments. Numerous ethical guidelines have emerged from academia, industry, government and civil society in recent years. While they provide a basis for discussion on appropriate regulation of AI, it is not always clear how these ethical guidelines were developed, and by whom. Using content analysis, we surveyed a sample of the major documents (n = 47) and analyzed the accessible information regarding (...)
  39. Thinking outside the Ring of Concussive Punches: Reimagining Boxing. Joseph Lee - 2021 - Sport, Ethics and Philosophy 16 (4):413-426.
    The idea of human-like robots with artificial intelligence (AI) engaging in sports has been considered in the light of robotics, technology and culture. However, robots with AI can also be used to clarify ethical questions in sports such as boxing with its inherent risks of brain injury and even death. This article develops an innovative way to assess the ethical issues in boxing by using a thought experiment, responding to recent medical data and overall concerns about harms and risks to boxers. (...)
  40. Box Clever: The Intelligence of Television. [REVIEW] Stuart Nolan - 2003 - AI and Society 17 (1):25-36.
    Television is a global, near-ubiquitous technology that has played a unique role in shaping modern society. It is a member of the family household that is regarded, both consciously and subconsciously, as a social actor, in a way that is remarkably similar to that of other members. Individuals, households and broad social groups form complex relationships with television but its underlying technologies have remained relatively simple until now. This paper looks at how new technologies will add intelligence to television and (...)
  41. Tableau-resolution based description abduction logics: An A-Box Abduction Problem Solver in Artificial Intelligence. Seyed Ahmad Mirsanei - 2023 - In The 9th International TMU Student Philosophy Conference. Tehran: Tarbiat Modares University - Department of Philosophy. pp. 133-137.
    With the introduction and extension of description logics (DLs) and the growth of their application in knowledge representation, especially in OWLs and the semantic web, many shortcomings were identified that were not resolvable in classical DLs, and so logicians and computer scientists turned to non-classical and non-monotonic reasoning tools. In this paper, I discuss abduction problem solvers and, by introducing A-Box abduction in description logics (DLs) such as ALC, examine decidability and complexity in the different proposed algorithms, and report shortly (...)
     
  42. A riddle, wrapped in a mystery, inside an enigma: How semantic black boxes and opaque artificial intelligence confuse medical decision‐making. Robin Pierce, Sigrid Sterckx & Wim Van Biesen - 2021 - Bioethics 36 (2):113-120.
    The use of artificial intelligence (AI) in healthcare comes with opportunities but also numerous challenges. A specific challenge that remains underexplored is the lack of clear and distinct definitions of the concepts used in and/or produced by these algorithms, how their real world meaning is translated into machine language and, vice versa, how their output is understood by the end user. This “semantic” black box adds to the “mathematical” black box present in many AI systems in which the underlying (...)
  43. Searle's Chinese Box: The Chinese Room Argument and Artificial Intelligence. Larry Hauser - 1993 - Dissertation, University of Michigan
    The apparently intelligent doings of computers occasion philosophical debate about artificial intelligence (AI). Evidence of AI is not bad; arguments against AI are: such is the case for. One argument against AI--currently, perhaps, the most influential--is considered in detail: John Searle's Chinese room argument (CRA). This argument and its attendant thought experiment are shown to be unavailing against claims that computers can and even do think. CRA is formally invalid and informally fallacious. CRE's putative experimental result is not robust and (...)
     
  44. “Just” accuracy? Procedural fairness demands explainability in AI‑based medical resource allocation. Jon Rueda, Janet Delgado Rodríguez, Iris Parra Jounou, Joaquín Hortal-Carmona, Txetxu Ausín & David Rodríguez-Arias - 2022 - AI and Society:1-12.
    The increasing application of artificial intelligence (AI) to healthcare raises both hope and ethical concerns. Some advanced machine learning methods provide accurate clinical predictions at the expense of a significant lack of explainability. Alex John London has argued that accuracy is a more important value than explainability in AI medicine. In this article, we locate the trade-off between accurate performance and explainable algorithms in the context of distributive justice. We acknowledge that accuracy is cardinal from the standpoint of outcome-oriented justice because it helps (...)
    Direct download (5 more)  
     
    Export citation  
     
    Bookmark   2 citations  
  45.  9
    Epistemic (in)justice, social identity and the Black Box problem in patient care.Muneerah Khan & Cornelius Ewuoso - forthcoming - Medicine, Health Care and Philosophy:1-14.
    This manuscript draws on the moral norms arising from the nuanced accounts of epistemic (in)justice and social identity in relational autonomy to normatively assess and articulate the ethical problems associated with using AI in patient care in light of the Black Box problem. The article also describes how black-boxed AI may be used within the healthcare system. The manuscript highlights what needs to happen to align AI with the moral norms it draws on. Deeper thinking – from other backgrounds other (...)
    Direct download (2 more)  
     
    Export citation  
     
    Bookmark  
  46.  23
    Creating meaningful work in the age of AI: explainable AI, explainability, and why it matters to organizational designers.Kristin Wulff & Hanne Finnestrand - forthcoming - AI and Society:1-14.
    In this paper, we contribute to research on enterprise artificial intelligence (AI), specifically on how organizations improve their customer experiences and internal processes using the type of AI called machine learning (ML). Many organizations struggle to get enough value from their AI efforts, and part of this is related to explainability. The need for explainability is especially high for so-called black-box ML models, where decisions are made without anyone understanding how an AI reached (...)
    Direct download (3 more)  
     
    Export citation  
     
    Bookmark   2 citations  
  47. Searle's Chinese Box: Debunking the Chinese Room Argument. [REVIEW]Larry Hauser - 1997 - Minds and Machines 7 (2):199-226.
    John Searle's Chinese room argument is perhaps the most influential and widely cited argument against artificial intelligence (AI). Understood as targeting AI proper – claims that computers can think or do think – Searle's argument, despite its rhetorical flash, is logically and scientifically a dud. Advertised as effective against AI proper, the argument, in its main outlines, is an ignoratio elenchi. It musters persuasive force fallaciously by indirection fostered by equivocal deployment of the phrase "strong AI" and reinforced by equivocation on the phrase "causal powers" (at least) equal (...)
    Direct download (10 more)  
     
    Export citation  
     
    Bookmark   6 citations  
  48.  79
    Black is the new orange: how to determine AI liability.Paulo Henrique Padovan, Clarice Marinho Martins & Chris Reed - 2023 - Artificial Intelligence and Law 31 (1):133-167.
    Autonomous artificial intelligence (AI) systems can lead to unpredictable behavior causing loss or damage to individuals. Intricate questions must be resolved to establish how courts determine liability. Until recently, understanding the inner workings of “black boxes” has been exceedingly difficult; however, the use of Explainable Artificial Intelligence (XAI) would help simplify the complex problems that can occur with autonomous AI systems. In this context, this article seeks to provide technical explanations that can be given by XAI, and to show how (...)
    Direct download (3 more)  
     
    Export citation  
     
    Bookmark   1 citation  
  49.  62
    Connecting ethics and epistemology of AI.Federica Russo, Eric Schliesser & Jean Wagemans - forthcoming - AI and Society:1-19.
    The need for fair and just AI is often related to the possibility of understanding AI itself, in other words, of turning an opaque box into a glass box, as inspectable as possible. Transparency and explainability, however, pertain to the technical domain and to philosophy of science, thus leaving the ethics and epistemology of AI largely disconnected. To remedy this, we propose an integrated approach premised on the idea that a glass-box epistemology should explicitly consider how to incorporate values and (...)
    Direct download (3 more)  
     
    Export citation  
     
    Bookmark   1 citation  
  50.  51
    Moral agency without responsibility? Analysis of three ethical models of human-computer interaction in times of artificial intelligence (AI).Alexis Fritz, Wiebke Brandt, Henner Gimpel & Sarah Bayer - 2020 - De Ethica 6 (1):3-22.
    Philosophical and sociological approaches in technology have increasingly shifted toward describing AI (artificial intelligence) systems as ‘(moral) agents,’ while also attributing ‘agency’ to them. It is only in this way – so their principal argument goes – that the effects of technological components in a complex human-computer interaction can be understood sufficiently in phenomenological-descriptive and ethical-normative respects. By contrast, this article aims to demonstrate that an explanatory model only achieves a descriptively and normatively satisfactory result if the concepts of ‘(moral) (...)
    No categories
    Direct download (2 more)  
     
    Export citation  
     
    Bookmark   7 citations  
1 — 50 / 996