Results for 'trustworthy artificial intelligence'

999 found
  1. Trustworthy artificial intelligence and ethical design: public perceptions of trustworthiness of an AI-based decision-support tool in the context of intrapartum care. Angeliki Kerasidou, Antoniya Georgieva & Rachel Dlugatch - 2023 - BMC Medical Ethics 24 (1):1-16.
    Background: Despite the recognition that developing artificial intelligence (AI) that is trustworthy is necessary for public acceptability and the successful implementation of AI in healthcare contexts, perspectives from key stakeholders are often absent from discourse on the ethical design, development, and deployment of AI. This study explores the perspectives of birth parents and mothers on the introduction of AI-based cardiotocography (CTG) in the context of intrapartum care, focusing on issues pertaining to trust and trustworthiness. Methods: Seventeen semi-structured interviews were conducted (...)
    1 citation.
  2. Trustworthy artificial intelligence. Mona Simion & Christoph Kelp - 2020 - Asian Journal of Philosophy 2 (1):1-12.
    This paper develops an account of trustworthy AI. Its central idea is that whether AIs are trustworthy is a matter of whether they live up to their function-based obligations. We argue that this account serves to advance the literature in a couple of important ways. First, it serves to provide a rationale for why a range of properties that are widely assumed in the scientific literature, as well as in policy, to be required of trustworthy AI, such (...)
    8 citations.
  3. A critical perspective on guidelines for responsible and trustworthy artificial intelligence. Banu Buruk, Perihan Elif Ekmekci & Berna Arda - 2020 - Medicine, Health Care and Philosophy 23 (3):387-399.
    Artificial intelligence is among the fastest developing areas of advanced technology in medicine. The most important qualia of AI which makes it different from other advanced technology products is its ability to improve its original program and decision-making algorithms via deep learning abilities. This difference is the reason that AI technology stands out from the ethical issues of other advanced technology artifacts. The ethical issues of AI technology vary from privacy and confidentiality of personal data to ethical status (...)
    1 citation.
  4. Data governance: organizing data for trustworthy artificial intelligence. M. Janssen - 2020 - Gov. Inf. Q 37:101493.
    1 citation.
  5. Involving patients in artificial intelligence research to build trustworthy systems. Soumya Banerjee & Sarah Griffiths - forthcoming - AI and Society:1-3.
  6. Access to Artificial Intelligence for Persons with Disabilities: Legal and Ethical Questions Concerning the Application of Trustworthy AI. Kristi Joamets & Archil Chochia - 2021 - Acta Baltica Historiae Et Philosophiae Scientiarum 9 (1):51-66.
    Digitalisation and emerging technologies affect our lives and are increasingly present in a growing number of fields. Ethical implications of the digitalisation process have therefore long been discussed by the scholars. The rapid development of artificial intelligence has taken the legal and ethical discussion to another level. There is no doubt that AI can have a positive impact on the society. The focus here, however, is on its more negative impact. This article will specifically consider how the law (...)
  7. Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions. Luca Longo, Mario Brcic, Federico Cabitza, Jaesik Choi, Roberto Confalonieri, Javier Del Ser, Riccardo Guidotti, Yoichi Hayashi, Francisco Herrera, Andreas Holzinger, Richard Jiang, Hassan Khosravi, Freddy Lecue, Gianclaudio Malgieri, Andrés Páez, Wojciech Samek, Johannes Schneider, Timo Speith & Simone Stumpf - 2024 - Information Fusion 106 (June 2024).
    As systems based on opaque Artificial Intelligence (AI) continue to flourish in diverse real-world applications, understanding these black box models has become paramount. In response, Explainable AI (XAI) has emerged as a field of research with practical and ethical benefits across various domains. This paper not only highlights the advancements in XAI and its application in real-world scenarios but also addresses the ongoing challenges within XAI, emphasizing the need for broader perspectives and collaborative efforts. We bring together experts (...)
  8. Part II. A walk around the emerging new world: excerpts from "Russia and the solecism of power" by David Holloway; "China's rise in artificial intelligence: ingredients and economic implications" by Kai-Fu Lee & Matt Sheehan; "Latin America: opportunities, challenges for the governance of a fragile continent" by Ernesto Silva; "Digital transformation in Central America: marginalization or empowerment?" by Richard Aitkenhead & Benjamin Sywulka; "The Islamic Republic of Iran in an age of global transitions: challenges for a theocratic Iran" by Abbas Milani & Roya Pakzad; "Europe in the global race for technological leadership" by Jens Suedekum; with governance sidebars on India, Japan, and Bangladesh - 2020 - In George P. Shultz (ed.), A hinge of history: governance in an emerging new world. Stanford, California: Hoover Institution Press, Stanford University.
  9. Artificial intelligence ethics has a black box problem. Jean-Christophe Bélisle-Pipon, Erica Monteferrante, Marie-Christine Roy & Vincent Couture - 2023 - AI and Society 38 (4):1507-1522.
    It has become a truism that the ethics of artificial intelligence (AI) is necessary and must help guide technological developments. Numerous ethical guidelines have emerged from academia, industry, government and civil society in recent years. While they provide a basis for discussion on appropriate regulation of AI, it is not always clear how these ethical guidelines were developed, and by whom. Using content analysis, we surveyed a sample of the major documents (_n_ = 47) and analyzed the accessible (...)
    3 citations.
  10. Learning to Live with Strange Error: Beyond Trustworthiness in Artificial Intelligence Ethics. Charles Rathkopf & Bert Heinrichs - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-13.
    Position papers on artificial intelligence (AI) ethics are often framed as attempts to work out technical and regulatory strategies for attaining what is commonly called trustworthy AI. In such papers, the technical and regulatory strategies are frequently analyzed in detail, but the concept of trustworthy AI is not. As a result, it remains unclear. This paper lays out a variety of possible interpretations of the concept and concludes that none of them is appropriate. The central problem (...)
  11. Actionable Principles for Artificial Intelligence Policy: Three Pathways. Charlotte Stix - 2021 - Science and Engineering Ethics 27 (1):1-17.
    In the development of governmental policy for artificial intelligence that is informed by ethics, one avenue currently pursued is that of drawing on “AI Ethics Principles”. However, these AI Ethics Principles often fail to be actioned in governmental policy. This paper proposes a novel framework for the development of ‘Actionable Principles for AI’. The approach acknowledges the relevance of AI Ethics Principles and homes in on methodological elements to increase their practical implementability in policy processes. As a case (...)
    8 citations.
  12. Trust, artificial intelligence and software practitioners: an interdisciplinary agenda. Sarah Pink, Emma Quilty, John Grundy & Rashina Hoda - forthcoming - AI and Society:1-14.
    Trust and trustworthiness are central concepts in contemporary discussions about the ethics of and qualities associated with artificial intelligence (AI) and the relationships between people, organisations and AI. In this article we develop an interdisciplinary approach, using socio-technical software engineering and design anthropological approaches, to investigate how trust and trustworthiness concepts are articulated and performed by AI software practitioners. We examine how trust and trustworthiness are defined in relation to AI across these disciplines, and investigate how AI, trust (...)
  13. Instruments, agents, and artificial intelligence: novel epistemic categories of reliability. Eamon Duede - 2022 - Synthese 200 (6):1-20.
    Deep learning (DL) has become increasingly central to science, primarily due to its capacity to quickly, efficiently, and accurately predict and classify phenomena of scientific interest. This paper seeks to understand the principles that underwrite scientists’ epistemic entitlement to rely on DL in the first place and argues that these principles are philosophically novel. The question of this paper is not whether scientists can be justified in trusting in the reliability of DL. While today’s artificial intelligence exhibits characteristics (...)
    4 citations.
  14. Artificial intelligence as law. [REVIEW] Bart Verheij - 2020 - Artificial Intelligence and Law 28 (2):181-206.
    Information technology is so ubiquitous and AI’s progress so inspiring that also legal professionals experience its benefits and have high expectations. At the same time, the powers of AI have been rising so strongly that it is no longer obvious that AI applications (whether in the law or elsewhere) help promoting a good society; in fact they are sometimes harmful. Hence many argue that safeguards are needed for AI to be trustworthy, social, responsible, humane, ethical. In short: AI should (...)
    9 citations.
  15. In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Mark Ryan - 2020 - Science and Engineering Ethics 26 (5):2749-2767.
    One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-level Expert Group on AI (HLEG) have adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important (...)
    44 citations.
  16. Science Based on Artificial Intelligence Need not Pose a Social Epistemological Problem. Uwe Peters - 2024 - Social Epistemology Review and Reply Collective 13 (1).
    It has been argued that our currently most satisfactory social epistemology of science can’t account for science that is based on artificial intelligence (AI) because this social epistemology requires trust between scientists that can take full responsibility for the research tools they use, and scientists can’t take full responsibility for the AI tools they use since these systems are epistemically opaque. I think this argument overlooks that much AI-based science can be done without opaque models, and that agents (...)
  17. Ethical use of artificial intelligence to prevent sudden cardiac death: an interview study of patient perspectives. Marieke A. R. Bak, Georg L. Lindinger, Hanno L. Tan, Jeannette Pols, Dick L. Willems, Ayca Koçar & Menno T. Maris - 2024 - BMC Medical Ethics 25 (1):1-15.
    Background: The emergence of artificial intelligence (AI) in medicine has prompted the development of numerous ethical guidelines, while the involvement of patients in the creation of these documents lags behind. As part of the European PROFID project we explore patient perspectives on the ethical implications of AI in care for patients at increased risk of sudden cardiac death (SCD). Aim: Explore perspectives of patients on the ethical use of AI, particularly in clinical decision-making regarding the implantation of an implantable cardioverter-defibrillator (ICD). Methods: Semi-structured, (...)
    1 citation.
  18. Evolutionary and religious perspectives on morality - forthcoming - Zygon.
  19. Artificial Intelligence. Otto Neumaier - 1987 - In Rainer P. Born (ed.), Artificial Intelligence: The Case Against. St Martin's Press. pp. 132.
  20. The Future Ethics of Artificial Intelligence in Medicine: Making Sense of Collaborative Models. Torbjørn Gundersen & Kristine Bærøe - 2022 - Science and Engineering Ethics 28 (2):1-16.
    This article examines the role of medical doctors, AI designers, and other stakeholders in making applied AI and machine learning ethically acceptable on the general premises of shared decision-making in medicine. Recent policy documents such as the EU strategy on trustworthy AI and the research literature have often suggested that AI could be made ethically acceptable by increased collaboration between developers and other stakeholders. The article articulates and examines four central alternative models of how AI can be designed and (...)
    3 citations.
  21. Intentional machines: A defence of trust in medical artificial intelligence. Georg Starke, Rik van den Brule, Bernice Simone Elger & Pim Haselager - 2021 - Bioethics 36 (2):154-161.
    Trust constitutes a fundamental strategy to deal with risks and uncertainty in complex societies. In line with the vast literature stressing the importance of trust in doctor–patient relationships, trust is therefore regularly suggested as a way of dealing with the risks of medical artificial intelligence (AI). Yet, this approach has come under charge from different angles. At least two lines of thought can be distinguished: (1) that trusting AI is conceptually confused, that is, that we cannot trust AI; (...)
    8 citations.
  22. Trust and Trust-Engineering in Artificial Intelligence Research: Theory and Praxis. Melvin Chen - 2021 - Philosophy and Technology 34 (4):1429-1447.
    In this paper, I will identify two problems of trust in an AI-relevant context: a theoretical problem and a practical one. I will identify and address a number of skeptical challenges to an AI-relevant theory of trust. In addition, I will identify what I shall term the ‘scope challenge’, which I take to hold for any AI-relevant theory of trust that purports to be representationally adequate to the multifarious forms of trust and AI. Thereafter, I will suggest how trust-engineering, a (...)
    1 citation.
  23. In AI We Trust Incrementally: a Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions. Andrea Ferrario, Michele Loi & Eleonora Viganò - 2020 - Philosophy and Technology 33 (3):523-539.
    Real engines of the artificial intelligence revolution, machine learning models, and algorithms are embedded nowadays in many services and products around us. As a society, we argue it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and (...)
    15 citations.
  24. Intentional machines: A defence of trust in medical artificial intelligence. Georg Starke, Rik van den Brule, Bernice Simone Elger & Pim Haselager - 2021 - Bioethics 36 (2):154-161.
    Bioethics, Volume 36, Issue 2, Page 154-161, February 2022.
    7 citations.
  25. Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Aurelia Tamò-Larrieux, Christoph Lutz, Eduard Fosch Villaronga & Heike Felzmann - 2019 - Big Data and Society 6 (1).
    Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the ratio legis of the transparency requirement in the General Data Protection Regulation and its ethical underpinnings, showing its focus on the provision of information and explanation. We then discuss (...)
    14 citations.
  26. Institutionalised distrust and human oversight of artificial intelligence: towards a democratic design of AI governance under the European Union AI Act. Johann Laux - forthcoming - AI and Society:1-14.
    Human oversight has become a key mechanism for the governance of artificial intelligence (“AI”). Human overseers are supposed to increase the accuracy and safety of AI systems, uphold human values, and build trust in the technology. Empirical research suggests, however, that humans are not reliable in fulfilling their oversight tasks. They may be lacking in competence or be harmfully incentivised. This creates a challenge for human oversight to be effective. In addressing this challenge, this article aims to make (...)
  27. The Possible Relationship Between Law and Ethics in the Context of Artificial Intelligence Regulation. Livia Aulino, Maria Cristina Gaeta & Emiliano Troisi - 2023 - Humana Mente 16 (44).
    The latest academic discussion has focused on the potential and risks associated with technological systems. In this perspective, defining a set of legal rules could be the priority but this action appears extremely difficult at the European level and, therefore, in the last years, a set of ethical principles contained in many different documents has been published. The need to develop trustworthy and human-centric AI technologies is accomplished by creating these two types of rule sets: legal and ethical. The (...)
  28. On the Risks of Trusting Artificial Intelligence: The Case of Cybersecurity. Mariarosaria Taddeo - 2021 - In Josh Cowls & Jessica Morley (eds.), The 2020 Yearbook of the Digital Ethics Lab. Springer Verlag. pp. 97-108.
    In this chapter, I draw on my previous work on trust and cybersecurity to offer a definition of trust and trustworthiness to understand to what extent trusting AI for cybersecurity tasks is justified and what measures can be put in place to rely on AI in cases where trust is not justified, but the use of AI is still beneficial.
  29. Proceedings of the 1986 Conference on Theoretical Aspects of Reasoning about Knowledge: March 19-22, 1986, Monterey, California. Joseph Y. Halpern, International Business Machines Corporation, American Association of Artificial Intelligence, United States & Association for Computing Machinery - 1986
  30. Trustworthy AI: AI made in Germany and Europe? Hartmut Hirsch-Kreinsen & Thorben Krokowski - forthcoming - AI and Society:1-11.
    As the capabilities of artificial intelligence (AI) continue to expand, concerns are also growing about the ethical and social consequences of unregulated development and, above all, use of AI systems in a wide range of social areas. It is therefore indisputable that the application of AI requires social standardization and regulation. For years, innovation policy measures and the most diverse activities of European and German institutions have been directed toward this goal. Under the label “Trustworthy AI” (TAI), (...)
  31. Demonstrating Trustworthiness to Patients in Data‐Driven Health Care. Paige Nong - 2023 - Hastings Center Report 53 (S2):69-75.
    Patient data is used to drive an ecosystem of advanced digital tools in health care, like predictive models or artificial intelligence‐based decision support. Patients themselves, however, receive little information about these technologies or how they affect their care. This raises important questions about patient trust and continued engagement in a health care system that extracts their data but does not treat them as key stakeholders. This essay explores these tensions and provides steps forward for health systems as they (...)
    2 citations.
  32. Reactive Distributed Artificial Intelligence. Jacques Ferber - 1996 - In N. Jennings & G. O'Hare (eds.), Foundations of Distributed Artificial Intelligence. Wiley. pp. 287.
  33. Modeling Distributed Artificial Intelligence. Michael Wooldridge - 1996 - In N. Jennings & G. O'Hare (eds.), Foundations of Distributed Artificial Intelligence. Wiley. pp. 269.
  34. Justifying our Credences in the Trustworthiness of AI Systems: A Reliabilistic Approach. Andrea Ferrario - manuscript
    We address an open problem in the epistemology of artificial intelligence (AI), namely, the justification of the epistemic attitudes we have towards the trustworthiness of AI systems. We start from a key consideration: the trustworthiness of an AI is a time-relative property of the system, with two distinct facets. One is the actual trustworthiness of the AI, and the other is the perceived trustworthiness of the system as assessed by its users while interacting with it. We show that (...)
  35. Attributions toward Artificial Agents in a modified Moral Turing Test. Eyal Aharoni, Sharlene Fernandes, Daniel Brady, Caelan Alexander, Michael Criner, Kara Queen, Javier Rando, Eddy Nahmias & Victor Crespo - 2024 - Scientific Reports 14 (8458):1-11.
    Advances in artificial intelligence (AI) raise important questions about whether people view moral evaluations by AI systems similarly to human-generated moral evaluations. We conducted a modified Moral Turing Test (m-MTT), inspired by Allen et al. (Exp Theor Artif Intell 352:24–28, 2004) proposal, by asking people to distinguish real human moral evaluations from those made by a popular advanced AI language model: GPT-4. A representative sample of 299 U.S. adults first rated the quality of moral evaluations when blinded to (...)
  36. Trustworthy medical AI systems need to know when they don’t know. Thomas Grote - forthcoming - Journal of Medical Ethics.
    There is much to learn from Durán and Jongsma’s paper.1 One particularly important insight concerns the relationship between epistemology and ethics in medical artificial intelligence. In clinical environments, the task of AI systems is to provide risk estimates or diagnostic decisions, which then need to be weighed by physicians. Hence, while the implementation of AI systems might give rise to ethical issues—for example, overtreatment, defensive medicine or paternalism2—the issue that lies at the heart is an epistemic problem: how (...)
    4 citations.
  37. Adaptable robots, ethics, and trust: a qualitative and philosophical exploration of the individual experience of trustworthy AI. Stephanie Sheir, Arianna Manzini, Helen Smith & Jonathan Ives - forthcoming - AI and Society:1-14.
    Much has been written about the need for trustworthy artificial intelligence (AI), but the underlying meaning of trust and trustworthiness can vary or be used in confusing ways. It is not always clear whether individuals are speaking of a technology’s trustworthiness, a developer’s trustworthiness, or simply of gaining the trust of users by any means. In sociotechnical circles, trustworthiness is often used as a proxy for ‘the good’, illustrating the moral heights to which technologies and developers ought (...)
  38. Intelligence Testbeds. Keith S. Decker - 1996 - In N. Jennings & G. O'Hare (eds.), Foundations of Distributed Artificial Intelligence. Wiley. pp. 9--119.
  39. A Leap of Faith: Is There a Formula for “Trustworthy” AI? Matthias Braun, Hannah Bleher & Patrik Hummel - 2021 - Hastings Center Report 51 (3):17-22.
    Trust is one of the big buzzwords in debates about the shaping of society, democracy, and emerging technologies. For example, one prominent idea put forward by the High‐Level Expert Group on Artificial Intelligence appointed by the European Commission is that artificial intelligence should be trustworthy. In this essay, we explore the notion of trust and argue that both proponents and critics of trustworthy AI have flawed pictures of the nature of trust. We develop an (...)
    10 citations.
  40. Language and Intelligence. Carlos Montemayor - 2021 - Minds and Machines 31 (4):471-486.
    This paper explores aspects of GPT-3 that have been discussed as harbingers of artificial general intelligence and, in particular, linguistic intelligence. After introducing key features of GPT-3 and assessing its performance in the light of the conversational standards set by Alan Turing in his seminal paper from 1950, the paper elucidates the difference between clever automation and genuine linguistic intelligence. A central theme of this discussion on genuine conversational intelligence is that members of a linguistic (...)
    6 citations.
  41. Can We Make Sense of the Notion of Trustworthy Technology? Philip J. Nickel, Maarten Franssen & Peter Kroes - 2010 - Knowledge, Technology & Policy 23 (3-4):429-444.
    In this paper we raise the question whether technological artifacts can properly speaking be trusted or said to be trustworthy. First, we set out some prevalent accounts of trust and trustworthiness and explain how they compare with the engineer’s notion of reliability. We distinguish between pure rational-choice accounts of trust, which do not differ in principle from mere judgments of reliability, and what we call “motivation-attributing” accounts of trust, which attribute specific motivations to trustworthy entities. Then we consider (...)
    10 citations.
  42. Dual-Use and Trustworthy? A Mixed Methods Analysis of AI Diffusion Between Civilian and Defense R&D. Christian Reuter, Thea Riebe & Stefka Schmid - 2022 - Science and Engineering Ethics 28 (2):1-23.
    Artificial Intelligence (AI) seems to be impacting all industry sectors, while becoming a motor for innovation. The diffusion of AI from the civilian sector to the defense sector, and AI’s dual-use potential has drawn attention from security and ethics scholars. With the publication of the ethical guideline Trustworthy AI by the European Union (EU), normative questions on the application of AI have been further evaluated. In order to draw conclusions on Trustworthy AI as a point of (...)
  43. Establishing the rules for building trustworthy AI. Luciano Floridi - 2019 - Nature Machine Intelligence 1 (6):261-262.
    AI is revolutionizing everyone’s life, and it is crucial that it does so in the right way. AI’s profound and far-reaching potential for transformation concerns the engineering of systems that have some degree of autonomous agency. This is epochal and requires establishing a new, ethical balance between human and artificial autonomy.
    20 citations.
  44. Practicing trustworthy machine learning: consistent, transparent, and fair AI pipelines. Yada Pruksachatkun, Matthew McAteer & Subhabrata Majumdar - 2022 - Boston: O'Reilly.
    With the increasing use of AI in high-stakes domains such as medicine, law, and defense, organizations spend a lot of time and money to make ML models trustworthy. Many books on the subject offer deep dives into theories and concepts. This guide provides a practical starting point to help development teams produce models that are secure, more robust, less biased, and more explainable. Authors Yada Pruksachatkun, Matthew McAteer, and Subhabrata Majumdar translate best practices in the academic literature for curating (...)
  45. Toward trustworthy programming for autonomous concurrent systems. Lavindra de Silva & Alan Mycroft - 2023 - AI and Society 38 (2):963-965.
  46. A phenomenology and epistemology of large language models: Transparency, trust, and trustworthiness. Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - forthcoming - Ethics and Information Technology.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architecture underpinning these chatbots are large language models (LLMs), which are generative AI (Artificial Intelligence) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, users (...)
  47. Can robots be trustworthy? Ines Schröder, Oliver Müller, Helena Scholl, Shelly Levy-Tzedek & Philipp Kellmeyer - 2023 - Ethik in der Medizin 35 (2):221-246.
    Definition of the problem This article critically addresses the conceptualization of trust in the ethical discussion on artificial intelligence (AI) in the specific context of social robots in care. First, we attempt to define in which respect we can speak of ‘social’ robots and how their ‘social affordances’ affect the human propensity to trust in human–robot interaction. Against this background, we examine the use of the concept of ‘trust’ and ‘trustworthiness’ with respect to the guidelines and recommendations of (...)
  48. Questioning the Role of Moral AI as an Adviser within the Framework of Trustworthiness Ethics. Silviya Serafimova - 2021 - Filosofiya-Philosophy 30 (4):402-412.
    The main objective of this article is to demonstrate why despite the growing interest in justifying AI’s trustworthiness, one can argue for AI’s reliability. By analyzing why trustworthiness ethics in Nickel’s sense provides some wellgrounded hints for rethinking the rational, affective and normative accounts of trust in respect to AI, I examine some concerns about the trustworthiness of Savulescu and Maslen’s model of moral AI as an adviser. Specifically, I tackle one of its exemplifications regarding Klincewicz’s hypothetical scenario of John (...)
  49. Keep trusting! A plea for the notion of Trustworthy AI. Giacomo Zanotti, Mattia Petrolo, Daniele Chiffi & Viola Schiaffonati - forthcoming - AI and Society:1-12.
    A lot of attention has recently been devoted to the notion of Trustworthy AI (TAI). However, the very applicability of the notions of trust and trustworthiness to AI systems has been called into question. A purely epistemic account of trust can hardly ground the distinction between trustworthy and merely reliable AI, while it has been argued that insisting on the importance of the trustee’s motivations and goodwill makes the notion of TAI a categorical error. After providing an overview (...)
    2 citations.
  50. Artificial agents’ explainability to support trust: considerations on timing and context. Guglielmo Papagni, Jesse de Pagter, Setareh Zafari, Michael Filzmoser & Sabine T. Koeszegi - 2023 - AI and Society 38 (2):947-960.
    Strategies for improving the explainability of artificial agents are a key approach to support the understandability of artificial agents’ decision-making processes and their trustworthiness. However, since explanations are not inclined to standardization, finding solutions that fit the algorithmic-based decision-making processes of artificial agents poses a compelling challenge. This paper addresses the concept of trust in relation to complementary aspects that play a role in interpersonal and human–agent relationships, such as users’ confidence and their perception of artificial (...)
Showing 1–50 of 999 results.