Results for 'trust in artificial intelligence'

999 found
  1. Part II: A Walk Around the Emerging New World - 2020 - In George P. Shultz (ed.), A Hinge of History: Governance in an Emerging New World. Stanford, California: Hoover Institution Press, Stanford University. Contents: Russia in an Emerging World, with an excerpt from "Russia and the Solecism of Power" by David Holloway; China in an Emerging World, with excerpts from "China's Demographic Prospects" and "China's Rise in Artificial Intelligence: Ingredients and Economic Implications" by Kai-Fu Lee & Matt Sheehan; Latin America in an Emerging World, with the sidebar "Governance Lessons from the Emerging New World: India" and excerpts from "Latin America: Opportunities and Challenges for the Governance of a Fragile Continent" by Ernesto Silva and "Digital Transformation in Central America: Marginalization or Empowerment?" by Richard Aitkenhead & Benjamin Sywulka; the Middle East in an Emerging World, with an excerpt from "The Islamic Republic of Iran in an Age of Global Transitions: Challenges for a Theocratic Iran" by Abbas Milani & Roya Pakzad; Europe in an Emerging World, with the sidebar "Governance Lessons from the Emerging New World: Japan" and an excerpt from "Europe in the Global Race for Technological Leadership" by Jens Suedekum; and Africa in an Emerging World, with the sidebar "Governance Lessons from the Emerging New World: Bangladesh".
  2. How Transparency Modulates Trust in Artificial Intelligence. John Zerilli, Umang Bhatt & Adrian Weller - 2022 - Patterns 3 (4):1-10.
    We review the literature on how perceiving an AI making mistakes violates trust and how such violations might be repaired. In doing so, we discuss the role played by various forms of algorithmic transparency in the process of trust repair, including explanations of algorithms, uncertainty estimates, and performance metrics.
    1 citation
  3. In AI We Trust: Ethics, Artificial Intelligence, and Reliability. Mark Ryan - 2020 - Science and Engineering Ethics 26 (5):2749-2767.
    One of the main difficulties in assessing artificial intelligence (AI) is the tendency for people to anthropomorphise it. This becomes particularly problematic when we attach human moral activities to AI. For example, the European Commission’s High-level Expert Group on AI (HLEG) have adopted the position that we should establish a relationship of trust with AI and should cultivate trustworthy AI (HLEG AI Ethics guidelines for trustworthy AI, 2019, p. 35). Trust is one of the most important (...)
    44 citations
  4. Trust and Trust-Engineering in Artificial Intelligence Research: Theory and Praxis. Melvin Chen - 2021 - Philosophy and Technology 34 (4):1429-1447.
    In this paper, I will identify two problems of trust in an AI-relevant context: a theoretical problem and a practical one. I will identify and address a number of skeptical challenges to an AI-relevant theory of trust. In addition, I will identify what I shall term the ‘scope challenge’, which I take to hold for any AI-relevant theory of trust that purports to be representationally adequate to the multifarious forms of trust and AI. Thereafter, I will (...)
    1 citation
  5. Trusting artificial intelligence in cybersecurity is a double-edged sword. Mariarosaria Taddeo, Tom McCutcheon & Luciano Floridi - 2019 - Philosophy and Technology 32 (1):1-15.
    Applications of artificial intelligence (AI) for cybersecurity tasks are attracting greater attention from the private and the public sectors. Estimates indicate that the market for AI in cybersecurity will grow from US$1 billion in 2016 to a US$34.8 billion net worth by 2025. The latest national cybersecurity and defence strategies of several governments explicitly mention AI capabilities. At the same time, initiatives to define new standards and certification procedures to elicit users’ trust in AI are emerging on (...)
    18 citations
  6. Trust in Medical Artificial Intelligence: A Discretionary Account. Philip J. Nickel - 2022 - Ethics and Information Technology 24 (1):1-10.
    This paper sets out an account of trust in AI as a relationship between clinicians, AI applications, and AI practitioners in which AI is given discretionary authority over medical questions by clinicians. Compared to other accounts in recent literature, this account more adequately explains the normative commitments created by practitioners when inviting clinicians’ trust in AI. To avoid committing to an account of trust in AI applications themselves, I sketch a reductive view on which discretionary authority is (...)
    7 citations
  7. Explainable Artificial Intelligence (XAI) to Enhance Trust Management in Intrusion Detection Systems Using Decision Tree Model. Basim Mahbooba, Mohan Timilsina, Radhya Sahal & Martin Serrano - 2021 - Complexity 2021:1-11.
    Despite the growing popularity of machine learning models in cyber-security applications, most of these models are perceived as a black box. eXplainable Artificial Intelligence (XAI) has become increasingly important for interpreting machine learning models and enhancing trust management by allowing human experts to understand the underlying data evidence and causal reasoning. In an IDS, the critical role of trust management is to understand the impact of the malicious data to detect any intrusion in the (...)
    1 citation
  8. Application of artificial intelligence: risk perception and trust in the work context with different impact levels and task types. Uwe Klein, Jana Depping, Laura Wohlfahrt & Pantaleon Fassbender - forthcoming - AI and Society:1-12.
    Following the studies of Araujo et al. (AI Soc 35:611–623, 2020) and Lee (Big Data Soc 5:1–16, 2018), this empirical study uses two scenario-based online experiments. The sample consists of 221 subjects from Germany, differing in both age and gender. The original studies are not replicated one-to-one. New scenarios are constructed as realistically as possible and focused on everyday work situations. They are based on the AI acceptance model of Scheuer (Grundlagen intelligenter KI-Assistenten und deren vertrauensvolle Nutzung. Springer, Wiesbaden, 2020) (...)
    1 citation
  9. In AI we trust? Perceptions about automated decision-making by artificial intelligence. Theo Araujo, Natali Helberger, Sanne Kruikemeier & Claes H. de Vreese - 2020 - AI and Society 35 (3):611-623.
    Fueled by ever-growing amounts of (digital) data and advances in artificial intelligence, decision-making in contemporary societies is increasingly delegated to automated processes. Drawing from social science theories and from the emerging body of research about algorithmic appreciation and algorithmic perceptions, the current study explores the extent to which personal characteristics can be linked to perceptions of automated decision-making by AI, and the boundary conditions of these perceptions, namely the extent to which such perceptions differ across media, (public) health, (...)
    40 citations
  10. Trust criteria for artificial intelligence in health: normative and epistemic considerations. Kristin Kostick-Quenet, Benjamin H. Lang, Jared Smith, Meghan Hurley & Jennifer Blumenthal-Barby - forthcoming - Journal of Medical Ethics.
    Rapid advancements in artificial intelligence and machine learning (AI/ML) in healthcare raise pressing questions about how much users should trust AI/ML systems, particularly for high stakes clinical decision-making. Ensuring that user trust is properly calibrated to a tool’s computational capacities and limitations has both practical and ethical implications, given that overtrust or undertrust can influence over-reliance or under-reliance on algorithmic tools, with significant implications for patient safety and health outcomes. It is, thus, important to better understand (...)
  11. Trust, artificial intelligence and software practitioners: an interdisciplinary agenda. Sarah Pink, Emma Quilty, John Grundy & Rashina Hoda - forthcoming - AI and Society:1-14.
    Trust and trustworthiness are central concepts in contemporary discussions about the ethics of and qualities associated with artificial intelligence (AI) and the relationships between people, organisations and AI. In this article we develop an interdisciplinary approach, using socio-technical software engineering and design anthropological approaches, to investigate how trust and trustworthiness concepts are articulated and performed by AI software practitioners. We examine how trust and trustworthiness are defined in relation to AI across these disciplines, and investigate (...)
  12. Intentional machines: A defence of trust in medical artificial intelligence. Georg Starke, Rik van den Brule, Bernice Simone Elger & Pim Haselager - 2021 - Bioethics 36 (2):154-161.
    Trust constitutes a fundamental strategy to deal with risks and uncertainty in complex societies. In line with the vast literature stressing the importance of trust in doctor–patient relationships, trust is therefore regularly suggested as a way of dealing with the risks of medical artificial intelligence (AI). Yet, this approach has come under charge from different angles. At least two lines of thought can be distinguished: (1) that trusting AI is conceptually confused, that is, that we (...)
    8 citations
  13. The influence of customer trust and artificial intelligence on customer engagement and loyalty – The case of the home-sharing industry. Ying Chen, Catherine Prentice, Scott Weaven & Aaron Hisao - 2022 - Frontiers in Psychology 13.
    Trust is an essential factor in online and offline transactions. However, the role of customer trust has received limited attention in the home-sharing economy. Drawing on the revised stimulus organism response model and trust transfer theory, this paper examines how customer trust in home-sharing hosts and platforms affects customer relationships, manifested in customer engagement and loyalty. As artificial intelligence is extensively utilized within home-sharing platforms to facilitate business operations and enhance the customer experience, this (...)
  14. Rebooting AI: Building Artificial Intelligence We Can Trust. Gary Marcus & Ernest Davis - 2019 - Vintage.
    Two leaders in the field offer a compelling analysis of the current state of the art and reveal the steps we must take to achieve a truly robust artificial intelligence. Despite the hype surrounding AI, creating an intelligence that rivals or exceeds human levels is far more complicated than we have been led to believe. Professors Gary Marcus and Ernest Davis have spent their careers at the forefront of AI research and have witnessed some of the greatest (...)
    12 citations
  15. Deep Learning Meets Deep Democracy: Deliberative Governance and Responsible Innovation in Artificial Intelligence. Alexander Buhmann & Christian Fieseler - forthcoming - Business Ethics Quarterly:1-34.
    Responsible innovation in artificial intelligence calls for public deliberation: well-informed “deep democratic” debate that involves actors from the public, private, and civil society sectors in joint efforts to critically address the goals and means of AI. Adopting such an approach constitutes a challenge, however, due to the opacity of AI and strong knowledge boundaries between experts and citizens. This undermines trust in AI and undercuts key conditions for deliberation. We approach this challenge as a problem of situating (...)
    2 citations
  16. When the frameworks don’t work: data protection, trust and artificial intelligence. Zoë Fritz - 2022 - Journal of Medical Ethics 48 (4):213-214.
    With new technologies come new ethical challenges. Often, we can apply previously established principles, even though it may take some time to fully understand the detail of the new technology - or the questions that arise from it. The International Commission on Radiological Protection, for example, was founded in 1928 and has based its advice on balancing the radiation exposure associated with X-rays and CT scans with the diagnostic benefits of the new investigations. They have regularly updated their advice as (...)
  17. Research ethics and artificial intelligence for global health: perspectives from the global forum on bioethics in research. James Shaw, Joseph Ali, Caesar A. Atuire, Phaik Yeong Cheah, Armando Guio Español, Judy Wawira Gichoya, Adrienne Hunt, Daudi Jjingo, Katherine Littler, Daniela Paolotti & Effy Vayena - 2024 - BMC Medical Ethics 25 (1):1-9.
    Background The ethical governance of Artificial Intelligence (AI) in health care and public health continues to be an urgent issue for attention in policy, research, and practice. In this paper we report on central themes related to challenges and strategies for promoting ethics in research involving AI in global health, arising from the Global Forum on Bioethics in Research (GFBR), held in Cape Town, South Africa in November 2022. Methods The GFBR is an annual meeting organized by the (...)
  18. In AI We Trust Incrementally: a Multi-layer Model of Trust to Analyze Human-Artificial Intelligence Interactions. Andrea Ferrario, Michele Loi & Eleonora Viganò - 2020 - Philosophy and Technology 33 (3):523-539.
    Real engines of the artificial intelligence revolution, machine learning models, and algorithms are embedded nowadays in many services and products around us. As a society, we argue it is now necessary to transition into a phronetic paradigm focused on the ethical dilemmas stemming from the conception and application of AIs to define actionable recommendations as well as normative solutions. However, both academic research and society-driven initiatives are still quite far from clearly defining a solid program of study and (...)
    15 citations
  19. Intentional machines: A defence of trust in medical artificial intelligence. Georg Starke, Rik van den Brule, Bernice Simone Elger & Pim Haselager - 2021 - Bioethics 36 (2):154-161.
    Bioethics, Volume 36, Issue 2, Page 154-161, February 2022.
    7 citations
  20. AI in the headlines: the portrayal of the ethical issues of artificial intelligence in the media. Leila Ouchchy, Allen Coin & Veljko Dubljević - 2020 - AI and Society 35 (4):927-936.
    As artificial intelligence technologies become increasingly prominent in our daily lives, media coverage of the ethical considerations of these technologies has followed suit. Since previous research has shown that media coverage can drive public discourse about novel technologies, studying how the ethical issues of AI are portrayed in the media may lead to greater insight into the potential ramifications of this public discourse, particularly with regard to development and regulation of AI. This paper expands upon previous research by (...)
    19 citations
  21. Trustworthy artificial intelligence and ethical design: public perceptions of trustworthiness of an AI-based decision-support tool in the context of intrapartum care. Angeliki Kerasidou, Antoniya Georgieva & Rachel Dlugatch - 2023 - BMC Medical Ethics 24 (1):1-16.
    Background: Despite the recognition that developing artificial intelligence (AI) that is trustworthy is necessary for public acceptability and the successful implementation of AI in healthcare contexts, perspectives from key stakeholders are often absent from discourse on the ethical design, development, and deployment of AI. This study explores the perspectives of birth parents and mothers on the introduction of AI-based cardiotocography (CTG) in the context of intrapartum care, focusing on issues pertaining to trust and trustworthiness. Methods: Seventeen semi-structured interviews were conducted (...)
    1 citation
  22. Philosophical evaluation of the conceptualisation of trust in the NHS’ Code of Conduct for artificial intelligence-driven technology. Soogeun Samuel Lee - 2022 - Journal of Medical Ethics 48 (4):272-277.
    The UK Government’s Code of Conduct for data-driven health and care technologies, specifically artificial intelligence-driven technologies, comprises 10 principles that outline a gold-standard of ethical conduct for AI developers and implementers within the National Health Service. Considering the importance of trust in medicine, in this essay I aim to evaluate the conceptualisation of trust within this piece of ethical governance. I examine the Code of Conduct, specifically Principle 7, and extract two positions: a principle of (...)
    2 citations
  23. Philosophical evaluation of the conceptualisation of trust in the NHS Code of Conduct for artificial intelligence-driven technology. Soogeun Samuel Lee - 2022 - Journal of Medical Ethics Recent Issues 48 (4):272-277.
    The UK Government’s Code of Conduct for data-driven health and care technologies, specifically artificial intelligence-driven technologies, comprises 10 principles that outline a gold-standard of ethical conduct for AI developers and implementers within the National Health Service. Considering the importance of trust in medicine, in this essay I aim to evaluate the conceptualisation of trust within this piece of ethical governance. I examine the Code of Conduct, specifically Principle 7, and extract two positions: a principle of (...)
    2 citations
  24. Artificial Intelligence in medicine: reshaping the face of medical practice. Max Tretter, David Samhammer & Peter Dabrock - 2023 - Ethik in der Medizin 36 (1):7-29.
    Background The use of Artificial Intelligence (AI) has the potential to provide relief in the challenging and often stressful clinical setting for physicians. So far, however, the actual changes in work for physicians remain a prediction for the future, including new demands on the social level of medical practice. Thus, the question of how the requirements for physicians will change due to the implementation of AI is addressed. Methods The question is approached through conceptual considerations based on the (...)
  25. Trust Toward Robots and Artificial Intelligence: An Experimental Approach to Human–Technology Interactions Online. Atte Oksanen, Nina Savela, Rita Latikka & Aki Koivula - 2020 - Frontiers in Psychology 11.
    Robotization and artificial intelligence are expected to change societies profoundly. Trust is an important factor of human–technology interactions, as robots and AI increasingly contribute to tasks previously handled by humans. Currently, there is a need for studies investigating trust toward AI and robots, especially in first-encounter meetings. This article reports findings from a study investigating trust toward robots and AI in an online trust game experiment. The trust game manipulated the hypothetical opponents that (...)
  26. Ethical use of artificial intelligence to prevent sudden cardiac death: an interview study of patient perspectives. Marieke A. R. Bak, Georg L. Lindinger, Hanno L. Tan, Jeannette Pols, Dick L. Willems, Ayca Koçar & Menno T. Maris - 2024 - BMC Medical Ethics 25 (1):1-15.
    Background: The emergence of artificial intelligence (AI) in medicine has prompted the development of numerous ethical guidelines, while the involvement of patients in the creation of these documents lags behind. As part of the European PROFID project we explore patient perspectives on the ethical implications of AI in care for patients at increased risk of sudden cardiac death (SCD). Aim: Explore perspectives of patients on the ethical use of AI, particularly in clinical decision-making regarding the implantation of an implantable cardioverter-defibrillator (ICD). Methods: Semi-structured, (...)
    1 citation
  27. Artificial Intelligence and Cybercrime in Nigeria. Ikechukwu A. Kanu, Dokpesi T. Adidi & Catherine C. Kanu - 2024 - Dialogue and Universalism 34 (1):207-221.
    The rapid advancement of artificial intelligence has brought about significant positive changes across various sectors. However, it has also created new opportunities for cybercrime. Nigeria, in particular, has witnessed a surge in cybercriminal activities, which have had severe economic and social consequences. The paper explored the relationship between AI, cybercrime, and the underground business economy in Nigeria, focusing on the rise of fraud, identity theft, and hacking. It discussed the ethical implications of AI, cybercrime, and the underground business (...)
  28. Transparency you can trust: Transparency requirements for artificial intelligence between legal norms and contextual concerns. Aurelia Tamò-Larrieux, Christoph Lutz, Eduard Fosch Villaronga & Heike Felzmann - 2019 - Big Data and Society 6 (1).
    Transparency is now a fundamental principle for data processing under the General Data Protection Regulation. We explore what this requirement entails for artificial intelligence and automated decision-making systems. We address the topic of transparency in artificial intelligence by integrating legal, social, and ethical aspects. We first investigate the ratio legis of the transparency requirement in the General Data Protection Regulation and its ethical underpinnings, showing its focus on the provision of information and explanation. We then discuss (...)
    14 citations
  29. On the Risks of Trusting Artificial Intelligence: The Case of Cybersecurity. Mariarosaria Taddeo - 2021 - In Josh Cowls & Jessica Morley (eds.), The 2020 Yearbook of the Digital Ethics Lab. Springer Verlag. pp. 97-108.
    In this chapter, I draw on my previous work on trust and cybersecurity to offer a definition of trust and trustworthiness to understand to what extent trusting AI for cybersecurity tasks is justified and what measures can be put in place to rely on AI in cases where trust is not justified, but the use of AI is still beneficial.
  30. Misplaced Trust and Distrust: How Not to Engage with Medical Artificial Intelligence. Georg Starke & Marcello Ienca - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-10.
    Artificial intelligence (AI) plays a rapidly increasing role in clinical care. Many of these systems, for instance, deep learning-based applications using multilayered Artificial Neural Nets, exhibit epistemic opacity in the sense that they preclude comprehensive human understanding. In consequence, voices from industry, policymakers, and research have suggested trust as an attitude for engaging with clinical AI systems. Yet, in the philosophical and ethical literature on medical AI, the notion of trust remains fiercely debated. Trust (...)
    1 citation
  31. Artificial Intelligence as a Means to Moral Enhancement. Michał Klincewicz - 2016 - Studies in Logic, Grammar and Rhetoric 48 (1):171-187.
    This paper critically assesses the possibility of moral enhancement with ambient intelligence technologies and artificial intelligence presented in Savulescu and Maslen (2015). The main problem with their proposal is that it is not robust enough to play a normative role in users’ behavior. A more promising approach, and the one presented in the paper, relies on an artificial moral reasoning engine, which is designed to present its users with moral arguments grounded in first-order normative theories, such as (...)
    15 citations
  32. Is explainable artificial intelligence intrinsically valuable? Nathan Colaner - 2022 - AI and Society 37 (1):231-238.
    There is general consensus that explainable artificial intelligence is valuable, but there is significant divergence when we try to articulate why, exactly, it is desirable. This question must be distinguished from two other kinds of questions asked in the XAI literature that are sometimes asked and addressed simultaneously. The first and most obvious is the ‘how’ question—some version of: ‘how do we develop technical strategies to achieve XAI?’ Another question is specifying what kind of explanation is worth having (...)
    7 citations
  33. The Human Roots of Artificial Intelligence: A Commentary on Susan Schneider's Artificial You. Inês Hipólito - 2024 - Philosophy East and West 74 (2):297-305.
    In lieu of an abstract, here is a brief excerpt of the content: "Technologies are not mere tools waiting to be picked up and used by human agents, but rather are material-discursive practices that play a role in shaping and co-constituting the world in which we live." (Karen Barad) Introduction: Susan Schneider's book Artificial You: AI and the Future of Your Mind presents a compelling and bold argument (...)
  34. Artificial Intelligence and Medical Humanities. Kirsten Ostherr - 2020 - Journal of Medical Humanities 43 (2):211-232.
    The use of artificial intelligence in healthcare has led to debates about the role of human clinicians in the increasingly technological contexts of medicine. Some researchers have argued that AI will augment the capacities of physicians and increase their availability to provide empathy and other uniquely human forms of care to their patients. The human vulnerabilities experienced in the healthcare context raise the stakes of new technologies such as AI, and the human dimensions of AI in healthcare have (...)
    1 citation
  35. Instruments, agents, and artificial intelligence: novel epistemic categories of reliability. Eamon Duede - 2022 - Synthese 200 (6):1-20.
    Deep learning (DL) has become increasingly central to science, primarily due to its capacity to quickly, efficiently, and accurately predict and classify phenomena of scientific interest. This paper seeks to understand the principles that underwrite scientists’ epistemic entitlement to rely on DL in the first place and argues that these principles are philosophically novel. The question of this paper is not whether scientists can be justified in trusting in the reliability of DL. While today’s artificial intelligence exhibits characteristics (...)
    4 citations
  36. Artificial intelligence for good health: a scoping review of the ethics literature. Jennifer Gibson, Vincci Lui, Nakul Malhotra, Jia Ce Cai, Neha Malhotra, Donald J. Willison, Ross Upshur, Erica Di Ruggiero & Kathleen Murphy - 2021 - BMC Medical Ethics 22 (1):1-17.
    Background: Artificial intelligence has been described as the “fourth industrial revolution” with transformative and global implications, including in healthcare, public health, and global health. AI approaches hold promise for improving health systems worldwide, as well as individual and population health outcomes. While AI may have potential for advancing health equity within and between countries, we must consider the ethical implications of its deployment in order to mitigate its potential harms, particularly for the most vulnerable. This scoping review addresses the following (...)
    6 citations
  37. The social and ethical impacts of artificial intelligence in agriculture: mapping the agricultural AI literature. Mark Ryan - 2023 - AI and Society 38 (6):2473-2485.
    This paper will examine the social and ethical impacts of using artificial intelligence (AI) in the agricultural sector. It will identify what are some of the most prevalent challenges and impacts identified in the literature, how this correlates with those discussed in the domain of AI ethics, and are being implemented into AI ethics guidelines. This will be achieved by examining published articles and conference proceedings that focus on societal or ethical impacts of AI in the agri-food sector, (...)
    2 citations
  38. Shortcuts to Artificial Intelligence. Nello Cristianini - forthcoming - In Marcello Pelillo & Teresa Scantamburlo (eds.), Machines We Trust. MIT Press.
    The current paradigm of Artificial Intelligence emerged as the result of a series of cultural innovations, some technical and some social. Among them are apparently small design decisions, that led to a subtle reframing of the field’s original goals, and are by now accepted as standard. They correspond to technical shortcuts, aimed at bypassing problems that were otherwise too complicated or too expensive to solve, while still delivering a viable version of AI. Far from being a series of (...)
    2 citations
  39. Public perceptions of the use of artificial intelligence in Defence: a qualitative exploration. Lee Hadlington, Maria Karanika-Murray, Jane Slater, Jens Binder, Sarah Gardner & Sarah Knight - forthcoming - AI and Society:1-14.
    There are a wide variety of potential applications of artificial intelligence (AI) in Defence settings, ranging from the use of autonomous drones to logistical support. However, limited research exists exploring how the public view these, especially in view of the value of public attitudes for influencing policy-making. An accurate understanding of the public’s perceptions is essential for crafting informed policy, developing responsible governance, and building responsive assurance relating to the development and use of AI in military settings. This (...)
  40. Science Based on Artificial Intelligence Need not Pose a Social Epistemological Problem.Uwe Peters - 2024 - Social Epistemology Review and Reply Collective 13 (1).
    It has been argued that our currently most satisfactory social epistemology of science can’t account for science that is based on artificial intelligence (AI) because this social epistemology requires trust between scientists that can take full responsibility for the research tools they use, and scientists can’t take full responsibility for the AI tools they use since these systems are epistemically opaque. I think this argument overlooks that much AI-based science can be done without opaque models, and that (...)
  41. Invisible Influence: Artificial Intelligence and the Ethics of Adaptive Choice Architectures.Daniel Susser - 2019 - Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society 1.
    For several years, scholars have (for good reason) been largely preoccupied with worries about the use of artificial intelligence and machine learning (AI/ML) tools to make decisions about us. Only recently has significant attention turned to a potentially more alarming problem: the use of AI/ML to influence our decision-making. The contexts in which we make decisions—what behavioral economists call our choice architectures—are increasingly technologically-laden. Which is to say: algorithms increasingly determine, in a wide variety of contexts, both the (...)
  42. COVID-19, artificial intelligence, ethical challenges and policy implications.Muhammad Anshari, Mahani Hamdan, Norainie Ahmad, Emil Ali & Hamizah Haidi - 2023 - AI and Society 38 (2):707-720.
    As the COVID-19 outbreak remains an ongoing issue, there are concerns about its disruption, the level of its disruption, how long this pandemic is going to last, and how innovative technological solutions like Artificial Intelligence (AI) and expert systems can assist to deal with this pandemic. AI has the potential to provide extremely accurate insights for an organization to make better decisions based on collected data. Despite the numerous advantages that may be achieved by AI, the use of (...)
  43. Who is afraid of black box algorithms? On the epistemological and ethical basis of trust in medical AI.Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Journal of Medical Ethics 47 (5).
    The use of black box algorithms in medicine has raised scholarly concerns due to their opaqueness and lack of trustworthiness. Concerns about potential bias, accountability and responsibility, patient autonomy and compromised trust transpire with black box algorithms. These worries connect epistemic concerns with normative issues. In this paper, we outline that black box algorithms are less problematic for epistemic reasons than many scholars seem to believe. By outlining that more transparency in algorithms is not always necessary, and by explaining (...)
  44. AI or Your Lying Eyes: Some Shortcomings of Artificially Intelligent Deepfake Detectors.Keith Raymond Harris - 2024 - Philosophy and Technology 37 (7):1-19.
    Deepfakes pose a multi-faceted threat to the acquisition of knowledge. It is widely hoped that technological solutions—in the form of artificially intelligent systems for detecting deepfakes—will help to address this threat. I argue that the prospects for purely technological solutions to the problem of deepfakes are dim. Especially given the evolving nature of the threat, technological solutions cannot be expected to prevent deception at the hands of deepfakes, or to preserve the authority of video footage. Moreover, the success of such (...)
  45. Limits of trust in medical AI.Joshua James Hatherley - 2020 - Journal of Medical Ethics 46 (7):478-481.
Artificial intelligence (AI) is expected to revolutionise the practice of medicine. Recent advancements in the field of deep learning have demonstrated success in a variety of clinical tasks: detecting diabetic retinopathy from images, predicting hospital readmissions, aiding in the discovery of new drugs, etc. AI’s progress in medicine, however, has led to concerns regarding the potential effects of this technology on relationships of trust in clinical practice. In this paper, I will argue that there is merit to these (...)
  46. Drivers behind the public perception of artificial intelligence: insights from major Australian cities.Tan Yigitcanlar, Kenan Degirmenci & Tommi Inkinen - forthcoming - AI and Society:1-21.
Artificial intelligence is not only disrupting industries and businesses, particularly those that have fallen behind in adoption, but also significantly impacting public life. This calls for government authorities to pay attention to public opinions and sentiments towards AI. Nonetheless, there is limited knowledge of the drivers behind the public perception of AI. Bridging this gap is the rationale of this paper. As the methodological approach, the study conducts an online public perception survey with the residents (...)
  47. Inclusion of Clinicians in the Development and Evaluation of Clinical Artificial Intelligence Tools: A Systematic Literature Review.Stephanie Tulk Jesso, Aisling Kelliher, Harsh Sanghavi, Thomas Martin & Sarah Henrickson Parker - 2022 - Frontiers in Psychology 13.
    The application of machine learning and artificial intelligence in healthcare domains has received much attention in recent years, yet significant questions remain about how these new tools integrate into frontline user workflow, and how their design will impact implementation. Lack of acceptance among clinicians is a major barrier to the translation of healthcare innovations into clinical practice. In this systematic review, we examine when and how clinicians are consulted about their needs and desires for clinical AI tools. Forty-five (...)
  48. Institutionalised distrust and human oversight of artificial intelligence: towards a democratic design of AI governance under the European Union AI Act.Johann Laux - forthcoming - AI and Society:1-14.
    Human oversight has become a key mechanism for the governance of artificial intelligence (“AI”). Human overseers are supposed to increase the accuracy and safety of AI systems, uphold human values, and build trust in the technology. Empirical research suggests, however, that humans are not reliable in fulfilling their oversight tasks. They may be lacking in competence or be harmfully incentivised. This creates a challenge for human oversight to be effective. In addressing this challenge, this article aims to (...)
  49. Computer-mediated trust in self-interested expert recommendations.Jonathan Ben-Naim, Jean-François Bonnefon, Andreas Herzig, Sylvie Leblois & Emiliano Lorini - 2010 - AI and Society 25 (4):413-422.
    Important decisions are often based on a distributed process of information processing, from a knowledge base that is itself distributed among agents. The simplest such situation is that where a decision-maker seeks the recommendations of experts. Because experts may have vested interests in the consequences of their recommendations, decision-makers usually seek the advice of experts they trust. Trust, however, is a commodity that is usually built through repeated face time and social interaction and thus cannot easily be built (...)
  50. Towards a United Nations Internal Regulation for Artificial Intelligence.Eleonore Fournier-Tombs - 2021 - Big Data and Society 8 (2).
    This article sets out the rationale for a United Nations Regulation for Artificial Intelligence, which is needed to set out the modes of engagement of the organisation when using artificial intelligence technologies in the attainment of its mission. It argues that given the increasing use of artificial intelligence by the United Nations, including in some activities considered high risk by the European Commission, a regulation is urgent. It also contends that rules of engagement for (...)