Results for 'responsible AI'

997 found
  1.
    Responsible AI Through Conceptual Engineering.Johannes Himmelreich & Sebastian Köhler - 2022 - Philosophy and Technology 35 (3):1-30.
    The advent of intelligent artificial systems has sparked a dispute about the question of who is responsible when such a system causes a harmful outcome. This paper champions the idea that this dispute should be approached as a conceptual engineering problem. Towards this claim, the paper first argues that the dispute about the responsibility gap problem is in part a conceptual dispute about the content of responsibility and related concepts. The paper then argues that the way forward is to (...)
    9 citations
  2.
    Responsible AI: Two Frameworks for Ethical Design and Practice.Dorian Peters, Karina Vold, Diana Robinson & Rafael Calvo - 2020 - IEEE Transactions on Technology and Society 1 (1).
    In 2019, the IEEE launched the P7000 standards projects intended to address ethical issues in the design of autonomous and intelligent systems. This move came amidst a growing public concern over the unintended consequences of artificial intelligence (AI), compounded by the lack of an anticipatory process for attending to ethical impact within professional practice. However, the difficulty in moving from principles to practice presents a significant challenge to the implementation of ethical guidelines. Herein, we describe two complementary frameworks for integrating (...)
    7 citations
  3.
    Companies Committed to Responsible AI: From Principles towards Implementation and Regulation?Paul B. de Laat - 2021 - Philosophy and Technology 34 (4):1135-1193.
    The term ‘responsible AI’ has been coined to denote AI that is fair and non-biased, transparent and explainable, secure and safe, privacy-proof, accountable, and to the benefit of mankind. Since 2016, a great many organizations have pledged allegiance to such principles. Amongst them are 24 AI companies that did so by posting a commitment of the kind on their website and/or by joining the ‘Partnership on AI’. By means of a comprehensive web search, two questions are addressed by this (...)
    5 citations
  4.
    Understanding responsibility in Responsible AI. Dianoetic virtues and the hard problem of context.Mihaela Constantinescu, Cristina Voinea, Radu Uszkai & Constantin Vică - 2021 - Ethics and Information Technology 23 (4):803-814.
    During the last decade there has been burgeoning research concerning the ways in which we should think of and apply the concept of responsibility for Artificial Intelligence. Despite this conceptual richness, there is still a lack of consensus regarding what Responsible AI entails on both conceptual and practical levels. The aim of this paper is to connect the ethical dimension of responsibility in Responsible AI with Aristotelian virtue ethics, where notions of context and dianoetic virtues play a grounding (...)
    12 citations
  5.
    Promoting responsible AI: A European perspective on the governance of artificial intelligence in media and journalism.Colin Porlezza - 2023 - Communications 48 (3):370-394.
    Artificial intelligence and automation have become pervasive in news media, influencing journalism from news gathering to news distribution. As algorithms are increasingly determining editorial decisions, specific concerns have been raised with regard to the responsible and accountable use of AI-driven tools by news media, encompassing new regulatory and ethical questions. This contribution aims to analyze whether and to what extent the use of AI technology in news media and journalism is currently regulated and debated within the European Union and (...)
  6.
    Challenges of responsible AI in practice: scoping review and recommended actions.Malak Sadek, Emma Kallina, Thomas Bohné, Céline Mougenot, Rafael A. Calvo & Stephen Cave - forthcoming - AI and Society:1-17.
    Responsible AI (RAI) guidelines aim to ensure that AI systems respect democratic values. While a step in the right direction, they currently fail to impact practice. Our work discusses reasons for this lack of impact and clusters them into five areas: (1) the abstract nature of RAI guidelines, (2) the problem of selecting and reconciling values, (3) the difficulty of operationalising RAI success metrics, (4) the fragmentation of the AI pipeline, and (5) the lack of internal advocacy and accountability. (...)
  7.
    Responsible AI: Introduction of “Nomadic AI Principles” for Central Asia.Ammar Younas - 2020 - Conference Proceeding of International Conference Organized by Jizzakh Polytechnical Institute Uzbekistan.
    We think that Central Asia should come up with its own AI Ethics Principles, which we propose to name the “Nomadic AI Principles”.
    1 citation
  8.
    Responsible AI and moral responsibility: a common appreciation.Daniel W. Tigard - 2021 - AI and Ethics 1 (2):113-117.
    3 citations
  9.
    Is Explainable AI Responsible AI?Isaac Taylor - forthcoming - AI and Society.
    When artificial intelligence (AI) is used to make high-stakes decisions, some worry that this will create a morally troubling responsibility gap—that is, a situation in which nobody is morally responsible for the actions and outcomes that result. Since the responsibility gap might be thought to result from individuals lacking knowledge of the future behavior of AI systems, it can be and has been suggested that deploying explainable artificial intelligence (XAI) techniques will help us to avoid it. These techniques provide (...)
  10. Proposing Central Asian AI Ethics Principles: A Multilevel Approach for Responsible AI.Ammar Younas & Yi Zeng - 2024 - AI and Ethics 4.
    This paper puts forth Central Asian AI ethics principles and proposes a layered strategy tailored for the development of ethical principles in the field of artificial intelligence (AI) in Central Asian countries. This approach includes the customization of AI ethics principles to resonate with local nuances, the formulation of national and regional-level AI ethics principles, and the implementation of sector-specific principles. While countering the narrative of ineffectiveness of the AI ethics principles, this paper underscores the importance of stakeholder collaboration, provides (...)
  11.
    Association Between Children’s Theory of Mind and Responses to Insincere Praise Following Failure.Ai Mizokawa - 2018 - Frontiers in Psychology 9.
  12.
    How to teach responsible AI in Higher Education: challenges and opportunities.Andrea Aler Tubella, Marçal Mora-Cantallops & Juan Carlos Nieves - 2023 - Ethics and Information Technology 26 (1):1-14.
    In recent years, the European Union has advanced towards responsible and sustainable Artificial Intelligence (AI) research, development and innovation. While the Ethics Guidelines for Trustworthy AI released in 2019 and the AI Act in 2021 set the starting point for a European Ethical AI, there are still several challenges to translate such advances into the public debate, education and practical learning. This paper contributes towards closing this gap by reviewing the approaches that can be found in the existing literature (...)
  13.
    Caring in the in-between: a proposal to introduce responsible AI and robotics to healthcare.Núria Vallès-Peris & Miquel Domènech - 2023 - AI and Society 38 (4):1685-1695.
    In the scenario of growing polarization of promises and dangers that surround artificial intelligence (AI), how to introduce responsible AI and robotics in healthcare? In this paper, we develop an ethical–political approach to introduce democratic mechanisms to technological development, what we call “Caring in the In-Between”. Focusing on the multiple possibilities for action that emerge in the realm of uncertainty, we propose an ethical and responsible framework focused on care actions in between fears and hopes. Using the theoretical (...)
    2 citations
  14.
    Moving from Models to Responsible AI as a Moat.Ashwini Nagappan - 2023 - American Journal of Bioethics 23 (10):113-115.
    In his article, “What Should ChatGPT Mean for Bioethics?” Cohen (2023) highlights novel bioethical issues raised by the emergence of ChatGPT and generative AI more broadly. Among the thought-provok...
  15.
    Caring in the in-between: a proposal to introduce responsible AI and robotics to healthcare.Núria Vallès-Peris & Miquel Domènech - 2021 - AI and Society:1-11.
    In the scenario of growing polarization of promises and dangers that surround artificial intelligence (AI), how to introduce responsible AI and robotics in healthcare? In this paper, we develop an ethical–political approach to introduce democratic mechanisms to technological development, what we call “Caring in the In-Between”. Focusing on the multiple possibilities for action that emerge in the realm of uncertainty, we propose an ethical and responsible framework focused on care actions in between fears and hopes. Using the theoretical (...)
    2 citations
  16.
    Bridging the civilian-military divide in responsible AI principles and practices.Rachel Azafrani & Abhishek Gupta - 2023 - Ethics and Information Technology 25 (2):1-5.
    Advances in AI research have brought increasingly sophisticated capabilities to AI systems and heightened the societal consequences of their use. Researchers and industry professionals have responded by contemplating responsible principles and practices for AI system design. At the same time, defense institutions are contemplating ethical guidelines and requirements for the development and use of AI for warfare. However, varying ethical and procedural approaches to technological development, research emphasis on offensive uses of AI, and lack of appropriate venues for multistakeholder (...)
  17.
    Possibilities and ethical issues of entrusting nursing tasks to robots and artificial intelligence.Tomohide Ibuki, Ai Ibuki & Eisuke Nakazawa - forthcoming - Nursing Ethics.
    In recent years, research in robotics and artificial intelligence (AI) has made rapid progress. It is expected that robots and AI will play a part in the field of nursing and their role might broaden in the future. However, there are areas of nursing practice that cannot or should not be entrusted to robots and AI, because nursing is a highly humane practice, and therefore, there would, perhaps, be some practices that should not be replicated by robots or AI. Therefore, (...)
  18.
    Characteristics and challenges in the industries towards responsible AI: a systematic literature review.Marianna Anagnostou, Olga Karvounidou, Chrysovalantou Katritzidaki, Christina Kechagia, Kyriaki Melidou, Eleni Mpeza, Ioannis Konstantinidis, Eleni Kapantai, Christos Berberidis, Ioannis Magnisalis & Vassilios Peristeras - 2022 - Ethics and Information Technology 24 (3):1-18.
    Today humanity is in the midst of the massive expansion of new and fundamental technology, represented by advanced artificial intelligence (AI) systems. The ongoing revolution of these technologies and their profound impact across various sectors, has triggered discussions about the characteristics and values that should guide their use and development in a responsible manner. In this paper, we conduct a systematic literature review with the aim of pointing out existing challenges and required principles in AI-based systems in different industries. (...)
  19.
    Is it possible to create a responsible AI technology to be used and understood within workplaces and unblocked CEOs’ mindsets?John W. Murphy & Carlos Largacha-Martínez - 2023 - AI and Society 38 (6):2641-2652.
    Most workers report that they are alienated from their jobs and find their workplaces to be stifling and uninviting. Given this condition, the introduction of computer technology, including AI, will only make matters worse, unless a more humane organizational culture is created. The key point in this article is the need to produce a responsible technology, so that employees are not further overworked and manipulated. To achieve this end, phenomenology is invoked, particularly the life world, to provide technology with (...)
  20.
    Is it possible to create a responsible AI technology to be used and understood within workplaces and unblocked CEOs’ mindsets?John W. Murphy & Carlos Largacha-Martínez - 2021 - AI and Society:1-12.
    Most workers report that they are alienated from their jobs and find their workplaces to be stifling and uninviting. Given this condition, the introduction of computer technology, including AI, will only make matters worse, unless a more humane organizational culture is created. The key point in this article is the need to produce a responsible technology, so that employees are not further overworked and manipulated. To achieve this end, phenomenology is invoked, particularly the life world, to provide technology with (...)
  21.
    Expert responsibility in AI development.Maria Hedlund & Erik Persson - 2022 - AI and Society:1-12.
    The purpose of this paper is to discuss the responsibility of AI experts for guiding the development of AI in a desirable direction. More specifically, the aim is to answer the following research question: To what extent are AI experts responsible in a forward-looking way for effects of AI technology that go beyond the immediate concerns of the programmer or designer? AI experts, in this paper conceptualised as experts regarding the technological aspects of AI, have knowledge and control of (...)
    3 citations
  22.
    Moral Responsibility for AI Systems.Sander Beckers - forthcoming - Advances in Neural Information Processing Systems 36 (NeurIPS 2023).
    As more and more decisions that have a significant ethical dimension are being outsourced to AI systems, it is important to have a definition of moral responsibility that can be applied to AI systems. Moral responsibility for an outcome of an agent who performs some action is commonly taken to involve both a causal condition and an epistemic condition: the action should cause the outcome, and the agent should have been aware -- in some form or other -- of the (...)
  23.
    AI and the path to envelopment: knowledge as a first step towards the responsible regulation and use of AI-powered machines.Scott Robbins - 2020 - AI and Society 35 (2):391-400.
    With Artificial Intelligence entering our lives in novel ways—both known and unknown to us—there is both the enhancement of existing ethical issues associated with AI as well as the rise of new ethical issues. There is much focus on opening up the ‘black box’ of modern machine-learning algorithms to understand the reasoning behind their decisions—especially morally salient decisions. However, some applications of AI which are no doubt beneficial to society rely upon these black boxes. Rather than requiring algorithms to be (...)
    10 citations
  24. AI and the future of humanity: ChatGPT-4, philosophy and education – Critical responses.Michael A. Peters, Liz Jackson, Marianna Papastephanou, Petar Jandrić, George Lazaroiu, Colin W. Evers, Bill Cope, Mary Kalantzis, Daniel Araya, Marek Tesar, Carl Mika, Lei Chen, Chengbing Wang, Sean Sturm, Sharon Rider & Steve Fuller - forthcoming - Educational Philosophy and Theory.
    Michael A. Peters, Beijing Normal University. ChatGPT is an AI chatbot released by OpenAI on November 30, 2022 and a ‘stable release’ on February 13, 2023. It belongs to OpenAI’s GPT-3 family (generativ...
    2 citations
  25.
    Responsible reliance concerning development and use of AI in the military domain.Dustin A. Lewis & Vincent Boulanin - 2023 - Ethics and Information Technology 25 (1):1-5.
    In voicing commitments to the principle that the adoption of artificial-intelligence (AI) tools by armed forces should be done responsibly, a growing number of states have referred to a concept of “Responsible AI.” As part of an effort to help develop the substantive contours of that concept in meaningful ways, this position paper introduces a notion of “responsible reliance.” It is submitted that this notion could help the policy conversation expand from its current relatively narrow focus on interactions (...)
  26.
    Automated Reasoning with Complex Ethical Theories--A Case Study Towards Responsible AI.David Fuenmayor & Christoph Benzmüller - unknown
  27.
    Responsibility, second opinions and peer-disagreement: ethical and epistemological challenges of using AI in clinical diagnostic contexts.Hendrik Kempt & Saskia K. Nagel - 2022 - Journal of Medical Ethics 48 (4):222-229.
    In this paper, we first classify different types of second opinions and evaluate the ethical and epistemological implications of providing those in a clinical context. Second, we discuss the issue of how artificial intelligence could replace the human cognitive labour of providing such second opinions and find that several AI systems reach the levels of accuracy and efficiency needed to make their use an urgent ethical issue. Third, we outline the normative conditions of how AI may be used as second opinion (...)
    18 citations
  28.
    The AI Carbon Footprint and Responsibilities of AI Scientists.Guglielmo Tamburrini - 2022 - Philosophies 7 (1):4.
    This article examines ethical implications of the growing AI carbon footprint, focusing on the fair distribution of prospective responsibilities among groups of involved actors. First, major groups of involved actors are identified, including AI scientists, AI industry, and AI infrastructure providers, from datacenters to electrical energy suppliers. Second, responsibilities of AI scientists concerning climate warming mitigation actions are disentangled from responsibilities of other involved actors. Third, to implement these responsibilities nudging interventions are suggested, leveraging on AI competitive games which would (...)
    1 citation
  29. Responsible nudging for social good: new healthcare skills for AI-driven digital personal assistants.Marianna Capasso & Steven Umbrello - 2022 - Medicine, Health Care and Philosophy 25 (1):11-22.
    Traditional medical practices and relationships are changing given the widespread adoption of AI-driven technologies across the various domains of health and healthcare. In many cases, these new technologies are not specific to the field of healthcare. Still, they are existent, ubiquitous, and commercially available systems upskilled to integrate these novel care practices. Given the widespread adoption, coupled with the dramatic changes in practices, new ethical and social issues emerge due to how these systems nudge users into making decisions and changing (...)
    5 citations
  30. AI and Structural Injustice: Foundations for Equity, Values, and Responsibility.Johannes Himmelreich & Désirée Lim - 2023 - In Justin B. Bullock, Yu-Che Chen, Johannes Himmelreich, Valerie M. Hudson, Anton Korinek, Matthew M. Young & Baobao Zhang (eds.), The Oxford Handbook of AI Governance. Oxford University Press.
    This chapter argues for a structural injustice approach to the governance of AI. Structural injustice has an analytical and an evaluative component. The analytical component consists of structural explanations that are well-known in the social sciences. The evaluative component is a theory of justice. Structural injustice is a powerful conceptual tool that allows researchers and practitioners to identify, articulate, and perhaps even anticipate, AI biases. The chapter begins with an example of racial bias in AI that arises from structural injustice. (...)
  31.
    Responsibility beyond design: Physicians’ requirements for ethical medical AI.Martin Sand, Juan Manuel Durán & Karin Rolanda Jongsma - 2021 - Bioethics 36 (2):162-169.
    Bioethics, Volume 36, Issue 2, Page 162-169, February 2022.
    12 citations
  32.
    Clinical AI: opacity, accountability, responsibility and liability.Helen Smith - 2021 - AI and Society 36 (2):535-545.
    The aim of this literature review was to compose a narrative review supported by a systematic approach to critically identify and examine concerns about accountability and the allocation of responsibility and legal liability as applied to the clinician and the technologist in the use of opaque AI-powered systems in clinical decision making. This review questions if it is permissible for a clinician to use an opaque AI system in clinical decision making and if a patient was harmed as a (...)
    8 citations
  33. A way forward for responsibility in the age of AI.Dane Leigh Gogoshin - 2024 - Inquiry: An Interdisciplinary Journal of Philosophy:1-34.
    Whatever one makes of the relationship between free will and moral responsibility – e.g. whether it’s the case that we can have the latter without the former and, if so, what conditions must be met; whatever one thinks about whether artificially intelligent agents might ever meet such conditions, one still faces the following questions. What is the value of moral responsibility? If we take moral responsibility to be a matter of being a fitting target of moral blame or praise, what (...)
  34. Ethical funding for trustworthy AI: proposals to address the responsibilities of funders to ensure that projects adhere to trustworthy AI practice.Marie Oldfield - 2021 - AI and Ethics 1 (1):1.
    AI systems that demonstrate significant bias or lower-than-claimed accuracy, resulting in individual and societal harms, continue to be reported. Such reports beg the question as to why such systems continue to be funded, developed and deployed despite the many published ethical AI principles. This paper focusses on the funding processes for AI research grants which we have identified as a gap in the current range of ethical AI solutions such as AI procurement guidelines, AI impact assessments and (...)
    1 citation
  35.
    Responsibility of AI Systems.Mehdi Dastani & Vahid Yazdanpanah - 2023 - AI and Society 38 (2):843-852.
    To support the trustworthiness of AI systems, it is essential to have precise methods to determine what or who is to account for the behaviour, or the outcome, of AI systems. The assignment of responsibility to an AI system is closely related to the identification of individuals or elements that have caused the outcome of the AI system. In this work, we present an overview of approaches that aim at modelling responsibility of AI systems, discuss their advantages and shortcomings to (...)
    2 citations
  36. Responsibility Internalism and Responsibility for AI.Huzeyfe Demirtas - 2023 - Dissertation, Syracuse University
    I argue for responsibility internalism. That is, moral responsibility (i.e., accountability, or being apt for praise or blame) depends only on factors internal to agents. Employing this view, I also argue that no one is responsible for what AI does but this isn’t morally problematic in a way that counts against developing or using AI. Responsibility is grounded in three potential conditions: the control (or freedom) condition, the epistemic (or awareness) condition, and the causal responsibility condition (or consequences). I (...)
  37.
    Responsible Artificial Intelligence: How to Develop and Use AI in a Responsible Way.Virginia Dignum - 2019 - Springer Verlag.
    In this book, the author examines the ethical implications of Artificial Intelligence systems as they integrate and replace traditional social structures in new sociocognitive-technological environments. She discusses issues related to the integrity of researchers, technologists, and manufacturers as they design, construct, use, and manage artificially intelligent systems; formalisms for reasoning about moral decisions as part of the behavior of artificial autonomous systems such as agents and robots; and design methodologies for social agents based on societal, moral, and legal values. Throughout (...)
    36 citations
  38.
    Tailoring responsible research and innovation to the translational context: the case of AI-supported exergaming.Sabrina Blank, Celeste Mason, Frank Steinicke & Christian Herzog - 2024 - Ethics and Information Technology 26 (2):1-16.
    We discuss the implementation of Responsible Research and Innovation (RRI) within a project for the development of an AI-supported exergame for assisted movement training, outline outcomes and reflect on methodological opportunities and limitations. We adopted the responsibility-by-design (RbD) standard (CEN CWA 17796:2021) supplemented by methods for collaborative, ethical reflection to foster and support a shift towards a culture of trustworthiness inherent to the entire development process. An embedded ethicist organised the procedure to instantiate a collaborative learning effort and implement (...)
  39.
    AI, agency and responsibility: the VW fraud case and beyond.Deborah G. Johnson & Mario Verdicchio - 2019 - AI and Society 34 (3):639-647.
    The concept of agency as applied to technological artifacts has become an object of heated debate in the context of AI research because some AI researchers ascribe to programs the type of agency traditionally associated with humans. Confusion about agency is at the root of misconceptions about the possibilities for future AI. We introduce the concept of a triadic agency that includes the causal agency of artifacts and the intentional agency of humans to better describe what happens in AI as (...)
    8 citations
  40.
    Minangkabaunese matrilineal: The correlation between the Qur’an and gender.Halimatussa’Diyah Halimatussa’Diyah, Kusnadi Kusnadi, Ai Y. Yuliyanti, Deddy Ilyas & Eko Zulfikar - 2024 - HTS Theological Studies 80 (1):7.
    Upon previous research, the matrilineal system seems to oppose Islamic teaching. However, the matrilineal system practiced by the Minangkabau society in West Sumatra, Indonesia has its uniqueness. Thus, this study aims to examine the correlation between the Qur’an and gender roles within the context of Minangkabau customs, specifically focusing on the matrilineal aspect. The present study employs qualitative methods for conducting library research through critical analysis. This study discovered that the matrilineal system practiced by the Minangkabau society aligns with Qur’anic (...)
  41.
    Is AI a Problem for Forward Looking Moral Responsibility? The Problem Followed by a Solution.Fabio Tollon - 2022 - In Communications in Computer and Information Science. Cham: pp. 307-318.
    Recent work in AI ethics has come to bear on questions of responsibility. Specifically, questions of whether the nature of AI-based systems render various notions of responsibility inappropriate. While substantial attention has been given to backward-looking senses of responsibility, there has been little consideration of forward-looking senses of responsibility. This paper aims to plug this gap, and will concern itself with responsibility as moral obligation, a particular kind of forward-looking sense of responsibility. Responsibility as moral obligation is predicated on the (...)
  42.
    Perceived responsibility in AI-supported medicine.S. Krügel, J. Ammeling, M. Aubreville, A. Fritz, A. Kießig & Matthias Uhl - forthcoming - AI and Society:1-11.
    In a representative vignette study in Germany with 1,653 respondents, we investigated laypeople’s attribution of moral responsibility in collaborative medical diagnosis. Specifically, we compare people’s judgments in a setting in which physicians are supported by an AI-based recommender system to a setting in which they are supported by a human colleague. It turns out that people tend to attribute moral responsibility to the artificial agent, although this is traditionally considered a category mistake in normative ethics. This tendency is stronger when (...)
  43.
    Can we Bridge AI’s responsibility gap at Will?Maximilian Kiener - 2022 - Ethical Theory and Moral Practice 25 (4):575-593.
    Artificial intelligence increasingly executes tasks that previously only humans could do, such as drive a car, fight in war, or perform a medical operation. However, as the very best AI systems tend to be the least controllable and the least transparent, some scholars argued that humans can no longer be morally responsible for some of the AI-caused outcomes, which would then result in a responsibility gap. In this paper, I assume, for the sake of argument, that at least some (...)
    7 citations
  44.
    Blame It on the AI? On the Moral Responsibility of Artificial Moral Advisors.Mihaela Constantinescu, Constantin Vică, Radu Uszkai & Cristina Voinea - 2022 - Philosophy and Technology 35 (2):1-26.
    Deep learning AI systems have proven a wide capacity to take over human-related activities such as car driving, medical diagnosing, or elderly care, often displaying behaviour with unpredictable consequences, including negative ones. This has raised the question whether highly autonomous AI may qualify as morally responsible agents. In this article, we develop a set of four conditions that an entity needs to meet in order to be ascribed moral responsibility, by drawing on Aristotelian ethics and contemporary philosophical research. We (...)
    5 citations
  45. Should we discourage AI extension? Epistemic responsibility and AI.Hadeel Naeem & Julian Hauser - forthcoming - Philosophy and Technology.
    We might worry that our seamless reliance on AI systems makes us prone to adopting the strange errors that these systems commit. One proposed solution is to design AI systems so that they are not phenomenally transparent to their users. This stops cognitive extension and the automatic uptake of errors. Although we acknowledge that some aspects of AI extension are concerning, we can address these concerns without discouraging transparent employment altogether. First, we believe that the potential danger should be put (...)
  46.
    Responsibility Gaps and Black Box Healthcare AI: Shared Responsibilization as a Solution.Benjamin H. Lang, Sven Nyholm & Jennifer Blumenthal-Barby - 2023 - Digital Society 2 (3):52.
    As sophisticated artificial intelligence software becomes more ubiquitously and more intimately integrated within domains of traditionally human endeavor, many are raising questions over how responsibility (be it moral, legal, or causal) can be understood for an AI’s actions or influence on an outcome. So-called “responsibility gaps” occur whenever there exists an apparent chasm in the ordinary attribution of moral blame or responsibility when an AI automates physical or cognitive labor otherwise performed by human beings and commits an error. Healthcare (...)
  47. The future of AI in our hands? - To what extent are we as individuals morally responsible for guiding the development of AI in a desirable direction?Erik Persson & Maria Hedlund - 2022 - AI and Ethics 2:683-695.
    Artificial intelligence (AI) is becoming increasingly influential in most people’s lives. This raises many philosophical questions. One is what responsibility we have as individuals to guide the development of AI in a desirable direction. More specifically, how should this responsibility be distributed among individuals and between individuals and other actors? We investigate this question from the perspectives of five principles of distribution that dominate the discussion about responsibility in connection with climate change: effectiveness, equality, desert, need, and ability. Since much (...)
  48.
    Building a decoder of perceptual decisions from microsaccades and pupil size.Ryohei Nakayama, Jean-Baptiste Bardin, Ai Koizumi, Isamu Motoyoshi & Kaoru Amano - 2022 - Frontiers in Psychology 13.
    Many studies have reported neural correlates of visual awareness across several brain regions, including the sensory, parietal, and frontal areas. In most of these studies, participants were instructed to explicitly report their perceptual experience through a button press or verbal report. It is conceivable, however, that explicit reporting itself may trigger specific neural responses that can confound the direct examination of the neural correlates of visual awareness. This suggests the need to assess visual awareness without explicit reporting. One way to (...)
  49.
    Editors’ Statement on the Responsible Use of Generative AI Technologies in Scholarly Journal Publishing.Gregory E. Kaebnick, David Christopher Magnus, Audiey Kao, Mohammad Hosseini, David Resnik, Veljko Dubljević, Christy Rentmeester, Bert Gordijn & Mark J. Cherry - 2023 - Hastings Center Report 53 (5):3-6.
    Generative artificial intelligence (AI) has the potential to transform many aspects of scholarly publishing. Authors, peer reviewers, and editors might use AI in a variety of ways, and those uses might augment their existing work or might instead be intended to replace it. We are editors of bioethics and humanities journals who have been contemplating the implications of this ongoing transformation. We believe that generative AI may pose a threat to the goals that animate our work but could also be (...)
    1 citation
  50.
    Moral agency without responsibility? Analysis of three ethical models of human-computer interaction in times of artificial intelligence (AI).Alexis Fritz, Wiebke Brandt, Henner Gimpel & Sarah Bayer - 2020 - De Ethica 6 (1):3-22.
    Philosophical and sociological approaches in technology have increasingly shifted toward describing AI (artificial intelligence) systems as ‘(moral) agents,’ while also attributing ‘agency’ to them. It is only in this way – so their principal argument goes – that the effects of technological components in a complex human-computer interaction can be understood sufficiently in phenomenological-descriptive and ethical-normative respects. By contrast, this article aims to demonstrate that an explanatory model only achieves a descriptively and normatively satisfactory result if the concepts of ‘(moral) (...)
    7 citations
1 — 50 / 997