Results for 'AI safety'

985 found
  1. AI Safety: A Climb To Armageddon?Herman Cappelen, Josh Dever & John Hawthorne - manuscript
    This paper presents an argument that certain AI safety measures, rather than mitigating existential risk, may instead exacerbate it. Under certain key assumptions - the inevitability of AI failure, the expected correlation between an AI system's power at the point of failure and the severity of the resulting harm, and the tendency of safety measures to enable AI systems to become more powerful before failing - safety efforts have negative expected utility. The paper examines three response strategies: (...)
  2. AI safety: necessary, but insufficient and possibly problematic.Deepak P. - forthcoming - AI and Society:1-3.
  3. Acceleration AI Ethics, the Debate between Innovation and Safety, and Stability AI’s Diffusion versus OpenAI’s Dall-E.James Brusseau - manuscript
    One objection to conventional AI ethics is that it slows innovation. This presentation responds by reconfiguring ethics as an innovation accelerator. The critical elements develop from a contrast between Stability AI’s Diffusion and OpenAI’s Dall-E. By analyzing the divergent values underlying their opposed strategies for development and deployment, five conceptions are identified as common to acceleration ethics. Uncertainty is understood as positive and encouraging, rather than discouraging. Innovation is conceived as intrinsically valuable, instead of worthwhile only as mediated by social (...)
  4. Applying ethics to AI in the workplace: the design of a scorecard for Australian workplace health and safety.Andreas Cebulla, Zygmunt Szpak, Catherine Howell, Genevieve Knight & Sazzad Hussain - 2023 - AI and Society 38 (2):919-935.
    Artificial Intelligence (AI) is taking centre stage in economic growth and business operations alike. Public discourse about the practical and ethical implications of AI has mainly focussed on the societal level. There is an emerging knowledge base on AI risks to human rights around data security and privacy concerns. A separate strand of work has highlighted the stresses of working in the gig economy. This prevailing focus on human rights and gig impacts has been at the expense of a closer (...)
    1 citation
  5. Identify and Assess Hydropower Project’s Multidimensional Social Impacts with Rough Set and Projection Pursuit Model.Hui An, Wenjing Yang, Jin Huang, Ai Huang, Zhongchi Wan & Min An - 2020 - Complexity 2020:1-16.
    To realize the coordinated and sustainable development of hydropower projects and regional society, comprehensively evaluating hydropower projects’ influence is critical. Hydropower project development typically has an impact on environmental geology and on social and regional cultural development. Based on comprehensive consideration of complicated geological conditions, fragile ecological environments, resettlement of reservoir areas, and other factors of future hydropower development in each country, we have constructed a comprehensive evaluation index system of hydropower projects, including 4 first-level indicators of social economy, environment, (...), and fairness, which contain 26 second-level indicators. To solve the problem that existing models cannot evaluate dynamic nonlinear optimization, a projection pursuit model is constructed by using rough set reduction theory to simplify the index. Then, an accelerated genetic algorithm based on real-number coding is used to solve the model, and an empirical study is carried out with the Y hydropower station as a sample. The evaluation results show that the evaluation index system and assessment model constructed in our paper effectively reduce the subjectivity of index weighting. Applying our model to the social impact assessment of related international hydropower projects can not only comprehensively analyze the social impact of hydropower projects but also identify important social influencing factors and effectively analyze the social impact level of each dimension. Furthermore, such social impact assessment can be conducive to project decision-making, helping to avoid social risks and maintain social stability.
    1 citation
  6. Safety Engineering for Artificial General Intelligence.Roman Yampolskiy & Joshua Fox - 2012 - Topoi 32 (2):217-226.
    Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing for what humans value. In particular, we (...)
    6 citations
  7. Shutdown-seeking AI.Simon Goldstein & Pamela Robinson - forthcoming - Philosophical Studies:1-13.
    We propose developing AIs whose only final goal is being shut down. We argue that this approach to AI safety has three benefits: (i) it could potentially be implemented in reinforcement learning, (ii) it avoids some dangerous instrumental convergence dynamics, and (iii) it creates trip wires for monitoring dangerous capabilities. We also argue that the proposal can overcome a key challenge raised by Soares et al. (2015), that shutdown-seeking AIs will manipulate humans into shutting them down. We conclude by (...)
    1 citation
  8. Transdisciplinary AI Observatory—Retrospective Analyses and Future-Oriented Contradistinctions.Nadisha-Marie Aliman, Leon Kester & Roman Yampolskiy - 2021 - Philosophies 6 (1):6.
    In recent years, artificial intelligence (AI) safety has gained international recognition in light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently _transdisciplinary_ AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on advice utilizing _concrete practical examples_. Distinguishing between unintentionally and intentionally triggered (...)
  9. Understanding and Avoiding AI Failures: A Practical Guide.Robert Williams & Roman Yampolskiy - 2021 - Philosophies 6 (3):53.
    As AI technologies increase in capability and ubiquity, AI accidents are becoming more common. Based on normal accident theory, high reliability theory, and open systems theory, we create a framework for understanding the risks associated with AI applications. This framework is designed to direct attention to pertinent system properties without requiring unwieldy amounts of accuracy. In addition, we use AI safety principles to quantify the unique risks of increased intelligence and human-like qualities in AI. Together, these two fields (...)
    1 citation
  10. Unjustified untrue "beliefs": AI hallucinations and justification logics.Kristina Šekrst - forthcoming - In Kordula Świętorzecka, Filip Grgić & Anna Brozek (eds.), Logic, Knowledge, and Tradition. Essays in Honor of Srecko Kovac.
    In artificial intelligence (AI), responses generated by machine-learning models (most often large language models) may present unfactual information as fact. For example, a chatbot might state that the Mona Lisa was painted in 1815. This phenomenon is called AI hallucination, a term that draws inspiration from human psychology, with the great difference that AI hallucinations are connected to unjustified beliefs (that is, AI “beliefs”) rather than to perceptual failures. AI hallucinations may have their source in the data itself, that is, the (...)
  11. Social Choice Should Guide AI Alignment in Dealing with Diverse Human Feedback.Vincent Conitzer, Rachel Freedman, Jobst Heitzig, Wesley H. Holliday, Bob M. Jacobs, Nathan Lambert, Milan Mosse, Eric Pacuit, Stuart Russell, Hailey Schoelkopf, Emanuel Tewolde & William S. Zwicker - forthcoming - Proceedings of the Forty-First International Conference on Machine Learning.
    Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, such as helping to commit crimes or producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans' expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But how do we deal with potentially diverging input from humans? How can we aggregate the input into consistent data about "collective" (...)
  12. AI-Related Misdirection Awareness in AIVR.Nadisha-Marie Aliman & Leon Kester - manuscript
    Recent AI progress has led to a boost in beneficial applications from multiple research areas including VR. Simultaneously, in this newly unfolding deepfake era, disagreements of ethical and security relevance have arisen in the scientific community regarding the epistemic capabilities of present-day AI. However, given what is at stake, one can postulate that for a responsible approach, prior to engaging in a rigorous epistemic assessment of AI, humans may profit from a self-questioning strategy, an examination and calibration of the experience of their own epistemic (...)
  13. Anthropomorphism in AI.Arleen Salles, Kathinka Evers & Michele Farisco - 2020 - American Journal of Bioethics Neuroscience 11 (2):88-95.
    AI research is growing rapidly, raising various ethical issues related to safety, risks, and other effects widely discussed in the literature. We believe that in order to adequately address those issues and engage in a productive normative discussion it is necessary to examine key concepts and categories. One such category is anthropomorphism. It is a well-known fact that AI’s functionalities and innovations are often anthropomorphized. The general public’s anthropomorphic attitudes and some of their ethical consequences have been widely discussed (...)
    22 citations
  14. The argument for near-term human disempowerment through AI.Leonard Dung - 2024 - AI and Society:1-14.
    Many researchers and intellectuals warn about extreme risks from artificial intelligence. However, these warnings typically come without systematic arguments in their support. This paper provides an argument that AI will lead to the permanent disempowerment of humanity, e.g. human extinction, by 2100. It rests on four substantive premises which it motivates and defends: first, the speed of advances in AI capability, as well as the capability level current systems have already reached, suggest that it is practically possible to build AI systems (...)
    1 citation
  15. AI as IA: The use and abuse of artificial intelligence (AI) for human enhancement through intellectual augmentation (IA).Alexandre Erler & Vincent C. Müller - 2023 - In Fabrice Jotterand & Marcello Ienca (eds.), The Routledge Handbook of the Ethics of Human Enhancement. Routledge. pp. 187-199.
    This paper offers an overview of the prospects and ethics of using AI to achieve human enhancement, and more broadly what we call intellectual augmentation (IA). After explaining the central notions of human enhancement, IA, and AI, we discuss the state of the art in terms of the main technologies for IA, with or without brain-computer interfaces. Given this picture, we discuss potential ethical problems, namely inadequate performance, safety, coercion and manipulation, privacy, cognitive liberty, authenticity, and fairness in more (...)
  16. Safety requirements vs. crashing ethically: what matters most for policies on autonomous vehicles.Björn Lundgren - forthcoming - AI and Society:1-11.
    The philosophical–ethical literature and the public debate on autonomous vehicles have been obsessed with ethical issues related to crashing. In this article, these discussions, including more empirical investigations, will be critically assessed. It is argued that a related and more pressing issue concerns safety. For example, what should we require from autonomous vehicles when it comes to safety? What do we mean by ‘safety’? How do we measure it? In response to these questions, the article (...)
    8 citations
  17. The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists.Elliott Thornley - forthcoming - Philosophical Studies:1-28.
    I explain the shutdown problem: the problem of designing artificial agents that (1) shut down when a shutdown button is pressed, (2) don’t try to prevent or cause the pressing of the shutdown button, and (3) otherwise pursue goals competently. I prove three theorems that make the difficulty precise. These theorems show that agents satisfying some innocuous-seeming conditions will often try to prevent or cause the pressing of the shutdown button, even in cases where it’s costly to do so. And (...)
  18. How to design AI for social good: seven essential factors.Luciano Floridi, Josh Cowls, Thomas C. King & Mariarosaria Taddeo - 2020 - Science and Engineering Ethics 26 (3):1771–1796.
    The idea of artificial intelligence for social good is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are (...)
    37 citations
  19. Will AI take away your job? [REVIEW]Marie Oldfield - 2020 - Tech Magazine.
    Will AI take away your job? The answer is probably not. AI systems can be good predictive systems and be very good at pattern recognition. AI systems have a very repetitive approach to sets of data, which can be useful in certain circumstances. However, AI does make obvious mistakes. This is because AI does not have a sense of context. As humans, we have years of experience in the real world. We have vast amounts of contextual data stored in our (...)
  20. AI Case Studies: Potential for Human Health, Space Exploration and Colonisation and a Proposed Superimposition of the Kubler-Ross Change Curve on the Hype Cycle.Martin Braddock & Matthew Williams - 2019 - Studia Humana 8 (1):3-18.
    The development and deployment of artificial intelligence (AI) is profoundly reshaping, and will continue to reshape, human society, the culture and the composition of the civilisations which make up humankind. All technological triggers tend to drive a hype curve which over time is realised by an output which is often unexpected, taking both pessimistic and optimistic perspectives and actions of drivers, contributors and enablers on a journey where the ultimate destination may be unclear. In this paper we hypothesise that this journey is not (...)
  21. Safety by simulation: theorizing the future of robot regulation.Mika Viljanen - 2024 - AI and Society 39 (1):139-154.
    Mobility robots may soon be among us, triggering a need for safety regulation. Robot safety regulation, however, remains underexplored, with only a few articles analyzing what regulatory approaches could be feasible. This article offers an account of the available regulatory strategies and attempts to theorize the effects of simulation-based safety regulation. The article first discusses the distinctive features of mobility robots as regulatory targets and argues that emergent behavior constitutes the key regulatory concern in designing robot (...) regulation regimes. In contrast to many accounts, the article posits that emergent behavior dynamics do not arise from robot autonomy, learning capability, or code unexplainability. Instead, they emerge from the complexity of robot technological constitutions coupled with near-infinite environmental variability and non-linear performance dynamics of the machine learning components. Second, the article reviews rules-based and performance-based regulation and argues that both will fail to adequately constrain emergent robot behaviors. The article claims that controlling mobility robots requires a simulation-based regulatory approach. Simulation-based regulation is a novelty with significant theoretical and practical implications. The article argues that the approach signifies a radical break in regulatory forms of knowledge and temporalities. Simulations enact virtual futures to create a new regulatory knowledge type. Practically, the novel safety knowledge type may destabilize the existing conceptual space of safety politics and liability allocation patterns.
    1 citation
  22. Deontology and Safe Artificial Intelligence.William D’Alessandro - forthcoming - Philosophical Studies:1-24.
    The field of AI safety aims to prevent increasingly capable artificially intelligent systems from causing humans harm. Research on moral alignment is widely thought to offer a promising safety strategy: if we can equip AI systems with appropriate ethical rules, according to this line of thought, they'll be unlikely to disempower, destroy or otherwise seriously harm us. Deontological morality looks like a particularly attractive candidate for an alignment target, given its popularity, relative technical tractability and commitment to harm-avoidance (...)
  23. Evaluating approaches for reducing catastrophic risks from AI.Leonard Dung - 2024 - AI and Ethics.
    According to a growing number of researchers, AI may pose catastrophic – or even existential – risks to humanity. Catastrophic risks may be taken to be risks of 100 million human deaths, or a similarly bad outcome. I argue that such risks – while contested – are sufficiently likely to demand rigorous discussion of potential societal responses. Subsequently, I propose four desiderata for approaches to the reduction of catastrophic risks from AI. The quality of such approaches can be assessed by (...)
  24. AI and suicide risk prediction: Facebook live and its aftermath.Dolores Peralta - forthcoming - AI and Society:1-13.
    As suicide rates increase worldwide, the mental health industry has reached an impasse in attempts to assess patients, predict risk, and prevent suicide. Traditional assessment tools are no more accurate than chance, prompting the need to explore new avenues in artificial intelligence (AI). Early studies into these tools show potential with higher accuracy rates than previous methods alone. Medical researchers, computer scientists, and social media companies are exploring these avenues. While Facebook leads the pack, its efforts stem from scrutiny following (...)
  25. Toward safe AI.Andres Morales-Forero, Samuel Bassetto & Eric Coatanea - 2023 - AI and Society 38 (2):685-696.
    Since some AI algorithms with high predictive power have impacted human integrity, safety has become a crucial challenge in adopting and deploying AI. Although it is impossible to prevent an algorithm from failing in complex tasks, it is crucial to ensure that it fails safely, especially if it is a critical system. Moreover, due to AI’s unbridled development, it is imperative to minimize the methodological gaps in these systems’ engineering. This paper uses the well-known Box-Jenkins method for statistical modeling (...)
    1 citation
  26. Clinicians and AI use: where is the professional guidance?Helen Smith, John Downer & Jonathan Ives - 2024 - Journal of Medical Ethics 50 (7):437-441.
    With the introduction of artificial intelligence (AI) to healthcare, there is also a need for professional guidance to support its use. New (2022) reports from National Health Service AI Lab & Health Education England focus on healthcare workers’ understanding and confidence in AI clinical decision support systems (AI-CDDSs), and are concerned with developing trust in, and the trustworthiness of these systems. While they offer guidance to aid developers and purchasers of such systems, they offer little specific guidance for the clinical (...)
    2 citations
  27. From Confucius to Coding and Avicenna to Algorithms: Cultivating Ethical AI Development through Cross-Cultural Ancient Wisdom.Ammar Younas & Yi Zeng - manuscript
    This paper explores the potential of integrating ancient educational principles from diverse eastern cultures into modern AI ethics curricula. It draws on the rich educational traditions of ancient China, India, Arabia, Persia, Japan, Tibet, Mongolia, and Korea, highlighting their emphasis on philosophy, ethics, holistic development, and critical thinking. By examining these historical educational systems, the paper establishes a correlation with modern AI ethics principles, advocating for the inclusion of these ancient teachings in current AI development and education. The proposed integration (...)
  28. AI armageddon and the three laws of robotics.Lee McCauley - 2007 - Ethics and Information Technology 9 (2):153-164.
    After 50 years, the fields of artificial intelligence and robotics capture the imagination of the general public while, at the same time, engendering a great deal of fear and skepticism. Isaac Asimov recognized this deep-seated misconception of technology and created the Three Laws of Robotics. The first part of this paper examines the underlying fear of intelligent robots, revisits Asimov’s response, and reports on some current opinions on the use of the Three Laws by practitioners. Finally, an argument against robotic (...)
    7 citations
  29. Public perception of military AI in the context of techno-optimistic society.Eleri Lillemäe, Kairi Talves & Wolfgang Wagner - forthcoming - AI and Society:1-15.
    In this study, we analyse the public perception of military AI in Estonia, a techno-optimistic country with high support for science and technology. This study involved quantitative survey data from 2021 on the public’s attitudes towards AI-based technology in general, and AI in developing and using weaponised unmanned ground systems (UGS) in particular. UGS are a technology that has been tested in militaries in recent years with the expectation of increasing effectiveness and saving manpower in dangerous military tasks. However, developing (...)
    1 citation
  30. When can we Kick (Some) Humans “Out of the Loop”? An Examination of the use of AI in Medical Imaging for Lumbar Spinal Stenosis.Kathryn Muyskens, Yonghui Ma, Jerry Menikoff, James Hallinan & Julian Savulescu - forthcoming - Asian Bioethics Review:1-17.
    Artificial intelligence (AI) has attracted an increasing amount of attention, both positive and negative. Its potential applications in healthcare are indeed manifold and revolutionary, and within the realm of medical imaging and radiology (which will be the focus of this paper), significant increases in accuracy and speed, as well as significant savings in cost, stand to be gained through the adoption of this technology. Because of its novelty, a norm of keeping humans “in the loop” wherever AI mechanisms are deployed (...)
  31. The autonomy-safety-paradox of service robotics in Europe and Japan: a comparative analysis.Hironori Matsuzaki & Gesa Lindemann - 2016 - AI and Society 31 (4):501-517.
    Service and personal care robots are starting to cross the threshold into the wilderness of everyday life, where they are supposed to interact with inexperienced lay users in a changing environment. In order to function as intended, robots must become independent entities that monitor themselves and improve their own behaviours based on learning outcomes in practice. This poses a great challenge to robotics, which we are calling the “autonomy-safety-paradox” (ASP). The integration of robot applications into society requires the reconciliation (...)
    3 citations
  32. On the Justified Use of AI Decision Support in Evidence-Based Medicine: Validity, Explainability, and Responsibility.Sune Holm - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-7.
    When is it justified to use opaque artificial intelligence (AI) output in medical decision-making? Consideration of this question is of central importance for the responsible use of opaque machine learning (ML) models, which have been shown to produce accurate and reliable diagnoses, prognoses, and treatment suggestions in medicine. In this article, I discuss the merits of two answers to the question. According to the Explanation View, clinicians must have access to an explanation of why an output was produced. According to (...)
    1 citation
  33. Achieving Equity with Predictive Policing Algorithms: A Social Safety Net Perspective.Chun-Ping Yen & Tzu-Wei Hung - 2021 - Science and Engineering Ethics 27 (3):1-16.
    Whereas using artificial intelligence (AI) to predict natural hazards is promising, applying a predictive policing algorithm (PPA) to predict human threats to others continues to be debated. Whereas PPAs were reported to be initially successful in Germany and Japan, the killing of Black Americans by police in the US has sparked a call to dismantle AI in law enforcement. However, although PPAs may statistically associate suspects with economically disadvantaged classes and ethnic minorities, the targeted groups they aim to protect are (...)
    2 citations
  34. Machines learning values.Steve Petersen - 2020 - In S. Matthew Liao (ed.), Ethics of Artificial Intelligence. Oxford University Press.
    Whether it would take one decade or several centuries, many agree that it is possible to create a *superintelligence*---an artificial intelligence with a godlike ability to achieve its goals. And many who have reflected carefully on this fact agree that our best hope for a "friendly" superintelligence is to design it to *learn* values like ours, since our values are too complex to program or hardwire explicitly. But the value learning approach to AI safety faces three particularly philosophical puzzles: (...)
    2 citations
  35. Thinking Inside the Box: Controlling and Using an Oracle AI.Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act (...)
    18 citations
  36. An Outlook for AI Innovation in Multimodal Communication Research.Alexander Henlein, Reetu Bhattacharjee & Jens Lemanski - 2024 - In Vincent G. Duffy (ed.), Digital Human Modeling and Applications in Health, Safety, Ergonomics and Risk Management (HCII 2024). pp. 182–234.
    In the rapidly evolving landscape of multimodal communication research, this paper explores the transformative role of machine learning (ML), particularly using multimodal large language models, in tracking, augmenting, annotating, and analyzing multimodal data. Building upon the foundations laid in our previous work, we explore the capabilities that have emerged over the past years. The integration of ML allows researchers to gain richer insights from multimodal data, enabling a deeper understanding of human (and non-human) communication across modalities. In particular, augmentation methods (...)
  37. Transparency and the Black Box Problem: Why We Do Not Trust AI.Warren J. von Eschenbach - 2021 - Philosophy and Technology 34 (4):1607-1622.
    With automation of routine decisions coupled with more intricate and complex information architecture operating this automation, concerns are increasing about the trustworthiness of these systems. These concerns are exacerbated by a class of artificial intelligence that uses deep learning, an algorithmic system of deep neural networks, which on the whole remain opaque or hidden from human comprehension. This situation is commonly referred to as the black box problem in AI. Without understanding how AI reaches its conclusions, it is an open (...)
    14 citations
  38. Embedding artificial intelligence in society: looking beyond the EU AI master plan using the culture cycle.Simone Borsci, Ville V. Lehtola, Francesco Nex, Michael Ying Yang, Ellen-Wien Augustijn, Leila Bagheriye, Christoph Brune, Ourania Kounadi, Jamy Li, Joao Moreira, Joanne Van Der Nagel, Bernard Veldkamp, Duc V. Le, Mingshu Wang, Fons Wijnhoven, Jelmer M. Wolterink & Raul Zurita-Milla - forthcoming - AI and Society:1-20.
    The European Union Commission’s white paper on Artificial Intelligence proposes shaping the emerging AI market so that it better reflects common European values. It is a master plan that builds upon the EU AI High-Level Expert Group guidelines. This article reviews the master plan, from a culture cycle perspective, to reflect on its potential clashes with current societal, technical, and methodological constraints. We identify two main obstacles in the implementation of this plan: the lack of a coherent EU vision to drive future (...)
    3 citations
  39. Toward Sociotechnical AI: Mapping Vulnerabilities for Machine Learning in Context.Roel Dobbe & Anouk Wolters - 2024 - Minds and Machines 34 (2):1-51.
    This paper provides an empirical and conceptual account of seeing machine learning models as part of a sociotechnical system to identify relevant vulnerabilities emerging in the context of use. As ML is increasingly adopted in socially sensitive and safety-critical domains, many ML applications end up not delivering on their promises, and contributing to new forms of algorithmic harm. There is still a lack of empirical insights as well as conceptual tools and frameworks to properly understand and design for the (...)
  40. Artificial thinking and doomsday projections: a discourse on trust, ethics and safety.Jeffrey White, Dietrich Brandt, Jan Söffner & Larry Stapleton - 2023 - AI and Society 38 (6):2119-2124.
    The article reflects on where AI is headed and the world along with it, considering trust, ethics and safety. Implicit in artificial thinking and doomsday appraisals is the engineered divorce from reality of sublime human embodiment. Jeffrey White, Dietrich Brandt, Jan Söffner, and Larry Stapleton, four scholars associated with AI & Society, address these issues, and more, in the following exchange.
  41. Ethics of automated vehicles: breaking traffic rules for road safety.Nick Reed, Tania Leiman, Paula Palade, Marieke Martens & Leon Kester - 2021 - Ethics and Information Technology 23 (4):777-789.
    In this paper, we explore and describe what is needed to allow connected and automated vehicles to break traffic rules in order to minimise road safety risk and to operate with appropriate transparency. Reviewing current traffic rules with particular reference to two driving situations, we illustrate why current traffic rules are not suitable for CAVs and why making new traffic rules specifically for CAVs would be inappropriate. In defining an alternative approach to achieving safe CAV driving behaviours, we describe (...)
    1 citation
  42. On meaningful human control of AI.Jovana Davidovic - manuscript
    Meaningful human control over AI is exalted as a key tool for assuring safety, dignity, and responsibility for AI and automated decision-systems. It is a central topic especially in fields that deal with the use of AI for decisions that could cause significant harm, like AI-enabled weapons systems. This paper argues that discussions regarding meaningful human control commonly fail to identify the purpose behind the call for meaningful human control and that stating that purpose is a necessary step in (...)
  43. On the person-based predictive policing of AI.Tzu-Wei Hung & Chun-Ping Yen - 2020 - Ethics and Information Technology 23 (3):165-176.
    Should you be targeted by police for a crime that AI predicts you will commit? In this paper, we analyse when, and to what extent, person-based predictive policing (PP) — using AI technology to identify and handle individuals who are likely to breach the law — could be justifiably employed. We first examine PP’s epistemological limits, and then argue that these defects are by no means grounds for refraining from its usage; they are worse in humans. Next, based on major AI ethics (...)
    4 citations
  44. Deny, dismiss and downplay: developers’ attitudes towards risk and their role in risk creation in the field of healthcare-AI.Shaul A. Duke - 2022 - Ethics and Information Technology 24 (1).
    Developers are often the engine behind the creation and implementation of new technologies, including in the artificial intelligence surge that is currently underway. In many cases these new technologies introduce significant risk to affected stakeholders; risks that can be reduced and mitigated by such a dominant party. This is fully recognized by texts that analyze risks in the current AI transformation, which suggest voluntary adoption of ethical standards and imposing ethical standards via regulation and oversight as tools to compel developers (...)
    1 citation
  45. Bringing older people’s perspectives on consumer socially assistive robots into debates about the future of privacy protection and AI governance.Andrea Slane & Isabel Pedersen - forthcoming - AI and Society:1-20.
    A growing number of consumer technology companies are aiming to convince older people that humanoid robots make helpful tools to support aging-in-place. As hybrid devices, socially assistive robots (SARs) are situated between health monitoring tools, familiar digital assistants, security aids, and more advanced AI-powered devices. Consequently, they implicate older people’s privacy in complex ways. Such devices are marketed to perform functions common to smart speakers (e.g., Amazon Echo) and smart home platforms (e.g., Google Home), while other functions are more specific (...)
  46. Institutionalised distrust and human oversight of artificial intelligence: towards a democratic design of AI governance under the European Union AI Act.Johann Laux - forthcoming - AI and Society:1-14.
    Human oversight has become a key mechanism for the governance of artificial intelligence (“AI”). Human overseers are supposed to increase the accuracy and safety of AI systems, uphold human values, and build trust in the technology. Empirical research suggests, however, that humans are not reliable in fulfilling their oversight tasks. They may be lacking in competence or be harmfully incentivised. This creates a challenge for human oversight to be effective. In addressing this challenge, this article aims to make three (...)
  47. Artificial Intelligence: Arguments for Catastrophic Risk.Adam Bales, William D'Alessandro & Cameron Domenico Kirk-Giannini - 2024 - Philosophy Compass 19 (2):e12964.
    Recent progress in artificial intelligence (AI) has drawn attention to the technology’s transformative potential, including what some see as its prospects for causing large-scale harm. We review two influential arguments purporting to show how AI could pose catastrophic risks. The first argument — the Problem of Power-Seeking — claims that, under certain assumptions, advanced AI systems are likely to engage in dangerous power-seeking behavior in pursuit of their goals. We review reasons for thinking that AI systems might seek power, that (...)
    1 citation
  48. Moral Engagement and Disengagement in Health Care AI Development.Ariadne A. Nichol, Meghan Halley, Carole Federico, Mildred K. Cho & Pamela L. Sankar - forthcoming - AJOB Empirical Bioethics.
    Background: Machine learning (ML) is utilized increasingly in health care, and can pose harms to patients, clinicians, health systems, and the public. In response, regulators have proposed an approach that would shift more responsibility to ML developers for mitigating potential harms. To be effective, this approach requires ML developers to recognize, accept, and act on responsibility for mitigating harms. However, little is known regarding the perspectives of developers themselves about their obligations to mitigate harms. Methods: We conducted 40 semi-structured interviews with (...)
  49. Johan Berglund: Why safety cultures degenerate and how to revive them.Richard Ennals - 2017 - AI and Society 32 (2):293-294.
  50. Facing Immersive “Post-Truth” in AIVR?Nadisha-Marie Aliman & Leon Kester - 2020 - Philosophies 5 (4):45.
    In recent years, prevalent global societal issues related to fake news, fakery, misinformation, and disinformation were brought to the fore, leading to the construction of descriptive labels such as “post-truth” to refer to the supposedly new emerging era. Thereby, the (mis-)use of technologies such as AI and VR has been argued to potentially fuel this new loss of “ground-truth”, for instance, via the ethically relevant deepfakes phenomena and the creation of realistic fake worlds, presumably undermining experiential veracity. Indeed, _unethical_ and (...)
1 — 50 / 985