Results for 'AI Risk'

994 found
  1. AI Risk Assessment: A Scenario-Based, Proportional Methodology for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2024 - Digital Society 3 (13):1-29.
    The EU Artificial Intelligence Act (AIA) defines four risk categories for AI systems: unacceptable, high, limited, and minimal. However, it lacks a clear methodology for the assessment of these risks in concrete situations. Risks are broadly categorized based on the application areas of AI systems and ambiguous risk factors. This paper suggests a methodology for assessing AI risk magnitudes, focusing on the construction of real-world risk scenarios. To this end, we propose to integrate the AIA with (...)
    2 citations
  2. Taking AI Risks Seriously: a New Assessment Model for the AI Act.Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To (...)
    3 citations
  3. Extinction Risks from AI: Invisible to Science?Vojtech Kovarik, Christiaan van Merwijk & Ida Mattsson - manuscript
    In an effort to inform the discussion surrounding existential risks from AI, we formulate Extinction-level Goodhart’s Law as “Virtually any goal specification, pursued to the extreme, will result in the extinction of humanity”, and we aim to understand which formal models are suitable for investigating this hypothesis. Note that we remain agnostic as to whether Extinction-level Goodhart’s Law holds or not. As our key contribution, we identify a set of conditions that are necessary for a model that aims to be (...)
  4. Existential risk from AI and orthogonality: Can we have it both ways?Vincent C. Müller & Michael Cannon - 2021 - Ratio 35 (1):25-36.
    The standard argument to the conclusion that artificial intelligence (AI) constitutes an existential risk for the human species uses two premises: (1) AI may reach superintelligent levels, at which point we humans lose control (the ‘singularity claim’); (2) Any level of intelligence can go along with any goal (the ‘orthogonality thesis’). We find that the singularity claim requires a notion of ‘general intelligence’, while the orthogonality thesis requires a notion of ‘instrumental intelligence’. If this interpretation is correct, they cannot (...)
    5 citations
  5. Medical AI, Inductive Risk, and the Communication of Uncertainty: The Case of Disorders of Consciousness.Jonathan Birch - forthcoming - Journal of Medical Ethics.
    Some patients, following brain injury, do not outwardly respond to spoken commands, yet show patterns of brain activity that indicate responsiveness. This is “cognitive-motor dissociation” (CMD). Recent research has used machine learning to diagnose CMD from electroencephalogram (EEG) recordings. These techniques have high false discovery rates, raising a serious problem of inductive risk. It is no solution to communicate the false discovery rates directly to the patient’s family, because this information may confuse, alarm and mislead. Instead, we need a (...)
  6. AI and the falling sky: interrogating X-Risk.Nancy S. Jecker, Caesar Alimsinya Atuire, Jean-Christophe Bélisle-Pipon, Vardit Ravitsky & Anita Ho - forthcoming - Journal of Medical Ethics.
    The Buddhist Jātaka tells the tale of a hare lounging under a palm tree who becomes convinced the Earth is coming to an end when a ripe bael fruit falls on its head. Soon all the hares are running; other animals join them, forming a stampede of deer, boar, elk, buffalo, wild oxen, rhinoceros, tigers and elephants, loudly proclaiming the earth is ending. In the American retelling, the hare is ‘Chicken Little,’ and the exaggerated fear is that the sky is (...)
  7. Matched design for marginal causal effect on restricted mean survival time in observational studies.Bo Lu, Ai Ni & Zihan Lin - 2023 - Journal of Causal Inference 11 (1).
    Investigating the causal relationship between exposure and time-to-event outcome is an important topic in biomedical research. Previous literature has discussed the potential issues of using hazard ratio (HR) as the marginal causal effect measure due to noncollapsibility. In this article, we advocate using restricted mean survival time (RMST) difference as a marginal causal effect measure, which is collapsible and has a simple interpretation as the difference of area under survival curves over a certain time horizon. To address both measured and (...)
  8. AI Deception: A Survey of Examples, Risks, and Potential Solutions.Peter Park, Simon Goldstein, Aidan O'Gara, Michael Chen & Dan Hendrycks - manuscript
    This paper argues that a range of current AI systems have learned how to deceive humans. We define deception as the systematic inducement of false beliefs in the pursuit of some outcome other than the truth. We first survey empirical examples of AI deception, discussing both special-use AI systems (including Meta's CICERO) built for specific competitive situations, and general-purpose AI systems (such as large language models). Next, we detail several risks from AI deception, such as fraud, election tampering, and losing (...)
  9. Fairness and accountability of AI in disaster risk management: Opportunities and challenges.Caroline Gevaert, Mary Carman, Benjamin Rosman, Yola Georgiadou & Robert Soden - 2021 - Patterns 11 (2).
    Artificial Intelligence (AI) is increasingly being used in disaster risk management applications to predict the effect of upcoming disasters, plan for mitigation strategies, and determine who needs how much aid after a disaster strikes. The media is filled with unintended ethical concerns of AI algorithms, such as image recognition algorithms not recognizing persons of color or racist algorithmic predictions of whether offenders will recidivate. We know such unintended ethical consequences must play a role in DRM as well, yet there (...)
    1 citation
  10. Current cases of AI misalignment and their implications for future risks.Leonard Dung - 2023 - Synthese 202 (5):1-23.
    How can one build AI systems such that they pursue the goals their designers want them to pursue? This is the alignment problem. Numerous authors have raised concerns that, as research advances and systems become more powerful over time, misalignment might lead to catastrophic outcomes, perhaps even to the extinction or permanent disempowerment of humanity. In this paper, I analyze the severity of this risk based on current instances of misalignment. More specifically, I argue that contemporary large language models (...)
    2 citations
  11. Evaluating approaches for reducing catastrophic risks from AI.Leonard Dung - 2024 - AI and Ethics.
    According to a growing number of researchers, AI may pose catastrophic – or even existential – risks to humanity. Catastrophic risks may be taken to be risks of 100 million human deaths, or a similarly bad outcome. I argue that such risks – while contested – are sufficiently likely to demand rigorous discussion of potential societal responses. Subsequently, I propose four desiderata for approaches to the reduction of catastrophic risks from AI. The quality of such approaches can be assessed by (...)
  12. AI-Related Risk: An Epistemological Approach.Giacomo Zanotti, Daniele Chiffi & Viola Schiaffonati - 2024 - Philosophy and Technology 37 (2):1-18.
    Risks connected with AI systems have become a recurrent topic in public and academic debates, and the European proposal for the AI Act explicitly adopts a risk-based tiered approach that associates different levels of regulation with different levels of risk. However, a comprehensive and general framework to think about AI-related risk is still lacking. In this work, we aim to provide an epistemological analysis of such risk building upon the existing literature on disaster risk analysis (...)
  13. Innovation, risk and control: The true trend is ‘from tool to purpose’—A discussion on the standardization of AI.Oriana Chaves - forthcoming - AI and Society:1-12.
    In this text, our question is: what is the current regulatory trend in countries, such as Brazil, that are not considered central to the development of artificial intelligence: a preventive approach or an experimental approach? We analyze the bills (PLs) being processed in legislative houses at the state and federal levels, highlighting elements such as delimitation of the object (conceptualization), fundamental principles, ethical guidelines, the relationship with human work, human supervision, and guidelines for public (...)
  14. AI and suicide risk prediction: Facebook live and its aftermath.Dolores Peralta - forthcoming - AI and Society:1-13.
    As suicide rates increase worldwide, the mental health industry has reached an impasse in attempts to assess patients, predict risk, and prevent suicide. Traditional assessment tools are no more accurate than chance, prompting the need to explore new avenues in artificial intelligence (AI). Early studies into these tools show potential with higher accuracy rates than previous methods alone. Medical researchers, computer scientists, and social media companies are exploring these avenues. While Facebook leads the pack, its efforts stem from scrutiny (...)
  15. Identify and Assess Hydropower Project’s Multidimensional Social Impacts with Rough Set and Projection Pursuit Model.Hui An, Wenjing Yang, Jin Huang, Ai Huang, Zhongchi Wan & Min An - 2020 - Complexity 2020:1-16.
    To realize the coordinated and sustainable development of hydropower projects and regional society, comprehensively evaluating hydropower projects’ influence is critical. Usually, hydropower project development has an impact on environmental geology and social and regional cultural development. Based on comprehensive consideration of complicated geological conditions, fragile ecological environment, resettlement of reservoir area, and other factors of future hydropower development in each country, we have constructed a comprehensive evaluation index system of hydropower projects, including 4 first-level indicators of social economy, environment, safety, (...)
    1 citation
  16. Risks of artificial intelligence.Vincent C. Müller (ed.) - 2016 - CRC Press - Chapman & Hall.
    Papers from the conference on AI Risk (published in JETAI), supplemented by additional work. --- If the intelligence of artificial systems were to surpass that of humans, humanity would face significant risks. The time has come to consider these issues, and this consideration must include progress in artificial intelligence (AI) as much as insights from AI theory. -- Featuring contributions from leading experts and thinkers in artificial intelligence, Risks of Artificial Intelligence is the first volume of collected chapters dedicated (...)
  17. Innovating with confidence: embedding AI governance and fairness in a financial services risk management framework.Luciano Floridi, Michelle Seng Ah Lee & Alexander Denev - 2020 - Berkeley Technology Law Journal 34.
    An increasing number of financial services (FS) companies are adopting solutions driven by artificial intelligence (AI) to gain operational efficiencies, derive strategic insights, and improve customer engagement. However, the rate of adoption has been low, in part due to the apprehension around its complexity and self-learning capability, which makes auditability a challenge in a highly regulated industry. There is limited literature on how FS companies can implement the governance and controls specific to AI-driven solutions. AI auditing cannot be performed in (...)
  18. Why AI Doomsayers are Like Sceptical Theists and Why it Matters.John Danaher - 2015 - Minds and Machines 25 (3):231-246.
    An advanced artificial intelligence could pose a significant existential risk to humanity. Several research institutes have been set up to address those risks, and there is an increasing number of academic publications analysing and evaluating their seriousness. Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies represents the apotheosis of this trend. In this article, I argue that in defending the credibility of AI risk, Bostrom makes an epistemic move that is analogous to one made by so-called sceptical theists in the debate (...)
    4 citations
  19. AI4People—an ethical framework for a good AI society: opportunities, risks, principles, and recommendations.Luciano Floridi, Josh Cowls, Monica Beltrametti, Raja Chatila, Patrice Chazerand, Virginia Dignum, Christoph Luetge, Robert Madelin, Ugo Pagallo, Francesca Rossi, Burkhard Schafer, Peggy Valcke & Effy Vayena - 2018 - Minds and Machines 28 (4):689-707.
    This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other (...)
    181 citations
  20. Ethics in Online AI-Based Systems: Risks and Opportunities in Current Technological Trends.Joan Casas-Roma, Santi Caballe & Jordi Conesa (eds.) - 2024 - Academic Press.
    Recent technological advancements have deeply transformed society and the way people interact with each other. Instantaneous communication platforms have allowed connections with other people, forming global communities, and creating unprecedented opportunities in many sectors, making access to online resources more ubiquitous by reducing limitations imposed by geographical distance and temporal constraints. These technological developments bear ethically relevant consequences with their deployment, and legislation often lags behind such advancements. Because the appearance and deployment of these technologies happen much faster than legislative (...)
  21. All too human? Identifying and mitigating ethical risks of Social AI.Henry Shevlin - manuscript
    This paper presents an overview of the risks and benefits of Social AI, understood as conversational AI systems that cater to human social needs like romance, companionship, or entertainment. Section 1 of the paper provides a brief history of conversational AI systems and introduces conceptual distinctions to help distinguish varieties of Social AI and pathways to their deployment. Section 2 of the paper adds further context via a brief discussion of anthropomorphism and its relevance to assessment of human-chatbot relationships. Section (...)
    1 citation
  22. How to deal with risks of AI suffering.Leonard Dung - forthcoming - Inquiry: An Interdisciplinary Journal of Philosophy.
    1. 1.1. Suffering is bad. This is why, ceteris paribus, there are strong moral reasons to prevent suffering. Moreover, typically, those moral reasons are stronger when the amount of suffering at st...
    1 citation
  23. Three lines of defense against risks from AI.Jonas Schuett - forthcoming - AI and Society:1-15.
    Organizations that develop and deploy artificial intelligence (AI) systems need to manage the associated risks—for economic, legal, and ethical reasons. However, it is not always clear who is responsible for AI risk management. The three lines of defense (3LoD) model, which is considered best practice in many industries, might offer a solution. It is a risk management framework that helps organizations to assign and coordinate risk management roles and responsibilities. In this article, I suggest ways in which (...)
    1 citation
  24. Editorial: Risks of artificial intelligence.Vincent C. Müller - 2015 - In Risks of general intelligence. CRC Press - Chapman & Hall. pp. 1-8.
    If the intelligence of artificial systems were to surpass that of humans significantly, this would constitute a significant risk for humanity. The time has come to consider these issues, and this consideration must include progress in AI as much as insights from the theory of AI. The papers in this volume try to make cautious headway in setting the problem, evaluating predictions on the future of AI, proposing ways to ensure that AI systems will be beneficial to humans – and (...)
    1 citation
  25. The Emotional Risk Posed by AI (Artificial Intelligence) in the Workplace.Maria Danielsen - 2023 - Norsk Filosofisk Tidsskrift 58 (2-3):106-117.
    The existential risk posed by ubiquitous artificial intelligence (AI) is a subject of frequent discussion with descriptions of the prospect of misuse, the fear of mass destruction, and the singularity. In this paper I address an under-explored category of existential risk posed by AI, namely emotional risk. Values are a main source of emotions. By challenging some of our most essential values, AI systems are therefore likely to expose us to emotional risks such as loss of care (...)
  26. The Blueprint for an AI Bill of Rights: In Search of Enaction, at Risk of Inaction.Emmie Hine & Luciano Floridi - 2023 - Minds and Machines 33 (2):285-292.
    The US is promoting a new vision of a “Good AI Society” through its recent AI Bill of Rights. This offers a promising vision of community-oriented equity unique amongst peer countries. However, it leaves the door open for potential rights violations. Furthermore, it may have some federal impact, but it is non-binding, and without concrete legislation, the private sector is likely to ignore it.
  27. Welcome to the Machine: AI, Existential Risk, and the Iron Cage of Modernity.Jay A. Gupta - 2023 - Telos: Critical Theory of the Contemporary 2023 (203):163-169.
    Excerpt: Recent advances in the functional power of artificial intelligence (AI) have prompted an urgent warning from industry leaders and researchers concerning its “profound risks to society and humanity.”1 Their open letter is admirable not only for its succinct identification of said risks, which include the mass dissemination of misinformation, loss of jobs, and even the possible extinction of our species, but also for its clear normative framing of the problem: “Should we let machines flood our information channels with propaganda and (...)
  28. When Doctors and AI Interact: on Human Responsibility for Artificial Risks.Mario Verdicchio & Andrea Perin - 2022 - Philosophy and Technology 35 (1):1-28.
    A discussion concerning whether to conceive Artificial Intelligence systems as responsible moral entities, also known as “artificial moral agents”, has been going on for some time. In this regard, we argue that the notion of “moral agency” is to be attributed only to humans based on their autonomy and sentience, which AI systems lack. We analyze human responsibility in the presence of AI systems in terms of meaningful control and due diligence and argue against fully automated systems in medicine. With (...)
    1 citation
  29. Classification of Global Catastrophic Risks Connected with Artificial Intelligence.Alexey Turchin & David Denkenberger - 2020 - AI and Society 35 (1):147-163.
    A classification of the global catastrophic risks of AI is presented, along with a comprehensive list of previously identified risks. This classification allows the identification of several new risks. We show that at each level of AI’s intelligence power, separate types of possible catastrophes dominate. Our classification demonstrates that the field of AI risks is diverse, and includes many scenarios beyond the commonly discussed cases of a paperclip maximizer or robot-caused unemployment. Global catastrophic failure could happen at various levels of (...)
    11 citations
  30. AI-Related Misdirection Awareness in AIVR.Nadisha-Marie Aliman & Leon Kester - manuscript
    Recent AI progress led to a boost in beneficial applications from multiple research areas including VR. Simultaneously, in this newly unfolding deepfake era, ethically and security-relevant disagreements arose in the scientific community regarding the epistemic capabilities of present-day AI. However, given what is at stake, one can postulate that for a responsible approach, prior to engaging in a rigorous epistemic assessment of AI, humans may profit from a self-questioning strategy, an examination and calibration of the experience of their own epistemic (...)
  31. Exploring Factors of the Willingness to Accept AI-Assisted Learning Environments: An Empirical Investigation Based on the UTAUT Model and Perceived Risk Theory.Wentao Wu, Ben Zhang, Shuting Li & Hehai Liu - 2022 - Frontiers in Psychology 13.
    Artificial intelligence technology has been widely applied in many fields. AI-assisted learning environments have been implemented in classrooms to facilitate the innovation of pedagogical models. However, college students' willingness to accept AI-assisted learning environments has been ignored. Exploring the factors that influence college students' willingness to use AI can promote AI technology application in higher education. Based on the Unified Theory of Acceptance and Use of Technology and the theory of perceived risk, this study identified six factors that influence (...)
  32. New developments in the philosophy of AI.Vincent C. Müller - 2016 - In Vincent C. Müller (ed.), Fundamental Issues of Artificial Intelligence. Cham: Springer.
    The philosophy of AI has seen some changes, in particular: 1) AI moves away from cognitive science, and 2) the long term risks of AI now appear to be a worthy concern. In this context, the classical central concerns – such as the relation of cognition and computation, embodiment, intelligence & rationality, and information – will regain urgency.
    11 citations
  33. Understanding and Avoiding AI Failures: A Practical Guide.Robert Williams & Roman Yampolskiy - 2021 - Philosophies 6 (3):53.
    As AI technologies increase in capability and ubiquity, AI accidents are becoming more common. Based on normal accident theory, high reliability theory, and open systems theory, we create a framework for understanding the risks associated with AI applications. This framework is designed to direct attention to pertinent system properties without requiring unwieldy amounts of accuracy. In addition, we also use AI safety principles to quantify the unique risks of increased intelligence and human-like qualities in AI. Together, these two fields give (...)
    1 citation
  34. Deny, dismiss and downplay: developers’ attitudes towards risk and their role in risk creation in the field of healthcare-AI.Shaul A. Duke - 2022 - Ethics and Information Technology 24 (1).
    Developers are often the engine behind the creation and implementation of new technologies, including in the artificial intelligence surge that is currently underway. In many cases these new technologies introduce significant risk to affected stakeholders; risks that can be reduced and mitigated by such a dominant party. This is fully recognized by texts that analyze risks in the current AI transformation, which suggest voluntary adoption of ethical standards and imposing ethical standards via regulation and oversight as tools to compel (...)
    1 citation
  35. Ethical considerations in Risk management of autonomous and intelligent systems.Anetta Jedličková - 2024 - Ethics and Bioethics (in Central Europe) 14 (1-2):80-95.
    The rapid development of Artificial Intelligence (AI) has raised concerns regarding the potential risks it may pose to humans, society, and the environment. Recent advancements have intensified these concerns, emphasizing the need for a deeper understanding of the technical, societal, and ethical aspects that could lead to adverse or harmful failures in decisions made by autonomous and intelligent systems (AIS). This paper aims to examine the ethical dimensions of risk management in AIS. Its objective is to highlight the significance (...)
  36. AI research ethics is in its infancy: the EU’s AI Act can make it a grown-up.Anaïs Resseguier & Fabienne Ufert - 2024 - Research Ethics 20 (2):143-155.
    As the artificial intelligence (AI) ethics field is currently working towards its operationalisation, ethics review as carried out by research ethics committees (RECs) constitutes a powerful, but so far underdeveloped, framework to make AI ethics effective in practice at the research level. This article contributes to the elaboration of research ethics frameworks for research projects developing and/or using AI. It highlights that these frameworks are still in their infancy and in need of a structure and criteria to ensure AI research (...)
  37. Clinical Decisions Using AI Must Consider Patient Values.Jonathan Birch, Kathleen A. Creel, Abhinav K. Jha & Anya Plutynski - 2022 - Nature Medicine 28:229–232.
    Built-in decision thresholds for AI diagnostics are ethically problematic, as patients may differ in their attitudes about the risk of false-positive and false-negative results, which will require that clinicians assess patient values.
    1 citation
  38. No we shouldn’t be afraid of medical AI; it involves risks and opportunities.Rosalind J. McDougall - 2019 - Journal of Medical Ethics 45 (8):559-559.
    In contrast to Di Nucci’s characterisation, my argument is not a technoapocalyptic one. The view I put forward is that systems like IBM’s Watson for Oncology create both risks and opportunities from the perspective of shared decision-making. In this response, I address the issues that Di Nucci raises and highlight the importance of bioethicists engaging critically with these developing technologies.
    3 citations
  39. Risks of artificial general intelligence.Vincent C. Müller (ed.) - 2014 - Taylor & Francis (JETAI).
    Special Issue “Risks of artificial general intelligence”, Journal of Experimental and Theoretical Artificial Intelligence, 26/3 (2014), ed. Vincent C. Müller. http://www.tandfonline.com/toc/teta20/26/3# - Risks of general artificial intelligence, Vincent C. Müller, pages 297-301 - Autonomous technology and the greater human good - Steve Omohundro - pages 303-315 - - - The errors, insights and lessons of famous AI predictions – and what they mean for the future - Stuart Armstrong, Kaj Sotala & Seán S. Ó hÉigeartaigh - pages 317-342 - - (...)
    3 citations
  40. Opening the black boxes of the black carpet in the era of risk society: a sociological analysis of AI, algorithms and big data at work through the case study of the Greek postal services.Christos Kouroutzas & Venetia Palamari - forthcoming - AI and Society:1-14.
    This article draws on contributions from the Sociology of Science and Technology and Science and Technology Studies, the Sociology of Risk and Uncertainty, and the Sociology of Work, focusing on the transformations of employment regarding expanded automation, robotization and informatization. The new work patterns emerging due to the introduction of software and hardware technologies, which are based on artificial intelligence, algorithms, big data gathering and robotic systems are examined closely. This article attempts to “open the black boxes” of the (...)
  41. The argument for near-term human disempowerment through AI.Leonard Dung - 2024 - AI and Society:1-14.
    Many researchers and intellectuals warn about extreme risks from artificial intelligence. However, these warnings typically came without systematic arguments in support. This paper provides an argument that AI will lead to the permanent disempowerment of humanity, e.g. human extinction, by 2100. It rests on four substantive premises which it motivates and defends: first, the speed of advances in AI capability, as well as the capability level current systems have already reached, suggest that it is practically possible to build AI systems (...)
    1 citation
  42. Combating Disinformation with AI: Epistemic and Ethical Challenges.Benjamin Lange & Ted Lechterman - 2021 - IEEE International Symposium on Ethics in Engineering, Science and Technology (ETHICS) 1:1-5.
    AI-supported methods for identifying and combating disinformation are progressing in their development and application. However, these methods face a litany of epistemic and ethical challenges. These include (1) robustly defining disinformation, (2) reliably classifying data according to this definition, and (3) navigating ethical risks in the deployment of countermeasures, which involve a mixture of harms and benefits. This paper seeks to expose and offer preliminary analysis of these challenges.
  43. Against AI-improved Personal Memory.Björn Lundgren - 2020 - In Aging between Participation and Simulation. pp. 223–234.
    In 2017, Tom Gruber held a TED talk, in which he presented a vision of improving and enhancing humanity with AI technology. Specifically, Gruber suggested that an AI-improved personal memory (APM) would benefit people by improving their “mental gain”, making us more creative, improving our “social grace”, enabling us to do “science on our own data about what makes us feel good and stay healthy”, and, for people suffering from dementia, it “could make a difference between a life of isolation (...)
  44. Language Agents Reduce the Risk of Existential Catastrophe.Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
  45. AI Safety: A Climb To Armageddon?Herman Cappelen, Josh Dever & John Hawthorne - manuscript
    This paper presents an argument that certain AI safety measures, rather than mitigating existential risk, may instead exacerbate it. Under certain key assumptions - the inevitability of AI failure, the expected correlation between an AI system's power at the point of failure and the severity of the resulting harm, and the tendency of safety measures to enable AI systems to become more powerful before failing - safety efforts have negative expected utility. The paper examines three response strategies: Optimism, Mitigation, (...)
  46. Existentialist risk and value misalignment.Ariela Tubert & Justin Tiehen - forthcoming - Philosophical Studies.
    We argue that two long-term goals of AI research stand in tension with one another. The first involves creating AI that is safe, where this is understood as solving the problem of value alignment. The second involves creating artificial general intelligence, meaning AI that operates at or beyond human capacity across all or many intellectual domains. Our argument focuses on the human capacity to make what we call “existential choices”, choices that transform who we are as persons, including transforming what (...)
  47. Living with Uncertainty: Full Transparency of AI isn’t Needed for Epistemic Trust in AI-based Science.Uwe Peters - forthcoming - Social Epistemology Review and Reply Collective.
    Can AI developers be held epistemically responsible for the processing of their AI systems when these systems are epistemically opaque? And can explainable AI (XAI) provide public justificatory reasons for opaque AI systems’ outputs? Koskinen (2024) gives negative answers to both questions. Here, I respond to her and argue for affirmative answers. More generally, I suggest that when considering people’s uncertainty about the factors causally determining an opaque AI’s output, it might be worth keeping in mind that a degree of (...)
  48. The AI gambit — leveraging artificial intelligence to combat climate change: opportunities, challenges, and recommendations.Josh Cowls, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi - 2021 - Vodafone Institute for Society and Communications.
    In this article we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combating the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the (...)
  49. The AI gambit: leveraging artificial intelligence to combat climate change—opportunities, challenges, and recommendations.Josh Cowls, Andreas Tsamados, Mariarosaria Taddeo & Luciano Floridi - 2021 - AI and Society:1-25.
    In this article, we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combatting the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and (...)
  50. Domesticating AI technology in public services. The case of the City of Espoo’s artificial intelligence experiment.Marja Alastalo, Jaana Parviainen & Marta Choroszewicz - 2022 - Yhteiskuntapolitiikka 87 (3):185–196.
    Public sector institutions are increasingly investing resources in data collection and data analytics to provide better public services at lower cost, to anticipate demand for services, to identify high-risk groups, and to develop targeted interventions. Prior research has shown that the media shape understanding of the possibilities of technology and create related expectations. In this article we explore how artificial intelligence and emerging data-driven technologies are made familiar and by whose voices they are talked about in the media. Empirically, (...)