Results for 'ai safety'

984 found
  1. Identify and Assess Hydropower Project’s Multidimensional Social Impacts with Rough Set and Projection Pursuit Model. Hui An, Wenjing Yang, Jin Huang, Ai Huang, Zhongchi Wan & Min An - 2020 - Complexity 2020:1-16.
    To realize the coordinated and sustainable development of hydropower projects and regional society, comprehensively evaluating hydropower projects’ influence is critical. Hydropower project development typically affects environmental geology as well as social and regional cultural development. Based on comprehensive consideration of the complicated geological conditions, fragile ecological environments, reservoir-area resettlement, and other factors of future hydropower development in each country, we have constructed a comprehensive evaluation index system for hydropower projects, including 4 first-level indicators of social economy, environment, (...), and fairness, which contain 26 second-level indicators. To solve the problem that existing models cannot evaluate dynamic nonlinear optimization, a projection pursuit model is constructed using rough set reduction theory to simplify the index set. An accelerated genetic algorithm based on real-number coding is then used to solve the model, and an empirical study is carried out with the Y hydropower station as a sample. The evaluation results show that the evaluation index system and assessment model constructed in our paper effectively reduce the subjectivity of index weighting. Applying our model to the social impact assessment of related international hydropower projects can not only comprehensively analyze the social impact of hydropower projects but also identify important social influencing factors and effectively analyze the social impact level of each dimension. Furthermore, such SIA assessment can be conducive to project decision-making, avoiding social risks and maintaining social stability.
    1 citation
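    The method this abstract describes is a standard pattern in the comprehensive-evaluation literature: project each alternative's normalised indicator vector onto a direction chosen to maximise a projection index, and search for that direction with a real-coded genetic algorithm. Below is a minimal, hypothetical sketch of that pattern, not the authors' exact model; the index form, window radius R, and all GA settings are illustrative assumptions, and the rough-set reduction step (shrinking the 26 indicators) is omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def projection_index(a, X, R=0.1):
    """Spread of the projected samples times a local-density term;
    larger values indicate a more informative projection direction."""
    z = X @ a                          # 1-D projection of each sample
    s = z.std()                        # between-sample spread
    r = np.abs(z[:, None] - z[None, :])
    d = np.sum((R - r) * (r < R))      # pairs falling within window R
    return s * d

def best_direction(X, pop=60, gens=200, elite=12, sigma=0.1):
    """Minimal real-coded genetic algorithm over unit projection vectors."""
    P = rng.normal(size=(pop, X.shape[1]))
    P /= np.linalg.norm(P, axis=1, keepdims=True)
    for _ in range(gens):
        fit = np.array([projection_index(a, X) for a in P])
        keep = P[np.argsort(fit)[::-1][:elite]]                 # selection
        kids = keep[rng.integers(elite, size=pop - elite)]      # crossover stand-in
        kids = kids + rng.normal(scale=sigma, size=kids.shape)  # mutation
        P = np.vstack([keep, kids])
        P /= np.linalg.norm(P, axis=1, keepdims=True)           # stay on the unit sphere
    fit = np.array([projection_index(a, X) for a in P])
    return P[np.argmax(fit)]

# Toy data: 30 alternatives scored on 26 normalised indicators.
X = rng.random((30, 26))
a = best_direction(X)
print(X @ a)   # one-dimensional "social impact" scores
```

    Ranking alternatives by the projected scores is what removes hand-picked index weights: the weighting emerges from the optimised direction rather than from expert judgment.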
  2. AI safety: necessary, but insufficient and possibly problematic. Deepak P. - forthcoming - AI and Society:1-3.
  3. Acceleration AI Ethics, the Debate between Innovation and Safety, and Stability AI’s Diffusion versus OpenAI’s Dall-E. James Brusseau - manuscript
    One objection to conventional AI ethics is that it slows innovation. This presentation responds by reconfiguring ethics as an innovation accelerator. The critical elements develop from a contrast between Stability AI’s Diffusion and OpenAI’s Dall-E. By analyzing the divergent values underlying their opposed strategies for development and deployment, five conceptions are identified as common to acceleration ethics. Uncertainty is understood as positive and encouraging, rather than discouraging. Innovation is conceived as intrinsically valuable, instead of worthwhile only as mediated by social (...)
  4. Applying ethics to AI in the workplace: the design of a scorecard for Australian workplace health and safety. Andreas Cebulla, Zygmunt Szpak, Catherine Howell, Genevieve Knight & Sazzad Hussain - 2023 - AI and Society 38 (2):919-935.
    Artificial Intelligence (AI) is taking centre stage in economic growth and business operations alike. Public discourse about the practical and ethical implications of AI has mainly focussed on the societal level. There is an emerging knowledge base on AI risks to human rights around data security and privacy concerns. A separate strand of work has highlighted the stresses of working in the gig economy. This prevailing focus on human rights and gig impacts has been at the expense of a closer (...)
  5. Safety Engineering for Artificial General Intelligence. Roman Yampolskiy & Joshua Fox - 2013 - Topoi 32 (2):217-226.
    Machine ethics and robot rights are quickly becoming hot topics in artificial intelligence and robotics communities. We will argue that attempts to attribute moral agency and assign rights to all intelligent machines are misguided, whether applied to infrahuman or superhuman AIs, as are proposals to limit the negative effects of AIs by constraining their behavior. As an alternative, we propose a new science of safety engineering for intelligent artificial agents based on maximizing for what humans value. In particular, we (...)
    6 citations
  6. Social Choice for AI Alignment: Dealing with Diverse Human Feedback. Vincent Conitzer, Rachel Freedman, Jobst Heitzig, Wesley H. Holliday, Bob M. Jacobs, Nathan Lambert, Milan Mosse, Eric Pacuit, Stuart Russell, Hailey Schoelkopf, Emanuel Tewolde & William S. Zwicker - manuscript
    Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, so that, for example, they refuse to comply with requests for help with committing crimes or with producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans' expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But how do we deal with potentially diverging input from humans? How (...)
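    The preference-learning step the abstract mentions is commonly implemented with a Bradley-Terry reward model: pairwise human judgments are pooled and latent scores s are fit so that P(i beats j) = sigmoid(s_i - s_j). A minimal, hypothetical sketch follows (toy data, not the paper's method); it shows where the social-choice question bites, since pooling treats all annotators as a single voter.

```python
import numpy as np

# Pairwise judgments (winner, loser) over 4 candidate outputs,
# pooled from annotators whose preferences partly conflict (toy data).
comparisons = [(0, 1), (0, 2), (1, 2), (3, 0), (3, 0), (2, 3)]

def bradley_terry(comparisons, n_items, lr=0.05, steps=2000):
    """Fit scores s with P(i beats j) = sigmoid(s_i - s_j) by
    gradient ascent on the log-likelihood of the comparisons."""
    s = np.zeros(n_items)
    for _ in range(steps):
        grad = np.zeros(n_items)
        for w, l in comparisons:
            p_lose = 1.0 / (1.0 + np.exp(s[w] - s[l]))  # 1 - P(w beats l)
            grad[w] += p_lose   # push the winner's score up
            grad[l] -= p_lose   # and the loser's score down
        s += lr * grad
    return s - s.mean()         # scores are identifiable only up to a constant

scores = bradley_terry(comparisons, n_items=4)
print(np.argsort(scores)[::-1])  # pooled ranking of the four outputs
```

    Whether this pooled ranking treats minority annotators fairly, or whether aggregation should instead follow an explicit voting rule, is exactly the question the paper raises.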
  7. Understanding and Avoiding AI Failures: A Practical Guide. Robert Williams & Roman Yampolskiy - 2021 - Philosophies 6 (3):53.
    As AI technologies increase in capability and ubiquity, AI accidents are becoming more common. Based on normal accident theory, high reliability theory, and open systems theory, we create a framework for understanding the risks associated with AI applications. This framework is designed to direct attention to pertinent system properties without requiring unwieldy amounts of accuracy. In addition, we also use AI safety principles to quantify the unique risks of increased intelligence and human-like qualities in AI. Together, these two fields (...)
    1 citation
  8. AI as IA: The use and abuse of artificial intelligence (AI) for human enhancement through intellectual augmentation (IA). Alexandre Erler & Vincent C. Müller - 2023 - In Fabrice Jotterand & Marcello Ienca (eds.), The Routledge Handbook of the Ethics of Human Enhancement. Routledge. pp. 187-199.
    This paper offers an overview of the prospects and ethics of using AI to achieve human enhancement, and more broadly what we call intellectual augmentation (IA). After explaining the central notions of human enhancement, IA, and AI, we discuss the state of the art in terms of the main technologies for IA, with or without brain-computer interfaces. Given this picture, we discuss potential ethical problems, namely inadequate performance, safety, coercion and manipulation, privacy, cognitive liberty, authenticity, and fairness in more (...)
  9. The Shutdown Problem: An AI Engineering Puzzle for Decision Theorists. Elliott Thornley - forthcoming - Philosophical Studies.
    I explain the shutdown problem: the problem of designing artificial agents that (1) shut down when a shutdown button is pressed, (2) don’t try to prevent or cause the pressing of the shutdown button, and (3) otherwise pursue goals competently. I prove three theorems that make the difficulty precise. These theorems show that agents satisfying some innocuous-seeming conditions will often try to prevent or cause the pressing of the shutdown button, even in cases where it’s costly to do so. And (...)
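    A toy expected-utility calculation (illustrative numbers only, not one of the paper's theorems) makes the incentive problem vivid: unless the utilities of staying on and of being shut down are almost exactly balanced, manipulating the button beats leaving it alone.

```python
# Hypothetical payoffs for an expected-utility maximiser.
U_ON = 10.0    # value the agent expects if it keeps running
U_OFF = 1.0    # value it expects if it is shut down
COST = 0.5     # cost of interfering with the button
P_PRESS = 0.3  # chance the button gets pressed if left alone

actions = {
    "leave the button alone": (1 - P_PRESS) * U_ON + P_PRESS * U_OFF,
    "block the button":       U_ON - COST,
    "press the button":       U_OFF - COST,
}
for act, eu in sorted(actions.items(), key=lambda kv: -kv[1]):
    print(f"{act}: {eu:.2f}")
# Blocking wins here (9.50 vs 7.30); with U_OFF > U_ON the agent would
# instead press the button itself. Neutrality holds only on a knife
# edge, which is the difficulty the paper's theorems make precise.
```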
  10. The argument for near-term human disempowerment through AI. Leonard Dung - 2024 - AI and Society:1-14.
    Many researchers and intellectuals warn about extreme risks from artificial intelligence. However, these warnings have typically come without systematic supporting arguments. This paper provides an argument that AI will lead to the permanent disempowerment of humanity, e.g. human extinction, by 2100. It rests on four substantive premises which it motivates and defends: first, the speed of advances in AI capability, as well as the capability level current systems have already reached, suggest that it is practically possible to build AI systems (...)
    1 citation
  11. Safety by simulation: theorizing the future of robot regulation. Mika Viljanen - 2024 - AI and Society 39 (1):139-154.
    Mobility robots may soon be among us, triggering a need for safety regulation. Robot safety regulation, however, remains underexplored, with only a few articles analyzing what regulatory approaches could be feasible. This article offers an account of the available regulatory strategies and attempts to theorize the effects of simulation-based safety regulation. The article first discusses the distinctive features of mobility robots as regulatory targets and argues that emergent behavior constitutes the key regulatory concern in designing robot safety regulation regimes. In contrast to many accounts, the article posits that emergent behavior dynamics do not arise from robot autonomy, learning capability, or code unexplainability. Instead, they emerge from the complexity of robot technological constitutions coupled with near-infinite environmental variability and the non-linear performance dynamics of the machine learning components. Second, the article reviews rules-based and performance-based regulation and argues that both will fail to adequately constrain emergent robot behaviors. The article claims that controlling mobility robots requires a simulation-based regulatory approach. Simulation-based regulation is a novelty with significant theoretical and practical implications. The article argues that the approach signifies a radical break in regulatory forms of knowledge and temporalities. Simulations enact virtual futures to create a new regulatory knowledge type. Practically, this novel safety knowledge type may destabilize the existing conceptual space of safety politics and liability allocation patterns.
    1 citation
  12. Safety requirements vs. crashing ethically: what matters most for policies on autonomous vehicles. Björn Lundgren - forthcoming - AI and Society:1-11.
    The philosophical–ethical literature and the public debate on autonomous vehicles have been obsessed with ethical issues related to crashing. In this article, these discussions, including more empirical investigations, will be critically assessed. It is argued that a related and more pressing issue is questions concerning safety. For example, what should we require from autonomous vehicles when it comes to safety? What do we mean by ‘safety’? How do we measure it? In response to these questions, the article (...)
    8 citations
  13. Anthropomorphism in AI. Arleen Salles, Kathinka Evers & Michele Farisco - 2020 - American Journal of Bioethics Neuroscience 11 (2):88-95.
    AI research is growing rapidly, raising various ethical issues related to safety, risks, and other effects widely discussed in the literature. We believe that in order to adequately address those issues and engage in a productive normative discussion it is necessary to examine key concepts and categories. One such category is anthropomorphism. It is a well-known fact that AI’s functionalities and innovations are often anthropomorphized. The general public’s anthropomorphic attitudes and some of their ethical consequences have been widely discussed (...)
    19 citations
  14. AI armageddon and the three laws of robotics. Lee McCauley - 2007 - Ethics and Information Technology 9 (2):153-164.
    After 50 years, the fields of artificial intelligence and robotics capture the imagination of the general public while, at the same time, engendering a great deal of fear and skepticism. Isaac Asimov recognized this deep-seated misconception of technology and created the Three Laws of Robotics. The first part of this paper examines the underlying fear of intelligent robots, revisits Asimov’s response, and reports on some current opinions on the use of the Three Laws by practitioners. Finally, an argument against robotic (...)
    7 citations
  15. How to design AI for social good: seven essential factors. Luciano Floridi, Josh Cowls, Thomas C. King & Mariarosaria Taddeo - 2020 - Science and Engineering Ethics 26 (3):1771–1796.
    The idea of artificial intelligence for social good is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are (...)
    36 citations
  16. AI Case Studies: Potential for Human Health, Space Exploration and Colonisation and a Proposed Superimposition of the Kubler-Ross Change Curve on the Hype Cycle. Martin Braddock & Matthew Williams - 2019 - Studia Humana 8 (1):3-18.
    The development and deployment of artificial intelligence (AI) is profoundly reshaping, and will continue to reshape, human society, culture, and the composition of the civilisations which make up humankind. All technological triggers tend to drive a hype curve which over time is realised by an output which is often unexpected, taking both pessimistic and optimistic perspectives and actions of drivers, contributors and enablers on a journey where the ultimate destination may be unclear. In this paper we hypothesise that this journey is not (...)
  17. AI and suicide risk prediction: Facebook live and its aftermath. Dolores Peralta - forthcoming - AI and Society:1-13.
    As suicide rates increase worldwide, the mental health industry has reached an impasse in attempts to assess patients, predict risk, and prevent suicide. Traditional assessment tools are no more accurate than chance, prompting the need to explore new avenues in artificial intelligence (AI). Early studies into these tools show potential with higher accuracy rates than previous methods alone. Medical researchers, computer scientists, and social media companies are exploring these avenues. While Facebook leads the pack, its efforts stem from scrutiny following (...)
  18. Will AI take away your job? [REVIEW] Marie Oldfield - 2020 - Tech Magazine.
    Will AI take away your job? The answer is probably not. AI systems can be good predictive systems and very good at pattern recognition. AI systems take a very repetitive approach to sets of data, which can be useful in certain circumstances. However, AI does make obvious mistakes. This is because AI does not have a sense of context. As humans, we have years of experience in the real world. We have vast amounts of contextual data stored in our (...)
  19. Toward safe AI. Andres Morales-Forero, Samuel Bassetto & Eric Coatanea - 2023 - AI and Society 38 (2):685-696.
    Since some AI algorithms with high predictive power have impacted human integrity, safety has become a crucial challenge in adopting and deploying AI. Although it is impossible to prevent an algorithm from failing in complex tasks, it is crucial to ensure that it fails safely, especially if it is a critical system. Moreover, due to AI’s unbridled development, it is imperative to minimize the methodological gaps in these systems’ engineering. This paper uses the well-known Box-Jenkins method for statistical modeling (...)
    1 citation
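    The Box-Jenkins method the abstract invokes is the classic identify-estimate-diagnose loop from time-series statistics. Below is a minimal sketch of that loop on synthetic data; the paper adapts the methodology to AI engineering, so this code shows only the underlying statistical cycle, with illustrative candidate orders and made-up data.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA
from statsmodels.stats.diagnostic import acorr_ljungbox

rng = np.random.default_rng(1)

# Synthetic AR(1) series standing in for a monitored system metric.
y = np.zeros(300)
for t in range(1, 300):
    y[t] = 0.7 * y[t - 1] + rng.normal()

# Identification + estimation: compare small candidate orders by AIC.
candidates = [(1, 0, 0), (2, 0, 0), (1, 0, 1)]
fits = {order: ARIMA(y, order=order).fit() for order in candidates}
best = min(fits, key=lambda order: fits[order].aic)

# Diagnostic checking: residuals of an adequate model look like white noise.
lb = acorr_ljungbox(fits[best].resid, lags=[10])
print(best, float(lb["lb_pvalue"].iloc[0]))  # large p-value = no leftover structure
```

    The safety-relevant point is the loop itself: if diagnostics fail, the model is re-identified rather than deployed, which is the discipline the authors want carried over to AI systems.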
  20. Public perception of military AI in the context of techno-optimistic society. Eleri Lillemäe, Kairi Talves & Wolfgang Wagner - forthcoming - AI and Society:1-15.
    In this study, we analyse the public perception of military AI in Estonia, a techno-optimistic country with high support for science and technology. This study involved quantitative survey data from 2021 on the public’s attitudes towards AI-based technology in general, and AI in developing and using weaponised unmanned ground systems (UGS) in particular. UGS are a technology that has been tested in militaries in recent years with the expectation of increasing effectiveness and saving manpower in dangerous military tasks. However, developing (...)
    1 citation
  21. Bringing older people’s perspectives on consumer socially assistive robots into debates about the future of privacy protection and AI governance. Andrea Slane & Isabel Pedersen - forthcoming - AI and Society:1-20.
    A growing number of consumer technology companies are aiming to convince older people that humanoid robots make helpful tools to support aging-in-place. As hybrid devices, socially assistive robots (SARs) are situated between health monitoring tools, familiar digital assistants, security aids, and more advanced AI-powered devices. Consequently, they implicate older people’s privacy in complex ways. Such devices are marketed to perform functions common to smart speakers (e.g., Amazon Echo) and smart home platforms (e.g., Google Home), while other functions are more specific (...)
  22. From Confucius to Coding and Avicenna to Algorithms: Cultivating Ethical AI Development through Cross-Cultural Ancient Wisdom. Ammar Younas & Yi Zeng - manuscript
    This paper explores the potential of integrating ancient educational principles from diverse eastern cultures into modern AI ethics curricula. It draws on the rich educational traditions of ancient China, India, Arabia, Persia, Japan, Tibet, Mongolia, and Korea, highlighting their emphasis on philosophy, ethics, holistic development, and critical thinking. By examining these historical educational systems, the paper establishes a correlation with modern AI ethics principles, advocating for the inclusion of these ancient teachings in current AI development and education. The proposed integration (...)
  23. On the Justified Use of AI Decision Support in Evidence-Based Medicine: Validity, Explainability, and Responsibility. Sune Holm - forthcoming - Cambridge Quarterly of Healthcare Ethics:1-7.
    When is it justified to use opaque artificial intelligence (AI) output in medical decision-making? Consideration of this question is of central importance for the responsible use of opaque machine learning (ML) models, which have been shown to produce accurate and reliable diagnoses, prognoses, and treatment suggestions in medicine. In this article, I discuss the merits of two answers to the question. According to the Explanation View, clinicians must have access to an explanation of why an output was produced. According to (...)
    1 citation
  24. Clinicians and AI use: where is the professional guidance? Helen Smith, John Downer & Jonathan Ives - forthcoming - Journal of Medical Ethics.
    With the introduction of artificial intelligence (AI) to healthcare, there is also a need for professional guidance to support its use. New (2022) reports from the National Health Service AI Lab & Health Education England focus on healthcare workers’ understanding of and confidence in AI clinical decision support systems (AI-CDDSs), and are concerned with developing trust in, and the trustworthiness of, these systems. While they offer guidance to aid developers and purchasers of such systems, they offer little specific guidance for the clinical (...)
  25. The autonomy-safety-paradox of service robotics in Europe and Japan: a comparative analysis. Hironori Matsuzaki & Gesa Lindemann - 2016 - AI and Society 31 (4):501-517.
  26. Achieving Equity with Predictive Policing Algorithms: A Social Safety Net Perspective. Chun-Ping Yen & Tzu-Wei Hung - 2021 - Science and Engineering Ethics 27 (3):1-16.
    Whereas using artificial intelligence (AI) to predict natural hazards is promising, applying a predictive policing algorithm (PPA) to predict human threats to others continues to be debated. Whereas PPAs were reported to be initially successful in Germany and Japan, the killing of Black Americans by police in the US has sparked a call to dismantle AI in law enforcement. However, although PPAs may statistically associate suspects with economically disadvantaged classes and ethnic minorities, the targeted groups they aim to protect are (...)
    2 citations
  27. Transparency and the Black Box Problem: Why We Do Not Trust AI. Warren J. von Eschenbach - 2021 - Philosophy and Technology 34 (4):1607-1622.
    With automation of routine decisions coupled with more intricate and complex information architecture operating this automation, concerns are increasing about the trustworthiness of these systems. These concerns are exacerbated by a class of artificial intelligence that uses deep learning, an algorithmic system of deep neural networks, which on the whole remain opaque or hidden from human comprehension. This situation is commonly referred to as the black box problem in AI. Without understanding how AI reaches its conclusions, it is an open (...)
    12 citations
  28. Artificial thinking and doomsday projections: a discourse on trust, ethics and safety. Jeffrey White, Dietrich Brandt, Jan Söffner & Larry Stapleton - 2023 - AI and Society 38 (6):2119-2124.
    The article reflects on where AI is headed and the world along with it, considering trust, ethics and safety. Implicit in artificial thinking and doomsday appraisals is the engineered divorce from reality of sublime human embodiment. Jeffrey White, Dietrich Brandt, Jan Soeffner, and Larry Stapleton, four scholars associated with AI & Society, address these issues, and more, in the following exchange.
  29. Thinking Inside the Box: Controlling and Using an Oracle AI. Stuart Armstrong, Anders Sandberg & Nick Bostrom - 2012 - Minds and Machines 22 (4):299-324.
    There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act (...)
    17 citations
  30. "My Place in the Sun": Reflections on the Thought of Emmanuel Levinas.Committee of Public Safety - 1996 - Diacritics 26 (1):3-10.
    In lieu of an abstract, here is a brief excerpt of the content: Martin Heidegger and Ontology, by Emmanuel Levinas. The prestige of Martin Heidegger and the influence of his thought on German philosophy marks both a new phase and one of the high points of the phenomenological movement. Caught unawares, the traditional establishment is obliged to clarify its position on this new teaching which casts a spell over youth and which, overstepping the bounds of permissibility, is already in vogue. For once, (...)
    6 citations
  31. On the person-based predictive policing of AI. Tzu-Wei Hung & Chun-Ping Yen - 2020 - Ethics and Information Technology 23 (3):165-176.
    Should you be targeted by police for a crime that AI predicts you will commit? In this paper, we analyse when, and to what extent, person-based predictive policing (PP) — using AI technology to identify and handle individuals who are likely to breach the law — could be justifiably employed. We first examine PP’s epistemological limits, and then argue that these defects by no means preclude its usage; they are worse in humans. Next, based on major AI ethics (...)
    4 citations
  32. Embedding artificial intelligence in society: looking beyond the EU AI master plan using the culture cycle. Simone Borsci, Ville V. Lehtola, Francesco Nex, Michael Ying Yang, Ellen-Wien Augustijn, Leila Bagheriye, Christoph Brune, Ourania Kounadi, Jamy Li, Joao Moreira, Joanne Van Der Nagel, Bernard Veldkamp, Duc V. Le, Mingshu Wang, Fons Wijnhoven, Jelmer M. Wolterink & Raul Zurita-Milla - forthcoming - AI and Society:1-20.
    The European Union Commission’s whitepaper on Artificial Intelligence proposes shaping the emerging AI market so that it better reflects common European values. It is a master plan that builds upon the EU AI High-Level Expert Group guidelines. This article reviews the master plan from a culture cycle perspective, reflecting on its potential clashes with current societal, technical, and methodological constraints. We identify two main obstacles in the implementation of this plan: the lack of a coherent EU vision to drive future (...)
    3 citations
  33. Johan Berglund: Why safety cultures degenerate and how to revive them. Richard Ennals - 2017 - AI and Society 32 (2):293-294.
  34. Moral Engagement and Disengagement in Health Care AI Development. Ariadne A. Nichol, Meghan Halley, Carole Federico, Mildred K. Cho & Pamela L. Sankar - forthcoming - AJOB Empirical Bioethics.
    Background: Machine learning (ML) is utilized increasingly in health care, and can pose harms to patients, clinicians, health systems, and the public. In response, regulators have proposed an approach that would shift more responsibility to ML developers for mitigating potential harms. To be effective, this approach requires ML developers to recognize, accept, and act on responsibility for mitigating harms. However, little is known regarding the perspectives of developers themselves regarding their obligations to mitigate harms. Methods: We conducted 40 semi-structured interviews with (...)
  35. Ethics of automated vehicles: breaking traffic rules for road safety. Nick Reed, Tania Leiman, Paula Palade, Marieke Martens & Leon Kester - 2021 - Ethics and Information Technology 23 (4):777-789.
    In this paper, we explore and describe what is needed to allow connected and automated vehicles to break traffic rules in order to minimise road safety risk and to operate with appropriate transparency. Reviewing current traffic rules with particular reference to two driving situations, we illustrate why current traffic rules are not suitable for CAVs and why making new traffic rules specifically for CAVs would be inappropriate. In defining an alternative approach to achieving safe CAV driving behaviours, we describe (...)
  36. Institutionalised distrust and human oversight of artificial intelligence: towards a democratic design of AI governance under the European Union AI Act. Johann Laux - forthcoming - AI and Society:1-14.
    Human oversight has become a key mechanism for the governance of artificial intelligence (“AI”). Human overseers are supposed to increase the accuracy and safety of AI systems, uphold human values, and build trust in the technology. Empirical research suggests, however, that humans are not reliable in fulfilling their oversight tasks. They may be lacking in competence or be harmfully incentivised. This creates a challenge for human oversight to be effective. In addressing this challenge, this article aims to make three (...)
  37. Deny, dismiss and downplay: developers’ attitudes towards risk and their role in risk creation in the field of healthcare-AI. Shaul A. Duke - 2022 - Ethics and Information Technology 24 (1).
    Developers are often the engine behind the creation and implementation of new technologies, including in the artificial intelligence surge that is currently underway. In many cases these new technologies introduce significant risk to affected stakeholders; risks that can be reduced and mitigated by such a dominant party. This is fully recognized by texts that analyze risks in the current AI transformation, which suggest voluntary adoption of ethical standards and imposing ethical standards via regulation and oversight as tools to compel developers (...)
    1 citation
  38. On meaningful human control of AI. Jovana Davidovic - manuscript
    Meaningful human control over AI is exalted as a key tool for assuring safety, dignity, and responsibility for AI and automated decision-systems. It is a central topic especially in fields that deal with the use of AI for decisions that could cause significant harm, like AI-enabled weapons systems. This paper argues that discussions regarding meaningful human control commonly fail to identify the purpose behind the call for meaningful human control and that stating that purpose is a necessary step in (...)
  39. Principle-based recommendations for big data and machine learning in food safety: the P-SAFETY model. Salvatore Sapienza & Anton Vedder - 2023 - AI and Society 38 (1):5-20.
    Big data and machine learning techniques are reshaping the way in which food safety risk assessment is conducted. The ongoing ‘datafication’ of food safety risk assessment activities and the progressive deployment of probabilistic models in their practices require a discussion on the advantages and disadvantages of these advances. In particular, the low level of trust in the EU food safety risk assessment framework highlighted in 2019 by an EU-funded survey could be exacerbated by novel methods of analysis. The (...)
  40. The importance of transparency in naming conventions, designs, and operations of safety features: from modern ADAS to fully autonomous driving functions. Mohsin Murtaza, Chi-Tsun Cheng, Mohammad Fard & John Zeleznikow - 2023 - AI and Society 38 (2):983-993.
    This paper investigates the importance of standardising and maintaining the transparency of advanced driver-assistance systems (ADAS) functions nomenclature, designs, and operations in all categories up until fully autonomous vehicles. The aim of this paper is to reveal the discrepancies in ADAS functions across automakers and discuss the underlying issues and potential solutions. In this pilot study, user manuals of various brands are reviewed systematically and critical analyses of common ADAS functions are conducted. The result shows that terminologies used to describe (...)
  41. Hsin mei hsüeh. I. Tsʻai - 1947
  42. Lun chʻeng shih hsin yung ti yüan tse. Chang-lin Tsʻai - 1951 - [s.n.]. Edited by Chang-lin Tsʻai.
  43. Che hsüeh lo chi. Hsi Chʻai - 1972.
  44. Mencius's young years. Ai Yen Chen - 1972 - Singapore: Books Associated International.
  45. Yen Hsi-chai hsüeh pʻu. Ai-chʻun Kuo - 1957
  46. Chʻi-kʻo-kuo tsʻun tsai kai nien. Mei-chu Tsʻai - 1972
  47. Tsʻun tsai chu i ta shih Hai-te-ko che hsüeh. Mei-li Tsʻai - 1970 - Edited by Martin Heidegger.
  48. Tʻang Chün-i hsien sheng chi nien chi. Ai-chʻün Feng (ed.) - 1979
  49. Lun tai jên chieh wu. Kʻai-fêng (ed.) - 1947
  50. Yin ming kai lun. Tʻai-hsü - 1970
1–50 of 984