Results for 'safe AI'

996 found
  1. Toward safe AI. Andres Morales-Forero, Samuel Bassetto & Eric Coatanea - 2023 - AI and Society 38 (2):685-696.
    Since some AI algorithms with high predictive power have impacted human integrity, safety has become a crucial challenge in adopting and deploying AI. Although it is impossible to prevent an algorithm from failing in complex tasks, it is crucial to ensure that it fails safely, especially if it is a critical system. Moreover, due to AI’s unbridled development, it is imperative to minimize the methodological gaps in these systems’ engineering. This paper uses the well-known Box-Jenkins method for statistical modeling as (...)
    1 citation
  2. User-centered AI-based voice-assistants for safe mobility of older people in urban context. Bokolo Anthony Jnr - forthcoming - AI and Society:1-24.
    Voice-assistants are becoming increasingly popular and can be deployed as a low-cost tool to support and potentially reduce falls, injuries, and accidents faced by people aged 65 and older. But despite the mobility and walkability challenges faced by the aging population, studies that employ Artificial Intelligence (AI)-based voice-assistants to reduce the risks older people face when they use public transportation and walk in the built environment are scarce. This is because the development of AI-based (...)
  3. Making Trust Safe for AI? Non-agential Trust as a Conceptual Engineering Problem. Juri Viehoff - 2023 - Philosophy and Technology 36 (4):1-29.
    Should we be worried that the concept of trust is increasingly used when we assess non-human agents and artefacts, say robots and AI systems? Whilst some authors have developed explanations of the concept of trust with a view to accounting for trust in AI systems and other non-agents, others have rejected the idea that we should extend trust in this way. The article advances this debate by bringing insights from conceptual engineering to bear on this issue. After setting up a (...)
  4. Review of “AI assurance: towards trustworthy, explainable, safe, and ethical AI” by Feras A. Batarseh and Laura J. Freeman, Academic Press, 2023. [REVIEW] Jialei Wang & Li Fu - forthcoming - AI and Society:1-2.
  5. Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity. Claudio Novelli, Federico Casolari, Philipp Hacker, Giorgio Spedicato & Luciano Floridi - manuscript
    The advent of Generative AI, particularly through Large Language Models (LLMs) like ChatGPT and its successors, marks a paradigm shift in the AI landscape. Advanced LLMs exhibit multimodality, handling diverse data formats, thereby broadening their application scope. However, the complexity and emergent autonomy of these models introduce challenges in predictability and legal compliance. This paper analyses the legal and regulatory implications of Generative AI and LLMs in the European Union context, focusing on liability, privacy, intellectual property, and cybersecurity. It examines (...)
  6. Machine learning models, trusted research environments and UK health data: ensuring a safe and beneficial future for AI development in healthcare. Charalampia Kerasidou, Maeve Malone, Angela Daly & Francesco Tava - 2023 - Journal of Medical Ethics 49 (12):838-843.
    Digitalisation of health and the use of health data in artificial intelligence and machine learning (ML), including for applications that will in turn be used in healthcare, are major themes permeating the current healthcare systems and policies of the UK and other countries. Obtaining rich and representative data is key for robust ML development, and UK health data sets are particularly attractive sources for this. However, ensuring that such research and development is in the public interest, produces public benefit and preserves privacy (...)
  7. Certifiable AI. Jobst Landgrebe - 2022 - Applied Sciences 12 (3):1050.
    Implicit stochastic models, including both ‘deep neural networks’ (dNNs) and the more recent unsupervised foundational models, cannot be explained. That is, it cannot be determined how they work, because the interactions of the millions or billions of terms that are contained in their equations cannot be captured in the form of a causal model. Because users of stochastic AI systems would like to understand how they operate in order to be able to use them safely and reliably, there has emerged (...)
    2 citations
  8. Ethical funding for trustworthy AI: proposals to address the responsibilities of funders to ensure that projects adhere to trustworthy AI practice. Marie Oldfield - 2021 - AI and Ethics 1 (1):1.
    AI systems that demonstrate significant bias or lower-than-claimed accuracy, resulting in individual and societal harms, continue to be reported. Such reports beg the question as to why such systems continue to be funded, developed and deployed despite the many published ethical AI principles. This paper focusses on the funding processes for AI research grants, which we have identified as a gap in the current range of ethical AI solutions such as AI procurement guidelines, AI impact assessments and (...)
    1 citation
  9. On the promotion of safe and socially beneficial artificial intelligence. Seth D. Baum - 2017 - AI and Society 32 (4):543-551.
    This paper discusses means for promoting artificial intelligence that is designed to be safe and beneficial for society. The promotion of beneficial AI is a social challenge because it seeks to motivate AI developers to choose beneficial AI designs. Currently, the AI field is focused mainly on building AIs that are more capable, with little regard to social impacts. Two types of measures are available for encouraging the AI field to shift more toward building beneficial AI. Extrinsic measures impose (...)
    24 citations
  10. Companies Committed to Responsible AI: From Principles towards Implementation and Regulation? Paul B. de Laat - 2021 - Philosophy and Technology 34 (4):1135-1193.
    The term ‘responsible AI’ has been coined to denote AI that is fair and non-biased, transparent and explainable, secure and safe, privacy-proof, accountable, and to the benefit of mankind. Since 2016, a great many organizations have pledged allegiance to such principles. Amongst them are 24 AI companies that did so by posting a commitment of the kind on their website and/or by joining the ‘Partnership on AI’. By means of a comprehensive web search, two questions are addressed by this (...)
    5 citations
  11. Manifestations of xenophobia in AI systems. Nenad Tomasev, Jonathan Leader Maynard & Iason Gabriel - forthcoming - AI and Society:1-23.
    Xenophobia is one of the key drivers of marginalisation, discrimination, and conflict, yet many prominent machine learning fairness frameworks fail to comprehensively measure or mitigate the resulting xenophobic harms. Here we aim to bridge this conceptual gap and help facilitate safe and ethical design of artificial intelligence (AI) solutions. We ground our analysis of the impact of xenophobia by first identifying distinct types of xenophobic harms, and then applying this framework across a number of prominent AI application domains, reviewing (...)
  12. AI — and everything else. Godela Unseld - 1992 - AI and Society 6 (3):280-287.
    One of the most common misunderstandings in dealing with the world is the notion that you can do it piece-meal, that in understanding and shaping one part you can safely ignore the rest. One of the oldest wisdoms is the insight that in reality everything is knitted together, that to meddle with one part is always to meddle with the whole. AI as a social phenomenon is a good example for both findings. In trying to understand this new event in (...)
  13. Could You Merge With AI? Reflections on the Singularity and Radical Brain Enhancement. Cody Turner & Susan Schneider - 2020 - In Markus Dirk Dubber, Frank Pasquale & Sunit Das (eds.), The Oxford Handbook of Ethics of AI. Oxford University Press. pp. 307-325.
    This chapter focuses on AI-based cognitive and perceptual enhancements. AI-based brain enhancements are already under development, and they may become commonplace over the next 30–50 years. We raise doubts concerning whether the radical AI-based enhancements transhumanists advocate will accomplish the transhumanist goals of longevity, human flourishing, and intelligence enhancement. We urge that even if the technologies are medically safe and are not used as tools by surveillance capitalism or an authoritarian dictatorship, these enhancements may still fail to do their job (...)
    2 citations
  14. Against the Double Standard Argument in AI Ethics. Scott Hill - 2024 - Philosophy and Technology 37 (1):1-5.
    In an important and widely cited paper, Zerilli, Knott, Maclaurin, and Gavaghan (2019) argue that opaque AI decision makers are at least as transparent as human decision makers and therefore the concern that opaque AI is not sufficiently transparent is mistaken. I argue that the concern about opaque AI should not be understood as the concern that such AI fails to be transparent in a way that humans are transparent. Rather, the concern is that the way in which opaque AI (...)
  15. The trustworthiness of AI: Comments on Simion and Kelp’s account. Dong-Yong Choi - 2023 - Asian Journal of Philosophy 2 (1):1-9.
    Simion and Kelp explain the trustworthiness of an AI based on that AI’s disposition to meet its obligations. Roughly speaking, according to Simion and Kelp, an AI is trustworthy regarding its task if and only if that AI is obliged to complete the task and its disposition to complete the task is strong enough. Furthermore, an AI is obliged to complete a task in the case where the task is the AI’s etiological function or design function. This account has a (...)
  16. How to teach responsible AI in Higher Education: challenges and opportunities. Andrea Aler Tubella, Marçal Mora-Cantallops & Juan Carlos Nieves - 2023 - Ethics and Information Technology 26 (1):1-14.
    In recent years, the European Union has advanced towards responsible and sustainable Artificial Intelligence (AI) research, development and innovation. While the Ethics Guidelines for Trustworthy AI released in 2019 and the AI Act in 2021 set the starting point for a European Ethical AI, there are still several challenges to translate such advances into the public debate, education and practical learning. This paper contributes towards closing this gap by reviewing the approaches that can be found in the existing literature and (...)
  17. In Defence of Principlism in AI Ethics and Governance. Elizabeth Seger - 2022 - Philosophy and Technology 35 (2):1-7.
    It is widely acknowledged that high-level AI principles are difficult to translate into practices via explicit rules and design guidelines. Consequently, many AI research and development groups that claim to adopt ethics principles have been accused of unwarranted “ethics washing”. Accordingly, there remains a question as to if and how high-level principles should be expected to influence the development of safe and beneficial AI. In this short commentary I discuss two roles high-level principles might play in AI ethics and (...)
    1 citation
  18. Just accountability structures – a way to promote the safe use of automated decision-making in the public sector. Hanne Hirvonen - 2024 - AI and Society 39 (1):155-167.
    The growing use of automated decision-making (ADM) systems in the public sector and the need to control these has raised many legal questions in academic research and in policymaking. One of the timely means of legal control is accountability, which traditionally includes the ability to impose sanctions on the violator as one dimension. Even though many risks regarding the use of ADM have been noted and there is a common will to promote the safety of these systems, the relevance of (...)
    2 citations
  19. Consent-GPT: is it ethical to delegate procedural consent to conversational AI? Jemima Winifred Allen, Brian D. Earp, Julian Koplin & Dominic Wilkinson - 2024 - Journal of Medical Ethics 50 (2):77-83.
    Obtaining informed consent from patients prior to a medical or surgical procedure is a fundamental part of safe and ethical clinical practice. Currently, it is routine for a significant part of the consent process to be delegated to members of the clinical team not performing the procedure (eg, junior doctors). However, it is common for consent-taking delegates to lack sufficient time and clinical knowledge to adequately promote patient autonomy and informed decision-making. Such problems might be addressed in a number (...)
    4 citations
  20. Artificial Intelligence and Robotics in Nursing: Ethics of Caring as a Guide to Dividing Tasks Between AI and Humans. Felicia Stokes & Amitabha Palmer - 2020 - Nursing Philosophy 21 (4):e12306.
    Nurses have traditionally been regarded as clinicians that deliver compassionate, safe, and empathetic health care (Nurses again outpace other professions for honesty & ethics, 2018). Caring is a fundamental characteristic, expectation, and moral obligation of the nursing and caregiving professions (Nursing: Scope and standards of practice, American Nurses Association, Silver Spring, MD, 2015). Along with caring, nurses are expected to undertake ever‐expanding duties and complex tasks. In part because of the growing physical, intellectual and emotional demandingness of nursing as (...)
    2 citations
  21. Society under threat… but not from AI. Ajit Narayanan - 2013 - AI and Society 28 (1):87-94.
    25 years ago, when AI & Society was launched, the emphasis was, and still is, on dehumanisation and the effects of technology on human life, including reliance on technology. What we forgot to take into account was another very great danger to humans. The pervasiveness of computer technology, without appropriate security safeguards, dehumanises us by allowing criminals to steal not just our money but also our confidential and private data at will. Also, denial-of-service attacks prevent us from accessing the information (...)
    1 citation
  22. A Ghost Workers' Bill of Rights: How to Establish a Fair and Safe Gig Work Platform. Julian Friedland, David Balkin & Ramiro Montealegre - 2020 - California Management Review 62 (2).
    Many of us assume that all the free editing and sorting of online content we ordinarily rely on is carried out by AI algorithms — not human persons. Yet in fact, that is often not the case. This is because human workers remain cheaper, quicker, and more reliable than AI for performing myriad tasks where the right answer turns on ineffable contextual criteria too subtle for algorithms to yet decode. The output of this work is then used for machine learning (...)
  23. Who's really afraid of AI?: Anthropocentric bias and postbiological evolution. Milan M. Ćirković - 2022 - Belgrade Philosophical Annual 35:17-29.
    The advent of artificial intelligence (AI) systems has provoked a lot of discussion in epistemological, bioethical and risk-analytic terms, much of it rather paranoid in nature. Unless one takes an extreme anthropocentric and chronocentric stance, this process can be safely regarded as part and parcel of the sciences of the origin. In this contribution, I would like to suggest that at least four different classes of arguments could be brought forth against the proposition that AI - either human-level or (...)
  24. Ai Siqi wen ji. Siqi Ai - 1981 - [Peking]: Xin hua shu dian fa xing.
     
  25. Evidentiality. A. I︠U︡ Aĭkhenvalʹd - 2004 - New York: Oxford University Press.
    In some languages every statement must contain a specification of the type of evidence on which it is based: for example, whether the speaker saw it, or heard it, or inferred it from indirect evidence, or learnt it from someone else. This grammatical reference to information source is called 'evidentiality', and is one of the least described grammatical categories. Evidentiality systems differ in how complex they are: some distinguish just two terms (eyewitness and noneyewitness, or reported and everything else), while (...)
    46 citations
  26. Ru he yan jiu zhe xue. Siqi Ai - 1940
  27. Dialekticheskiĭ materializm. Arnolʹd Samoĭlovich Aĭzenberg (ed.) - 1931
  28. Zekhor le-Avraham: asupat maʼamarim be-Yahadut uve-ḥinukh le-zekher Dr. Avraham Zalḳin = Zekhor le-Avraham: an academic anthology on Jewish studies and education in memory of Dr. Avraham Zalkin. Yaʼir Barḳai, Ḥayim Gaziʼel, Mordekhai Zalḳin, Luba Charlap, S. Kogut & Avraham Zalḳin (eds.) - 2020 - Yerushalayim: Mikhlelet Lifshits.
    An academic anthology on Jewish studies and education in memory of Dr. Avraham Zalkin.
  29. Cong tou xue qi. Siqi Ai - 1950
  30. The web of knowledge: evidentiality at the cross-roads. A. I︠U︡ Aĭkhenvalʹd - 2021 - Boston: BRILL.
    Knowledge can be expressed in language using a plethora of grammatical means. Four major groups of meanings related to knowledge are Evidentiality: grammatical expression of information source; Egophoricity: grammatical expression of access to knowledge; Mirativity: grammatical expression of expectation of knowledge; and Epistemic modality: grammatical expression of attitude to knowledge. The four groups of categories interact. Some develop overtones of the others. Evidentials stand apart from other means in many ways, including their correlations with speech genres and social environment. This (...)
  31. Li shi wei wu lun: she hui fa zhan shi jiang yi. Siqi Ai - 1950 - Beijing: Gong ren chu ban she.
  32. Li shih wei wu lun. Ssu-chʻi Ai - 1950
  33. Li shi wei wu lun: she hui fa zhan shi jiang shou ti gang. Siqi Ai - 1950 - Guangzhou: Xin hua shu dian.
  34. Che hsüeh lo chi. Hsi Chʻai - 1972 - 61 i.: E..
  35. Mencius's young years. Ai Yen Chen - 1972 - Singapore: Books Associated International.
  36. Chʻi-kʻo-kuo tsʻun tsai kai nien. Mei-chu Tsʻai - 1972
  37. Hsin mei hsüeh. I. Tsʻai - 1947
  38. Lun chʻeng shih hsin yung ti yüan tse. Chang-lin Tsʻai - 1951 - [s.n.]. Edited by Chang-lin Tsʻai.
  39. Tsʻun tsai chu i ta shih Hai-te-ko che hsüeh. Mei-li Tsʻai - 1970 - Edited by Martin Heidegger.
  40. Miqdor ŭzgarishlarining sifat ŭzgarishlariga ŭtishi qonuni. A. T. Ai︠u︡pov - 1966
  41. Yen Hsi-chai hsüeh pʻu. Ai-chʻun Kuo - 1957
  42. Yin ming kai lun. Tʻai-hsü - 1970
  43. Bian zheng wei wu zhu yi gang yao. Siqi Ai - 1978 - Beijing: Ren min chu ban she.
  44. Hu Shi, Liang Shuming zhe xue si xiang pi pan. Siqi Ai - 1977
  45. Tʻang Chün-i hsien sheng chi nien chi. Ai-chʻün Feng (ed.) - 1979
  46. Bian zheng wei wu zhu yi, li shi wei wu zhu yi. Siqi Ai (ed.) - 1978 - Beijing: Ren min chu ban she.
  47. Guan yu "he er er yi" di lun zhan. Hengwu Ai - 1981 - Hubei sheng xin hua shu dian fa xing.
  48. Jian guo yi lai zhe xue wen ti tao lun zong shu. Zhong Ai - 1983 - [Changchun shi]: Jilin sheng xin hua shu dian fa xing. Edited by Huan Li.
  49. Leksicheskai︠a︡ i frazeologicheskai︠a︡ semantika: mezhvuzovskiĭ nauchnyĭ sbornik. L. L. Ai︠u︡pova & L. M. Vasilʹev (eds.) - 1982 - Ufa: Bashkirskiĭ gos. universitet.
  50. Obshchie voprosy semantiki: mezhvuzovskiĭ nauchnyĭ sbornik. L. L. Ai︠u︡pova & L. M. Vasilʹev (eds.) - 1983 - Ufa: Bashkirskiĭ universitet.
Results 1–50 of 996