Results for 'LLMs'

112 found
  1. The BMA COVID-19 ethical guidance: A legal analysis. James E. Hurford LLB, LLM - 2020 - The New Bioethics 26 (2):176-189.
    The paper considers the recently published British Medical Association Guidance on ethical issues arising in relation to rationing of treatment during the COVID-19 Pandemic. It considers whether it...
  2. Unreliable LLM Bioethics Assistants: Ethical and Pedagogical Risks. Lea Goetz, Markus Trengove, Artem Trotsyuk & Carole A. Federico - 2023 - American Journal of Bioethics 23 (10):89-91.
    Whilst Rahimzadeh et al. (2023) apply a critical lens to the pedagogical use of LLM bioethics assistants, we outline here further reason for skepticism. Two features of LLM chatbots are of signific...
  3. Meaning by Courtesy: LLM-Generated Texts and the Illusion of Content. Gary Ostertag - 2023 - American Journal of Bioethics 23 (10):91-93.
    Contrary to how it may seem when we observe its output, an [LLM] is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to...
  4. Abundance of words versus Poverty of mind: The hidden human costs of LLMs. Quan-Hoang Vuong & Manh-Tung Ho - manuscript
    This essay analyzes the rise of Large Language Models (LLMs) such as GPT-4 or Gemini, which are now incorporated in a wide range of products and services in everyday life. Importantly, it considers some of their hidden human costs. First is the question of who is left behind by the further infusion of LLMs into society. Second is the issue of social inequalities between lingua franca languages and those that are not. Third, LLMs will help disseminate scientific concepts, (...)
  5. Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs. Harvey Lederman & Kyle Mahowald - forthcoming - Transactions of the Association for Computational Linguistics.
    Are LLMs cultural technologies like photocopiers or printing presses, which transmit information but cannot create new content? A challenge for this idea, which we call bibliotechnism, is that LLMs generate novel text. We begin with a defense of bibliotechnism, showing how even novel text may inherit its meaning from original human-generated text. We then argue that bibliotechnism faces an independent challenge from examples in which LLMs generate novel reference, using new names to refer to new entities. Such (...)
  6. The Turing test is not a good benchmark for thought in LLMs. Tim Bayne & Iwan Williams - 2023 - Nature Human Behaviour 7:1806–1807.
  7. Sequent reconstruction in LLM—A sweepline proof. R. Banach - 1995 - Annals of Pure and Applied Logic 73 (3):277-295.
    An alternative proof is given that to each LLM proof net there corresponds at least one LLM sequent proof. The construction is inspired by the sweepline technique from computational geometry and includes a treatment of the multiplicative constants and of proof boxes.
  8. Beyond algorithmic trust: interpersonal aspects on consent delegation to LLMs. Zeineb Sassi, Michael Hahn, Sascha Eickmann, Anne Herrmann-Johns & Max Tretter - 2024 - Journal of Medical Ethics 50 (2):139-139.
    In their article ‘Consent-GPT: is it ethical to delegate procedural consent to conversational AI?’, Allen et al 1 explore the ethical complexities involved in handing over parts of the process of obtaining medical consent to conversational Artificial Intelligence (AI) systems, that is, AI-driven large language models (LLMs) trained to interact with patients to inform them about upcoming medical procedures and assist in the process of obtaining informed consent.1 They focus specifically on challenges related to accuracy (4–5), trust (5), privacy (...)
  9. Religion, Theology, and the Philosophical Skills of Chatbots Powered by Large Language Models (LLMs). Marcin Trepczyński - 2024 - Disputatio Philosophica 25 (1):19-36.
    This paper aims to show how faith and theology can be used to test the performance of large language models (LLMs) and of the chatbots powered by such models, by measuring their philosophical skills. It presents the results of testing four selected chatbots: ChatGPT, Bing, Bard, and Llama2. For the purposes of the tests, three possible sources from the field of faith and theology were used: 1) the theory of the four senses of Scripture, 2) abstract theological statements, 3) an abstract logical formula derived from a religious text, in order to show that (...)
  10. Exploring the psychology of LLMs’ Moral and Legal Reasoning. Guilherme F. C. F. Almeida, José Luiz Nunes, Neele Engelmann, Alex Wiegmann & Marcelo de Araújo - forthcoming - Artificial Intelligence.
  11. AUTOGEN and the Ethics of Co-Creation with Personalized LLMs—Reply to the Commentaries. Sebastian Porsdam Mann, Brian D. Earp, Nikolaj Møller, Suren Vynn & Julian Savulescu - 2024 - American Journal of Bioethics 24 (3):6-14.
    In this reply to our commentators, we respond to ethical concerns raised about the potential use (or misuse) of personalized LLMs for academic idea and prose generation, including questions about c...
  12. Abundance of words versus poverty of mind: the hidden human costs co-created with LLMs. Quan-Hoang Vuong & Manh-Tung Ho - forthcoming - AI and Society:1-2.
  13. Introduction to the Special Issue - LLMs and Writing. Syed Abumusab - 2024 - Teaching Philosophy 47 (2):139-142.
  14. Representations of Literature among LLM Students: From the Voice of the School to the Deafness of the Gods. Antônio Branco - 2005 - Quaestio: Revista de Estudos Em Educação 7 (2).
  15. James F. Bresnahan, SJ, JD, LLM, Ph.D., is Professor of Clinical Medicine, Department of Medicine, and Co-Director of the Ethics and Human Values in Medicine Program, Northwestern University Medical School, Chicago; David A. Buehler, M.Div., MA, is Coordinator of the bioethics committee and Director of Pastoral Care, Charlton Memorial Hospital, Fall River, Massachusetts. [REVIEW] Miriam Piven Cotler - 1993 - Cambridge Quarterly of Healthcare Ethics 2:125-126.
  16. AUTOGEN: A Personalized Large Language Model for Academic Enhancement—Ethics and Proof of Principle. Sebastian Porsdam Mann, Brian D. Earp, Nikolaj Møller, Suren Vynn & Julian Savulescu - 2023 - American Journal of Bioethics 23 (10):28-41.
    Large language models (LLMs) such as ChatGPT or Google’s Bard have shown significant performance on a variety of text-based tasks, such as summarization, translation, and even the generation of new...
  17. Chatting with Chat(GPT-4): Quid est Understanding? Elan Moritz - manuscript
    What is Understanding? This is the first of a series of Chats with OpenAI’s ChatGPT (Chat). The main goal is to obtain Chat’s response to a series of questions about the concept of ‘understanding’. The approach is conversational: the author (labeled as user) asks (prompts) Chat, obtains a response, and then uses the response to formulate follow-up questions. David Deutsch’s assertion of the primality of the process / capability of understanding is used as the starting point. (...)
  18. What Should ChatGPT Mean for Bioethics? I. Glenn Cohen - 2023 - American Journal of Bioethics 23 (10):8-16.
    In the last several months, several major disciplines have started their initial reckoning with what ChatGPT and other Large Language Models (LLMs) mean for them – law, medicine, business among other professions. With a heavy dose of humility, given how fast the technology is moving and how uncertain its social implications are, this article attempts to give some early tentative thoughts on what ChatGPT might mean for bioethics. I will first argue that many bioethics issues raised by ChatGPT are (...)
  19. Diagonalization & Forcing FLEX: From Cantor to Cohen and Beyond. Learning from Leibniz, Cantor, Turing, Gödel, and Cohen; crawling towards AGI. Elan Moritz - manuscript
    The paper continues my earlier Chat with OpenAI’s ChatGPT with a Focused LLM Experiment (FLEX). The idea is to conduct Large Language Model (LLM) based explorations of certain areas or concepts. The approach is based on crafting initial guiding prompts and then following up with user prompts based on the LLMs’ responses. The goals include improving understanding of LLM capabilities and their limitations, culminating in optimized prompts. The specific subjects explored as research subject matter include a) diagonalization techniques as (...)
  20. Large Language Models and Biorisk. William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, (...)
  21. Apropos of "Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals". Ognjen Arandjelović - 2023 - AI and Ethics.
    The present comment concerns a recent AI & Ethics article which purports to report evidence of speciesist bias in various popular computer vision (CV) and natural language processing (NLP) machine learning models described in the literature. I examine the authors' analysis and show it, ironically, to be prejudicial, often being founded on poorly conceived assumptions and suffering from fallacious and insufficiently rigorous reasoning, its superficial appeal in large part relying on the sequacity of the article's target readership.
  22. AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony. Ori Freiman - forthcoming - Social Epistemology.
    The ability to interact in a natural language profoundly changes devices’ interfaces and potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting between Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact can (...)
  23. Large Language Models and the Reverse Turing Test. Terrence Sejnowski - 2023 - Neural Computation 35 (3):309–342.
    Large Language Models (LLMs) have been transformative. They are pre-trained foundational models that are self-supervised and can be adapted with fine tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and more recently LaMDA can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide (...)
  24. AI Enters Public Discourse: a Habermasian Assessment of the Moral Status of Large Language Models. Paolo Monti - 2024 - Ethics and Politics 61 (1):61-80.
    Large Language Models (LLMs) are generative AI systems capable of producing original texts based on inputs about topic and style provided in the form of prompts or questions. The introduction of the outputs of these systems into human discursive practices poses unprecedented moral and political questions. The article articulates an analysis of the moral status of these systems and their interactions with human interlocutors based on the Habermasian theory of communicative action. The analysis explores, among other things, Habermas's inquiries (...)
  25. Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution. Flor Miriam Plaza-del Arco, Amanda Cercas Curry & Alba Curry - 2024 - arXiv.
    Large language models (LLMs) reflect societal norms and biases, especially about gender. While societal biases and stereotypes have been extensively researched in various NLP applications, there is a surprising gap for emotion analysis. However, emotion and gender are closely linked in societal discourse. E.g., women are often thought of as more empathetic, while men's anger is more socially accepted. To fill this gap, we present the first comprehensive study of gendered emotion attribution in five state-of-the-art LLMs (open- and (...)
  26. Machine Advisors: Integrating Large Language Models into Democratic Assemblies. Petr Špecián - manuscript
    Large language models (LLMs) represent the currently most relevant incarnation of artificial intelligence with respect to the future fate of democratic governance. Considering their potential, this paper seeks to answer a pressing question: Could LLMs outperform humans as expert advisors to democratic assemblies? While bearing the promise of enhanced expertise availability and accessibility, they also present challenges of hallucinations, misalignment, or value imposition. Weighing LLMs’ benefits and drawbacks compared to their human counterparts, I argue for their careful (...)
  27. Is Academic Enhancement Possible by Means of Generative AI-Based Digital Twins? Sven Nyholm - 2023 - American Journal of Bioethics 23 (10):44-47.
    Large Language Models (LLMs) “assign probabilities to sequences of text. When given some initial text, they use these probabilities to generate new text. Large language models are language models u...
  28. Large language models in medical ethics: useful but not expert. Andrea Ferrario & Nikola Biller-Andorno - forthcoming - Journal of Medical Ethics.
    Large language models (LLMs) have now entered the realm of medical ethics. In a recent study, Balas et al examined GPT-4, a commercially available LLM, assessing its performance in generating responses to diverse medical ethics cases. Their findings reveal that GPT-4 demonstrates an ability to identify and articulate complex medical ethical issues, although its proficiency in encoding the depth of real-world ethical dilemmas remains an avenue for improvement. Investigating the integration of LLMs into medical ethics decision-making appears (...)
  29. Exploring the potential utility of AI large language models for medical ethics: an expert panel evaluation of GPT-4. Michael Balas, Jordan Joseph Wadden, Philip C. Hébert, Eric Mathison, Marika D. Warren, Victoria Seavilleklein, Daniel Wyzynski, Alison Callahan, Sean A. Crawford, Parnian Arjmand & Edsel B. Ing - 2024 - Journal of Medical Ethics 50 (2):90-96.
    Integrating large language models (LLMs) like GPT-4 into medical ethics is a novel concept, and understanding the effectiveness of these models in aiding ethicists with decision-making can have significant implications for the healthcare sector. Thus, the objective of this study was to evaluate the performance of GPT-4 in responding to complex medical ethical vignettes and to gauge its utility and limitations for aiding medical ethicists. Using a mixed-methods, cross-sectional survey approach, a panel of six ethicists assessed LLM-generated responses to (...)
  30. Language Agents Reduce the Risk of Existential Catastrophe. Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
  31. A Talking Cure for Autonomy Traps: How to share our social world with chatbots. Regina Rini - manuscript
    Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It’s hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and hold ourselves (...)
  32. Beyond the limitations of any imaginable mechanism: Large language models and psycholinguistics. Conor Houghton, Nina Kazanina & Priyanka Sukumaran - 2023 - Behavioral and Brain Sciences 46:e395.
    Large language models (LLMs) are not detailed models of human linguistic processing. They are, however, extremely successful at their primary task: providing a model for language. For this reason, LLMs are important in psycholinguistics: they are useful as a practical tool, as an illustrative comparative, and philosophically, as a basis for recasting the relationship between language and thought.
  33. Large language models in cryptocurrency securities cases: can a GPT model meaningfully assist lawyers? Arianna Trozze, Toby Davies & Bennett Kleinberg - forthcoming - Artificial Intelligence and Law:1-47.
    Large Language Models (LLMs) could be a useful tool for lawyers. However, empirical research on their effectiveness in conducting legal tasks is scant. We study securities cases involving cryptocurrencies as one of numerous contexts where AI could support the legal process, examining GPT-3.5’s legal reasoning and ChatGPT’s legal drafting capabilities. We examine whether a) GPT-3.5 can accurately determine which laws are potentially being violated from a fact pattern, and b) whether there is a difference in juror decision-making based on (...)
  34. ChatGPT and the Technology-Education Tension: Applying Contextual Virtue Epistemology to a Cognitive Artifact. Guido Cassinadri - 2024 - Philosophy and Technology 37 (14):1-28.
    According to virtue epistemology, the main aim of education is the development of the cognitive character of students (Pritchard, 2014, 2016). Given the proliferation of technological tools such as ChatGPT and other LLMs for solving cognitive tasks, how should educational practices incorporate the use of such tools without undermining the cognitive character of students? Pritchard (2014, 2016) argues that it is possible to properly solve this ‘technology-education tension’ (TET) by combining the virtue epistemology framework with the theory of extended (...)
  35. Assessing the Strengths and Weaknesses of Large Language Models. Shalom Lappin - 2023 - Journal of Logic, Language and Information 33 (1):9-20.
    The transformers that drive chatbots and other AI systems constitute large language models (LLMs). These are currently the focus of a lively discussion in both the scientific literature and the popular media. This discussion ranges from hyperbolic claims that attribute general intelligence and sentience to LLMs, to the skeptical view that these devices are no more than “stochastic parrots”. I present an overview of some of the weak arguments that have been presented against LLMs, and I consider (...)
  36. You are what you’re for: Essentialist categorization in large language models. Siying Zhang, Selena She, Tobias Gerstenberg & David Rose - forthcoming - Proceedings of the 45th Annual Conference of the Cognitive Science Society.
    How do essentialist beliefs about categories arise? We hypothesize that such beliefs are transmitted via language. We subject large language models (LLMs) to vignettes from the literature on essentialist categorization and find that they align well with people when the studies manipulated teleological information -- information about what something is for. We examine whether in a classic test of essentialist categorization -- the transformation task -- LLMs prioritize teleological properties over information about what something looks like, or is (...)
  37. Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach. Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024 (HCXAI24)).
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact LLMs, which are remarkably capable of simulating roles and personas, (...)
  38. Open AI meets open notes: surveillance capitalism, patient privacy and online record access. Charlotte Blease - 2024 - Journal of Medical Ethics 50 (2):84-89.
    Patient online record access (ORA) is spreading worldwide, and in some countries, including Sweden and the USA, access is advanced, with patients obtaining rapid access to their full records. In the UK context, from 31 October 2023, as part of the new NHS England general practitioner (GP) contract, it will be mandatory for GPs to offer ORA to patients aged 16 and older. Patients report many benefits from reading their clinical records, including feeling more empowered, better understanding and remembering their (...)
  39. Taking AI Risks Seriously: a New Assessment Model for the AI Act. Claudio Novelli, Federico Casolari, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
  40. Technosophistic Shadow Plays [Technosophistische Schattenspiele]. Wessel Reijers, Felix Maschewski & Anna-Verena Nosthoff - 2023 - Philosophie Magazin.
  41. Personhood and AI: Why large language models don’t understand us. Jacob Browning - forthcoming - AI and Society:1-8.
    Recent artificial intelligence advances, especially those of large language models (LLMs), have increasingly shown glimpses of human-like intelligence. This has led to bold claims that these systems are no longer a mere “it” but now a “who,” a kind of person deserving respect. In this paper, I argue that this view depends on a Cartesian account of personhood, on which identifying someone as a person is based on their cognitive sophistication and ability to address common-sense reasoning problems. I contrast (...)
  42. Still no lie detector for language models: probing empirical and conceptual roadblocks. Benjamin A. Levinstein & Daniel A. Herrmann - forthcoming - Philosophical Studies:1-27.
    We consider the questions of whether or not large language models (LLMs) have beliefs, and, if they do, how we might measure them. First, we consider whether or not we should expect LLMs to have something like beliefs in the first place. We consider some recent arguments aiming to show that LLMs cannot have beliefs. We show that these arguments are misguided. We provide a more productive framing of questions surrounding the status of beliefs in LLMs, (...)
  43. The Epistemological Danger of Large Language Models. Elise Li Zheng & Sandra Soo-Jin Lee - 2023 - American Journal of Bioethics 23 (10):102-104.
    The potential of ChatGPT looms large for the practice of medicine, as both boon and bane. The use of Large Language Models (LLMs) in platforms such as ChatGPT raises critical ethical questions of w...
  44. Introspective Capabilities in Large Language Models. Robert Long - 2023 - Journal of Consciousness Studies 30 (9):143-153.
    This paper considers the kind of introspection that large language models (LLMs) might be able to have. It argues that LLMs, while currently limited in their introspective capabilities, are not inherently unable to have such capabilities: they already model the world, including mental concepts, and already have some introspection-like capabilities. With deliberate training, LLMs may develop introspective capabilities. The paper proposes a method for such training for introspection, situates possible LLM introspection in the 'possible forms of introspection' (...)
  45. Unjustified untrue "beliefs": AI hallucinations and justification logics. Kristina Šekrst - forthcoming - In Kordula Świętorzecka, Filip Grgić & Anna Brozek (eds.), Logic, Knowledge, and Tradition. Essays in Honor of Srecko Kovac.
    In artificial intelligence (AI), responses generated by machine-learning models (most often large language models) may be unfactual information presented as fact. For example, a chatbot might state that the Mona Lisa was painted in 1815. This phenomenon is called AI hallucination, a term that takes its inspiration from human psychology, with the important difference that AI hallucinations are connected to unjustified beliefs (that is, AI “beliefs”) rather than perceptual failures. AI hallucinations may have their source in the data itself, that is, the (...)
  46. A phenomenology and epistemology of large language models: Transparency, trust, and trustworthiness. Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - forthcoming - Ethics and Information Technology.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architectures underpinning these chatbots are large language models (LLMs), which are generative AI (Artificial Intelligence) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that happens, (...)
  47. The Hazards of Putting Ethics on Autopilot. Julian Friedland, David B. Balkin & Kristian Myrseth - 2024 - MIT Sloan Management Review 65 (4).
    The generative AI boom is unleashing its minions. Enterprise software vendors have rolled out legions of automated assistants that use large language model (LLM) technology, such as ChatGPT, to offer users helpful suggestions or to execute simple tasks. These so-called copilots and chatbots can increase productivity and automate tedious manual work. In this article, we explain how that leads to the risk that users' ethical competence may degrade over time — and what to do about it.
  48. Emerging Technologies & Higher Education. Jake Burley & Alec Stubbs - 2023 - IEET White Papers.
    Extended Reality (XR) and Large Language Model (LLM) technologies have the potential to significantly influence higher education practices and pedagogy in the coming years. As these emerging technologies reshape the educational landscape, it is crucial for educators and higher education professionals to understand their implications and make informed policy decisions for both individual courses and universities as a whole. This paper has two parts. In the first half, we give an overview of XR technologies and their potential future role (...)
  49. Generative AI and medical ethics: the state of play. Hazem Zohny, Sebastian Porsdam Mann, Brian D. Earp & John McMillan - 2024 - Journal of Medical Ethics 50 (2):75-76.
    Since their public launch, a little over a year ago, large language models (LLMs) have inspired a flurry of analysis about what their implications might be for medical ethics, and for society more broadly. 1 Much of the recent debate has moved beyond categorical evaluations of the permissibility or impermissibility of LLM use in different general contexts (eg, at work or school), to more fine-grained discussions of the criteria that should govern their appropriate use in specific domains or towards (...)
  50. What is it for a Machine Learning Model to Have a Capability? Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to do something? (...)
Showing 1–50 of 112