Results for 'LLM'

111 found
  1.
    The BMA COVID-19 ethical guidance: A legal analysis.James E. Hurford, LLM, LLB - 2020 - The New Bioethics 26 (2):176-189.
    The paper considers the recently published British Medical Association Guidance on ethical issues arising in relation to rationing of treatment during the COVID-19 Pandemic. It considers whether it...
  2.
    Unreliable LLM Bioethics Assistants: Ethical and Pedagogical Risks.Lea Goetz, Markus Trengove, Artem Trotsyuk & Carole A. Federico - 2023 - American Journal of Bioethics 23 (10):89-91.
    Whilst Rahimzadeh et al. (2023) apply a critical lens to the pedagogical use of LLM bioethics assistants, we outline here further reason for skepticism. Two features of LLM chatbots are of signific...
  3.
    Meaning by Courtesy: LLM-Generated Texts and the Illusion of Content.Gary Ostertag - 2023 - American Journal of Bioethics 23 (10):91-93.
    Contrary to how it may seem when we observe its output, an [LLM] is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to...
  4. Abundance of words versus Poverty of mind: The hidden human costs of LLMs.Quan-Hoang Vuong & Manh-Tung Ho - manuscript
    This essay analyzes the rise of Large Language Models (LLMs) such as GPT-4 or Gemini, which are now incorporated in a wide range of products and services in everyday life. Importantly, it considers some of their hidden human costs. First is the question of who is left behind by the further infusion of LLMs in society. Second is the issue of social inequalities between languages that serve as lingua francas and those that do not. Third, LLMs will help disseminate scientific concepts, but their meanings' (...)
  5.
    Sequent reconstruction in LLM—A sweepline proof.R. Banach - 1995 - Annals of Pure and Applied Logic 73 (3):277-295.
    An alternative proof is given that to each LLM proof net there corresponds at least one LLM sequent proof. The construction is inspired by the sweepline technique from computational geometry and includes a treatment of the multiplicative constants and of proof boxes.
  6. Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs.Harvey Lederman & Kyle Mahowald - manuscript
    Are LLMs cultural technologies like photocopiers or printing presses, which transmit information but cannot create new content? A challenge for this idea, which we call "bibliotechnism", is that LLMs often do generate entirely novel text. We begin by defending bibliotechnism against this challenge, showing how novel text may be meaningful only in a derivative sense, so that the content of this generated text depends in an important sense on the content of original human text. We go on to present a (...)
  7.
    Exploring the psychology of LLMs’ Moral and Legal Reasoning.Guilherme F. C. F. Almeida, José Luiz Nunes, Neele Engelmann, Alex Wiegmann & Marcelo de Araújo - forthcoming - Artificial Intelligence.
  8.
    Abundance of words versus poverty of mind: the hidden human costs co-created with LLMs.Quan-Hoang Vuong & Manh-Tung Ho - forthcoming - AI and Society:1-2.
  9.
    Beyond algorithmic trust: interpersonal aspects on consent delegation to LLMs.Zeineb Sassi, Michael Hahn, Sascha Eickmann, Anne Herrmann-Johns & Max Tretter - 2024 - Journal of Medical Ethics 50 (2):139-139.
    In their article ‘Consent-GPT: is it ethical to delegate procedural consent to conversational AI?’, Allen et al explore the ethical complexities involved in handing over parts of the process of obtaining medical consent to conversational Artificial Intelligence (AI) systems, that is, AI-driven large language models (LLMs) trained to interact with patients to inform them about upcoming medical procedures and assist in the process of obtaining informed consent. They focus specifically on challenges related to accuracy, trust, privacy, (...)
  10.
    AUTOGEN and the Ethics of Co-Creation with Personalized LLMs—Reply to the Commentaries.Sebastian Porsdam Mann, Brian D. Earp, Nikolaj Møller, Vynn Suren & Julian Savulescu - 2024 - American Journal of Bioethics 24 (3):6-14.
    In this reply to our commentators, we respond to ethical concerns raised about the potential use (or misuse) of personalized LLMs for academic idea and prose generation, including questions about c...
  11.
    The Turing test is not a good benchmark for thought in LLMs.Tim Bayne & Iwan Williams - 2023 - Nature Human Behaviour 7:1806–1807.
  12.
    Religion, Theology, and the Philosophical Skills of Chatbots Powered by Large Language Models (LLMs).Marcin Trepczyński - 2024 - Disputatio Philosophica 25 (1):19-36.
    The paper seeks to show how faith and theology can be used to test the performance of large language models (LLMs) and of the chatbots powered by such models, by measuring their philosophical skills. It presents the results of testing four selected chatbots: ChatGPT, Bing, Bard, and Llama2. For the purposes of the tests, three possible sources from the domain of faith and theology were used: 1) the theory of the four senses of Scripture, 2) abstract theological statements, 3) an abstract logical formula derived from a religious text, in order to show that (...)
  13. Representations of Literature in LLM Students: From the School's Voice to the Deafness of the Gods.Antônio Branco - 2005 - Quaestio: Revista de Estudos Em Educação 7 (2).
  14.
    James F. Bresnahan, SJ, JD, LLM, Ph.D., is Professor of Clinical Medicine, Department of Medicine, and Co-Director of the Ethics and Human Values in Medicine Program, Northwestern University Medical School, Chicago; David A. Buehler, M.Div., MA, is Coordinator of the Bioethics Committee and Director of Pastoral Care, Charlton Memorial Hospital, Fall River, Massachusetts. [REVIEW]Miriam Piven Cotler - 1993 - Cambridge Quarterly of Healthcare Ethics 2:125-126.
  15.
    AUTOGEN: A Personalized Large Language Model for Academic Enhancement—Ethics and Proof of Principle.Sebastian Porsdam Mann, Brian D. Earp, Nikolaj Møller, Suren Vynn & Julian Savulescu - 2023 - American Journal of Bioethics 23 (10):28-41.
    Large language models (LLMs) such as ChatGPT or Google’s Bard have shown significant performance on a variety of text-based tasks, such as summarization, translation, and even the generation of new...
  16. Chatting with Chat(GPT-4): Quid est Understanding?Elan Moritz - manuscript
    What is Understanding? This is the first of a series of Chats with OpenAI’s ChatGPT (Chat). The main goal is to obtain Chat’s response to a series of questions about the concept of ’understanding’. The approach is a conversational approach where the author (labeled as user) asks (prompts) Chat, obtains a response, and then uses the response to formulate followup questions. David Deutsch’s assertion of the primality of the process / capability of understanding is used as the starting point. (...)
  17.
    What Should ChatGPT Mean for Bioethics?I. Glenn Cohen - 2023 - American Journal of Bioethics 23 (10):8-16.
    In the last several months, several major disciplines have started their initial reckoning with what ChatGPT and other Large Language Models (LLMs) mean for them – law, medicine, business among other professions. With a heavy dose of humility, given how fast the technology is moving and how uncertain its social implications are, this article attempts to give some early tentative thoughts on what ChatGPT might mean for bioethics. I will first argue that many bioethics issues raised by ChatGPT are similar (...)
  18. Large Language Models and Biorisk.William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, and access (...)
  19. Diagonalization & Forcing FLEX: From Cantor to Cohen and Beyond. Learning from Leibniz, Cantor, Turing, Gödel, and Cohen; crawling towards AGI.Elan Moritz - manuscript
    The paper continues my earlier Chat with OpenAI’s ChatGPT with a Focused LLM Experiment (FLEX). The idea is to conduct Large Language Model (LLM) based explorations of certain areas or concepts. The approach is based on crafting initial guiding prompts and then follow up with user prompts based on the LLMs’ responses. The goals include improving understanding of LLM capabilities and their limitations culminating in optimized prompts. The specific subjects explored as research subject matter include a) diagonalization techniques as practiced (...)
  20. AI-Testimony, Conversational AIs and Our Anthropocentric Theory of Testimony.Ori Freiman - forthcoming - Social Epistemology.
    The ability to interact in a natural language profoundly changes devices’ interfaces and potential applications of speaking technologies. Concurrently, this phenomenon challenges our mainstream theories of knowledge, such as how to analyze linguistic outputs of devices under existing anthropocentric theoretical assumptions. In section 1, I present the topic of machines that speak, connecting between Descartes and Generative AI. In section 2, I argue that accepted testimonial theories of knowledge and justification commonly reject the possibility that a speaking technological artifact can (...)
  21. Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution.Flor Miriam Plaza-del Arco, Amanda Cercas Curry & Alba Curry - 2024 - Arxiv.
    Large language models (LLMs) reflect societal norms and biases, especially about gender. While societal biases and stereotypes have been extensively researched in various NLP applications, there is a surprising gap for emotion analysis. However, emotion and gender are closely linked in societal discourse. E.g., women are often thought of as more empathetic, while men's anger is more socially accepted. To fill this gap, we present the first comprehensive study of gendered emotion attribution in five state-of-the-art LLMs (open- and closed-source). We (...)
  22.
    Large Language Models and the Reverse Turing Test.Terrence Sejnowski - 2023 - Neural Computation 35 (3):309–342.
    Large Language Models (LLMs) have been transformative. They are pre-trained foundational models that are self-supervised and can be adapted with fine tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and more recently LaMDA can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide range (...)
  23. Apropos of "Speciesist bias in AI: how AI applications perpetuate discrimination and unfair outcomes against animals".Ognjen Arandjelović - 2023 - AI and Ethics.
    The present comment concerns a recent AI & Ethics article which purports to report evidence of speciesist bias in various popular computer vision (CV) and natural language processing (NLP) machine learning models described in the literature. I examine the authors' analysis and show it, ironically, to be prejudicial, often being founded on poorly conceived assumptions and suffering from fallacious and insufficiently rigorous reasoning, its superficial appeal in large part relying on the sequacity of the article's target readership.
  24.
    Large language models in medical ethics: useful but not expert.Andrea Ferrario & Nikola Biller-Andorno - forthcoming - Journal of Medical Ethics.
    Large language models (LLMs) have now entered the realm of medical ethics. In a recent study, Balas et al examined the performance of GPT-4, a commercially available LLM, assessing its performance in generating responses to diverse medical ethics cases. Their findings reveal that GPT-4 demonstrates an ability to identify and articulate complex medical ethical issues, although its proficiency in encoding the depth of real-world ethical dilemmas remains an avenue for improvement. Investigating the integration of LLMs into medical ethics decision-making appears to be (...)
  25.
    Exploring the potential utility of AI large language models for medical ethics: an expert panel evaluation of GPT-4.Michael Balas, Jordan Joseph Wadden, Philip C. Hébert, Eric Mathison, Marika D. Warren, Victoria Seavilleklein, Daniel Wyzynski, Alison Callahan, Sean A. Crawford, Parnian Arjmand & Edsel B. Ing - 2024 - Journal of Medical Ethics 50 (2):90-96.
    Integrating large language models (LLMs) like GPT-4 into medical ethics is a novel concept, and understanding the effectiveness of these models in aiding ethicists with decision-making can have significant implications for the healthcare sector. Thus, the objective of this study was to evaluate the performance of GPT-4 in responding to complex medical ethical vignettes and to gauge its utility and limitations for aiding medical ethicists. Using a mixed-methods, cross-sectional survey approach, a panel of six ethicists assessed LLM-generated responses to eight (...)
  26. Machine Advisors: Integrating Large Language Models into Democratic Assemblies.Petr Špecián - manuscript
    Large language models (LLMs) represent the currently most relevant incarnation of artificial intelligence with respect to the future fate of democratic governance. Considering their potential, this paper seeks to answer a pressing question: Could LLMs outperform humans as expert advisors to democratic assemblies? While bearing the promise of enhanced expertise availability and accessibility, they also present challenges of hallucinations, misalignment, or value imposition. Weighing LLMs’ benefits and drawbacks compared to their human counterparts, I argue for their careful integration to augment (...)
  27.
    Is Academic Enhancement Possible by Means of Generative AI-Based Digital Twins?Sven Nyholm - 2023 - American Journal of Bioethics 23 (10):44-47.
    Large Language Models (LLMs) “assign probabilities to sequences of text. When given some initial text, they use these probabilities to generate new text. Large language models are language models u...
  28.
    Large language models in cryptocurrency securities cases: can a GPT model meaningfully assist lawyers?Arianna Trozze, Toby Davies & Bennett Kleinberg - forthcoming - Artificial Intelligence and Law:1-47.
    Large Language Models (LLMs) could be a useful tool for lawyers. However, empirical research on their effectiveness in conducting legal tasks is scant. We study securities cases involving cryptocurrencies as one of numerous contexts where AI could support the legal process, studying GPT-3.5’s legal reasoning and ChatGPT’s legal drafting capabilities. We examine whether a) GPT-3.5 can accurately determine which laws are potentially being violated from a fact pattern, and b) whether there is a difference in juror decision-making based on complaints (...)
  29. Language Agents Reduce the Risk of Existential Catastrophe.Simon Goldstein & Cameron Domenico Kirk-Giannini - forthcoming - AI and Society:1-11.
    Recent advances in natural language processing have given rise to a new kind of AI architecture: the language agent. By repeatedly calling an LLM to perform a variety of cognitive tasks, language agents are able to function autonomously to pursue goals specified in natural language and stored in a human-readable format. Because of their architecture, language agents exhibit behavior that is predictable according to the laws of folk psychology: they function as though they have desires and beliefs, and then make (...)
  30. A Talking Cure for Autonomy Traps : How to share our social world with chatbots.Regina Rini - manuscript
    Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It’s hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and hold ourselves accountable (...)
  31.
    Generative AI and medical ethics: the state of play.Hazem Zohny, Sebastian Porsdam Mann, Brian D. Earp & John McMillan - 2024 - Journal of Medical Ethics 50 (2):75-76.
    Since their public launch, a little over a year ago, large language models (LLMs) have inspired a flurry of analysis about what their implications might be for medical ethics, and for society more broadly. Much of the recent debate has moved beyond categorical evaluations of the permissibility or impermissibility of LLM use in different general contexts (eg, at work or school), to more fine-grained discussions of the criteria that should govern their appropriate use in specific domains or towards certain (...)
  32. ChatGPT and the Technology-Education Tension: Applying Contextual Virtue Epistemology to a Cognitive Artifact.Guido Cassinadri - 2024 - Philosophy and Technology 37 (14):1-28.
    According to virtue epistemology, the main aim of education is the development of the cognitive character of students (Pritchard, 2014, 2016). Given the proliferation of technological tools such as ChatGPT and other LLMs for solving cognitive tasks, how should educational practices incorporate the use of such tools without undermining the cognitive character of students? Pritchard (2014, 2016) argues that it is possible to properly solve this ‘technology-education tension’ (TET) by combining the virtue epistemology framework with the theory of extended cognition (...)
  33.
    Beyond the limitations of any imaginable mechanism: Large language models and psycholinguistics.Conor Houghton, Nina Kazanina & Priyanka Sukumaran - 2023 - Behavioral and Brain Sciences 46:e395.
    Large language models (LLMs) are not detailed models of human linguistic processing. They are, however, extremely successful at their primary task: Providing a model for language. For this reason LLMs are important in psycholinguistics: They are useful as a practical tool, as an illustrative comparative, and philosophically, as a basis for recasting the relationship between language and thought.
  34.
    Assessing the Strengths and Weaknesses of Large Language Models.Shalom Lappin - 2023 - Journal of Logic, Language and Information 33 (1):9-20.
    The transformers that drive chatbots and other AI systems constitute large language models (LLMs). These are currently the focus of a lively discussion in both the scientific literature and the popular media. This discussion ranges from hyperbolic claims that attribute general intelligence and sentience to LLMs, to the skeptical view that these devices are no more than “stochastic parrots”. I present an overview of some of the weak arguments that have been presented against LLMs, and I consider several of the (...)
  35.
    An Optimal Choice of Cognitive Diagnostic Model for Second Language Listening Comprehension Test.Yanyun Dong, Xiaomei Ma, Chuang Wang & Xuliang Gao - 2021 - Frontiers in Psychology 12.
    Cognitive diagnostic models show great promise in language assessment for providing rich diagnostic information. The lack of a full understanding of second language listening subskills made model selection difficult. In search of optimal CDM that could provide a better understanding of L2 listening subskills and facilitate accurate classification, this study carried a two-layer model selection. At the test level, A-CDM, LLM, and R-RUM had an acceptable and comparable model fit, suggesting mixed inter-attribute relationships of L2 listening subskills. At the item (...)
  36. You are what you’re for: Essentialist categorization in large language models.Siying Zhang, Selena She, Tobias Gerstenberg & David Rose - forthcoming - Proceedings of the 45Th Annual Conference of the Cognitive Science Society.
    How do essentialist beliefs about categories arise? We hypothesize that such beliefs are transmitted via language. We subject large language models (LLMs) to vignettes from the literature on essentialist categorization and find that they align well with people when the studies manipulated teleological information -- information about what something is for. We examine whether in a classic test of essentialist categorization -- the transformation task -- LLMs prioritize teleological properties over information about what something looks like, or is made of. (...)
  37. Taking AI Risks Seriously: a New Assessment Model for the AI Act.Claudio Novelli, Casolari Federico, Antonino Rotolo, Mariarosaria Taddeo & Luciano Floridi - 2023 - AI and Society 38 (3):1-5.
    The EU proposal for the Artificial Intelligence Act (AIA) defines four risk categories: unacceptable, high, limited, and minimal. However, as these categories statically depend on broad fields of application of AI, the risk magnitude may be wrongly estimated, and the AIA may not be enforced effectively. This problem is particularly challenging when it comes to regulating general-purpose AI (GPAI), which has versatile and often unpredictable applications. Recent amendments to the compromise text, though introducing context-specific assessments, remain insufficient. To address this, (...)
  38.
    Technosophistische Schattenspiele.Wessel Reijers, Felix Maschewski & Anna-Verena Nosthoff - 2023 - Philosophie Magazin.
  39. Emerging Technologies & Higher Education.Jake Burley & Alec Stubbs - 2023 - Ieet White Papers.
    Extended Reality (XR) and Large Language Model (LLM) technologies have the potential to significantly influence higher education practices and pedagogy in the coming years. As these emerging technologies reshape the educational landscape, it is crucial for educators and higher education professionals to understand their implications and make informed policy decisions for both individual courses and universities as a whole. This paper has two parts. In the first half, we give an overview of XR technologies and their potential future role (...)
  40. Babbling stochastic parrots? On reference and reference change in large language models.Steffen Koch - manuscript
    Recently developed large language models (LLMs) perform surprisingly well in many language-related tasks, ranging from text correction or authentic chat experiences to the production of entirely new texts or even essays. It is natural to get the impression that LLMs know the meaning of natural language expressions and can use them productively. Recent scholarship, however, has questioned the validity of this impression, arguing that LLMs are ultimately incapable of understanding and producing meaningful texts. This paper develops a more optimistic view. (...)
  41.
    The Hazards of Putting Ethics on Autopilot.Julian Friedland, David B. Balkin & Kristian Myrseth - 2024 - MIT Sloan Management Review 65 (4).
    The generative AI boom is unleashing its minions. Enterprise software vendors have rolled out legions of automated assistants that use large language model (LLM) technology, such as ChatGPT, to offer users helpful suggestions or to execute simple tasks. These so-called copilots and chatbots can increase productivity and automate tedious manual work. In this article, we explain how that leads to the risk that users' ethical competence may degrade over time — and what to do about it.
  42. What is it for a Machine Learning Model to Have a Capability?Jacqueline Harding & Nathaniel Sharadin - forthcoming - British Journal for the Philosophy of Science.
    What can contemporary machine learning (ML) models do? Given the proliferation of ML models in society, answering this question matters to a variety of stakeholders, both public and private. The evaluation of models' capabilities is rapidly emerging as a key subfield of modern ML, buoyed by regulatory attention and government grants. Despite this, the notion of an ML model possessing a capability has not been interrogated: what are we saying when we say that a model is able to do something? (...)
  43.
    Consent-GPT: is it ethical to delegate procedural consent to conversational AI?Jemima Winifred Allen, Brian D. Earp, Julian Koplin & Dominic Wilkinson - 2024 - Journal of Medical Ethics 50 (2):77-83.
    Obtaining informed consent from patients prior to a medical or surgical procedure is a fundamental part of safe and ethical clinical practice. Currently, it is routine for a significant part of the consent process to be delegated to members of the clinical team not performing the procedure (eg, junior doctors). However, it is common for consent-taking delegates to lack sufficient time and clinical knowledge to adequately promote patient autonomy and informed decision-making. Such problems might be addressed in a number of (...)
  44.
    Large Language Models Demonstrate the Potential of Statistical Learning in Language.Pablo Contreras Kallens, Ross Deans Kristensen-McLachlan & Morten H. Christiansen - 2023 - Cognitive Science 47 (3):e13256.
    To what degree can language be acquired from linguistic input alone? This question has vexed scholars for millennia and is still a major focus of debate in the cognitive science of language. The complexity of human language has hampered progress because studies of language–especially those involving computational modeling–have only been able to deal with small fragments of our linguistic skills. We suggest that the most recent generation of Large Language Models (LLMs) might finally provide the computational tools to determine empirically (...)
  45.
    ChatGPT is no Stochastic Parrot. But it also Claims that 1 is Greater than 1.Konstantine Arkoudas - 2023 - Philosophy and Technology 36 (3):1-29.
    This article is a commentary on ChatGPT and LLMs (Large Language Models) in general. It argues that this technology has matured to the point where calling systems such as ChatGPT “stochastic parrots” is no longer warranted. But it also argues that these systems continue to have serious limitations when it comes to reasoning. These limitations are much more severe than commonly thought. A large array of examples are given to support these claims.
  46. Publish with AUTOGEN or Perish? Some Pitfalls to Avoid in the Pursuit of Academic Enhancement via Personalized Large Language Models.Alexandre Erler - 2023 - American Journal of Bioethics 23 (10):94-96.
    The potential of using personalized Large Language Models (LLMs) or “generative AI” (GenAI) to enhance productivity in academic research, as highlighted by Porsdam Mann and colleagues (Porsdam Mann...
  47.
    Open AI meets open notes: surveillance capitalism, patient privacy and online record access.Charlotte Blease - 2024 - Journal of Medical Ethics 50 (2):84-89.
    Patient online record access (ORA) is spreading worldwide, and in some countries, including Sweden, and the USA, access is advanced with patients obtaining rapid access to their full records. In the UK context, from 31 October 2023 as part of the new NHS England general practitioner (GP) contract it will be mandatory for GPs to offer ORA to patients aged 16 and older. Patients report many benefits from reading their clinical records including feeling more empowered, better understanding and remembering their (...)
  48.
    Large Language Models, Agency, and Why Speech Acts are Beyond Them (For Now) – A Kantian-Cum-Pragmatist Case.Reto Gubelmann - 2024 - Philosophy and Technology 37 (1):1-24.
    This article sets in with the question whether current or foreseeable transformer-based large language models (LLMs), such as the ones powering OpenAI’s ChatGPT, could be language users in a way comparable to humans. It answers the question negatively, presenting the following argument. Apart from niche uses, to use language means to act. But LLMs are unable to act because they lack intentions. This, in turn, is because they are the wrong kind of being: agents with intentions need to be autonomous (...)
  49.
    Why Personalized Large Language Models Fail to Do What Ethics is All About.Sebastian Laacke & Charlotte Gauckler - 2023 - American Journal of Bioethics 23 (10):60-63.
    Porsdam Mann and colleagues provide an overview of opportunities and risks associated with the use of personalized large language models (LLMs) for text production in (bio)ethics (Porsdam Mann et al...
  50.
    The Epistemological Danger of Large Language Models.Elise Li Zheng & Sandra Soo-Jin Lee - 2023 - American Journal of Bioethics 23 (10):102-104.
    The potential of ChatGPT looms large for the practice of medicine, as both boon and bane. The use of Large Language Models (LLMs) in platforms such as ChatGPT raises critical ethical questions of w...
Results 1–50 of 111