Results for 'LLMs'

118 found
  1.
    The BMA COVID-19 ethical guidance: A legal analysis.James E. Hurford, LLB, LLM - 2020 - The New Bioethics 26 (2):176-189.
    The paper considers the recently published British Medical Association Guidance on ethical issues arising in relation to rationing of treatment during the COVID-19 Pandemic. It considers whether it...
    2 citations
  2.
    Unreliable LLM Bioethics Assistants: Ethical and Pedagogical Risks.Lea Goetz, Markus Trengove, Artem Trotsyuk & Carole A. Federico - 2023 - American Journal of Bioethics 23 (10):89-91.
    Whilst Rahimzadeh et al. (2023) apply a critical lens to the pedagogical use of LLM bioethics assistants, we outline here further reason for skepticism. Two features of LLM chatbots are of signific...
    1 citation
  3.
    Sequent reconstruction in LLM—A sweepline proof.R. Banach - 1995 - Annals of Pure and Applied Logic 73 (3):277-295.
    An alternative proof is given that to each LLM proof net there corresponds at least one LLM sequent proof. The construction is inspired by the sweepline technique from computational geometry and includes a treatment of the multiplicative constants and of proof boxes.
    1 citation
  4.
    AUTOGEN and the Ethics of Co-Creation with Personalized LLMs—Reply to the Commentaries.Sebastian Porsdam Mann, Brian D. Earp, Nikolaj Møller, Suren Vynn & Julian Savulescu - 2024 - American Journal of Bioethics 24 (3):6-14.
    In this reply to our commentators, we respond to ethical concerns raised about the potential use (or misuse) of personalized LLMs for academic idea and prose generation, including questions about c...
  5.
    Beyond algorithmic trust: interpersonal aspects on consent delegation to LLMs.Zeineb Sassi, Michael Hahn, Sascha Eickmann, Anne Herrmann-Johns & Max Tretter - 2024 - Journal of Medical Ethics 50 (2):139-139.
    In their article ‘Consent-GPT: is it ethical to delegate procedural consent to conversational AI?’, Allen et al 1 explore the ethical complexities involved in handing over parts of the process of obtaining medical consent to conversational Artificial Intelligence (AI) systems, that is, AI-driven large language models (LLMs) trained to interact with patients to inform them about upcoming medical procedures and assist in the process of obtaining informed consent.1 They focus specifically on challenges related to accuracy (4–5), trust (5), privacy (...)
  6. The Turing test is not a good benchmark for thought in LLMs.Tim Bayne & Iwan Williams - 2023 - Nature Human Behaviour 7:1806–1807.
  7.
    Meaning by Courtesy: LLM-Generated Texts and the Illusion of Content.Gary Ostertag - 2023 - American Journal of Bioethics 23 (10):91-93.
    Contrary to how it may seem when we observe its output, an [LLM] is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to...
    2 citations
  8. Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs.Harvey Lederman & Kyle Mahowald - forthcoming - Transactions of the Association for Computational Linguistics.
    Are LLMs cultural technologies like photocopiers or printing presses, which transmit information but cannot create new content? A challenge for this idea, which we call bibliotechnism, is that LLMs generate novel text. We begin with a defense of bibliotechnism, showing how even novel text may inherit its meaning from original human-generated text. We then argue that bibliotechnism faces an independent challenge from examples in which LLMs generate novel reference, using new names to refer to new entities. Such (...)
  9. Abundance of words versus Poverty of mind: The hidden human costs of LLMs.Quan-Hoang Vuong & Manh-Tung Ho - manuscript
    This essay analyzes the rise of Large Language Models (LLMs) such as GPT-4 or Gemini, which are now incorporated in a wide range of products and services in everyday life. Importantly, it considers some of their hidden human costs. First, is the question of who is left behind by the further infusion of LLMs in society. Second, is the issue of social inequalities between lingua franca and those which are not. Third, LLMs will help disseminate scientific concepts, (...)
  10.
    Exploring the psychology of LLMs’ Moral and Legal Reasoning.Guilherme F. C. F. Almeida, José Luiz Nunes, Neele Engelmann, Alex Wiegmann & Marcelo de Araújo - forthcoming - Artificial Intelligence.
  11.
    Religija, teologija i filozofske vještine automatiziranih programa za čavrljanje (chatbotova) pogonjenima velikim jezičnim modelima (LLM) [Religion, theology, and the philosophical skills of chatbots powered by large language models (LLMs)].Marcin Trepczyński - 2024 - Disputatio Philosophica 25 (1):19-36.
    The paper aims to show how faith and theology can be used to test the performance of large language models (LLMs), and of chatbots powered by such models, by measuring their philosophical skills. The results of testing four selected chatbots are presented: ChatGPT, Bing, Bard, and Llama2. Three possible sources from the domain of faith and theology were used for the tests: 1) the theory of the four senses of Scripture, 2) abstract theological statements, 3) an abstract logical formula derived from a religious text, in order to show that (...)
  12. Representações da Literatura em Alunos LLM: Da Voz da Escola a Surdez dos deuses [Representations of literature in LLM students: From the voice of the school to the deafness of the gods].Antônio Branco - 2005 - Quaestio: Revista de Estudos Em Educação 7 (2).
  13.
    Artificial Intelligence and content analysis: the large language models (LLMs) and the automatized categorization.Ana Carolina Carius & Alex Justen Teixeira - forthcoming - AI and Society:1-12.
    The growing advancement of Artificial Intelligence models based on deep learning and the consequent popularization of large language models (LLMs), such as ChatGPT, place the academic community facing unprecedented dilemmas, in addition to corroborating questions involving research activities and human beings. In this work, Content Analysis was chosen as the object of study, an important technique for analyzing qualitative data and frequently used among Brazilian researchers. The objective of this work was to compare the process of categorization by themes (...)
  14.
    Abundance of words versus poverty of mind: the hidden human costs co-created with LLMs.Quan-Hoang Vuong & Manh-Tung Ho - forthcoming - AI and Society:1-2.
  15.
    Introduction to the Special Issue - LLMs and Writing.Syed AbuMusab - 2024 - Teaching Philosophy 47 (2):139-142.
  16.
    James F. Bresnahan, SJ, JD, LLM, Ph.D., is Professor of Clinical Medicine, Department of Medicine, and Co-Director of the Ethics and Human Values in Medicine Program, Northwestern University Medical School, Chicago; David A. Buehler, M.Div., MA, is Coordinator of the bioethics committee and Director of Pastoral Care, Charlton Memorial Hospital, Fall River, Massachusetts. [REVIEW]Miriam Piven Cotler - 1993 - Cambridge Quarterly of Healthcare Ethics 2:125-126.
  17.
    AUTOGEN: A Personalized Large Language Model for Academic Enhancement—Ethics and Proof of Principle.Sebastian Porsdam Mann, Brian D. Earp, Nikolaj Møller, Suren Vynn & Julian Savulescu - 2023 - American Journal of Bioethics 23 (10):28-41.
    Large language models (LLMs) such as ChatGPT or Google’s Bard have shown significant performance on a variety of text-based tasks, such as summarization, translation, and even the generation of new...
    19 citations
  18.
    What Should ChatGPT Mean for Bioethics?I. Glenn Cohen - 2023 - American Journal of Bioethics 23 (10):8-16.
    In the last several months, several major disciplines have started their initial reckoning with what ChatGPT and other Large Language Models (LLMs) mean for them – law, medicine, business among other professions. With a heavy dose of humility, given how fast the technology is moving and how uncertain its social implications are, this article attempts to give some early tentative thoughts on what ChatGPT might mean for bioethics. I will first argue that many bioethics issues raised by ChatGPT are (...)
    25 citations
  19. Diagonalization & Forcing FLEX: From Cantor to Cohen and Beyond. Learning from Leibniz, Cantor, Turing, Gödel, and Cohen; crawling towards AGI.Elan Moritz - manuscript
    The paper continues my earlier Chat with OpenAI’s ChatGPT with a Focused LLM Experiment (FLEX). The idea is to conduct Large Language Model (LLM) based explorations of certain areas or concepts. The approach is based on crafting initial guiding prompts and then follow up with user prompts based on the LLMs’ responses. The goals include improving understanding of LLM capabilities and their limitations culminating in optimized prompts. The specific subjects explored as research subject matter include a) diagonalization techniques as (...)
  20. Large Language Models and Biorisk.William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, (...)
    1 citation
  21. Chatting with Chat(GPT-4): Quid est Understanding?Elan Moritz - manuscript
    What is Understanding? This is the first of a series of Chats with OpenAI’s ChatGPT (Chat). The main goal is to obtain Chat’s response to a series of questions about the concept of ’understand- ing’. The approach is a conversational approach where the author (labeled as user) asks (prompts) Chat, obtains a response, and then uses the response to formulate followup questions. David Deutsch’s assertion of the primality of the process / capability of understanding is used as the starting point. (...)
  22.
    Negotiating becoming: a Nietzschean critique of large language models.Simon W. S. Fischer & Bas de Boer - 2024 - Ethics and Information Technology 26 (3):1-12.
    Large language models (LLMs) structure the linguistic landscape by reflecting certain beliefs and assumptions. In this paper, we address the risk of people unthinkingly adopting and being determined by the values or worldviews embedded in LLMs. We provide a Nietzschean critique of LLMs and, based on the concept of will to power, consider LLMs as will-to-power organisations. This allows us to conceptualise the interaction between self and LLMs as power struggles, which we understand as negotiation. (...)
  23.
    Exploring the potential utility of AI large language models for medical ethics: an expert panel evaluation of GPT-4.Michael Balas, Jordan Joseph Wadden, Philip C. Hébert, Eric Mathison, Marika D. Warren, Victoria Seavilleklein, Daniel Wyzynski, Alison Callahan, Sean A. Crawford, Parnian Arjmand & Edsel B. Ing - 2024 - Journal of Medical Ethics 50 (2):90-96.
    Integrating large language models (LLMs) like GPT-4 into medical ethics is a novel concept, and understanding the effectiveness of these models in aiding ethicists with decision-making can have significant implications for the healthcare sector. Thus, the objective of this study was to evaluate the performance of GPT-4 in responding to complex medical ethical vignettes and to gauge its utility and limitations for aiding medical ethicists. Using a mixed-methods, cross-sectional survey approach, a panel of six ethicists assessed LLM-generated responses to (...)
    1 citation
  24.
    AI Enters Public Discourse: a Habermasian Assessment of the Moral Status of Large Language Models.Paolo Monti - 2024 - Ethics and Politics 61 (1):61-80.
    Large Language Models (LLMs) are generative AI systems capable of producing original texts based on inputs about topic and style provided in the form of prompts or questions. The introduction of the outputs of these systems into human discursive practices poses unprecedented moral and political questions. The article articulates an analysis of the moral status of these systems and their interactions with human interlocutors based on the Habermasian theory of communicative action. The analysis explores, among other things, Habermas's inquiries (...)
  25.
    Large language models in cryptocurrency securities cases: can a GPT model meaningfully assist lawyers?Arianna Trozze, Toby Davies & Bennett Kleinberg - forthcoming - Artificial Intelligence and Law:1-47.
    Large Language Models (LLMs) could be a useful tool for lawyers. However, empirical research on their effectiveness in conducting legal tasks is scant. We study securities cases involving cryptocurrencies as one of numerous contexts where AI could support the legal process, studying GPT-3.5’s legal reasoning and ChatGPT’s legal drafting capabilities. We examine whether a) GPT-3.5 can accurately determine which laws are potentially being violated from a fact pattern, and b) whether there is a difference in juror decision-making based on (...)
  26.
    Is Academic Enhancement Possible by Means of Generative AI-Based Digital Twins?Sven Nyholm - 2023 - American Journal of Bioethics 23 (10):44-47.
    Large Language Models (LLMs) “assign probabilities to sequences of text. When given some initial text, they use these probabilities to generate new text. Large language models are language models u...
    2 citations
  27.
    Large language models in medical ethics: useful but not expert.Andrea Ferrario & Nikola Biller-Andorno - forthcoming - Journal of Medical Ethics.
    Large language models (LLMs) have now entered the realm of medical ethics. In a recent study, Balaset alexamined the performance of GPT-4, a commercially available LLM, assessing its performance in generating responses to diverse medical ethics cases. Their findings reveal that GPT-4 demonstrates an ability to identify and articulate complex medical ethical issues, although its proficiency in encoding the depth of real-world ethical dilemmas remains an avenue for improvement. Investigating the integration of LLMs into medical ethics decision-making appears (...)
    2 citations
  28.
    Consent-GPT: is it ethical to delegate procedural consent to conversational AI?Jemima Winifred Allen, Brian D. Earp, Julian Koplin & Dominic Wilkinson - 2024 - Journal of Medical Ethics 50 (2):77-83.
    Obtaining informed consent from patients prior to a medical or surgical procedure is a fundamental part of safe and ethical clinical practice. Currently, it is routine for a significant part of the consent process to be delegated to members of the clinical team not performing the procedure (eg, junior doctors). However, it is common for consent-taking delegates to lack sufficient time and clinical knowledge to adequately promote patient autonomy and informed decision-making. Such problems might be addressed in a number of (...)
    5 citations
  29. Angry Men, Sad Women: Large Language Models Reflect Gendered Stereotypes in Emotion Attribution.Flor Miriam Plaza-del Arco, Amanda Cercas Curry & Alba Curry - 2024 - arXiv.
    Large language models (LLMs) reflect societal norms and biases, especially about gender. While societal biases and stereotypes have been extensively researched in various NLP applications, there is a surprising gap for emotion analysis. However, emotion and gender are closely linked in societal discourse. E.g., women are often thought of as more empathetic, while men's anger is more socially accepted. To fill this gap, we present the first comprehensive study of gendered emotion attribution in five state-of-the-art LLMs (open- and (...)
  30.
    Open AI meets open notes: surveillance capitalism, patient privacy and online record access.Charlotte Blease - 2024 - Journal of Medical Ethics 50 (2):84-89.
    Patient online record access (ORA) is spreading worldwide, and in some countries, including Sweden, and the USA, access is advanced with patients obtaining rapid access to their full records. In the UK context, from 31 October 2023 as part of the new NHS England general practitioner (GP) contract it will be mandatory for GPs to offer ORA to patients aged 16 and older. Patients report many benefits from reading their clinical records including feeling more empowered, better understanding and remembering their (...)
    1 citation
  31.
    Generative AI and medical ethics: the state of play.Hazem Zohny, Sebastian Porsdam Mann, Brian D. Earp & John McMillan - 2024 - Journal of Medical Ethics 50 (2):75-76.
    Since their public launch, a little over a year ago, large language models (LLMs) have inspired a flurry of analysis about what their implications might be for medical ethics, and for society more broadly. 1 Much of the recent debate has moved beyond categorical evaluations of the permissibility or impermissibility of LLM use in different general contexts (eg, at work or school), to more fine-grained discussions of the criteria that should govern their appropriate use in specific domains or towards (...)
    1 citation
  32.
    Boosting court judgment prediction and explanation using legal entities.Irene Benedetto, Alkis Koudounas, Lorenzo Vaiani, Eliana Pastor, Luca Cagliero, Francesco Tarasconi & Elena Baralis - forthcoming - Artificial Intelligence and Law:1-36.
    The automatic prediction of court case judgments using Deep Learning and Natural Language Processing is challenged by the variety of norms and regulations, the inherent complexity of the forensic language, and the length of legal judgments. Although state-of-the-art transformer-based architectures and Large Language Models (LLMs) are pre-trained on large-scale datasets, the underlying model reasoning is not transparent to the legal expert. This paper jointly addresses court judgment prediction and explanation by not only predicting the judgment but also providing legal (...)
  33.
    The Epistemological Danger of Large Language Models.Elise Li Zheng & Sandra Soo-Jin Lee - 2023 - American Journal of Bioethics 23 (10):102-104.
    The potential of ChatGPT looms large for the practice of medicine, as both boon and bane. The use of Large Language Models (LLMs) in platforms such as ChatGPT raises critical ethical questions of w...
    1 citation
  34.
    Large Language Models and Inclusivity in Bioethics Scholarship.Sumeeta Varma - 2023 - American Journal of Bioethics 23 (10):105-107.
    In the target article, Porsdam Mann and colleagues (2023) broadly survey the ethical opportunities and risks of using general and personalized large language models (LLMs) to generate academic pros...
    1 citation
  35.
    Large Language Models and the Reverse Turing Test.Terrence Sejnowski - 2023 - Neural Computation 35 (3):309–342.
    Large Language Models (LLMs) have been transformative. They are pre-trained foundational models that are self-supervised and can be adapted with fine tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and more recently LaMDA can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide (...)
    1 citation
  36.
    How Can Large Language Models Support the Acquisition of Ethical Competencies in Healthcare?Jilles Smids & Maartje Schermer - 2023 - American Journal of Bioethics 23 (10):68-70.
    Rahimzadeh et al. (2023) provide an interesting and timely discussion of the role of large language models (LLMs) in ethics education. While mentioning broader educational goals, the paper’s main f...
  37. Machine Advisors: Integrating Large Language Models into Democratic Assemblies.Petr Špecián - manuscript
    Large language models (LLMs) represent the currently most relevant incarnation of artificial intelligence with respect to the future fate of democratic governance. Considering their potential, this paper seeks to answer a pressing question: Could LLMs outperform humans as expert advisors to democratic assemblies? While bearing the promise of enhanced expertise availability and accessibility, they also present challenges of hallucinations, misalignment, or value imposition. Weighing LLMs’ benefits and drawbacks compared to their human counterparts, I argue for their careful (...)
  38.
    ChatGPT’s Relevance for Bioethics: A Novel Challenge to the Intrinsically Relational, Critical, and Reason-Giving Aspect of Healthcare.Ramón Alvarado & Nicolae Morar - 2023 - American Journal of Bioethics 23 (10):71-73.
    The rapid development of large language models (LLMs) and of their associated interfaces such as ChatGPT has brought forth a wave of epistemic and moral concerns in a variety of domains of inquiry...
    1 citation
  39.
    An Optimal Choice of Cognitive Diagnostic Model for Second Language Listening Comprehension Test.Yanyun Dong, Xiaomei Ma, Chuang Wang & Xuliang Gao - 2021 - Frontiers in Psychology 12.
    Cognitive diagnostic models show great promise in language assessment for providing rich diagnostic information. The lack of a full understanding of second language listening subskills made model selection difficult. In search of optimal CDM that could provide a better understanding of L2 listening subskills and facilitate accurate classification, this study carried a two-layer model selection. At the test level, A-CDM, LLM, and R-RUM had an acceptable and comparable model fit, suggesting mixed inter-attribute relationships of L2 listening subskills. At the item (...)
    1 citation
  40.
    Monotonicity Reasoning in the Age of Neural Foundation Models.Zeming Chen & Qiyue Gao - 2023 - Journal of Logic, Language and Information 33 (1):49-68.
    The recent advance of large language models (LLMs) demonstrates that these large-scale foundation models achieve remarkable capabilities across a wide range of language tasks and domains. The success of the statistical learning approach challenges our understanding of traditional symbolic and logical reasoning. The first part of this paper summarizes several works concerning the progress of monotonicity reasoning through neural networks and deep learning. We demonstrate different methods for solving the monotonicity reasoning task using neural and symbolic approaches and also (...)
  41. A Talking Cure for Autonomy Traps: How to share our social world with chatbots.Regina Rini - manuscript
    Large Language Models (LLMs) like ChatGPT were trained on human conversation, but in the future they will also train us. As chatbots speak from our smartphones and customer service helplines, they will become a part of everyday life and a growing share of all the conversations we ever have. It’s hard to doubt this will have some effect on us. Here I explore a specific concern about the impact of artificial conversation on our capacity to deliberate and hold ourselves (...)
  42.
    Charting the Terrain of Artificial Intelligence: a Multidimensional Exploration of Ethics, Agency, and Future Directions.Partha Pratim Ray & Pradip Kumar Das - 2023 - Philosophy and Technology 36 (2):1-7.
    This comprehensive analysis dives deep into the intricate interplay between artificial intelligence (AI) and human agency, examining the remarkable capabilities and inherent limitations of large language models (LLMs) such as GPT-3 and ChatGPT. The paper traces the complex trajectory of AI's evolution, highlighting its operation based on statistical pattern recognition, devoid of self-consciousness or innate comprehension. As AI permeates multiple spheres of human life, it raises substantial ethical, legal, and societal concerns that demand immediate attention and deliberation. The metaphorical (...)
    1 citation
  43. Plurale Autorschaft von Mensch und Künstlicher Intelligenz? [Plural authorship of human and artificial intelligence?].David Lauer - 2023 - Literatur in Wissenschaft Und Unterricht 2023 (2):245-266.
    This paper (in German) discusses the question of what is going on when large language models (LLMs) produce meaningful text in reaction to human prompts. Can LLMs be understood as authors or producers of speech acts? I argue that this question has to be answered in the negative, for two reasons. First, due to their lack of semantic understanding, LLMs do not understand what they are saying and hence literally do not know what they are (linguistically) doing. (...)
     
  44.
    Las lógicas modales en confrontación con los conceptos básicos de la lógica modal de G. W. Leibniz [Modal logics in confrontation with the basic concepts of G. W. Leibniz's modal logic].Jesús Padilla-Gálvez - 1991 - Theoria: Revista de Teoría, Historia y Fundamentos de la Ciencia 6 (1-2):115-127.
    This article is divided into an introduction and three sections. In the first section we examine Leibniz's termini necessitas-possibilitas. In the second we propose a minimal modal logic, LLM, arising from the addition of modal principles. Finally, in the last section we examine his complex studies towards the interpretation of modal language in possible worlds. The resulting interplay between the minimal modal logic and the possible-worlds perspective is one of the main charms of the semantics.
  45.
    Leveraging artificial intelligence to detect ethical concerns in medical research: a case study.Kannan Sridharan & Gowri Sivaramakrishnan - forthcoming - Journal of Medical Ethics.
    Background: Institutional review boards (IRBs) have been criticised for delays in approvals for research proposals due to inadequate or inexperienced IRB staff. Artificial intelligence (AI), particularly large language models (LLMs), has significant potential to assist IRB members in a prompt and efficient reviewing process. Methods: Four LLMs were evaluated on whether they could identify potential ethical issues in seven validated case studies. The LLMs were prompted with queries related to the proposed eligibility criteria of the study participants, vulnerability issues, information (...)
  46. Conditional and Modal Reasoning in Large Language Models.Wesley H. Holliday & Matthew Mandelkern - manuscript
    The reasoning abilities of large language models (LLMs) are the topic of a growing body of research in artificial intelligence and cognitive science. In this paper, we probe the extent to which a dozen LLMs are able to distinguish logically correct inferences from logically fallacious ones. We focus on inference patterns involving conditionals (e.g., 'If Ann has a queen, then Bob has a jack') and epistemic modals (e.g., 'Ann might have an ace', 'Bob must have a king'). These (...)
  47.
    The Impact of AUTOGEN and Similar Fine-Tuned Large Language Models on the Integrity of Scholarly Writing.David B. Resnik & Mohammad Hosseini - 2023 - American Journal of Bioethics 23 (10):50-52.
    Artificial intelligence (AI), large language models (LLMs), such as Open AI’s ChatGPT, have a remarkable ability to process and generate human language but have also raised complex and novel ethica...
    1 citation
  48.
    Friend or foe? Exploring the implications of large language models on the science system.Benedikt Fecher, Marcel Hebing, Melissa Laufer, Jörg Pohle & Fabian Sofsky - forthcoming - AI and Society:1-13.
    The advent of ChatGPT by OpenAI has prompted extensive discourse on its potential implications for science and higher education. While the impact on education has been a primary focus, there is limited empirical research on the effects of large language models (LLMs) and LLM-based chatbots on science and scientific practice. To investigate this further, we conducted a Delphi study involving 72 researchers specializing in AI and digitization. The study focused on applications and limitations of LLMs, their effects on (...)
    1 citation
  49.
    Machines Like Me: 4 Corollaries for Responsible Use of AI in the Bioethics Classroom.Craig M. Klugman & Cheryl J. Erwin - 2023 - American Journal of Bioethics 23 (10):86-88.
    Much of the recent AI-LLM literature has been apocalyptic in pointing out the risks of AI technology, “mitigating the risk of extinction from AI should be a global priority alongside other societal...
  50.
    Education Testing System by Artificial Intelligence.A. E. Ryabinin - forthcoming - Philosophical Problems of IT and Cyberspace (PhilIT&C).
    The article describes the possibilities of using and modifying existing machine learning technologies in the field of natural language processing for the purpose of designing a system for automatically generating control and test tasks (CTT). The reason for such studies was the limitations in generating the minimum required amount of CTT to maintain student engagement in game-based learning formats, such as quizzes, and others. These limitations are associated with the lack of time resources among training professionals for manual generation of tests. The article discusses the (...)
1 — 50 / 118