Results for 'GPT-3'

1000+ found
  1. GPT-3: its nature, scope, limits, and consequences. Luciano Floridi & Massimo Chiriatti - 2020 - Minds and Machines 30 (4):681–694.
    In this commentary, we discuss the nature of reversible and irreversible questions, that is, questions that may enable one to identify the nature of the source of their answers. We then introduce GPT-3, a third-generation, autoregressive language model that uses deep learning to produce human-like texts, and use the previous distinction to analyse it. We expand the analysis to present three tests based on mathematical, semantic, and ethical questions and show that GPT-3 is not designed to pass any of them. (...)
    34 citations
  2. Plagiarism in the age of massive Generative Pre-trained Transformers (GPT-3). Nassim Dehouche - 2021 - Ethics in Science and Environmental Politics 21:17-23.
    As if 2020 was not a peculiar enough year, its fifth month saw the relatively quiet publication of a preprint describing the most powerful natural language processing (NLP) system to date—GPT-3 (Generative Pre-trained Transformer-3)—created by the Silicon Valley research firm OpenAI. Though the software implementation of GPT-3 is still in its initial beta release phase, and its full capabilities are still unknown as of the time of this writing, it has been shown that this artificial intelligence can comprehend prompts in (...)
    2 citations
  3. Playing Games with AIs: The Limits of GPT-3 and Similar Large Language Models. Adam Sobieszek & Tadeusz Price - 2022 - Minds and Machines 32 (2):341-364.
    This article contributes to the debate around the abilities of large language models such as GPT-3, dealing with: firstly, evaluating how well GPT does in the Turing Test, secondly the limits of such models, especially their tendency to generate falsehoods, and thirdly the social consequences of the problems these models have with truth-telling. We start by formalising the recently proposed notion of reversible questions, which Floridi & Chiriatti propose allow one to ‘identify the nature of the source of their answers’, (...)
    5 citations
  4. Student Voices on GPT-3, Writing Assignments, and the Future College Classroom. Bada Kim, Sarah Robins & Jihui Huang - 2024 - Teaching Philosophy 47 (2):213-231.
    This paper presents a summary and discussion of an assignment that asked students about the impact of Large Language Models on their college education. Our analysis summarizes students’ perception of GPT-3, categorizes their proposals for modifying college courses, and identifies their stated values about their college education. Furthermore, this analysis provides a baseline for tracking students’ attitudes toward LLMs and contributes to the conversation on student perceptions of the relationship between writing and philosophy.
  5. Plagiarism in the age of massive Generative Pre-trained Transformers (GPT-3). Nassim Dehouche - 2021 - Ethics in Science and Environmental Politics 21:17-23.
    As if 2020 were not a peculiar enough year, its fifth month has seen the relatively quiet publication of a preprint describing the most powerful Natural Language Processing (NLP) system to date, GPT-3 (Generative Pre-trained Transformer-3), by Silicon Valley research firm OpenAI. Though the software implementation of GPT-3 is still in its initial Beta release phase, and its full capabilities are still unknown as of the time of this writing, it has been shown that this Artificial Intelligence can comprehend prompts (...)
    2 citations
  6. A Loosely Wittgensteinian Conception of the Linguistic Understanding of Large Language Models like BERT, GPT-3, and ChatGPT. Reto Gubelmann - 2023 - Grazer Philosophische Studien 99 (4):485-523.
    In this article, I develop a loosely Wittgensteinian conception of what it takes for a being, including an AI system, to understand language, and I suggest that current state of the art systems are closer to fulfilling these requirements than one might think. Developing and defending this claim has both empirical and conceptual aspects. The conceptual aspects concern the criteria that are reasonably applied when judging whether some being understands language; the empirical aspects concern the question whether a given being (...)
    2 citations
  7. Epistemology Goes AI: A Study of GPT-3’s Capacity to Generate Consistent and Coherent Ordered Sets of Propositions on a Single-Input-Multiple-Outputs Basis. Marcelo de Araujo, Guilherme de Almeida & José Luiz Nunes - 2024 - Minds and Machines 34 (1):1-18.
    The more we rely on digital assistants, online search engines, and AI systems to revise our system of beliefs and increase our body of knowledge, the less we are able to resort to some independent criterion, unrelated to further digital tools, in order to assess the epistemic reliability of the outputs delivered by them. This raises some important questions for epistemology in general and pressing questions for applied epistemology in particular. In this paper, we propose an experimental method for (...)
  8. At the intersection of humanity and technology: a technofeminist intersectional critical discourse analysis of gender and race biases in the natural language processing model GPT-3. M. A. Palacios Barea, D. Boeren & J. F. Ferreira Goncalves - forthcoming - AI and Society:1-19.
    Algorithmic biases, or algorithmic unfairness, have been a topic of public and scientific scrutiny for the past years, as increasing evidence suggests the pervasive assimilation of human cognitive biases and stereotypes in such systems. This research is specifically concerned with analyzing the presence of discursive biases in the text generated by GPT-3, an NLPM which has been praised in recent years for resembling human language so closely that it is becoming difficult to differentiate between the human and the algorithm. The (...)
  9. How persuasive is AI-generated argumentation? An analysis of the quality of an argumentative text produced by the GPT-3 AI text generator. Martin Hinton & Jean H. M. Wagemans - 2023 - Argument and Computation 14 (1):59-74.
    In this paper, we use a pseudo-algorithmic procedure for assessing an AI-generated text. We apply the Comprehensive Assessment Procedure for Natural Argumentation (CAPNA) in evaluating the arguments produced by an Artificial Intelligence text generator, GPT-3, in an opinion piece written for the Guardian newspaper. The CAPNA examines instances of argumentation in three aspects: their Process, Reasoning and Expression. Initial Analysis is conducted using the Argument Type Identification Procedure (ATIP) to establish, firstly, that an argument is present and, secondly, its specific (...)
    2 citations
  10. GPT-4-Trinis: assessing GPT-4’s communicative competence in the English-speaking majority world. Samantha Jackson, Barend Beekhuizen, Zhao Zhao & Rhonda McEwen - forthcoming - AI and Society:1-17.
    Biases and misunderstanding stemming from pre-training in Generative Pre-Trained Transformers are more likely for users of underrepresented English varieties, since the training dataset favors dominant Englishes (e.g., American English). We investigate (potential) bias in GPT-4 when it interacts with Trinidadian English Creole (TEC), a non-hegemonic English variety that partially overlaps with standardized English (SE) but still contains distinctive characteristics. (1) Comparable responses: we asked GPT-4 18 questions in TEC and SE and compared the content and detail of the responses. (2) (...)
  11. Large language models in cryptocurrency securities cases: can a GPT model meaningfully assist lawyers? Arianna Trozze, Toby Davies & Bennett Kleinberg - forthcoming - Artificial Intelligence and Law:1-47.
    Large Language Models (LLMs) could be a useful tool for lawyers. However, empirical research on their effectiveness in conducting legal tasks is scant. We study securities cases involving cryptocurrencies as one of numerous contexts where AI could support the legal process, studying GPT-3.5’s legal reasoning and ChatGPT’s legal drafting capabilities. We examine whether a) GPT-3.5 can accurately determine which laws are potentially being violated from a fact pattern, and b) whether there is a difference in juror decision-making based on complaints (...)
  12. Detection of GPT-4 Generated Text in Higher Education: Combining Academic Judgement and Software to Identify Generative AI Tool Misuse. Mike Perkins, Jasper Roe, Darius Postma, James McGaughran & Don Hickerson - 2024 - Journal of Academic Ethics 22 (1):89-113.
    This study explores the capability of academic staff assisted by the Turnitin Artificial Intelligence (AI) detection tool to identify the use of AI-generated content in university assessments. 22 different experimental submissions were produced using Open AI’s ChatGPT tool, with prompting techniques used to reduce the likelihood of AI detectors identifying AI-generated content. These submissions were marked by 15 academic staff members alongside genuine student submissions. Although the AI detection tool identified 91% of the experimental submissions as containing AI-generated content, only (...)
  13. Exploring the potential utility of AI large language models for medical ethics: an expert panel evaluation of GPT-4. Michael Balas, Jordan Joseph Wadden, Philip C. Hébert, Eric Mathison, Marika D. Warren, Victoria Seavilleklein, Daniel Wyzynski, Alison Callahan, Sean A. Crawford, Parnian Arjmand & Edsel B. Ing - 2024 - Journal of Medical Ethics 50 (2):90-96.
    Integrating large language models (LLMs) like GPT-4 into medical ethics is a novel concept, and understanding the effectiveness of these models in aiding ethicists with decision-making can have significant implications for the healthcare sector. Thus, the objective of this study was to evaluate the performance of GPT-4 in responding to complex medical ethical vignettes and to gauge its utility and limitations for aiding medical ethicists. Using a mixed-methods, cross-sectional survey approach, a panel of six ethicists assessed LLM-generated responses to eight (...)
    1 citation
  14. Ethical implications of text generation in the age of artificial intelligence. Laura Illia, Elanor Colleoni & Stelios Zyglidopoulos - 2022 - Business Ethics, the Environment and Responsibility 32 (1):201-210.
    We are at a turning point in the debate on the ethics of Artificial Intelligence (AI) because we are witnessing the rise of general-purpose AI text agents such as GPT-3 that can generate large-scale highly refined content that appears to have been written by a human. Yet, a discussion on the ethical issues related to the blurring of the roles between humans and machines in the production of content in the business arena is lacking. In this conceptual paper, drawing on (...)
    3 citations
  15. How far can we get in creating a digital replica of a philosopher? Anna Strasser, Eric Schwitzgebel & Matthew Crosby - 2023 - In Raul Hakli, Pekka Mäkelä & Johanna Seibt (eds.), Social Robots in Social Institutions. Proceedings of Robophilosophy 2022. IOS Press. pp. 371-380.
    Can we build machines with which we can have interesting conversations? Observing the new optimism of AI regarding deep learning and new language models, we set ourselves an ambitious goal: We want to find out how far we can get in creating a digital replica of a philosopher. This project has two aims: one, more technical, investigates how the best model can be built; the other, more philosophical, explores the limits and risks that accompany the creation (...)
  16. Beginning AI Phenomenology. Robert S. Leib - 2024 - Journal of Speculative Philosophy 38 (1):62-82.
    This dialogue with GPT-3 took place in November 2022, several weeks before ChatGPT was released to the public. The article’s aim is to find out whether natural language processors can participate in phenomenology at some level by asking about its basic concepts. In the discussion, the dialogue covers questions about phenomenology’s definition and distinction from other subbranches like metaphysics and epistemology. The dialogue discusses the nature of Kermit’s environment and self-conception. The dialogue also establishes some of the basic conditions (...)
    1 citation
  17. Language and Intelligence. Carlos Montemayor - 2021 - Minds and Machines 31 (4):471-486.
    This paper explores aspects of GPT-3 that have been discussed as harbingers of artificial general intelligence and, in particular, linguistic intelligence. After introducing key features of GPT-3 and assessing its performance in the light of the conversational standards set by Alan Turing in his seminal paper from 1950, the paper elucidates the difference between clever automation and genuine linguistic intelligence. A central theme of this discussion on genuine conversational intelligence is that members of a linguistic community never merely respond “algorithmically” (...)
    6 citations
  18. AI and the future of humanity: ChatGPT-4, philosophy and education – Critical responses. Michael A. Peters, Liz Jackson, Marianna Papastephanou, Petar Jandrić, George Lazaroiu, Colin W. Evers, Bill Cope, Mary Kalantzis, Daniel Araya, Marek Tesar, Carl Mika, Lei Chen, Chengbing Wang, Sean Sturm, Sharon Rider & Steve Fuller - forthcoming - Educational Philosophy and Theory.
    Michael A. Peters, Beijing Normal University: ChatGPT is an AI chatbot released by OpenAI on November 30, 2022, with a ‘stable release’ on February 13, 2023. It belongs to OpenAI’s GPT-3 family (generativ...
    2 citations
  19. A Pragmatic Approach to the Intentional Stance: Semantic, Empirical and Ethical Considerations for the Design of Artificial Agents. Guglielmo Papagni & Sabine Koeszegi - 2021 - Minds and Machines 31 (4):505-534.
    Artificial agents are progressively becoming more present in everyday-life situations and more sophisticated in their interaction affordances. In some specific cases, like Google Duplex, GPT-3 bots or Deep Mind’s AlphaGo Zero, their capabilities reach or exceed human levels. The use contexts of everyday life necessitate making such agents understandable by laypeople. At the same time, displaying human levels of social behavior has kindled the debate over the adoption of Dennett’s ‘intentional stance’. By means of a comparative analysis of the literature (...)
    7 citations
  20. Do Large Language Models Know What Humans Know? Sean Trott, Cameron Jones, Tyler Chang, James Michaelov & Benjamin Bergen - 2023 - Cognitive Science 47 (7):e13309.
    Humans can attribute beliefs to others. However, it is unknown to what extent this ability results from an innate biological endowment or from experience accrued through child development, particularly exposure to language describing others' mental states. We test the viability of the language exposure hypothesis by assessing whether models exposed to large quantities of human language display sensitivity to the implied knowledge states of characters in written passages. In pre‐registered analyses, we present a linguistic version of the False Belief Task (...)
    3 citations
  21. Creating a large language model of a philosopher. Eric Schwitzgebel, David Schwitzgebel & Anna Strasser - 2023 - Mind and Language 39 (2):237-259.
    Can large language models produce expert‐quality philosophical texts? To investigate this, we fine‐tuned GPT‐3 with the works of philosopher Daniel Dennett. To evaluate the model, we asked the real Dennett 10 philosophical questions and then posed the same questions to the language model, collecting four responses for each question without cherry‐picking. Experts on Dennett's work succeeded at distinguishing the Dennett‐generated and machine‐generated answers above chance but substantially short of our expectations. Philosophy blog readers performed similarly to the experts, while ordinary (...)
    9 citations
  22. Charting the Terrain of Artificial Intelligence: a Multidimensional Exploration of Ethics, Agency, and Future Directions. Partha Pratim Ray & Pradip Kumar Das - 2023 - Philosophy and Technology 36 (2):1-7.
    This comprehensive analysis dives deep into the intricate interplay between artificial intelligence (AI) and human agency, examining the remarkable capabilities and inherent limitations of large language models (LLMs) such as GPT-3 and ChatGPT. The paper traces the complex trajectory of AI's evolution, highlighting its operation based on statistical pattern recognition, devoid of self-consciousness or innate comprehension. As AI permeates multiple spheres of human life, it raises substantial ethical, legal, and societal concerns that demand immediate attention and deliberation. The metaphorical illustration (...)
    1 citation
  23. More is Better: English Language Statistics are Biased Toward Addition. Bodo Winter, Martin H. Fischer, Christoph Scheepers & Andriy Myachykov - 2023 - Cognitive Science 47 (4):e13254.
    We have evolved to become who we are, at least in part, due to our general drive to create new things and ideas. When seeking to improve our creations, ideas, or situations, we systematically overlook opportunities to perform subtractive changes. For example, when tasked with giving feedback on an academic paper, reviewers will tend to suggest additional explanations and analyses rather than delete existing ones. Here, we show that this addition bias is systematically reflected in English language statistics along several (...)
    3 citations
  24. Using rhetorical strategies to design prompts: a human-in-the-loop approach to make AI useful. Nupoor Ranade, Marly Saravia & Aditya Johri - forthcoming - AI and Society:1-22.
    The growing capabilities of artificial intelligence (AI) word processing models have demonstrated exceptional potential to impact language related tasks and functions. Their fast pace of adoption and probable effect has also given rise to controversy within certain fields. Models, such as GPT-3, are a particular concern for professionals engaged in writing, particularly as their engagement with these technologies is limited due to lack of ability to control their output. Most efforts to maximize and control output rely on a process known (...)
  25. Scrutinizing the foundations: could large language models be solipsistic? Andreea Esanu - 2024 - Synthese 203 (5):1-20.
    In artificial intelligence literature, “delusions” are characterized as the generation of unfaithful output from reliable source content. There is an extensive literature on computer-generated delusions, ranging from visual hallucinations, like the production of nonsensical images in Computer Vision, to nonsensical text generated by (natural) language models, but this literature is predominantly taxonomic. In a recent research paper, however, a group of scientists from DeepMind successfully presented a formal treatment of an entire class of delusions in generative AI models (i.e., models (...)
  26. Large Language Models and the Reverse Turing Test. Terrence Sejnowski - 2023 - Neural Computation 35 (3):309–342.
    Large Language Models (LLMs) have been transformative. They are pre-trained foundational models that are self-supervised and can be adapted with fine tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and more recently LaMDA can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been a wide range (...)
    1 citation
  27. Sharing Our Concepts with Machines. Patrick Butlin - 2021 - Erkenntnis 88 (7):3079-3095.
    As AI systems become increasingly competent language users, it is an apt moment to consider what it would take for machines to understand human languages. This paper considers whether either language models such as GPT-3 or chatbots might be able to understand language, focusing on the question of whether they could possess the relevant concepts. A significant obstacle is that systems of both kinds interact with the world only through text, and thus seem ill-suited to understanding utterances concerning the concrete (...)
    2 citations
  28. Combining prompt-based language models and weak supervision for labeling named entity recognition on legal documents. Vitor Oliveira, Gabriel Nogueira, Thiago Faleiros & Ricardo Marcacini - forthcoming - Artificial Intelligence and Law:1-21.
    Named entity recognition (NER) is a very relevant task for text information retrieval in natural language processing (NLP) problems. Most recent state-of-the-art NER methods require humans to annotate and provide useful data for model training. However, using human power to identify, circumscribe and label entities manually can be very expensive in terms of time, money, and effort. This paper investigates the use of prompt-based language models (OpenAI’s GPT-3) and weak supervision in the legal domain. We apply both strategies as alternative (...)
  29. Artificial understanding: a step toward robust AI. Erez Firt - forthcoming - AI and Society:1-13.
    In recent years, state-of-the-art artificial intelligence systems have started to show signs of what might be seen as human level intelligence. More specifically, large language models such as OpenAI’s GPT-3, and more recently Google’s PaLM and DeepMind’s GATO, are performing amazing feats involving the generation of texts. However, it is acknowledged by many researchers that contemporary language models, and more generally, learning systems, still lack important capabilities, such as understanding, reasoning and the ability to employ knowledge of the world and (...)
    1 citation
  30. The great Transformer: Examining the role of large language models in the political economy of AI. Wiebke Denkena & Dieuwertje Luitse - 2021 - Big Data and Society 8 (2).
    In recent years, AI research has become more and more computationally demanding. In natural language processing, this tendency is reflected in the emergence of large language models like GPT-3. These powerful neural network-based models can be used for a range of NLP tasks and their language generation capacities have become so sophisticated that it can be very difficult to distinguish their outputs from human language. LLMs have raised concerns over their demonstrable biases, heavy environmental footprints, and future social ramifications. In (...)
    1 citation
  31. Does ChatGPT have semantic understanding? Lisa Miracchi Titus - 2024 - Cognitive Systems Research 83 (101174):1-13.
    Over the last decade, AI models of language and word meaning have been dominated by what we might call a statistics-of-occurrence strategy: these models are deep neural net structures that have been trained on a large amount of unlabeled text with the aim of producing a model that exploits statistical information about word and phrase co-occurrence in order to generate behavior that is similar to what a human might produce, or representations that can be probed to exhibit behavior similar to (...)
    1 citation
  32. Why AI will never rule the world (interview). Luke Dormehl, Jobst Landgrebe & Barry Smith - 2022 - Digital Trends.
    Call it the Skynet hypothesis, Artificial General Intelligence, or the advent of the Singularity: for years, AI experts and non-experts alike have fretted (and, for a small group, celebrated) the idea that artificial intelligence may one day become smarter than humans. According to the theory, advances in AI, specifically of the machine learning type that’s able to take on new information and rewrite its code accordingly, will eventually catch up with the wetware of the biological brain. (...)
  33. Who shares about AI? Media exposure, psychological proximity, performance expectancy, and information sharing about artificial intelligence online. Alex W. Kirkpatrick, Amanda D. Boyd & Jay D. Hmielowski - forthcoming - AI and Society:1-12.
    Media exposure can shape audience perceptions surrounding novel innovations, such as artificial intelligence (AI), and could influence whether they share information about AI with others online. This study examines the indirect association between exposure to AI in the media and information sharing about AI online. We surveyed 567 US citizens aged 18 and older in November 2020, several months after the release of Open AI’s transformative GPT-3 model. Results suggest that AI media exposure was related to online information sharing through (...)
  34. Creating a Large Language Model of a Philosopher. Eric Schwitzgebel, David Schwitzgebel & Anna Strasser - manuscript
    Can large language models be trained to produce philosophical texts that are difficult to distinguish from texts produced by human philosophers? To address this question, we fine-tuned OpenAI's GPT-3 with the works of philosopher Daniel C. Dennett as additional training data. To explore the Dennett model, we asked the real Dennett ten philosophical questions and then posed the same questions to the language model, collecting four responses for each question without cherry-picking. We recruited 425 participants to distinguish Dennett's answer from (...)
  35. Bringing legal knowledge to the public by constructing a legal question bank using large-scale pre-trained language model. Mingruo Yuan, Ben Kao, Tien-Hsuan Wu, Michael M. K. Cheung, Henry W. H. Chan, Anne S. Y. Cheung, Felix W. H. Chan & Yongxi Chen - forthcoming - Artificial Intelligence and Law:1-37.
    Access to legal information is fundamental to access to justice. Yet accessibility refers not only to making legal documents available to the public, but also rendering legal information comprehensible to them. A vexing problem in bringing legal information to the public is how to turn formal legal documents such as legislation and judgments, which are often highly technical, to easily navigable and comprehensible knowledge to those without legal education. In this study, we formulate a three-step approach for bringing legal knowledge (...)
  36. Diagonalization & Forcing FLEX: From Cantor to Cohen and Beyond. Learning from Leibniz, Cantor, Turing, Gödel, and Cohen; crawling towards AGI. Elan Moritz - manuscript
    The paper continues my earlier Chat with OpenAI’s ChatGPT with a Focused LLM Experiment (FLEX). The idea is to conduct Large Language Model (LLM) based explorations of certain areas or concepts. The approach is based on crafting initial guiding prompts and then follow up with user prompts based on the LLMs’ responses. The goals include improving understanding of LLM capabilities and their limitations culminating in optimized prompts. The specific subjects explored as research subject matter include a) diagonalization techniques as practiced (...)
  37. Investigating gender and racial biases in DALL-E Mini Images. Marc Cheong, Ehsan Abedin, Marinus Ferreira, Ritsaart Willem Reimann, Shalom Chalson, Pamela Robinson, Joanne Byrne, Leah Ruppanner, Mark Alfano & Colin Klein - forthcoming - ACM Journal on Responsible Computing.
    Generative artificial intelligence systems based on transformers, including both text-generators like GPT-4 and image generators like DALL-E 3, have recently entered the popular consciousness. These tools, while impressive, are liable to reproduce, exacerbate, and reinforce extant human social biases, such as gender and racial biases. In this paper, we systematically review the extent to which DALL-E Mini suffers from this problem. In line with the Model Card published alongside DALL-E Mini by its creators, we find that the images it produces (...)
  38. Emergent Spacetime, the Megastructure Problem, and the Metaphysics of the Self. Susan Schneider - 2024 - Philosophy East and West 74 (2):314-332.
    In lieu of an abstract, here is a brief excerpt of the content: The aim of this article is to introduce new thoughts on some pressing topics relating to my book, Artificial You, ranging from the fundamental nature of reality to quantum theory and emergence in large language models (LLMs) like GPT-4. Since Artificial You was published, the innovations in the domain of AI chatbots like GPT-4 have been rapid-fire, (...)
  39. 340 Maurice J. Dupre.M_2 M_3 & M. Q. M_l5 - 1978 - In A. R. Marlow (ed.), Mathematical foundations of quantum theory. New York: Academic Press. pp. 339.
  40. 3. On the Primacy of Character. Gary Watson - 1997 - In Daniel Statman (ed.), Virtue Ethics: A Critical Reader. Edinburgh University Press. pp. 56-81.
    22 citations
  41. 3 Rorty on Knowledge and Truth. Michael Williams - 2003 - In Charles Guignon & David R. Hiley (eds.), Richard Rorty. New York: Cambridge University Press. pp. 61.
    9 citations
  42. 3. The Rational Role of Perceptual Content. Matthew Boyle - 2022 - In Matthew Boyle & Evgenia Mylonaki (eds.), Reason in Nature: New Essays on Themes From John McDowell. Cambridge, Massachusetts: Harvard University Press. pp. 83-110.
    1 citation
  43. 3 Regulation: A Substitute for Morality. Alasdair MacIntyre - 1980 - Hastings Center Report 10 (1):31-33.
    11 citations
  44. Hoofstuk 3 - Twee afdelings van een fakulteit 1934–1940. J. P. Oberholzer - 2010 - HTS Theological Studies 66 (3).
    8 citations
  45. Chapter 3 Identity and Individuation: Some Feminist Reflections. Elizabeth Grosz - 2012 - In Ashley Woodward, Alex Murray & Jon Roffe (eds.), Gilbert Simondon: Being and Technology. Edinburgh University Press. pp. 37-56.
    5 citations
  46. I.3 Action and Belief or Scientific Discourse? A Possible Way of Ending Intellectual Vassalage in Social Studies of Science. Michael Mulkay - 1981 - Philosophy of the Social Sciences 11 (2):163-171.
  47. 3-Hz brain stimulation interferes with various aspects of the kindling effect. John Gaito - 1979 - Bulletin of the Psychonomic Society 13 (2):67-70.
    6 citations
  48. 3. Human Rights, Sovereignty, and the Responsibility to Protect. Cristina Lafont - 2017 - In Cristina Lafont & Penelope Deutscher (eds.), Critical Theory in Critical Times: Transforming the Global Political and Economic Order. New York, USA: Columbia University Press. pp. 47-73.
  49. Chapter 3. Aristotle on Perception, Appetition, and Self-Motion. Cynthia A. Freeland - 2017 - In Mary Louise Gill & James G. Lennox (eds.), Self-Motion: From Aristotle to Newton. Princeton University Press. pp. 35-64.
  50. Chapter 3 killing for pleasure. Tzachi Zamir - 2007 - In Ethics and the Beast: A Speciesist Argument for Animal Liberation. Princeton University Press. pp. 35-56.
    2 citations
1–50 of 1000