Contents
  1. Social Choice for AI Alignment: Dealing with Diverse Human Feedback. Vincent Conitzer, Rachel Freedman, Jobst Heitzig, Wesley H. Holliday, Bob M. Jacobs, Nathan Lambert, Milan Mosse, Eric Pacuit, Stuart Russell, Hailey Schoelkopf, Emanuel Tewolde & William S. Zwicker - manuscript
    Foundation models such as GPT-4 are fine-tuned to avoid unsafe or otherwise problematic behavior, so that, for example, they refuse to comply with requests for help with committing crimes or with producing racist text. One approach to fine-tuning, called reinforcement learning from human feedback, learns from humans' expressed preferences over multiple outputs. Another approach is constitutional AI, in which the input from humans is a list of high-level principles. But how do we deal with potentially diverging input from humans? How (...)
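    (A toy sketch of aggregating such diverging preference feedback with a majority-vote rule appears after this list.)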
  2. Generative AI in EU Law: Liability, Privacy, Intellectual Property, and Cybersecurity. Claudio Novelli, Federico Casolari, Philipp Hacker, Giorgio Spedicato & Luciano Floridi - manuscript
    The advent of Generative AI, particularly through Large Language Models (LLMs) like ChatGPT and its successors, marks a paradigm shift in the AI landscape. Advanced LLMs exhibit multimodality, handling diverse data formats, thereby broadening their application scope. However, the complexity and emergent autonomy of these models introduce challenges in predictability and legal compliance. This paper analyses the legal and regulatory implications of Generative AI and LLMs in the European Union context, focusing on liability, privacy, intellectual property, and cybersecurity. It examines (...)
  3. Will Large Language Models Overwrite Us? Walter Barta - forthcoming - Double Helix.
  4. Affective Artificial Agents as sui generis Affective Artifacts. Marco Facchin & Giacomo Zanotti - forthcoming - Topoi.
    AI-based technologies are increasingly pervasive in a number of contexts. Our affective and emotional life is no exception. In this article, we analyze one way in which AI-based technologies can affect it. In particular, our investigation will focus on affective artificial agents, namely AI-powered software or robotic agents designed to interact with us in affectively salient ways. We build upon the existing literature on affective artifacts with the aim of providing an original analysis of affective artificial agents and their distinctive (...)
  5. Addressing Social Misattributions of Large Language Models: An HCXAI-based Approach. Andrea Ferrario, Alberto Termine & Alessandro Facchini - forthcoming - Available at https://arxiv.org/abs/2403.17873 (extended version of the manuscript accepted for the ACM CHI Workshop on Human-Centered Explainable AI 2024 (HCXAI24)).
    Human-centered explainable AI (HCXAI) advocates for the integration of social aspects into AI explanations. Central to the HCXAI discourse is the Social Transparency (ST) framework, which aims to make the socio-organizational context of AI systems accessible to their users. In this work, we suggest extending the ST framework to address the risks of social misattributions in Large Language Models (LLMs), particularly in sensitive areas like mental health. In fact, LLMs, which are remarkably capable of simulating roles and personas, may lead (...)
  6. Why do We Need to Employ Exemplars in Moral Education? Insights from Recent Advances in Research on Artificial Intelligence. Hyemin Han - forthcoming - Ethics and Behavior.
    In this paper, I examine why moral exemplars are useful and even necessary in moral education despite several critiques from researchers and educators. To support my point, I review recent AI research demonstrating that exemplar-based learning is superior to rule-based learning in model performance in training neural networks, such as large language models. I particularly focus on why education aiming at promoting the development of multifaceted moral functioning can be done effectively by using exemplars, which is similar to exemplar-based learning (...)
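    (A toy contrast between rule-based and exemplar-based classification appears after this list.)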
  7. Taking It Not at Face Value: A New Taxonomy for the Beliefs Acquired from Conversational AIs. Shun Iizuka - forthcoming - Techné: Research in Philosophy and Technology.
    One of the central questions in the epistemology of conversational AIs is how to classify the beliefs acquired from them. Two promising candidates are instrument-based and testimony-based beliefs. However, the category of instrument-based beliefs faces an intrinsic problem, and a challenge arises in its application. On the other hand, relying solely on the category of testimony-based beliefs does not encompass the totality of our practice of using conversational AIs. To address these limitations, I propose a novel classification of beliefs that (...)
  8. Reflection, confabulation, and reasoning. Jennifer Nagel - forthcoming - In Luis Oliveira & Joshua DiPaolo (eds.), Kornblith and His Critics. Wiley-Blackwell.
    Humans have distinctive powers of reflection: no other animal seems to have anything like our capacity for self-examination. Many philosophers hold that this capacity has a uniquely important guiding role in our cognition; others, notably Hilary Kornblith, draw attention to its weaknesses. Kornblith chiefly aims to dispel the sense that there is anything ‘magical’ about second-order mental states, situating them in the same causal net as ordinary first-order mental states. But elsewhere he goes further, suggesting that there is something deeply (...)
  9. Reviving the Philosophical Dialogue with Large Language Models. Robert Smithson & Adam Zweber - forthcoming - Teaching Philosophy.
    Many philosophers have argued that large language models (LLMs) subvert the traditional undergraduate philosophy paper. For the enthusiastic, LLMs merely subvert the traditional idea that students ought to write philosophy papers “entirely on their own.” For the more pessimistic, LLMs merely facilitate plagiarism. We believe that these controversies neglect a more basic crisis. We argue that, because one can, with minimal philosophical effort, use LLMs to produce outputs that at least “look like” good papers, many students will complete paper assignments (...)
  10. Plurale Autorschaft von Mensch und Künstlicher Intelligenz? [Plural Authorship of Human and Artificial Intelligence?] David Lauer - 2023 - Literatur in Wissenschaft und Unterricht 2023 (2):245-266.
    This paper (in German) discusses the question of what is going on when large language models (LLMs) produce meaningful text in reaction to human prompts. Can LLMs be understood as authors or producers of speech acts? I argue that this question has to be answered in the negative, for two reasons. First, due to their lack of semantic understanding, LLMs do not understand what they are saying and hence literally do not know what they are (linguistically) doing. Since the agent’s (...)
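
A minimal sketch of the aggregation problem raised in entry 1: several annotators report diverging pairwise preferences over candidate model outputs, and a simple social-choice rule turns them into one ranking. The Copeland rule, the candidate names, and the data below are illustrative assumptions, not the paper's method.

    # Minimal sketch (assumed setup, not the paper's method): aggregate
    # annotators' pairwise preferences over model outputs with a Copeland rule.
    from collections import defaultdict
    from itertools import combinations

    # Hypothetical feedback: each annotator lists (preferred, dispreferred)
    # pairs over three candidate outputs A, B, C. Annotators disagree.
    feedback = [
        [("A", "B"), ("B", "C"), ("A", "C")],  # annotator 1
        [("B", "A"), ("B", "C"), ("A", "C")],  # annotator 2
        [("A", "B"), ("C", "B"), ("C", "A")],  # annotator 3
    ]

    def copeland_ranking(feedback):
        """Rank candidates by their head-to-head majority wins."""
        tally = defaultdict(int)  # tally[(x, y)] = annotators preferring x to y
        candidates = set()
        for annotator in feedback:
            for winner, loser in annotator:
                tally[(winner, loser)] += 1
                candidates.update((winner, loser))
        score = dict.fromkeys(candidates, 0)  # Copeland score: wins minus losses
        for x, y in combinations(sorted(candidates), 2):
            if tally[(x, y)] > tally[(y, x)]:
                score[x] += 1; score[y] -= 1
            elif tally[(y, x)] > tally[(x, y)]:
                score[y] += 1; score[x] -= 1
        return sorted(candidates, key=lambda c: -score[c])

    print(copeland_ranking(feedback))  # -> ['A', 'B', 'C']

Standard RLHF pipelines would instead fit a reward model (for example, Bradley-Terry) to such pairs; an explicit voting rule is just one way to make the aggregation step, and its trade-offs under disagreement, visible.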
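
A minimal sketch of the distinction invoked in entry 6: a hand-written rule versus generalization from stored exemplars, here via k-nearest-neighbour classification on hypothetical toy data. This only illustrates the contrast; it is not the paper's experiment or evidence.

    # Minimal sketch (hypothetical data): a fixed rule vs. classification
    # by analogy to stored exemplars (k-nearest-neighbour).
    import math

    # Toy labelled exemplars: (feature vector, label).
    exemplars = [((1.0, 1.0), "kind"), ((1.2, 0.8), "kind"),
                 ((4.0, 4.2), "unkind"), ((3.8, 4.0), "unkind")]

    def rule_based(x):
        # One explicit rule: threshold on the first feature.
        return "kind" if x[0] < 2.5 else "unkind"

    def exemplar_based(x, k=3):
        # Majority label among the k nearest stored exemplars.
        nearest = sorted(exemplars, key=lambda e: math.dist(x, e[0]))[:k]
        labels = [label for _, label in nearest]
        return max(set(labels), key=labels.count)

    query = (1.1, 0.9)
    print(rule_based(query), exemplar_based(query))  # -> kind kind

The structural point the abstract draws on is visible here: the exemplar-based learner changes its behaviour as new exemplars are added, while the rule stays fixed until someone rewrites it.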