Reviving the Philosophical Dialogue with Large Language Models

Teaching Philosophy 47 (2):143-171 (2024)

Abstract

Many philosophers have argued that large language models (LLMs) subvert the traditional undergraduate philosophy paper. For the enthusiastic, LLMs merely subvert the traditional idea that students ought to write philosophy papers “entirely on their own.” For the more pessimistic, LLMs merely facilitate plagiarism. We believe that these controversies neglect a more basic crisis. We argue that, because one can, with minimal philosophical effort, use LLMs to produce outputs that at least “look like” good papers, many students will complete paper assignments in a way that fails to develop their philosophical abilities. We argue that this problem exists even if students can produce better papers with AI and even if instructors can detect AI-generated content with decent reliability. But LLMs also create a pedagogical opportunity. We propose that instructors shift the emphasis of their assignments from philosophy papers to “LLM dialogues”: philosophical conversations between the student and an LLM. We describe our experience with using these types of assignments over the past several semesters. We argue that, far from undermining quality philosophical instruction, LLMs allow us to teach philosophy more effectively than was possible before.

Links

PhilArchive

Analytics

Added to PP: 2024-03-13
Downloads: 44 total (#372,168); 44 in the last 6 months (#96,705)

Author Profiles

Robert Smithson, University of North Carolina at Wilmington
Adam Zweber, Stanford University
