Language Writ Large: LLMs, ChatGPT, Grounding, Meaning and Understanding

Abstract

Apart from what (little) OpenAI may be concealing from us, we all know (roughly) how ChatGPT works (its huge text database, its statistics, its vector representations with their huge number of parameters, its next-word training, and so on). But none of us can say (hand on heart) that we are not surprised by what ChatGPT has proved able to do with these resources. This has even driven some of us to conclude that ChatGPT actually understands. It is not true that it understands. But it is also not true that we understand how it can do what it can do. I will suggest some hunches about benign biases: convergent constraints that emerge at LLM scale and that may be helping ChatGPT do so much better than we would have expected. These biases are inherent in the nature of language itself at LLM scale, and they are closely linked to what ChatGPT lacks, which is direct sensorimotor grounding to connect its words to their referents and its propositions to their meanings. These convergent biases are related to (1) the parasitism of indirect verbal grounding on direct sensorimotor grounding, (2) the circularity of verbal definition, (3) the mirroring of language production and comprehension, (4) iconicity in propositions at LLM scale, (5) computational counterparts of human categorical perception in category learning by neural nets, and perhaps also (6) a conjecture by Chomsky about the laws of thought. The exposition will be in the form of a dialogue with ChatGPT-4.
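
To make the "next-word training" mentioned in the abstract concrete, here is a minimal sketch, in Python, of the statistical principle involved: predicting the next word from counts over the preceding context. The toy corpus and names are hypothetical illustrations only; real LLMs learn this mapping through high-dimensional vector representations with billions of trained parameters, not bigram counts.

# Minimal sketch of "next-word training": a bigram count model over a
# toy corpus. It illustrates only the statistical principle (predict the
# next word from preceding context); real LLMs learn this mapping with
# vector representations and billions of trained parameters.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ate the rat".split()  # hypothetical toy data

# Count how often each word follows each preceding word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the continuation seen most often after `word` in training."""
    counts = following[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # -> 'cat' (seen twice, vs. 'mat'/'rat' once each)

Nothing in this sketch grounds the words in anything but other words, which is precisely the gap the paper's six convergent biases are proposed to address.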

Links

PhilArchive

Similar books and articles

The Epistemological Danger of Large Language Models. Elise Li Zheng & Sandra Soo-Jin Lee - 2023 - American Journal of Bioethics 23 (10):102-104.
Large Language Models and Biorisk. William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
Does thought require sensory grounding? From pure thinkers to large language models. David J. Chalmers - 2023 - Proceedings and Addresses of the American Philosophical Association 97:22-45.
What Should ChatGPT Mean for Bioethics? I. Glenn Cohen - 2023 - American Journal of Bioethics 23 (10):8-16.
The grounding and sharing of symbols. Angelo Cangelosi - 2006 - Pragmatics and Cognition 14 (2):275-286.

Author's Profile

Stevan Harnad
Université du Québec à Montréal
