Even deeper problems with neural network models of language

Behavioral and Brain Sciences 46:e387 (2023)

Abstract

We recognize today's deep neural network (DNN) models of language behaviors as engineering achievements. However, what we know intuitively and scientifically about language shows that what DNNs are, and how they are trained on bare text, makes them poor models of mind and brain for language organization as it interacts with infant biology, maturation, experience, unique principles, and natural law.

Links

PhilArchive

Similar books and articles

Neural Network Models of Conditionals. Hannes Leitgeb - 2012 - In Sven Ove Hansson & Vincent F. Hendricks (eds.), Introduction to Formal Philosophy. Cham: Springer. pp. 147-176.
Why Can Computers Understand Natural Language? Juan Luis Gastaldi - 2020 - Philosophy and Technology 34 (1):149-214.


Citations of this work

No citations found.
