Can Recurrent Neural Networks Validate Usage-Based Theories of Grammar Acquisition?

Frontiers in Psychology 13 (2022)

Abstract

It has been shown that Recurrent Artificial Neural Networks automatically acquire some grammatical knowledge in the course of performing linguistic prediction tasks. The extent to which such networks can actually learn grammar is still under investigation. However, being mostly data-driven, they provide a natural testbed for usage-based theories of language acquisition. This mini-review gives an overview of the state of the field, focusing on the influence of the theoretical framework on the interpretation of results.
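The "linguistic prediction tasks" mentioned above are typically next-word prediction: the network consumes a sentence token by token and outputs a probability distribution over the vocabulary for the next token. A minimal sketch of this setup, using an untrained Elman-style recurrent network with illustrative, made-up vocabulary and layer sizes (none of this comes from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "sat"]          # toy vocabulary (assumption)
V, H = len(vocab), 8                   # vocab size, hidden size (assumption)

# Randomly initialised weights, i.e. an untrained network.
W_xh = rng.normal(0, 0.1, (H, V))      # input -> hidden
W_hh = rng.normal(0, 0.1, (H, H))      # hidden -> hidden (the recurrence)
W_hy = rng.normal(0, 0.1, (V, H))      # hidden -> output logits

def one_hot(i):
    x = np.zeros(V)
    x[i] = 1.0
    return x

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def predict_next(tokens):
    """Run the RNN over a token sequence; return P(next token)."""
    h = np.zeros(H)
    for t in tokens:
        x = one_hot(vocab.index(t))
        h = np.tanh(W_xh @ x + W_hh @ h)   # recurrent state update
    return softmax(W_hy @ h)

probs = predict_next(["the", "cat"])
print(probs)  # a probability distribution over the vocabulary
```

Training such a network to minimise prediction error on a corpus, with no explicit grammatical supervision, is what makes it a candidate testbed for usage-based accounts: any grammatical sensitivity it shows must have been induced from distributional patterns in the data alone.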

Links

PhilArchive




Similar books and articles

Out of their minds: Legal theory in neural networks [Review]. Dan Hunter - 1999 - Artificial Intelligence and Law 7 (2-3): 129-151.
Some Neural Networks Compute, Others Don't. Gualtiero Piccinini - 2008 - Neural Networks 21 (2-3): 311-321.

Analytics

Added to PP
2022-04-09

Downloads
8 (#1,215,626)

6 months
8 (#241,888)
