Topics in Cognitive Science 5 (3):634-667 (2013)
Abstract
We examine two connectionist networks, a fractal learning neural network (FLNN) and a Simple Recurrent Network (SRN), that are trained to process center-embedded symbol sequences. Previous work provides evidence that connectionist networks trained on infinite-state languages tend to form fractal encodings. Most such work focuses on simple counting-recursion cases (e.g., a^n b^n), which are not comparable to the complex recursive patterns seen in natural language syntax. Here, we consider cases with exponential state growth (including mirror recursion), describe a new training scheme that seems to facilitate learning, and note that connectionist learning of these cases has a continuous metamorphosis property that looks very different from what is achievable with symbolic encodings. We identify a property, ragged progressive generalization, that helps make this difference clearer. We suggest two conclusions. First, the fractal analysis of these more complex learning cases makes it possible to compare connectionist networks and symbolic models of grammatical structure in a principled way; this helps remove the black-box character of connectionist networks and indicates how the theory they support differs from symbolic approaches. Second, the findings indicate the value of future, linked mathematical and empirical work on these models, something that is more feasible now than it was 10 years ago.
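To make the contrast between the two language families concrete, here is a minimal Python sketch of string generators for counting recursion and mirror recursion. The symbol inventory and function names are our own illustration, not the training setup used in the paper.

```python
import random

# Hypothetical illustration (not the paper's training code): generators for
# the two language families the abstract contrasts.

PAIRS = {"a": "A", "b": "B"}  # each opening symbol has a matching closer

def counting_recursion(n):
    """Counting recursion, a^n b^n: the learner only needs to track a count."""
    return "a" * n + "b" * n

def mirror_recursion(depth, rng=random):
    """Mirror recursion (center embedding): openers followed by their matching
    closers in reverse order, e.g. 'ab' -> 'abBA'. With k opener types and
    depth d there are k**d distinct dependency patterns to remember, which is
    the exponential state growth the abstract refers to."""
    openers = [rng.choice("ab") for _ in range(depth)]
    closers = [PAIRS[s] for s in reversed(openers)]
    return "".join(openers) + "".join(closers)

if __name__ == "__main__":
    print(counting_recursion(3))  # -> aaabbb
    print(mirror_recursion(4))    # e.g. -> abbaABBA
```

The difference in difficulty is visible in the generators: a^n b^n can be processed with a single counter, whereas mirror recursion requires retaining the entire opener sequence (stack-like memory), which is presumably why these cases are a more demanding test of whether a network's fractal encoding can approximate grammatical structure.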