Learning correction grammars

Journal of Symbolic Logic 74 (2):489-516 (2009)

Abstract

We investigate a new paradigm in the context of learning in the limit, namely, learning correction grammars for classes of computably enumerable (c.e.) languages. Knowing a language may involve representing it in terms of two grammars, where the second grammar is used to make corrections to the first. Such a pair of grammars can be seen as a single description of (or grammar for) the language; we call such pairs correction grammars. Correction grammars capture the observable fact that people do correct their linguistic utterances during their usual linguistic activities. We show that learning correction grammars for classes of c.e. languages in the TxtEx-model (i.e., converging to a single correct correction grammar in the limit) is sometimes more powerful than learning ordinary grammars even in the TxtBc-model (where the learner is allowed to converge to infinitely many syntactically distinct but correct conjectures in the limit). For each n ≥ 0, there is a similar learning advantage when we compare learning correction grammars that make n + 1 corrections to those that make n corrections. The concept of a correction grammar can be extended into the constructive transfinite using the idea of counting down from notations for constructive ordinals. This transfinite extension can also be conceptualized as learning Ershov-descriptions for c.e. languages. For u a notation in Kleene's general system $(O, <_o)$ of ordinal notations for constructive ordinals, we introduce the concept of a u-correction grammar, where u is used to bound the number of corrections that the grammar is allowed to make. We prove a general hierarchy result: if u and v are notations for constructive ordinals such that $u <_o v$, then there are classes of c.e. languages that can be TxtEx-learned by conjecturing v-correction grammars but not by conjecturing u-correction grammars. Surprisingly, we show that, above "ω-many" corrections, it is not possible to strengthen the hierarchy: TxtEx-learning u-correction grammars of classes of c.e. languages, where u is a notation in O for any constructive ordinal, can be simulated by TxtBc-learning w-correction grammars, where w is any notation for the smallest infinite ordinal ω.
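To make the central notion concrete, here is a minimal sketch (in Python) of one way to model a correction grammar as a pair of enumeration procedures: the first procedure enumerates candidate members of the language, and the second enumerates the corrections, i.e., the elements to be retracted. The names used below (CorrectionGrammar, approximation, g, c) are illustrative assumptions for this sketch, not notation from the article.

```python
from typing import Callable, Set

# An enumerator maps a stage number to the finite set enumerated so far.
Enumerator = Callable[[int], Set[int]]


class CorrectionGrammar:
    """Pair (g, c): the described language is the limit of g(stage) - c(stage)."""

    def __init__(self, g: Enumerator, c: Enumerator):
        self.g = g  # first grammar: may over-generate
        self.c = c  # second grammar: enumerates the corrections (retractions)

    def approximation(self, stage: int) -> Set[int]:
        # Finite-stage approximation of the described language.
        return self.g(stage) - self.c(stage)


# Toy example: describe the primes below 100 as "all numbers below 100",
# corrected by "the non-primes below 100".
def g(stage: int) -> Set[int]:
    return set(range(min(stage, 100)))


def c(stage: int) -> Set[int]:
    return {n for n in range(min(stage, 100))
            if n < 2 or any(n % d == 0 for d in range(2, n))}


primes = CorrectionGrammar(g, c)
print(sorted(primes.approximation(200)))  # [2, 3, 5, 7, 11, ..., 97]
```

In the article's setting both components are ordinary grammars (indices of c.e. sets), so a correction grammar amounts to an Ershov-style difference description of the target language; the n-correction and u-correction grammars of the abstract iterate this counting-down idea up through notations for constructive ordinals.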

Similar books and articles

Implicit learning of artificial grammars.Arthur S. Reber - 1967 - Journal of Verbal Learning and Verbal Behavior 6:855-863.
Identification in the limit of categorial grammars.Makoto Kanazawa - 1996 - Journal of Logic, Language and Information 5 (2):115-155.
Highly constrained unification grammars.Daniel Feinstein & Shuly Wintner - 2008 - Journal of Logic, Language and Information 17 (3):345-381.
