Abstract
It is sometimes argued that if PDP networks can be trained to
make correct judgements of grammaticality, we have an existence proof
that there is enough information in the stimulus to permit learning
grammar by inductive means alone. This seems superficially
inconsistent with Gold's theorem and, at a deeper level, with the fact
that networks are designed on the basis of assumptions about the
domain of the function to be learned. To clarify the issue, I consider
what we should learn from Gold's theorem, then go on to inquire into
what it means to say that knowledge is domain specific. I first try
sharpening the intuitive notion of domain specific knowledge by
reviewing the alleged difference between processing limitations due to
shortages of resources versus shortages of knowledge. After rejecting
different formulations of this idea, I suggest that a model is
language specific if it transparently refers to entities and facts
about language as opposed to entities and facts of more general
mathematical domains. This is a useful but not necessary condition.
I then suggest that a theory is domain specific if it belongs to a
model family which is attuned in a law-like way to domain
regularities. This leads to a comparison of PDP and parameter setting
models of language learning. I conclude with a novel version of the
poverty of stimulus argument.