Parity is not a generalisation problem

Behavioral and Brain Sciences 20 (1):69-70 (1997)

Abstract

Uninformed learning mechanisms will not discover “type-2” regularities in their inputs, except fortuitously. Clark & Thornton argue that error back-propagation only learns the classical parity problem – which is “always pure type-2” – because of restrictive assumptions implicit in the learning algorithm and network employed. Empirical analysis showing that back-propagation fails to generalise on the parity problem is cited in support of their position. The reason for the failure, however, is that generalisation is simply not a relevant issue for parity. Nothing can be gleaned about back-propagation in particular, or learning in general, from this failure.
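The point that generalisation is not a relevant issue for parity can be illustrated with a short sketch (not from the paper itself; the variable names and the 4-bit example are illustrative assumptions). For n-bit parity, flipping any single input bit flips the target label, so every nearest neighbour of a held-out pattern carries the *opposite* label – similarity to the training data is systematically misleading:

```python
from itertools import product

def parity(bits):
    """Parity label of a bit tuple: 1 if an odd number of ones, else 0."""
    return sum(bits) % 2

n = 4
patterns = list(product([0, 1], repeat=n))

# Hold out one pattern and look at its Hamming-distance-1 neighbours --
# the training patterns a similarity-based learner would lean on when
# guessing the held-out label.
held_out = (1, 0, 1, 1)
neighbours = [tuple(b ^ (i == j) for j, b in enumerate(held_out))
              for i in range(n)]

# Every single-bit flip inverts parity, so ALL nearest neighbours
# disagree with the held-out pattern's true label.
assert all(parity(nb) != parity(held_out) for nb in neighbours)
```

On this view, a network that interpolates from training examples has no basis for getting the held-out parity case right, regardless of the learning algorithm used.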

Similar books and articles

Reducing problem complexity by analogical transfer. Peter F. Dominey - 1997 - Behavioral and Brain Sciences 20 (1):71-72.
Reading the generalizer's mind. Chris Thornton & Andy Clark - 1998 - Behavioral and Brain Sciences 21 (2):308-310.
Taming type-2 tigers: A nonmonotonic strategy. István S. N. Berkeley - 1997 - Behavioral and Brain Sciences 20 (1):66-67.
Parity demystified. Erik Carlson - 2010 - Theoria 76 (2):119-128.
Model-based learning problem taxonomies. Richard M. Golden - 1997 - Behavioral and Brain Sciences 20 (1):73-74.
Parity, interval value, and choice. Ruth Chang - 2005 - Ethics 115 (2):331-350.
Parity still isn't a generalisation problem. R. I. Damper - 1998 - Behavioral and Brain Sciences 21 (2):307-308.

