Additional tests of Amit's attractor neural networks

Behavioral and Brain Sciences 18 (4):634-635 (1995)

Abstract

Further tests of Amit's model are indicated. One strategy is to use the apparent coding sparseness of the model to make predictions about coding sparseness in Miyashita's network. A second approach is to use memory overload to induce false positive responses in modules and biological systems. In closing, the importance of temporal coding and timing requirements in developing biologically plausible attractor networks is mentioned.
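The memory-overload strategy can be illustrated with a minimal Hopfield-style attractor network. This is a hedged sketch, not Amit's actual model (which differs in architecture and dynamics): a standard Hebbian network with `n_units` binary units, where loading patterns well beyond the classical ~0.14N capacity limit makes retrieval land in spurious attractors, i.e. false-positive responses. All function and parameter names here (`recall_rate`, `settle`, `n_flips`) are illustrative inventions.

```python
import numpy as np

def recall_rate(n_patterns, n_units=100, n_flips=10, seed=0):
    """Fraction of stored patterns recovered from noisy probes."""
    rng = np.random.default_rng(seed)
    patterns = rng.choice([-1, 1], size=(n_patterns, n_units))

    # Hebbian outer-product learning rule; no self-connections
    W = patterns.T.astype(float) @ patterns / n_units
    np.fill_diagonal(W, 0.0)

    def settle(state, sweeps=20):
        # Asynchronous updates; energy decreases, so the state
        # settles into a fixed point (an attractor)
        state = state.copy()
        for _ in range(sweeps):
            changed = False
            for i in range(n_units):
                s = 1 if W[i] @ state >= 0 else -1
                if s != state[i]:
                    state[i] = s
                    changed = True
            if not changed:
                break
        return state

    hits = 0
    for pat in patterns:
        # Corrupt the stored pattern, then let the network settle
        probe = pat.copy()
        probe[rng.choice(n_units, size=n_flips, replace=False)] *= -1
        out = settle(probe)
        # A globally sign-flipped pattern is the same attractor
        hits += int(np.array_equal(out, pat) or np.array_equal(out, -pat))
    return hits / n_patterns

print(recall_rate(5))    # well under capacity: reliable recall
print(recall_rate(40))   # overloaded (0.4 patterns/unit): mostly spurious states
```

Comparing the two loading levels shows the qualitative effect the abstract appeals to: below capacity the network completes noisy probes to the stored memories, while under overload the retrieved states are typically spurious mixtures — the network "responds" without recovering any stored item.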

