Unsupervised learning of visual structure

Abstract

To learn a visual code in an unsupervised manner, one may attempt to capture those features of the stimulus set that would contribute significantly to a statistically efficient representation. Paradoxically, all the candidate features in this approach need to be known before statistics over them can be computed. This paradox may be circumvented by confining the repertoire of candidate features to actual scene fragments, which resemble the “what+where” receptive fields found in the ventral visual stream in primates. We describe a single-layer network that learns such fragments from unsegmented raw images of structured objects. The learning method combines fast imprinting in the feedforward stream with lateral interactions to achieve single-epoch unsupervised acquisition of spatially localized features that can support systematic treatment of structured objects [1].
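The abstract only gestures at the learning procedure, so the following Python sketch is one illustrative reading of it, not the authors' implementation: candidate features are whole image fragments imprinted in a single shot at fixed retinotopic positions ("what+where"), and a lateral novelty check vetoes imprinting when an already-stored unit at the same location responds strongly, so one pass over the images suffices. All class, function, and parameter names (FragmentImprintingLayer, novelty_threshold, and so on) are hypothetical.

```python
import numpy as np


def extract_fragment(image, top, left, size):
    """Cut out and L2-normalize a square patch at a fixed retinotopic spot."""
    patch = image[top:top + size, left:left + size].astype(float)
    norm = np.linalg.norm(patch)
    return patch / norm if norm > 0 else patch


class FragmentImprintingLayer:
    """Single-layer store of 'what+where' units (hypothetical sketch).

    Each unit keeps a normalized image fragment (the 'what') together with
    the location it was cut from (the 'where').  Units are imprinted in one
    shot; a lateral novelty check vetoes imprinting when an existing unit at
    the same location already responds strongly, so one pass over the images
    (a single epoch) is enough.
    """

    def __init__(self, fragment_size=16, novelty_threshold=0.9):
        self.fragment_size = fragment_size
        self.novelty_threshold = novelty_threshold
        self.units = []  # list of (fragment, (top, left)) pairs

    def _max_lateral_response(self, fragment, location):
        """Strongest response among stored units tied to the same location."""
        responses = [float(np.sum(f * fragment))
                     for f, loc in self.units if loc == location]
        return max(responses, default=0.0)

    def imprint(self, image, locations):
        """Feedforward imprinting pass over one unsegmented raw image."""
        for top, left in locations:
            fragment = extract_fragment(image, top, left, self.fragment_size)
            if self._max_lateral_response(fragment, (top, left)) < self.novelty_threshold:
                self.units.append((fragment, (top, left)))

    def respond(self, image):
        """Feedforward responses of all stored units to a new image."""
        return np.array([
            float(np.sum(extract_fragment(image, top, left, self.fragment_size) * f))
            for f, (top, left) in self.units
        ])


# Toy usage on random "images"; real input would be raw images of structured objects.
rng = np.random.default_rng(0)
layer = FragmentImprintingLayer()
grid = [(r, c) for r in (0, 16, 32) for c in (0, 16, 32)]
for _ in range(10):
    layer.imprint(rng.random((48, 48)), grid)
print(len(layer.units), "fragments imprinted after a single epoch")
```

The lateral term here is reduced to a simple similarity-based veto; the paper's lateral interactions may be richer, but the sketch shows how one-shot imprinting plus lateral suppression can yield a compact set of spatially localized fragments in a single epoch.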

Similar books and articles

How Infants Learn About the Visual World. Scott P. Johnson - 2010 - Cognitive Science 34 (7): 1158-1184.
