Incremental Bayesian Category Learning From Natural Language

Cognitive Science 40 (6):1333-1381 (2016)

Abstract

Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper, we focus on categories acquired from natural language stimuli, that is, words. We present a Bayesian model that, unlike previous work, learns both categories and their features in a single process. We model category induction as two interrelated subproblems: the acquisition of features that discriminate among categories, and the grouping of concepts into categories based on those features. Our model learns categories incrementally using particle filters, a sequential Monte Carlo method commonly used for approximate probabilistic inference that sequentially integrates newly observed data and can be viewed as a plausible mechanism for human learning. Experimental results show that our incremental learner obtains meaningful categories which yield a closer fit to behavioral data compared to related models while at the same time acquiring features which characterize the learned categories.
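The abstract describes category induction via particle filters: each particle carries a hypothesis about how the concepts seen so far are grouped, and each new observation is integrated by extending every particle's grouping, reweighting, and resampling. The sketch below is a toy illustration of that general scheme, not the paper's actual model; the similarity-based likelihood, the CRP-style prior, and all function names are assumptions for illustration.

```python
import math
import random

def particle_filter_categories(items, n_particles=100, alpha=1.0, seed=0):
    """Toy incremental category learner using a particle filter.

    Each particle maintains a partition of the items seen so far (a list
    of categories, each a list of feature vectors).  A new item is
    assigned to a category with probability proportional to a
    Chinese-restaurant-process prior times a similarity-based likelihood;
    particles are then reweighted and resampled.
    """
    rng = random.Random(seed)
    particles = [[] for _ in range(n_particles)]
    weights = [1.0] * n_particles

    def likelihood(item, members):
        # Average dot-product similarity to current category members
        # (purely illustrative; the paper's feature model differs).
        sims = [sum(a * b for a, b in zip(item, m)) for m in members]
        return math.exp(sum(sims) / len(sims))

    for item in items:
        for i, cats in enumerate(particles):
            n = sum(len(c) for c in cats)
            # CRP prior times likelihood for each existing category...
            scores = [len(c) / (n + alpha) * likelihood(item, c) for c in cats]
            # ...plus the option of opening a new category.
            scores.append(alpha / (n + alpha))
            k = rng.choices(range(len(scores)), weights=scores)[0]
            if k == len(cats):
                cats.append([item])
            else:
                cats[k].append(item)
            weights[i] *= sum(scores)  # incremental marginal likelihood

        # Resample particles in proportion to their weights (multinomial).
        particles = [[list(c) for c in cats]
                     for cats in rng.choices(particles, weights=weights,
                                             k=n_particles)]
        weights = [1.0] * n_particles

    return particles
```

Because each item is processed once and particles are resampled after every observation, the learner never revisits earlier data — the property the abstract highlights as a plausible mechanism for human incremental learning.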
