Semantic Coherence Facilitates Distributional Learning

Cognitive Science 41 (S4):855-884 (2017)

Abstract

Computational models have shown that purely statistical knowledge about words’ linguistic contexts is sufficient to learn many properties of words, including syntactic and semantic category. For example, models can infer that “postman” and “mailman” are semantically similar because they have quantitatively similar patterns of association with other words. In contrast to these computational results, artificial language learning experiments suggest that distributional statistics alone do not facilitate learning of linguistic categories. However, experiments in this paradigm expose participants to entirely novel words, whereas real language learners encounter input that contains some known words that are semantically organized. In three experiments, we show that the presence of familiar semantic reference points facilitates distributional learning, and that this effect depends crucially both on the presence of known words and on these known words' adherence to some semantic organization.
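
As an illustrative aside (not the authors' model): the kind of distributional inference described above can be sketched with a toy co-occurrence count and a cosine similarity measure. The corpus, sentence-level windowing, and word pairs below are hypothetical and chosen only to show that two words which never co-occur can still have overlapping context vectors.

from collections import Counter
from itertools import combinations
import math

# Hypothetical toy corpus; each line is one "document" for co-occurrence counting.
corpus = [
    "the postman delivered the letter",
    "the mailman delivered the package",
    "the dog chased the mailman",
    "the dog chased the postman",
    "the letter arrived in the mail",
]

# Build symmetric co-occurrence counts within each sentence.
cooc = {}
for sentence in corpus:
    words = sentence.split()
    for w1, w2 in combinations(words, 2):
        cooc.setdefault(w1, Counter())[w2] += 1
        cooc.setdefault(w2, Counter())[w1] += 1

def cosine(a, b):
    # Cosine similarity between two sparse co-occurrence vectors.
    shared = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in shared)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# "postman" and "mailman" never co-occur, yet their context vectors overlap,
# so their similarity is high relative to a less related pair.
print(cosine(cooc["postman"], cooc["mailman"]))
print(cosine(cooc["postman"], cooc["letter"]))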
