Cognitive Science, Volume 46, Issue 4, April 2022.
We evaluate here the performance of four models of cross-situational word learning: two global models, which extract and retain multiple referential alternatives from each word occurrence; and two local models, which extract just a single referent from each occurrence. One of these local models, dubbed Pursuit, uses an associative learning mechanism to estimate word-referent probabilities but pursues and tests only the single best referent hypothesis at any given time. Pursuit is found to perform as well as global models under many conditions extracted from (...)
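The Pursuit mechanism described above can be sketched as follows. This is a minimal illustration, not the published model: the update rule, `gamma`, and `threshold` are illustrative choices, and the trial format (a word paired with a set of candidate referents) is an assumption.

```python
import random

def pursuit(trials, gamma=0.1, threshold=0.7):
    """Toy Pursuit-style learner. Each trial pairs a word with a set of
    candidate referents; only the single best hypothesis per word is
    tested and updated on each exposure."""
    assoc = {}  # assoc[word][referent] -> association strength
    for word, referents in trials:
        hyps = assoc.setdefault(word, {})
        if not hyps:
            # First exposure: conjecture a referent at random.
            hyps[random.choice(sorted(referents))] = gamma
            continue
        best = max(hyps, key=hyps.get)
        if best in referents:
            # Hypothesis confirmed: reward it.
            hyps[best] += gamma * (1 - hyps[best])
        else:
            # Disconfirmed: penalize it and pursue a new candidate.
            hyps[best] *= 1 - gamma
            new = random.choice(sorted(referents))
            hyps[new] = hyps.get(new, 0) + gamma * (1 - hyps.get(new, 0))
    # A word enters the lexicon once its best hypothesis is strong enough.
    lexicon = {w: max(h, key=h.get) for w, h in assoc.items()
               if max(h.values()) >= threshold}
    return assoc, lexicon
```

Because only the current best hypothesis is tested per trial, the learner stores far less per occurrence than a global model that retains every referential alternative.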
Although the language we encounter is typically embedded in rich discourse contexts, many existing models of processing focus largely on phenomena that occur sentence-internally. Similarly, most work on children's language learning does not consider how information can accumulate as a discourse progresses. Research in pragmatics, however, points to ways in which each subsequent utterance provides new opportunities for listeners to infer speaker meaning. Such inferences allow the listener to build up a representation of the speaker's intended topic and more generally (...)
Cross-situational learning has recently gained attention as a plausible candidate for the mechanism that underlies the learning of word-meaning mappings. In a recent study, Blythe and colleagues examined how many trials are theoretically required to learn a human-sized lexicon through cross-situational learning. They show that the level of referential uncertainty to which learners are exposed could be relatively large. However, one of the assumptions they made in designing their mathematical model is questionable. Although they rightfully assumed that words are distributed according (...)
Cross-situational word learning, like any statistical learning problem, involves tracking the regularities in the environment. However, the information that learners pick up from these regularities is dependent on their learning mechanism. This article investigates the role of one type of mechanism in statistical word learning: competition. Competitive mechanisms would allow learners to find the signal in noisy input and would help to explain the speed with which learners succeed in statistical learning tasks. Because cross-situational word learning provides information at multiple (...)
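One way such a competitive mechanism can be realized is by letting a word's candidate referents share a fixed budget of association mass, so that strengthening one candidate necessarily weakens its rivals. The sketch below is illustrative, assuming a hypothetical update rule and parameters, not any specific published model:

```python
def competitive_update(assoc, words, objects, rate=0.2, floor=0.01):
    """One learning trial: boost each co-occurring word-object pair,
    then renormalize each word's row so its candidate referents
    compete for a fixed budget of association mass."""
    for w in words:
        row = assoc.setdefault(w, {})
        for o in objects:
            # New candidates start at a small floor strength.
            row[o] = row.get(o, floor) * (1 + rate)
        total = sum(row.values())
        for o in row:
            row[o] /= total  # competition: strengths sum to 1 per word
    return assoc
```

After a few trials in which a word's true referent recurs while distractors vary, the recurring referent ends up with the largest share of the word's association mass, which is one way a learner could find the signal in noisy input.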
Cross-situational statistical learning of words involves tracking co-occurrences of auditory words and objects across time to infer word-referent mappings. Previous research has demonstrated that learners can infer referents across sets of very phonologically distinct words, but it remains unknown whether learners can encode fine phonological differences during cross-situational statistical learning. This study examined learners’ cross-situational statistical learning of minimal pairs that differed on one consonant segment, minimal pairs that differed on one vowel segment, and non-minimal pairs that differed on two (...) |
Previous research on cross-situational word learning has demonstrated that learners are able to reduce ambiguity in mapping words to referents by tracking co-occurrence probabilities across learning events. In the current experiments, we examined whether learners are able to retain mappings over time. The results revealed that learners are able to retain mappings for up to one week. However, there were interactions between the amount of retention and the different learning conditions. Interestingly, the strongest retention was associated with a learning (...)
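The co-occurrence tracking these studies describe reduces, in its simplest global form, to tallying word-object pairings across trials and keeping the most frequent pairing per word. A minimal sketch, with a hypothetical trial format (parallel lists of words and objects per situation):

```python
from collections import Counter, defaultdict

def cooccurrence_lexicon(trials):
    """Tally word-object co-occurrences across ambiguous trials, then
    map each word to its most frequently co-occurring object."""
    counts = defaultdict(Counter)
    for words, objects in trials:
        for w in words:
            counts[w].update(objects)
    return {w: c.most_common(1)[0][0] for w, c in counts.items()}
```

Even though every individual trial is ambiguous, each word's true referent co-occurs with it more often than any single distractor does, so the correct mappings emerge from the aggregate tallies.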
Infant language learners are faced with the difficult inductive problem of determining how new words map to novel or known objects in their environment. Bayesian inference models have been successful at using the sparse information available in natural child-directed speech to build candidate lexicons and infer speakers’ referential intentions. We begin by asking how a Bayesian model optimized for monolingual input generalizes to new monolingual or bilingual corpora and find that, especially in the case of the bilingual input, the model (...) |
According to usage-based approaches to language acquisition, linguistic knowledge is represented in the form of constructions—form-meaning pairings—at multiple levels of abstraction and complexity. The emergence of syntactic knowledge is assumed to be a result of the gradual abstraction of lexically specific and item-based linguistic knowledge. In this article, we explore how the gradual emergence of a network consisting of constructions at varying degrees of complexity can be modeled computationally. Linguistic knowledge is learned by observing natural language utterances in an ambiguous (...) |