Recent research has demonstrated that word learners can determine word-referent mappings by tracking co-occurrences across multiple ambiguous naming events. The current study addresses the mechanisms underlying this capacity to learn words cross-situationally. This replication and extension of Yu and Smith (2007) investigates the factors influencing both successful cross-situational word learning and mis-mappings. Item analysis and error patterns revealed that the co-occurrence structure of the learning environment and the context of the testing environment jointly affected learning across observations. Learners also adopted an exclusion strategy, which contributed jointly with statistical tracking to performance. Implications for our understanding of the processes underlying cross-situational word learning are discussed.
Although language has long been regarded as a primarily arbitrary system, sound symbolism, or non-arbitrary correspondences between the sound of a word and its meaning, also exists in natural language. Previous research suggests that listeners are sensitive to sound symbolism. However, little is known about the specificity of these mappings. This study investigated whether sound symbolic properties correspond to specific meanings, or whether these properties generalize across semantic dimensions. In three experiments, native English-speaking adults heard sound symbolic foreign words for dimensional adjective pairs and, for each foreign word, selected a translation among English antonyms that either matched or mismatched the correct meaning dimension. Listeners agreed more reliably on the English translation for matched relative to mismatched dimensions, though reliable cross-dimensional mappings did occur. These findings suggest that although sound symbolic properties generalize to meanings that may share overlapping semantic features, sound symbolic mappings offer semantic specificity.
The proposal that language has evolved to conform to general cognitive and learning constraints inherent in the human brain calls for specification of these mechanisms. We propose that just as cognition appears to be grounded in cross-modal perceptual-motor capabilities, so too must language. Evidence for perceptual-motor grounding comes from non-arbitrary sound-to-meaning correspondences and their role in word learning.
The current study assessed the extent to which the use of referential prosody varies with communicative demand. Speaker–listener dyads completed a referential communication task during which speakers attempted to indicate one of two color swatches (one bright, one dark) to listeners. Speakers' bright sentences were reliably higher pitched than dark sentences for ambiguous (e.g., bright red versus dark red) but not unambiguous (e.g., bright red versus dark purple) trials, suggesting that speakers produced meaningful acoustic cues to brightness when the accompanying linguistic content was underspecified (e.g., "Can you get the red one?"). Listening partners reliably chose the correct corresponding swatch for ambiguous trials when lexical information was insufficient to identify the target, suggesting that listeners recruited prosody to resolve lexical ambiguity. Prosody can thus be conceptualized as a type of vocal gesture that can be recruited to resolve referential ambiguity when there is communicative demand to do so.