Abstract
Yang returns to the problem of referential ambiguity addressed in the opening paper by Gleitman and Trueswell. Using a computational approach, he argues that “big data” strategies for resolving referential ambiguity are destined to fail because of the inevitable computational explosion involved in keeping track of the contextual associations present whenever a word is uttered. Yang tests several computational models, two of which depend on one‐trial learning as described in Gleitman and Trueswell’s paper. He concludes that such models outperform cross‐situational learning models, thereby formalizing, reinforcing, and advancing the conclusions offered by Gleitman and Trueswell.
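To make the contrast concrete, the following is a minimal illustrative sketch, not Yang’s actual models or data: it compares a cross‐situational learner, which must tally every word–referent co‐occurrence across situations, with a propose‐but‐verify (one‐trial) learner, which stores only a single hypothesis per word and checks it on the next encounter. The toy scene generator and all of its parameters are hypothetical.

```python
# Illustrative sketch only: a toy contrast between cross-situational tallying
# and single-hypothesis propose-but-verify learning. The scene generator and
# its parameters are hypothetical, not drawn from Yang's simulations.
import random
from collections import defaultdict

def make_situations(vocab_size=50, referents_per_scene=5, n_scenes=500, seed=0):
    """Generate toy scenes: each scene pairs one target word with a set of
    candidate referents that always includes the word's true referent."""
    rng = random.Random(seed)
    words = [f"w{i}" for i in range(vocab_size)]
    truth = {w: f"r{i}" for i, w in enumerate(words)}  # gold word -> referent map
    scenes = []
    for _ in range(n_scenes):
        word = rng.choice(words)
        distractors = rng.sample([truth[w] for w in words if w != word],
                                 referents_per_scene - 1)
        scenes.append((word, [truth[word]] + distractors))
    return scenes, truth

def cross_situational(scenes):
    """Keep a count for every word-referent pair ever co-present, then guess
    the most frequent referent. Storage grows with vocabulary x referents."""
    counts = defaultdict(lambda: defaultdict(int))
    for word, referents in scenes:
        for r in referents:
            counts[word][r] += 1
    return {w: max(rs, key=rs.get) for w, rs in counts.items()}

def propose_but_verify(scenes, seed=0):
    """Store one hypothesized referent per word; keep it if it reappears in
    the next encounter, otherwise discard it and guess from the current scene."""
    rng = random.Random(seed)
    hypothesis = {}
    for word, referents in scenes:
        if word in hypothesis and hypothesis[word] in referents:
            continue  # hypothesis verified; retain it
        hypothesis[word] = rng.choice(referents)  # propose a fresh guess
    return hypothesis

if __name__ == "__main__":
    scenes, truth = make_situations()
    for name, learner in [("cross-situational", cross_situational),
                          ("propose-but-verify", propose_but_verify)]:
        learned = learner(scenes)
        accuracy = sum(learned.get(w) == truth[w] for w in truth) / len(truth)
        print(f"{name}: {accuracy:.2f} of words mapped correctly")
```

The point of the sketch is the difference in bookkeeping, not the accuracy figures: the cross‐situational learner must retain counts for every word–referent pairing it has ever observed, whereas the propose‐but‐verify learner carries only one candidate meaning per word at any time.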