1.
Composition in Distributional Models of Semantics. Jeff Mitchell & Mirella Lapata (2010). Cognitive Science 34(8): 1388–1429.
    Vector-based models of word meaning have become increasingly popular in cognitive science. The appeal of these models lies in their ability to represent meaning simply by using distributional information under the assumption that words occurring within similar contexts are semantically similar. Despite their widespread use, vector-based models are typically directed at representing words in isolation, and methods for constructing representations for phrases or sentences have received little attention in the literature. This is in marked contrast to experimental evidence (e.g., in (...)
2.
Incremental Bayesian Category Learning From Natural Language. Lea Frermann & Mirella Lapata (2016). Cognitive Science 40(6): 1333–1381.
    Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper, we focus on categories acquired from natural language stimuli, that is, words. We present a Bayesian model that, unlike previous work, learns both categories and their features in a single process. We model category induction as two interrelated subproblems: the acquisition of features that discriminate among categories, and the grouping of concepts into categories based on those features. (...)
3.
Intra-Sentential Context Effects on the Interpretation of Logical Metonymy. Mirella Lapata, Frank Keller & Christoph Scheepers (2003). Cognitive Science 27(4): 649–668.