Over the past 15 years, there have been two increasingly popular approaches to the study of meaning in cognitive science. One, based on theories of embodied cognition, treats meaning as a simulation of perceptual and motor states. An alternative approach treats meaning as a consequence of the statistical distribution of words across spoken and written language. On the surface, these appear to be opposing scientific paradigms. In this review, we aim to show how recent cross-disciplinary developments have done much to reconcile these two approaches. The foundation of these developments has been the recognition that intralinguistic distributional and sensory-motor data are interdependent. We describe recent work in philosophy, psychology, cognitive neuroscience, and computational modeling that is based on or consistent with this conclusion. We conclude by considering some possible directions for future research that arise as a consequence of these developments.
In this paper, we describe a model that learns semantic representations from the distributional statistics of language. This model, however, goes beyond the common bag-of-words paradigm, and infers semantic representations by taking into account the inherent sequential nature of linguistic data. The model we describe, which we refer to as a Hidden Markov Topics model, is a natural extension of the current state of the art in Bayesian bag-of-words models, that is, the Topics model of Griffiths, Steyvers, and Tenenbaum (2007), preserving its strengths while extending its scope to incorporate more fine-grained linguistic information.
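To illustrate the general idea behind a Hidden Markov Topics model, the sketch below shows a toy generative process in which latent topic assignments follow a first-order Markov chain and each word is drawn from its topic's word distribution. This is a minimal illustration of the HMM-over-topics idea, not the paper's exact specification: the dimensions (K, V, T) and the randomly sampled parameters (pi, A, phi) are hypothetical, chosen only to make the sketch runnable.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions (not from the paper):
# K topics, V word types, sequence length T.
K, V, T = 3, 8, 20

# Initial topic distribution pi, topic transition matrix
# A[i, j] = P(z_t = j | z_{t-1} = i), and per-topic word
# distributions phi[k] over the vocabulary.
pi = rng.dirichlet(np.ones(K))
A = rng.dirichlet(np.ones(K), size=K)
phi = rng.dirichlet(np.ones(V), size=K)

def generate(T):
    """Sample a word sequence: the topic sequence is a Markov
    chain, and each word is drawn from its topic's distribution."""
    z = rng.choice(K, p=pi)
    words, topics = [], []
    for _ in range(T):
        words.append(int(rng.choice(V, p=phi[z])))
        topics.append(int(z))
        z = rng.choice(K, p=A[z])
    return words, topics

words, topics = generate(T)
```

In a bag-of-words Topics model the topic of each word is drawn independently of its neighbors; replacing that independent draw with the Markov transition `A[z]` is what lets the model exploit the sequential structure of language.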
Pulvermüller restricts himself to an unnecessarily narrow range of evidence to support his claims. Evidence from neural modeling and behavioral experiments provides further support for an account of words encoded as transcortical cell assemblies. A cognitive neuroscience of language must include a range of methodologies (e.g., neural, computational, and behavioral) and will need to focus on the on-line processes of real-time language processing in more natural contexts.