Visual and Affective Multimodal Models of Word Meaning in Language and Mind

Cognitive Science 45 (1):e12922 (2021)

Abstract

One of the main limitations of natural language‐based approaches to meaning is that they do not incorporate multimodal representations the way humans do. In this study, we evaluate how well different kinds of models account for people's representations of both concrete and abstract concepts. The models we compare include unimodal distributional linguistic models as well as multimodal models which combine linguistic with perceptual or affective information. There are two types of linguistic models: those based on text corpora and those derived from word association data. We present two new studies and a reanalysis of a series of previous studies. The studies demonstrate that both visual and affective multimodal models better capture behavior that reflects human representations than unimodal linguistic models. The size of the multimodal advantage depends on the nature of semantic representations involved, and it is especially pronounced for basic‐level concepts that belong to the same superordinate category. Additional visual and affective features improve the accuracy of linguistic models based on text corpora more than those based on word associations; this suggests systematic qualitative differences between what information is encoded in natural language versus what information is reflected in word associations. Altogether, our work presents new evidence that multimodal information is important for capturing both abstract and concrete words and that fully representing word meaning requires more than purely linguistic information. Implications for both embodied and distributional views of semantic representation are discussed.

Similar books and articles

Models for anodic and cathodic multimodalities.Juliana Bueno-Soler - 2012 - Logic Journal of the IGPL 20 (2):458-476.
The Epistemology of Non-visual Perception.Dimitria Gatzia & Berit Brogaard (eds.) - 2020 - Oxford, U.K.: Oxford University Press.
The Multimodal Experience of Art.Bence Nanay - 2012 - British Journal of Aesthetics 52 (4):353-363.
