Learning the Meanings of Function Words From Grounded Language Using a Visual Question Answering Model

Cognitive Science 48 (5):e13448 (2024)

Abstract

Interpreting a seemingly simple function word like “or,” “behind,” or “more” can require logical, numerical, and relational reasoning. How are such words learned by children? Prior acquisition theories have often relied on positing a foundation of innate knowledge. Yet recent neural‐network‐based visual question answering models apparently can learn to use function words as part of answering questions about complex visual scenes. In this paper, we study what these models learn about function words, in the hope of better understanding how the meanings of these words can be learned by both models and children. We show that recurrent models trained on visually grounded language learn gradient semantics for function words requiring spatial and numerical reasoning. Furthermore, we find that these models can learn the meanings of the logical connectives “and” and “or” without any prior knowledge of logical reasoning, and we find early evidence that they are sensitive to alternative expressions when interpreting language. Finally, we show that word‐learning difficulty depends on word frequency in the models' input. Our findings offer proof‐of‐concept evidence that the nuanced interpretations of function words can be learned in a visually grounded context by non‐symbolic, general statistical learning algorithms, without any prior knowledge of linguistic meaning.
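The setup the abstract describes — a recurrent network that encodes a question word by word, fuses it with visual features of a scene, and outputs a graded answer — can be illustrated with a minimal sketch. Everything below is hypothetical and illustrative (the vocabulary, dimensions, and random untrained weights are stand-ins for the paper's actual trained model); the point is only to show where “gradient semantics” would be read off: from the continuous answer score rather than a binary truth value.

```python
import math
import random

random.seed(0)

HIDDEN = 8

def rand_matrix(rows, cols):
    # Small random weights; a stand-in for learned parameters.
    return [[random.uniform(-0.5, 0.5) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

def tanh_vec(v):
    return [math.tanh(x) for x in v]

# Toy vocabulary including the function words at issue ("behind", "or").
VOCAB = {"is": 0, "a": 1, "cube": 2, "behind": 3, "the": 4, "sphere": 5, "or": 6}
EMBED = rand_matrix(len(VOCAB), HIDDEN)   # word embeddings
W_IN = rand_matrix(HIDDEN, HIDDEN)        # input-to-hidden weights
W_REC = rand_matrix(HIDDEN, HIDDEN)       # hidden-to-hidden (recurrent) weights
W_OUT = rand_matrix(1, 2 * HIDDEN)        # classifier over [question; scene]

def encode_question(tokens):
    # Elman-style recurrence: h_t = tanh(W_in e_t + W_rec h_{t-1})
    h = [0.0] * HIDDEN
    for tok in tokens:
        e = EMBED[VOCAB[tok]]
        h = tanh_vec([a + b for a, b in zip(matvec(W_IN, e), matvec(W_REC, h))])
    return h

def answer_probability(tokens, scene_features):
    # Fuse the question encoding with (precomputed) visual features and
    # squash to a continuous "yes" probability. A gradient semantics for a
    # word like "behind" shows up as graded values of this score across
    # scenes, rather than a hard true/false boundary.
    fused = encode_question(tokens) + scene_features
    logit = matvec(W_OUT, fused)[0]
    return 1.0 / (1.0 + math.exp(-logit))

scene = [0.1] * HIDDEN  # stand-in for CNN features of a rendered scene
p = answer_probability(["is", "the", "cube", "behind", "the", "sphere"], scene)
print(p)
```

With random weights the score is meaningless; in the paper's setting the weights would be trained end-to-end on question–scene–answer triples, and the learned score profiles across controlled scenes are what the analyses probe.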

Links

PhilArchive





Similar books and articles

Learning, Empowerment and Judgement.Michael Luntley - 2008 - In Mark Mason (ed.), Critical Thinking and Learning. Wiley-Blackwell. pp. 79–92.
Can Becoming Bilingualism In The Childhood And Becoming Bilingual Later Be Parallel?Emin Yas - 2022 - Journal of Current Debates in Social Sciences 2 (2):243-249.
Monotonicity Reasoning in the Age of Neural Foundation Models.Zeming Chen & Qiyue Gao - 2023 - Journal of Logic, Language and Information 33 (1):49-68.

