The Efficiency of Question‐Asking Strategies in a Real‐World Visual Search Task

Cognitive Science 47 (12):e13396 (2023)

Abstract

In recent years, a multitude of datasets of human–human conversations has been released for the main purpose of training conversational agents based on data‐hungry artificial neural networks. In this paper, we argue that datasets of this sort represent a useful and underexplored source to validate, complement, and enhance cognitive studies on human behavior and language use. We present a method that leverages the recent development of powerful computational models to obtain the fine‐grained annotation required to apply metrics and techniques from Cognitive Science to large datasets. Previous work in Cognitive Science has investigated the question‐asking strategies of human participants by employing different variants of the so‐called 20‐question‐game setting and proposing several evaluation methods. In our work, we focus on GuessWhat, a task proposed within the Computer Vision and Natural Language Processing communities that is similar in structure to the 20‐question‐game setting. Crucially, the GuessWhat dataset contains tens of thousands of dialogues based on real‐world images, making it a suitable setting to investigate the question‐asking strategies of human players on a large scale and in a natural setting. Our results demonstrate the effectiveness of computational tools to automatically code how the hypothesis space changes throughout the dialogue in complex visual scenes. On the one hand, we confirm findings from previous work on smaller and more controlled settings. On the other hand, our analyses allow us to highlight the presence of “uninformative” questions (in terms of Expected Information Gain) at specific rounds of the dialogue. We hypothesize that these questions fulfill pragmatic constraints that are exploited by human players to solve visual tasks in complex scenes successfully. Our work illustrates a method that brings together efforts and findings from different disciplines to gain a better understanding of human question‐asking strategies on large‐scale datasets, while at the same time posing new questions about the development of conversational systems.
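The abstract's central metric, Expected Information Gain (EIG), can be illustrated with a short sketch. The snippet below is a minimal illustration rather than the authors' code: it scores a yes/no question by how much it is expected to reduce the entropy of a hypothesis space of candidate target objects, as in a GuessWhat-style visual dialogue. The helper answer_for and the uniform prior are assumptions introduced purely for this example.

import math
from collections import defaultdict

def entropy(probs):
    # Shannon entropy (in bits) of a discrete distribution.
    return -sum(p * math.log2(p) for p in probs if p > 0)

def expected_information_gain(candidates, prior, question, answer_for):
    # EIG(q) = H(prior) - sum_a P(answer = a) * H(posterior | answer = a).
    # candidates : list of candidate objects (the hypothesis space)
    # prior      : dict mapping candidate -> probability of being the target
    # question   : the question whose informativeness is being scored
    # answer_for : hypothetical callable (candidate, question) -> the answer the
    #              oracle would give if that candidate were the hidden target
    mass_by_answer = defaultdict(float)      # P(answer = a)
    survivors_by_answer = defaultdict(list)  # prior mass of candidates consistent with a
    for c in candidates:
        a = answer_for(c, question)
        mass_by_answer[a] += prior[c]
        survivors_by_answer[a].append(prior[c])

    expected_posterior_entropy = 0.0
    for a, p_a in mass_by_answer.items():
        # Renormalize the surviving candidates to get the posterior given answer a.
        posterior = [p / p_a for p in survivors_by_answer[a]]
        expected_posterior_entropy += p_a * entropy(posterior)

    return entropy(prior.values()) - expected_posterior_entropy

# Toy usage: three equiprobable candidates; "Is it a car?" splits them 2 vs. 1.
candidates = ["dog", "red car", "blue car"]
prior = {c: 1.0 / 3 for c in candidates}
answer_for = lambda c, q: "yes" if "car" in c else "no"
print(expected_information_gain(candidates, prior, "Is it a car?", answer_for))

On this toy hypothesis space the question yields roughly 0.92 bits of expected gain (1.58 bits of prior entropy minus 0.67 bits of expected posterior entropy); a question whose answer is the same for every candidate would score 0, which is the sense in which the paper labels some human questions "uninformative".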

Links

PhilArchive




Similar books and articles

The rate of blinking during prolonged visual search. A. Carpenter - 1948 - Journal of Experimental Psychology 38 (5): 587.
The operation of set in a visual search task. Harriet Foster - 1962 - Journal of Experimental Psychology 63 (1): 74.
Effect of m value on visual search. Gerald J. Organt - 1971 - Journal of Experimental Psychology 89 (1): 171.
Attentional effects in visual search: Relating search accuracy and search time. John Palmer - 1998 - In Richard D. Wright (ed.), Visual Attention. Oxford University Press. pp. 8--348.
Color coding in a visual search task. Bert F. Green & Lois K. Anderson - 1956 - Journal of Experimental Psychology 51 (1): 19.
