In many learning or inference tasks, human behavior approximates that of a Bayesian ideal observer, suggesting that, at some level, cognition can be described as Bayesian inference. However, a number of findings have highlighted an intriguing mismatch between human behavior and standard assumptions about optimality: people often appear to make decisions based on just one or a few samples from the appropriate posterior probability distribution, rather than using the full distribution. Although sampling-based approximations are a common way to implement Bayesian inference, the very limited numbers of samples people often use seem insufficient to approximate the required probability distributions accurately. Here, we consider this discrepancy in the broader framework of statistical decision theory and ask: if people are making decisions based on samples, and samples are costly, how many samples should they use to optimize their total expected or worst-case reward over a large number of decisions? We find that, under reasonable assumptions about the time costs of sampling, making many quick but locally suboptimal decisions based on very few samples may be the globally optimal strategy over long periods. These results help to reconcile a large body of work showing sampling-based or probability-matching behavior with the hypothesis that human cognition can be understood in Bayesian terms, and they suggest promising future directions for studies of resource-constrained cognition.
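A minimal sketch of the trade-off this abstract describes, under illustrative assumptions that are not taken from the paper itself: an agent facing a two-alternative decision draws k posterior samples, picks the majority option, and pays a fixed time cost per action plus a time cost per sample. The cost values, simulation sizes, and the uniform distribution over task difficulty below are all hypothetical choices for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def expected_accuracy(p, k, n_sim=20_000):
    """Probability that a majority vote over k Bernoulli(p) samples
    picks the option whose true posterior probability is p (> 0.5)."""
    samples = rng.random((n_sim, k)) < p   # each sample endorses the better option w.p. p
    votes = samples.sum(axis=1)
    wins = votes > k / 2
    ties = votes == k / 2                  # break ties at random
    return wins.mean() + 0.5 * ties.mean()

def reward_rate(p, k, action_cost=1.0, sample_cost=0.1):
    """Expected reward per unit time over many decisions:
    accuracy divided by the time spent acting plus sampling."""
    return expected_accuracy(p, k) / (action_cost + k * sample_cost)

# Average over a uniform prior on p in (0.5, 1), as a stand-in for
# many decisions of varying difficulty.
ps = rng.uniform(0.5, 1.0, 100)
for k in (1, 3, 5, 11, 101):
    rr = np.mean([reward_rate(p, k) for p in ps])
    print(f"k={k:>3}: mean reward rate = {rr:.3f}")
```

With these illustrative costs, the reward rate tends to be highest at k = 1: extra samples buy accuracy more slowly than they burn time, which is the intuition behind the abstract's claim that many quick, sample-poor decisions can be globally optimal.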
Recent work suggests that people predict how objects interact in a manner consistent with Newtonian physics, but with additional uncertainty. However, the sources of that uncertainty have not been examined. In this study, we measure perceptual noise in initial conditions and stochasticity in the physical model used to make predictions. Participants predicted the trajectory of a moving object through occluded motion and bounces, and we compared their behavior to an ideal observer model. We found that human judgments cannot be captured by simple heuristics and must incorporate noisy dynamics. Moreover, these judgments are biased in a manner consistent with a prior expectation about object destinations, suggesting that people use simple expectations about outcomes to compensate for uncertainty about their physical models.
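A minimal sketch of a "noisy Newton" observer of the sort the abstract describes: perceptual noise corrupts the observed initial position and velocity, and dynamics noise jitters the velocity at every simulation step. The box geometry, noise magnitudes, and particle counts are illustrative assumptions, not the study's actual model.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(pos, vel, n_steps=200, dt=0.02, dyn_noise=0.02, box=1.0):
    """Roll a ball forward inside a unit box with elastic bounces,
    adding small Gaussian jitter to the velocity each step (dynamics noise)."""
    pos, vel = pos.copy(), vel.copy()
    for _ in range(n_steps):
        vel += rng.normal(0.0, dyn_noise, size=2)
        pos += vel * dt
        for d in range(2):                  # reflect off each wall
            if pos[d] < 0.0:
                pos[d], vel[d] = -pos[d], -vel[d]
            elif pos[d] > box:
                pos[d], vel[d] = 2.0 * box - pos[d], -vel[d]
    return pos

def predict_destination(obs_pos, obs_vel, n_particles=1000, perc_noise=0.05):
    """Monte Carlo prediction: resample the initial conditions under
    perceptual noise and simulate each particle forward."""
    finals = [simulate(obs_pos + rng.normal(0.0, perc_noise, size=2),
                       obs_vel + rng.normal(0.0, perc_noise, size=2))
              for _ in range(n_particles)]
    return np.array(finals)

finals = predict_destination(np.array([0.1, 0.5]), np.array([1.0, 0.3]))
print("mean predicted destination:", finals.mean(axis=0))
print("spread (std):", finals.std(axis=0))
```

The destination bias the abstract reports could be modeled on top of this sketch by reweighting the simulated endpoints under a prior distribution over likely destinations, pulling predictions toward expected outcomes when the dynamics are uncertain.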
DeStefano, Oey, Brockbank, and Vul explore interdisciplinary collaboration using data-driven measures of research topics and co-authorship, constructed from a rich dataset of over 11,000 CogSci conference papers. Their findings suggest that the cognitive science research community has become increasingly integrated over the last 19 years.
Pietraszewski proposes four triadic “primitives” for representing social groups. We argue that, despite surface differences, these triads can all be reduced to similar underlying welfare trade-off ratios, which are better candidates for social group primitives. Welfare trade-off ratios also have limitations, however, and we suggest there are multiple computational strategies by which people recognize and reason about social groups.
Like most domains of science, the study of the mind has been tackled at many scales of analysis, from the behavior of large groups of people to the diffusion of ions across cellular membranes. At each of these scales, researchers often believe that the critical phenomena of interest, and the most powerful explanatory constructs and mechanisms, reside at their scale of analysis, with finer scales argued to be incapable of predicting the interesting phenomena, while coarser scales are purported to miss critical mechanistic subtleties. Here we argue by analogy that, for better or worse, researchers at all scales are correct: phenomena at each scale of analysis are intractable from other scales; thus, while reductionism is a useful scientific goal, it will not obviate the need for macroscopic research, constructs, and formalisms.