Recent advances in the field of statistical learning have established that learners are able to track regularities of multimodal stimuli, yet it is unknown whether the statistical computations are performed on integrated representations or on separate, unimodal representations. In the present study, we investigated the ability of adults to integrate audio and visual input during statistical learning. We presented learners with a speech stream synchronized with a video of a speaker’s face. In the critical condition, the visual (e.g. /gi/) and auditory (e.g. /mi/) signals were occasionally incongruent, which we predicted would produce the McGurk illusion, resulting in the perception of an audiovisual syllable (e.g. /ni/). In this way, we used the McGurk illusion to manipulate the underlying statistical structure of the speech streams, such that perception of these illusory syllables facilitated participants’ ability to segment the speech stream. Our results therefore demonstrate that participants can integrate audio and visual input to perceive the McGurk illusion during statistical learning. We interpret our findings as support for modality-interactive accounts of statistical learning.
Vaesen asks whether goal maintenance and planning ahead are critical for innovative tool use. We suggest that these aptitudes may have an evolutionary foundation in motor planning abilities that span all primate species. Anticipatory effects evidenced in the reaching behaviors of lemurs, tamarins, and rhesus monkeys likewise bear on the evolutionary origins of foresight as it pertains to tool use.