Human language is a salient example of a neurocognitive system that is specialized to process complex dependencies between sensory events distributed in time, yet how this system evolved and specialized remains unclear. Artificial Grammar Learning (AGL) studies have generated a wealth of insights into how human adults and infants process different types of sequencing dependencies of varying complexity. The AGL paradigm has also been adopted to examine the sequence processing abilities of nonhuman animals. We critically evaluate this growing literature in species ranging from mammals (primates and rats) to birds (pigeons, songbirds, and parrots), also considering cross‐species comparisons. The findings are contrasted with seminal studies in human infants that motivated the work in nonhuman animals. This synopsis identifies advances in knowledge and where uncertainty remains regarding the various strategies that nonhuman animals can adopt for processing sequencing dependencies. The paucity of evidence in the few species studied to date, and the need for follow‐up experiments, indicate that we do not yet understand the limits of animal sequence processing capacities, and thereby the evolutionary pattern of these abilities. This vibrant, yet still budding, field of research carries substantial promise for advancing knowledge on animal abilities, cognitive substrates, and language evolution.
Artificial grammar learning (AGL) is used to study how human adults, infants, animals or machines learn various sorts of rules defined over sounds or visual items. Ten Cate et al. introduce the topic and provide a critical synthesis of this important interdisciplinary area of research. They identify the questions that remain open and the challenges that lie ahead, and argue that the limits of human, animal and machine learning abilities have yet to be found.