References
  • iMinerva: A Mathematical Model of Distributional Statistical Learning. Erik D. Thiessen & Philip I. Pavlik - 2013 - Cognitive Science 37 (2):310-343.
    Statistical learning refers to the ability to identify structure in the input based on its statistical properties. For many linguistic structures, the relevant statistical features are distributional: They are related to the frequency and variability of exemplars in the input. These distributional regularities have been suggested to play a role in many different aspects of language learning, including identifying phonetic categories, using phonemic distinctions in word learning, and discovering non-adjacent relations. On the surface, these different aspects share few commonalities. Despite this, (...)
  • Discovering Words in Fluent Speech: The Contribution of Two Kinds of Statistical Information. Erik D. Thiessen & Lucy C. Erickson - 2012 - Frontiers in Psychology 3.
  • When learning goes beyond statistics: Infants represent visual sequences in terms of chunks. Lauren K. Slone & Scott P. Johnson - 2018 - Cognition 178 (C):92-102.
  • What exactly is learned in visual statistical learning? Insights from Bayesian modeling. Noam Siegelman, Louisa Bogaerts, Blair C. Armstrong & Ram Frost - 2019 - Cognition 192 (C):104002.
  • Redefining “Learning” in Statistical Learning: What Does an Online Measure Reveal About the Assimilation of Visual Regularities? Noam Siegelman, Louisa Bogaerts, Ofer Kronenfeld & Ram Frost - 2018 - Cognitive Science 42 (S3):692-727.
    From a theoretical perspective, most discussions of statistical learning have focused on the possible “statistical” properties that are the object of learning. Much less attention has been given to defining what “learning” is in the context of “statistical learning.” One major difficulty is that SL research has monitored participants’ performance in laboratory settings with a strikingly narrow set of tasks, where learning is typically assessed offline, through a set of two-alternative forced-choice questions that follow a brief visual or auditory familiarization (...)
  • All words are not created equal: Expectations about word length guide infant statistical learning. Jenny R. Saffran & Casey Lew-Williams - 2012 - Cognition 122 (2):241-246.
    Infants have been described as 'statistical learners' capable of extracting structure (such as words) from patterned input (such as language). Here, we investigated whether prior knowledge influences how infants track transitional probabilities in word segmentation tasks. Are infants biased by prior experience when engaging in sequential statistical learning? In a laboratory simulation of learning across time, we exposed 9- and 10-month-old infants to a list of either disyllabic or trisyllabic nonsense words, followed by a pause-free speech stream composed of a (...)
  • MDLChunker: A MDL-Based Cognitive Model of Inductive Learning. Vivien Robinet, Benoît Lemaire & Mirta B. Gordon - 2011 - Cognitive Science 35 (7):1352-1389.
    This paper presents a computational model of the way humans inductively identify and aggregate concepts from the low-level stimuli they are exposed to. Based on the idea that humans tend to select the simplest structures, it implements a dynamic hierarchical chunking mechanism in which the decision whether to create a new chunk is based on an information-theoretic criterion, the Minimum Description Length (MDL) principle. We present theoretical justifications for this approach together with results of an experiment in which participants, exposed (...)
  • Learning Higher‐Order Transitional Probabilities in Nonhuman Primates. Arnaud Rey, Joël Fagot, Fabien Mathy, Laura Lazartigues, Laure Tosatto, Guillem Bonafos, Jean-Marc Freyermuth & Frédéric Lavigne - 2022 - Cognitive Science 46 (4):e13121.
  • What Mechanisms Underlie Implicit Statistical Learning? Transitional Probabilities Versus Chunks in Language Learning. Pierre Perruchet - 2019 - Topics in Cognitive Science 11 (3):520-535.
    Perruchet and Pacton (2006) asked whether implicit learning and statistical learning represent two approaches to the same phenomenon. This article is an important follow-up to that seminal review. As in the earlier paper, the focus is on the formation of elementary cognitive units. The two approaches favor different explanations of what these units consist of and how they are formed. Perruchet weighs the evidence for each and concludes with a helpful agenda for future research.
  • Exploiting Multiple Sources of Information in Learning an Artificial Language: Human Data and Modeling. Pierre Perruchet & Barbara Tillmann - 2010 - Cognitive Science 34 (2):255-285.
    This study investigates the joint influences of three factors on the discovery of new word‐like units in a continuous artificial speech stream: the statistical structure of the ongoing input, the initial word‐likeness of parts of the speech flow, and the contextual information provided by the earlier emergence of other word‐like units. Results of an experiment conducted with adult participants show that these sources of information have strong and interactive influences on word discovery. The authors then examine the ability of different (...)
  • Language experience changes subsequent learning. Luca Onnis & Erik Thiessen - 2013 - Cognition 126 (2):268-284.
  • The Temporal Dynamics of Regularity Extraction in Non‐Human Primates. Laure Minier, Joël Fagot & Arnaud Rey - 2016 - Cognitive Science 40 (4):1019-1030.
    Extracting the regularities of our environment is one of our core cognitive abilities. To study the fine-grained dynamics of the extraction of embedded regularities, a method combining the advantages of the artificial language paradigm and the serial response time task was used with a group of Guinea baboons in a new automatic experimental device. After a series of random trials, monkeys were exposed to language-like patterns. We found that the extraction of embedded patterns positioned at the end of larger patterns (...)
  • Parts beget parts: Bootstrapping hierarchical object representations through visual statistical learning. Alan L. F. Lee, Zili Liu & Hongjing Lu - 2021 - Cognition 209 (C):104515.
  • Learning a Generative Probabilistic Grammar of Experience: A Process-Level Model of Language Acquisition. Oren Kolodny, Arnon Lotem & Shimon Edelman - 2015 - Cognitive Science 39 (2):227-267.
    We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The grammar constructed in this (...)
  • Modeling human performance in statistical word segmentation. Michael C. Frank, Sharon Goldwater, Thomas L. Griffiths & Joshua B. Tenenbaum - 2010 - Cognition 117 (2):107-125.
  • A Recurrent Connectionist Model of Melody Perception: An Exploration Using TRACX2. Daniel Defays, Robert M. French & Barbara Tillmann - 2023 - Cognitive Science 47 (4):e13283.
    Are similar, or even identical, mechanisms used in the computational modeling of speech segmentation, serial image processing, and music processing? We address this question by exploring how TRACX2, a recognition‐based, recursive connectionist autoencoder model of chunking and sequence segmentation, which has successfully simulated speech and serial‐image processing, might be applied to elementary melody perception. The model, a three‐layer autoencoder that recognizes “chunks” of short sequences of intervals that have been frequently encountered on input, is trained on the tone intervals of (...)
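Several of the entries above (Saffran & Lew-Williams; Perruchet; Frank, Goldwater, Griffiths & Tenenbaum) center on forward transitional probabilities as a segmentation cue: P(B|A) = freq(AB) / freq(A), with word boundaries posited where the TP dips. As a point of reference, here is a minimal illustrative Python sketch; it is not drawn from any of the cited implementations, and the toy lexicon and the local-minimum boundary rule are assumptions made for the example.

```python
import random
from collections import Counter

def transitional_probabilities(syllables):
    """Forward transitional probability P(B | A) = freq(A B) / freq(A),
    computed over every adjacent pair in the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, tps):
    """Posit a word boundary wherever the incoming TP is a local minimum
    (lower than the TPs on both sides): the classic dip heuristic."""
    pair_tps = [tps[pair] for pair in zip(syllables, syllables[1:])]
    words, current = [], [syllables[0]]
    for i, syll in enumerate(syllables[1:]):
        left = pair_tps[i - 1] if i > 0 else float("inf")
        right = pair_tps[i + 1] if i + 1 < len(pair_tps) else float("inf")
        if pair_tps[i] < left and pair_tps[i] < right:  # dip -> boundary
            words.append("".join(current))
            current = []
        current.append(syll)
    words.append("".join(current))
    return words

# A Saffran-style stream: three trisyllabic nonsense words in random order.
random.seed(1)
lexicon = [["go", "la", "bu"], ["pa", "do", "ti"], ["bi", "da", "ku"]]
stream = [syll for word in random.choices(lexicon, k=60) for syll in word]
tps = transitional_probabilities(stream)
print(segment(stream, tps))  # every segment is one of the three words
```

On this stream the within-word TPs are 1.0 while the between-word TPs hover around 1/3, so every dip falls on a true word edge, which is exactly the asymmetry the word-segmentation studies above exploit.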
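The MDLChunker entry turns on a Minimum Description Length criterion: create a chunk only when the cost of adding it to the codebook is repaid by a shorter encoding of the data. The sketch below is a deliberately simplified two-part-code version of that decision, not the model's actual algorithm; the greedy longest-match parse, the uniform per-symbol codebook cost, and the toy sequence are all assumptions.

```python
import math
from collections import Counter

def parse(seq, lexicon):
    """Greedy longest-match parse of a symbol string into lexicon units."""
    units, i = [], 0
    while i < len(seq):
        unit = max((u for u in lexicon if seq.startswith(u, i)), key=len)
        units.append(unit)
        i += len(unit)
    return units

def description_length(seq, lexicon, alphabet_size):
    """Two-part code in bits: the codebook (each lexicon entry spelled out
    in primitive symbols) plus the sequence encoded by unit frequency."""
    codebook = sum(len(u) for u in lexicon) * math.log2(alphabet_size)
    units = parse(seq, lexicon)
    counts, n = Counter(units), len(units)
    data = -sum(c * math.log2(c / n) for c in counts.values())
    return codebook + data

seq = "abcabcabcabcabcabcxyabcabc"      # 'abc' recurs; 'xy' is a one-off
primitives = sorted(set(seq))
without = description_length(seq, primitives, len(primitives))
with_chunk = description_length(seq, primitives + ["abc"], len(primitives))
print(f"{without:.1f} bits vs {with_chunk:.1f} bits -> "
      f"{'create' if with_chunk < without else 'reject'} the chunk 'abc'")
```

Because the repeated substring lets most of the sequence be encoded as a single high-frequency unit, the lexicon that includes "abc" yields a markedly shorter total code and the chunk is adopted; on an unpatterned string the extra codebook cost would not be repaid.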