References
  • Non-lexical conversational sounds in American English. Nigel Ward - 2006 - Pragmatics and Cognition 14 (1):129-182.
    Sounds like h-nmm, hh-aaaah, hn-hn, unkay, nyeah, ummum, uuh, um-hm-uh-hm, um and uh-huh occur frequently in American English conversation but have thus far escaped systematic study. This article reports a study of both the forms and functions of such tokens in a corpus of American English conversations. These sounds appear not to be lexical, in that they are productively generated rather than finite in number, and in that the sound-meaning mapping is compositional rather than arbitrary. This implies that English bears (...)
  • Why not model spoken word recognition instead of phoneme monitoring? Jean Vroomen & Beatrice de Gelder - 2000 - Behavioral and Brain Sciences 23 (3):349-350.
    Norris, McQueen & Cutler present a detailed account of the decision stage of the phoneme monitoring task. However, we question whether this contributes to our understanding of the speech recognition process itself, and we fail to see why phonotactic knowledge is playing a role in phoneme recognition.
  • Continuous processing in word recognition at 24 months. Daniel Swingley, John P. Pinto & Anne Fernald - 1999 - Cognition 71 (2):73-108.
  • The Dynamics of Lexical Competition During Spoken Word Recognition. James S. Magnuson, James A. Dixon, Michael K. Tanenhaus & Richard N. Aslin - 2007 - Cognitive Science 31 (1):133-156.
    The sounds that make up spoken words are heard in a series and must be mapped rapidly onto words in memory because their elements, unlike those of visual words, cannot simultaneously exist or persist in time. Although theories agree that the dynamics of spoken word recognition are important, they differ in how they treat the nature of the competitor set—precisely which words are activated as an auditory word form unfolds in real time. This study used eye tracking to measure the (...)
  • How Should a Speech Recognizer Work? Odette Scharenborg, Dennis Norris, Louis ten Bosch & James M. McQueen - 2005 - Cognitive Science 29 (6):867-918.
    Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and implementational levels rather than on the (...)
  • The role of prosodic boundaries in the resolution of lexical embedding in speech comprehension. Anne Pier Salverda, Delphine Dahan & James M. McQueen - 2003 - Cognition 90 (1):51-89.
  • Global model analysis by parameter space partitioning. Mark A. Pitt, Woojae Kim, Daniel J. Navarro & Jay I. Myung - 2006 - Psychological Review 113 (1):57-83.
  • Shortlist B: A Bayesian model of continuous speech recognition. Dennis Norris & James M. McQueen - 2008 - Psychological Review 115 (2):357-395.
  • The effect of simultaneous exposure on the attention selection and integration of segments and lexical tones by Urdu-Cantonese bilingual speakers. Jinghong Ning, Gang Peng, Yi Liu & Yingnan Li - 2022 - Frontiers in Psychology 13.
    In the perceptual learning of lexical tones, an automatic and robust attention-to-phonology system enables native tonal listeners to adapt to acoustically non-optimal speech, such as phonetic conflicts in daily communications. Previous tone research reveals that non-native listeners who do not linguistically employ lexical tones in their mother tongue may find it challenging to attend to the tonal dimension or integrate it with the segmental features. However, it is unknown whether the attentional interference initially caused by a maternal attentional system would (...)
  • The Recognition of Phonologically Assimilated Words Does Not Depend on Specific Language Experience. Holger Mitterer, Valéria Csépe, Ferenc Honbolygo & Leo Blomert - 2006 - Cognitive Science 30 (3):451-479.
    In a series of 5 experiments, we investigated whether the processing of phonologically assimilated utterances is influenced by language learning. Previous experiments had shown that phonological assimilations, such as /lean#bacon/→ [leam bacon], are compensated for in perception. In this article, we investigated whether compensation for assimilation can occur without experience with an assimilation rule using automatic event-related potentials. Our first experiment indicated that Dutch listeners compensate for a Hungarian assimilation rule. Two subsequent experiments, however, failed to show compensation for assimilation (...)
  • Phonological abstraction without phonemes in speech perception. Holger Mitterer, Odette Scharenborg & James M. McQueen - 2013 - Cognition 129 (2):356-361.
  • Listening through the native tongue: A review essay on Cutler's Native listening: Language experience and the recognition of spoken words. Ramesh Kumar Mishra - 2015 - Philosophical Psychology 28 (7):1064-1078.
    Speech perception has been a very productive and important area in psycholinguistics. In this review essay, I discuss Cutler's new book on native language listening. Cutler argues for a theory of speech perception, where all speech perception is accomplished by competence in native speech. I review this book and attempt to situate its main contributions in the broader context of cognitive science.
  • What information is necessary for speech categorization? Harnessing variability in the speech signal by integrating cues computed relative to expectations. Bob McMurray & Allard Jongman - 2011 - Psychological Review 118 (2):219-246.
  • Gradient effects of within-category phonetic variation on lexical access. Bob McMurray, Michael K. Tanenhaus & Richard N. Aslin - 2002 - Cognition 86 (2):B33-B42.
  • Does visual word identification involve a sub-phonemic level? G. Lukatela, T. Eaton, C. Lee & M. T. Turvey - 2001 - Cognition 78 (3):B41-B52.
  • Immediate lexical integration of novel word forms. Efthymia C. Kapnoula, Stephanie Packard, Prahlad Gupta & Bob McMurray - 2015 - Cognition 134:85-99.
  • Are there interactive processes in speech perception? James L. McClelland, Daniel Mirman & Lori L. Holt - 2006 - Trends in Cognitive Sciences 10 (8):363.
  • What makes words sound similar? Ulrike Hahn & Todd M. Bailey - 2005 - Cognition 97 (3):227-267.
  • Ambiguity, Competition, and Blending in Spoken Word Recognition. M. Gareth Gaskell & William D. Marslen-Wilson - 1999 - Cognitive Science 23 (4):439-462.
  • Phoneme‐Order Encoding During Spoken Word Recognition: A Priming Investigation. Sophie Dufour & Jonathan Grainger - 2019 - Cognitive Science 43 (10):e12785.
    In three experiments, we examined priming effects where primes were formed by transposing the first and last phoneme of tri‐phonemic target words (e.g., /byt/ as a prime for /tyb/). Auditory lexical decisions were found not to be sensitive to this transposed‐phoneme priming manipulation in long‐term priming (Experiment 1), with primes and targets presented in two separated blocks of stimuli and with unrelated primes used as control condition (/mul/‐/tyb/), while a long‐term repetition priming effect was observed (/tyb/‐/tyb/). However, a clear transposed‐phoneme (...)
  • Heeding the voice of experience: The role of talker variation in lexical access. Sarah C. Creel, Richard N. Aslin & Michael K. Tanenhaus - 2008 - Cognition 106 (2):633-664.
  • Morphological units in the Arabic mental lexicon. Sami Boudelaa & William D. Marslen-Wilson - 2001 - Cognition 81 (1):65-92.