Results for 'Speech recognition in noise'

1000+ found
  1. Restricted Speech Recognition in Noise and Quality of Life of Hearing-Impaired Children and Adolescents With Cochlear Implants – Need for Studies Addressing This Topic With Valid Pediatric Quality of Life Instruments. Maria Huber & Clara Havas - 2019 - Frontiers in Psychology 10.
    Cochlear implants (CI) support the development of oral language in hearing-impaired children. However, even with CI, speech recognition in noise (SRiN) is limited. This raises the question of whether these restrictions are related to the quality of life (QoL) of children and adolescents with CI, and how SRiN and QoL are related to each other. A systematic literature search found only three studies, indicating positive moderating effects between SRiN and QoL of young CI (...)
  2. Longitudinal Speech Recognition in Noise in Children: Effects of Hearing Status and Vocabulary. Elizabeth A. Walker, Caitlin Sapp, Jacob J. Oleson & Ryan W. McCreery - 2019 - Frontiers in Psychology 10.
  3. Do Age and Linguistic Status Alter the Effect of Sound Source Diffuseness on Speech Recognition in Noise? Meital Avivi-Reich, Rupinder Kaur Sran & Bruce A. Schneider - 2022 - Frontiers in Psychology 13.
    One aspect of auditory scenes that has received very little attention is the level of diffuseness of sound sources. This aspect has increasing importance due to growing use of amplification systems. When an auditory stimulus is amplified and presented over multiple, spatially-separated loudspeakers, the signal’s timbre is altered due to comb filtering. In a previous study we examined how increasing the diffuseness of the sound sources might affect listeners’ ability to recognize speech presented in different types of background (...). Listeners performed similarly when both the target and the masker were presented via a similar number of loudspeakers. However, performance improved when the target was presented using a single speaker and the masker from three spatially separate speakers but worsened when the target was diffuse, and the masker was compact. In the current study, we extended our research to examine whether the effects of timbre changes with age and linguistic experience. Twenty-four older adults whose first language was English and 24 younger adults whose second language was English were asked to repeat non-sense sentences masked by either Noise, Babble, or Speech and their results were compared with those of the Young-EFLs previously tested. Participants were divided into two experimental groups: A Compact-Target group where the target sentences were presented over a single loudspeaker, while the masker was either presented over three loudspeakers or over a single loudspeaker; A Diffuse-Target group, where the target sentences were diffuse while the masker was either compact or diffuse. The results indicate that the Target Timbre has a negligible effect on thresholds when the timbre of the target matches the timbre of the masker in all three groups. When there is a timbre contrast between target and masker, thresholds are significantly lower when the target is compact than when it is diffuse for all three listening groups in a Noise background. However, while this difference is maintained for the Young and Old-EFLs when the masker is Babble or Speech, speech reception thresholds in the Young-ESL group tend to be equivalent for all four combinations of target and masker timbre. (shrink)
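The comb filtering mentioned in the entry above arises whenever the same signal reaches the listener over two paths with a relative delay, as happens when a target is played from multiple spatially separated loudspeakers. A minimal numerical sketch (the 2 ms delay and the frequency grid are illustrative assumptions, not values from the study):

```python
import numpy as np

# Two copies of the same signal with relative delay tau:
#   y(t) = x(t) + x(t - tau)  =>  |H(f)| = |1 + exp(-j*2*pi*f*tau)| = 2*|cos(pi*f*tau)|
tau = 0.002                        # 2 ms path difference (illustrative)
freqs = np.linspace(0, 2000, 9)    # 0, 250, 500, ... 2000 Hz
gain = np.abs(1 + np.exp(-2j * np.pi * freqs * tau))

for f, g in zip(freqs, gain):
    print(f"{f:6.0f} Hz -> gain {g:.2f}")  # notches (gain ~ 0) at 250, 750, 1250, 1750 Hz
```

The regularly spaced spectral notches are what alter the timbre of a diffuse presentation relative to a single-loudspeaker (compact) presentation.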
  4. Age-Related Differences in Lexical Access Relate to Speech Recognition in Noise. Rebecca Carroll, Anna Warzybok, Birger Kollmeier & Esther Ruigendijk - 2016 - Frontiers in Psychology 7.
  5. On Dynamic Pitch Benefit for Speech Recognition in Speech Masker. Jing Shen & Pamela E. Souza - 2018 - Frontiers in Psychology 9.
    Previous work demonstrated that dynamic pitch (i.e., pitch variation in speech) aids speech recognition in various types of noise. While this finding suggests that dynamic pitch enhancement in target speech can benefit speech recognition in noise, it is important to know which noise characteristics affect the dynamic pitch benefit and who will benefit from enhanced dynamic pitch cues. Following our recent finding that temporal modulation in noise influences the dynamic pitch benefit, we examined (...)
  6. Differences in Speech Recognition Between Children with Attention Deficits and Typically Developed Children Disappear When Exposed to 65 dB of Auditory Noise. Göran B. W. Söderlund & Elisabeth Nilsson Jobs - 2016 - Frontiers in Psychology 7.
  7. Audiovisual cues benefit recognition of accented speech in noise but not perceptual adaptation. Briony Banks, Emma Gowen, Kevin J. Munro & Patti Adank - 2015 - Frontiers in Human Neuroscience 9.
  8. Second Language Experience Facilitates Sentence Recognition in Temporally-Modulated Noise for Non-native Listeners. Jingjing Guan, Xuetong Cao & Chang Liu - 2021 - Frontiers in Psychology 12.
    Non-native listeners have much more difficulty than native listeners coping with adverse listening conditions in daily life. However, previous work in our laboratories found that native Chinese listeners with native English exposure may improve their use of the temporal fluctuations of noise for English vowel identification. The purpose of this study was to investigate whether Chinese listeners can generalize the use of temporal cues to English sentence recognition in noise. Institute of Electrical and Electronics Engineers (IEEE) sentence recognition (...)
  9. English Phrase Speech Recognition Based on Continuous Speech Recognition Algorithm and Word Tree Constraints. Haifan Du & Haiwen Duan - 2021 - Complexity 2021:1-11.
    This paper combines domestic and international research results to analyze the differences between the attribute features of English phrase speech and noise, enhancing short-time energy to improve the sensitivity of threshold judgment; noise is added to the discrepancy data set to enhance recognition robustness. The backpropagation algorithm is improved to constrain the range of weight variation, avoid oscillation, and shorten training time. In the real English phrase (...)
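The short-time energy and threshold judgment mentioned in the entry above form a standard speech/noise discrimination front end. A minimal sketch, with frame length, hop size, and threshold chosen as illustrative assumptions rather than the paper's values:

```python
import numpy as np

def short_time_energy(signal, frame_len=400, hop=160):
    """Sum of squared samples per frame (25 ms frames, 10 ms hop at 16 kHz)."""
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.array([np.sum(signal[i * hop: i * hop + frame_len] ** 2)
                     for i in range(n_frames)])

def speech_frames(signal, threshold_ratio=0.1):
    """Flag frames whose energy exceeds a fraction of the peak frame energy."""
    energy = short_time_energy(signal)
    return energy > threshold_ratio * energy.max()

# Example: 1 s of low-level noise with a louder burst in the middle.
rng = np.random.default_rng(0)
x = rng.normal(0.0, 0.01, 16000)
x[6000:10000] += rng.normal(0.0, 0.2, 4000)
print(speech_frames(x).astype(int))
```

Boosting the energy contrast between speech-dominated and noise-only frames, as the abstract describes, makes this kind of threshold more sensitive.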
  10. Speaker Recognition in Uncontrolled Environment: A Review. Ramaswamy Kumaraswamy & Narendra Karamangala - 2013 - Journal of Intelligent Systems 22 (1):49-65.
    Speaker recognition has been an active research area for many years. Methods to represent and quantify the information embedded in the speech signal are termed features of the signal. The features are obtained, modeled, and stored for reference when the system is tested. Decisions to accept or reject speakers are made based on the parameters of the data modeling techniques. The real world introduces various degradations that hamper signal quality. The degradations may (...)
  11. Single-Channel Speech Enhancement Techniques for Distant Speech Recognition. Ramaswamy Kumaraswamy & Jaya Kumar Ashwini - 2013 - Journal of Intelligent Systems 22 (2):81-93.
    This article presents an overview of the single-channel dereverberation methods suitable for distant speech recognition application. The dereverberation methods are mainly classified based on the domain of enhancement of speech signal captured by a distant microphone. Many single-channel speech enhancement methods focus on either denoising or dereverberating the distorted speech signal. There are very few methods that consider both noise and reverberation effects. Such methods are discussed under a multistage approach in this article. The (...)
  12. Speech Perception in Noise Is Associated With Different Cognitive Abilities in Chinese-Speaking Older Adults With and Without Hearing Aids. Yuan Chen, Lena L. N. Wong, Shaina Shing Chan & Joannie Yu - 2022 - Frontiers in Psychology 12.
    Chinese-speaking older adults usually do not perceive a hearing problem until audiometric thresholds exceed 45 dB HL, and the audiometric thresholds of the average hearing-aid user often exceed 60 dB HL. The purpose of this study was to examine the relationships between cognitive and hearing functions in older Chinese adults with hearing aids (HAs) and in those with untreated hearing loss. Participants were 49 older Chinese adults who used HAs and had moderate to severe hearing loss, and 46 older Chinese adults who had mild to moderately (...)
  13. The Principle of Inverse Effectiveness in Audiovisual Speech Perception. Luuk P. H. van de Rijt, Anja Roye, Emmanuel A. M. Mylanus, A. John van Opstal & Marc M. van Wanrooij - 2019 - Frontiers in Human Neuroscience 13:468577.
    We assessed how synchronous speech listening and lipreading affects speech recognition in acoustic noise. In simple audiovisual perceptual tasks, inverse effectiveness is often observed, which holds that the weaker the unimodal stimuli, or the poorer their signal-to-noise ratio, the stronger the audiovisual benefit. So far, however, inverse effectiveness has not been demonstrated for complex audiovisual speech stimuli. Here we assess whether this multisensory integration effect can also be observed for the recognizability of spoken words. (...)
  14. One Size Does Not Fit All: Examining the Effects of Working Memory Capacity on Spoken Word Recognition in Older Adults Using Eye Tracking. Gal Nitsan, Karen Banai & Boaz M. Ben-David - 2022 - Frontiers in Psychology 13.
    Difficulty understanding speech is one of the most prevalent complaints among older adults. Successful speech perception depends on top-down linguistic and cognitive processes that interact with the bottom-up sensory processing of the incoming acoustic information. The relative roles of these processes in age-related difficulties in speech perception, especially when listening conditions are not ideal, are still unclear. In the current study, we asked whether older adults with a larger working memory capacity process speech more efficiently than (...)
  15. Masked Speech Recognition in School-Age Children. Lori J. Leibold & Emily Buss - 2019 - Frontiers in Psychology 10.
  16. Age Differences in Speech Perception in Noise and Sound Localization in Individuals With Subjective Normal Hearing. Tobias Weissgerber, Carmen Müller, Timo Stöver & Uwe Baumann - 2022 - Frontiers in Psychology 13.
    Hearing loss in old age, which often goes untreated, has far-reaching consequences. Reduced cognitive abilities and dementia can also occur, further affecting quality of life. The aim of this study was to investigate the hearing performance of seniors without hearing complaints with respect to speech perception in noise and the ability to localize sounds. Results were tested for correlations with age and cognitive performance. The study included 40 subjects aged between 60 and 90 years with (...)
  17. Lexical Effects on the Perceived Clarity of Noise-Vocoded Speech in Younger and Older Listeners. Terrin N. Tamati, Victoria A. Sevich, Emily M. Clausing & Aaron C. Moberly - 2022 - Frontiers in Psychology 13.
    When listening to degraded speech, such as speech delivered by a cochlear implant, listeners make use of top-down linguistic knowledge to facilitate speech recognition. Lexical knowledge supports speech recognition and enhances the perceived clarity of speech. Yet, the extent to which lexical knowledge can be used to effectively compensate for degraded input may depend on the degree of degradation and the listener’s age. The current study investigated lexical effects in the compensation for (...) that was degraded via noise-vocoding in younger and older listeners. In an online experiment, younger and older normal-hearing listeners rated the clarity of noise-vocoded sentences on a scale from 1 to 7. Lexical information was provided by matching text primes and the lexical content of the target utterance. Half of the sentences were preceded by a matching text prime, while half were preceded by a non-matching prime. Each sentence also consisted of three key words of high or low lexical frequency and neighborhood density. Sentences were processed to simulate CI hearing, using an eight-channel noise vocoder with varying filter slopes. Results showed that lexical information impacted the perceived clarity of noise-vocoded speech. Noise-vocoded speech was perceived as clearer when preceded by a matching prime, and when sentences included key words with high lexical frequency and low neighborhood density. However, the strength of the lexical effects depended on the level of degradation. Matching text primes had a greater impact for speech with poorer spectral resolution, but lexical content had a smaller impact for speech with poorer spectral resolution. Finally, lexical information appeared to benefit both younger and older listeners. Findings demonstrate that lexical knowledge can be employed by younger and older listeners in cognitive compensation during the processing of noise-vocoded speech. However, lexical content may not be as reliable when the signal is highly degraded. Clinical implications are that for adult CI users, lexical knowledge might be used to compensate for the degraded speech signal, regardless of age, but some CI users may be hindered by a relatively poor signal. (shrink)
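Noise vocoding of the kind this study uses to simulate cochlear implant hearing can be sketched as follows. The band edges, filter orders, and envelope cutoff below are common choices in the vocoding literature, assumed here for illustration rather than taken from the paper:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def noise_vocode(speech, fs, n_channels=8, f_lo=100.0, f_hi=7000.0, env_cutoff=300.0):
    """Replace the fine structure in each band with envelope-modulated noise."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)        # log-spaced band edges
    env_sos = butter(2, env_cutoff, btype="low", fs=fs, output="sos")
    noise = np.random.default_rng(0).normal(size=len(speech))
    out = np.zeros(len(speech))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, speech)                 # analysis band
        envelope = sosfiltfilt(env_sos, np.abs(band))        # smoothed band envelope
        carrier = sosfiltfilt(band_sos, noise)               # band-limited noise carrier
        out += envelope * carrier
    return out

# Usage with a 16 kHz mono signal `x` (not shown): vocoded = noise_vocode(x, fs=16000)
```

Making the band filters steeper or shallower (the fourth-order Butterworth here is one assumed choice) is one way to vary the filter slopes, which is the spectral-resolution manipulation the abstract refers to.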
  18. Detection and Recognition of Asynchronous Auditory/Visual Speech: Effects of Age, Hearing Loss, and Talker Accent. Sandra Gordon-Salant, Maya S. Schwartz, Kelsey A. Oppler & Grace H. Yeni-Komshian - 2022 - Frontiers in Psychology 12.
    This investigation examined age-related differences in auditory-visual integration as reflected on perceptual judgments of temporally misaligned AV English sentences spoken by native English and native Spanish talkers. In the detection task, it was expected that slowed auditory temporal processing of older participants, relative to younger participants, would be manifest as a shift in the range over which participants would judge asynchronous stimuli as synchronous. The older participants were also expected to exhibit greater declines in speech recognition for asynchronous (...)
  19. Exploring the Role of Brain Oscillations in Speech Perception in Noise: Intelligibility of Isochronously Retimed Speech. Vincent Aubanel, Chris Davis & Jeesun Kim - 2016 - Frontiers in Human Neuroscience 10.
  20. Age of Acquisition Modulates Alpha Power During Bilingual Speech Comprehension in Noise. Angela M. Grant, Shanna Kousaie, Kristina Coulter, Annie C. Gilbert, Shari R. Baum, Vincent Gracco, Debra Titone, Denise Klein & Natalie A. Phillips - 2022 - Frontiers in Psychology 13.
    Research on bilingualism has grown exponentially in recent years. However, the comprehension of speech in noise, given the ubiquity of both bilingualism and noisy environments, has seen only limited focus. Electroencephalogram studies in monolinguals show an increase in alpha power when listening to speech in noise, which, in the theoretical context where alpha power indexes attentional control, is thought to reflect an increase in attentional demands. In the current study, English/French bilinguals with similar second language proficiency (...)
  21. The Bilingual Disadvantage in Speech Understanding in Noise Is Likely a Frequency Effect Related to Reduced Language Exposure. Jens Schmidtke - 2016 - Frontiers in Psychology 7.
  22. Interactions between acoustic challenges and processing depth in speech perception as measured by task-evoked pupil response. Jing Shen, Laura P. Fitzgerald & Erin R. Kulick - 2022 - Frontiers in Psychology 13.
    Speech perception under adverse conditions is a multistage process involving a dynamic interplay among acoustic, cognitive, and linguistic factors. Nevertheless, prior research has primarily focused on factors within this complex system in isolation. The primary goal of the present study was to examine the interaction between processing depth and the acoustic challenge of noise and its effect on processing effort during speech perception in noise. Two tasks were used to represent different depths of processing. The (...) recognition task involved repeating back a sentence after auditory presentation, while the tiredness judgment task entailed a subjective judgment of whether the speaker sounded tired. The secondary goal of the study was to investigate whether pupil response to alteration of dynamic pitch cues stems from difficult linguistic processing of speech content in noise or a perceptual novelty effect due to the unnatural pitch contours. Task-evoked peak pupil response from two groups of younger adult participants with typical hearing was measured in two experiments. Both tasks were implemented in both experiments, and stimuli were presented with background noise in Experiment 1 and without noise in Experiment 2. Increased peak pupil dilation was associated with deeper processing, particularly in the presence of background noise. Importantly, there is a non-additive interaction between noise and task, as demonstrated by the heightened peak pupil dilation to noise in the speech recognition task as compared to in the tiredness judgment task. Additionally, peak pupil dilation data suggest dynamic pitch alteration induced an increased perceptual novelty effect rather than reflecting effortful linguistic processing of the speech content in noise. These findings extend current theories of speech perception under adverse conditions by demonstrating that the level of processing effort expended by a listener is influenced by the interaction between acoustic challenges and depth of linguistic processing. The study also provides a foundation for future work to investigate the effects of this complex interaction in clinical populations who experience both hearing and cognitive challenges. (shrink)
  23. Speech-in-Noise Perception in Children With Cochlear Implants, Hearing Aids, Developmental Language Disorder and Typical Development: The Effects of Linguistic and Cognitive Abilities. Janne von Koss Torkildsen, Abigail Hitchins, Marte Myhrum & Ona Bø Wie - 2019 - Frontiers in Psychology 10.
  24. Merging information in speech recognition: Feedback is never necessary. Dennis Norris, James M. McQueen & Anne Cutler - 2000 - Behavioral and Brain Sciences 23 (3):299-325.
    Top-down feedback does not benefit speech recognition; on the contrary, it can hinder it. No experimental data imply that feedback loops are required for speech recognition. Feedback is accordingly unnecessary and spoken word recognition is modular. To defend this thesis, we analyse lexical involvement in phonemic decision making. TRACE (McClelland & Elman 1986), a model with feedback from the lexicon to prelexical processes, is unable to account for all the available data on phonemic decision making. (...)
  25. How does susceptibility to proactive interference relate to speech recognition in aided and unaided conditions? Rachel J. Ellis & Jerker Rönnberg - 2015 - Frontiers in Psychology 6.
  26. Exploring the Link Between Cognitive Abilities and Speech Recognition in the Elderly Under Different Listening Conditions. Theresa Nuesse, Rike Steenken, Tobias Neher & Inga Holube - 2018 - Frontiers in Psychology 9.
  27. Neutrosophic speech recognition Algorithm for speech under stress by Machine learning. Florentin Smarandache, D. Nagarajan & Said Broumi - 2023 - Neutrosophic Sets and Systems 53.
    It is well known that the unpredictable speech production brought on by stress from the task at hand has a significant negative impact on the performance of speech processing algorithms. Speech therapy benefits from being able to detect stress in speech. Speech processing performance suffers noticeably when perceptually produced stress causes variations in speech production. Using the acoustic speech signal to objectively characterize speaker stress is one method for assessing production variances brought on (...)
  28. Modelling asynchrony in automatic speech recognition using loosely coupled hidden Markov models. H. J. Nock & S. J. Young - 2002 - Cognitive Science 26 (3):283-301.
    Hidden Markov models (HMMs) have been successful for modelling the dynamics of carefully dictated speech, but their performance degrades severely when used to model conversational speech. Since speech is produced by a system of loosely coupled articulators, stochastic models explicitly representing this parallelism may have advantages for automatic speech recognition (ASR), particularly when trying to model the phonological effects inherent in casual spontaneous speech. This paper presents a preliminary feasibility study of one such model (...)
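For context on the entry above: a conventional single-chain HMM scores an observation sequence with the forward recursion sketched below; the loosely coupled models the paper studies generalize this by running several such chains (one per articulatory stream) with weak coupling between their states. This is the standard single-chain baseline only, not the paper's coupled model, and the toy numbers are illustrative:

```python
import numpy as np

def hmm_forward_loglik(log_A, log_pi, log_B):
    """Log-likelihood of an observation sequence under a single-chain HMM.
    log_A: (S, S) log transition matrix; log_pi: (S,) log initial state probabilities;
    log_B: (T, S) per-frame log observation likelihoods for each state."""
    T, _ = log_B.shape
    alpha = log_pi + log_B[0]                                  # log alpha at frame 0
    for t in range(1, T):
        # logsumexp over previous states, then add the current observation term
        alpha = log_B[t] + np.logaddexp.reduce(alpha[:, None] + log_A, axis=0)
    return np.logaddexp.reduce(alpha)

# Toy 2-state model and 3 frames of (precomputed) observation log-likelihoods.
log_A = np.log([[0.9, 0.1], [0.2, 0.8]])
log_pi = np.log([0.5, 0.5])
log_B = np.log([[0.7, 0.1], [0.6, 0.2], [0.1, 0.8]])
print(hmm_forward_loglik(log_A, log_pi, log_B))
```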
  29. Gated audiovisual speech identification in silence vs. noise: effects on time and accuracy. Shahram Moradi, Björn Lidestam & Jerker Rönnberg - 2013 - Frontiers in Psychology 4.
  30. Understanding Miscommunication: Speech Act Recognition in Digital Contexts. Thomas Holtgraves - 2021 - Cognitive Science 45 (10):e13023.
    Successful language use requires accurate intention recognition. However, sometimes this can be undermined because communication occurs within an interpersonal context. In this research, I used a relatively large set of speech acts (n = 32) and explored how variability in their inherent face‐threat influences the extent to which they are successfully recognized by a recipient, as well as the confidence of senders and receivers in their communicative success. Participants in two experiments either created text messages (senders) designed to (...)
  31. Perceptual units in speech recognition. Dominic W. Massaro - 1974 - Journal of Experimental Psychology 102 (2):199.
  32. Automatic Speech Recognition: A Comprehensive Survey. Arbana Kadriu & Amarildo Rista - 2020 - Seeu Review 15 (2):86-112.
    Speech recognition is an interdisciplinary subfield of natural language processing (NLP) that facilitates the recognition and translation of spoken language into text by machine. Speech recognition plays an important role in digital transformation. It is widely used in different areas such as education, industry, and healthcare and has recently been used in many Internet of Things and Machine Learning applications. The process of speech recognition is one of the most difficult processes in computer (...)
  33. Mandarin-Speaking Children’s Speech Recognition: Developmental Changes in the Influences of Semantic Context and F0 Contours. Zhou Hong, Li Yu, Liang Meng, Guan Connie Qun, Zhang Linjun, Shu Hua & Zhang Yang - 2017 - Frontiers in Psychology 8.
  34. Effects of Wearing Face Masks While Using Different Speaking Styles in Noise on Speech Intelligibility During the COVID-19 Pandemic. Hoyoung Yi, Ashly Pingsterhaus & Woonyoung Song - 2021 - Frontiers in Psychology 12.
    The coronavirus pandemic has resulted in the recommended/required use of face masks in public. The use of a face mask compromises communication, especially in the presence of competing noise. It is crucial to measure the potential effects of wearing face masks on speech intelligibility in noisy environments where excessive background noise can create communication challenges. The effects of wearing transparent face masks and using clear speech to facilitate better verbal communication were evaluated in this study. We (...)
  35. A Clinical Paradigm for Listening Effort Assessment in Middle-Aged Listeners. Ricky Kaplan Neeman, Ilan Roziner & Chava Muchnik - 2022 - Frontiers in Psychology 13.
    Listening effort (LE) is known to characterize speech recognition in noise regardless of hearing sensitivity and age. Whereas the behavioral dual-task paradigm effectively manifests the cognitive cost that listeners exert when processing speech in background noise, there is no consensus on a clinical procedure that best expresses LE. In order to assess the cognitive load underlying speech recognition in noise and promote counselling on coping strategies, a feasible clinical (...)
  36. Merging information versus speech recognition. Irene Appelbaum - 2000 - Behavioral and Brain Sciences 23 (3):325-326.
    Norris, McQueen & Cutler claim that all known speech recognition data can be accounted for with their autonomous model, “Merge.” But this claim is doubly misleading. (1) Although speech recognition is autonomous in their view, the Merge model is not. (2) The body of data which the Merge model accounts for is not, in their view, speech recognition data. Footnote 1: The author is also affiliated with the Center for the Study of Language and Information, Stanford (...)
  37. Do Musicians and Non-musicians Differ in Speech-on-Speech Processing? Elif Canseza Kaplan, Anita E. Wagner, Paolo Toffanin & Deniz Başkent - 2021 - Frontiers in Psychology 12.
    Earlier studies have shown that musically trained individuals may have a benefit in adverse listening situations when compared to non-musicians, especially in speech-on-speech perception. However, the literature provides mostly conflicting results. In the current study, by employing different measures of spoken language processing, we aimed to test whether we could capture potential differences between musicians and non-musicians in speech-on-speech processing. We used an offline measure of speech perception, which reveals a post-task response, and online measures (...)
  38. Multi-Talker Speech Promotes Greater Knowledge-Based Spoken Mandarin Word Recognition in First and Second Language Listeners. Seth Wiener & Chao-Yang Lee - 2020 - Frontiers in Psychology 11.
    Spoken word recognition involves a perceptual tradeoff between the reliance on the incoming acoustic signal and knowledge about likely sound categories and their co-occurrences as words. This study examined how adult second language (L2) learners navigate between acoustic-based and knowledge-based spoken word recognition when listening to highly variable, multi-talker truncated speech, and whether this perceptual tradeoff changes as L2 listeners gradually become more proficient in their L2 after multiple months of structured classroom learning. First language (L1) Mandarin (...)
  39. A State-of-the-Art Review of EEG-Based Imagined Speech Decoding. Diego Lopez-Bernal, David Balderas, Pedro Ponce & Arturo Molina - 2022 - Frontiers in Human Neuroscience 16:867281.
    Currently, the most used method to measure brain activity under a non-invasive procedure is the electroencephalogram (EEG). This is because of its high temporal resolution, ease of use, and safety. These signals can be used under a Brain Computer Interface (BCI) framework, which can be implemented to provide a new communication channel to people that are unable to speak due to motor disabilities or other neurological diseases. Nevertheless, EEG-based BCI systems have presented challenges to be implemented in real life situations (...)
  40. DLD: An Optimized Chinese Speech Recognition Model Based on Deep Learning. Hong Lei, Yue Xiao, Yanchun Liang, Dalin Li & Heow Pueh Lee - 2022 - Complexity 2022:1-8.
    Speech recognition technology has played an indispensable role in realizing human-computer intelligent interaction. However, most current Chinese speech recognition systems are offered either online or as offline models with low accuracy and poor performance. To improve the performance of offline Chinese speech recognition, we propose a hybrid acoustic model combining a deep convolutional neural network, long short-term memory, and a deep neural network. This model uses the DCNN to reduce frequency variation and adds a batch normalization layer (...)
  41. Transcranial Alternating Current Stimulation With the Theta-Band Portion of the Temporally-Aligned Speech Envelope Improves Speech-in-Noise Comprehension. Mahmoud Keshavarzi & Tobias Reichenbach - 2020 - Frontiers in Human Neuroscience 14.
  42. Alteration of the dynamic modulation of auditory beta-band oscillations by voice power during speech-in-noise. Vander Ghinst Marc, Bourguignon Mathieu, Wens Vincent, Marty Brice, Op De Beeck Marc, Van Bogaert Patrick, Goldman Serge & De Tiège Xavier - 2014 - Frontiers in Human Neuroscience 8.
  43. EEG Correlates of Learning From Speech Presented in Environmental Noise. Ehsan Eqlimi, Annelies Bockstael, Bert De Coensel, Marc Schönwiesner, Durk Talsma & Dick Botteldooren - 2020 - Frontiers in Psychology 11.
  44. Speech acts in mathematics. Marco Ruffino, Luca San Mauro & Giorgio Venturi - 2020 - Synthese 198 (10):10063-10087.
    We offer a novel picture of mathematical language from the perspective of speech act theory. There are distinct speech acts within mathematics, and, as we intend to show, distinct illocutionary force indicators as well. Even mathematics in its most formalized version cannot do without some such indicators. This goes against a certain orthodoxy both in contemporary philosophy of mathematics and in speech act theory. As we will comment, the recognition of distinct illocutionary acts within logic and (...)
  45. The development of the orthographic consistency effect in speech recognition: From sublexical to lexical involvement. Paulo Ventura, José Morais & Régine Kolinsky - 2007 - Cognition 105 (3):547-576.
  46. EARSHOT: A Minimal Neural Network Model of Incremental Human Speech Recognition. James S. Magnuson, Heejo You, Sahil Luthra, Monica Li, Hosung Nam, Monty Escabí, Kevin Brown, Paul D. Allopenna, Rachel M. Theodore, Nicholas Monto & Jay G. Rueckl - 2020 - Cognitive Science 44 (4):e12823.
    Despite the lack of invariance problem (the many‐to‐many mapping between acoustics and percepts), human listeners experience phonetic constancy and typically perceive what a speaker intends. Most models of human speech recognition (HSR) have side‐stepped this problem, working with abstract, idealized inputs and deferring the challenge of working with real speech. In contrast, carefully engineered deep learning networks allow robust, real‐world automatic speech recognition (ASR). However, the complexities of deep learning architectures and training regimens make it (...)
  47. Difficulties Experienced by Older Listeners in Utilizing Voice Cues for Speaker Discrimination. Yael Zaltz & Liat Kishon-Rabin - 2022 - Frontiers in Psychology 13.
    Human listeners are assumed to apply different strategies to improve speech recognition in background noise. For example, young listeners with normal hearing have been shown to follow the voice of a particular speaker based on the fundamental and formant frequencies, which are both influenced by the gender, age, and size of the speaker. However, the auditory and cognitive processes that underlie the extraction and discrimination of these voice cues across speakers may be subject to age-related decline. The present (...)
  48. Speech Perception in Older Adults: An Interplay of Hearing, Cognition, and Learning? Liat Shechter Shvartzman, Limor Lavie & Karen Banai - 2022 - Frontiers in Psychology 13.
    Older adults with age-related hearing loss exhibit substantial individual differences in speech perception in adverse listening conditions. We propose that the ability to rapidly adapt to changes in the auditory environment is among the processes contributing to these individual differences, in addition to the cognitive and sensory processes that were explored in the past. Seventy older adults with age-related hearing loss participated in this study. We assessed the relative contribution of hearing acuity, cognitive factors, rapid perceptual learning of time-compressed (...)
  49. A Hybrid of Deep CNN and Bidirectional LSTM for Automatic Speech Recognition. Rajesh Kumar Aggarwal & Vishal Passricha - 2019 - Journal of Intelligent Systems 29 (1):1261-1274.
    Deep neural networks (DNNs) have been playing a significant role in acoustic modeling. Convolutional neural networks (CNNs) are an advanced variant of DNNs and achieve a 4–12% relative improvement in word error rate (WER) over DNNs. The spectral variations and local correlations in the speech signal make CNNs well suited to speech recognition. Recently, it has been demonstrated that bidirectional long short-term memory (BLSTM) networks produce higher recognition rates in acoustic modeling because they are adequate to reinforce (...)
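A minimal PyTorch sketch of the kind of CNN-plus-bidirectional-LSTM acoustic model the entry above describes; the feature dimensions, layer sizes, and number of output classes are illustrative assumptions, not the authors' configuration:

```python
import torch
import torch.nn as nn

class CNNBiLSTMAcousticModel(nn.Module):
    """Convolution over time-frequency features, a bidirectional LSTM over time,
    and a per-frame linear layer over phone/senone classes."""
    def __init__(self, n_mels=40, n_classes=100, hidden=256):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.MaxPool2d(kernel_size=(1, 2)),            # pool along frequency only
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
        )
        self.blstm = nn.LSTM(32 * (n_mels // 2), hidden, num_layers=2,
                             batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * hidden, n_classes)

    def forward(self, feats):                   # feats: (batch, time, n_mels)
        x = feats.unsqueeze(1)                  # (batch, 1, time, n_mels)
        x = self.conv(x)                        # (batch, 32, time, n_mels // 2)
        x = x.permute(0, 2, 1, 3).flatten(2)    # (batch, time, 32 * n_mels // 2)
        x, _ = self.blstm(x)                    # (batch, time, 2 * hidden)
        return self.out(x)                      # per-frame class scores

model = CNNBiLSTMAcousticModel()
scores = model(torch.randn(4, 200, 40))         # 4 utterances, 200 frames, 40 mel bands
print(scores.shape)                             # torch.Size([4, 200, 100])
```

In a full system these per-frame scores would feed a sequence criterion such as CTC or a cross-entropy loss against frame-level alignments.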
  50. A Multiscale Chaotic Feature Extraction Method for Speaker Recognition. Jiang Lin, Yi Yumei, Zhang Maosheng, Chen Defeng, Wang Chao & Wang Tonghan - 2020 - Complexity 2020:1-9.
    In speaker recognition systems, feature extraction is a challenging task under environmental noise conditions. To improve the robustness of the features, we propose a multiscale chaotic feature for speaker recognition. We use a multiresolution analysis technique to capture finer information about different speakers in the frequency domain. We then extract the chaotic characteristics of speech based on a nonlinear dynamic model, which helps to improve the discriminability of the features. Finally, we use a GMM-UBM model to develop (...)
1 — 50 / 1000