Results for 'speech signal'

992 found
  1.
    Linear correlates in the speech signal: The orderly output constraint. Harvey M. Sussman, David Fruchter, Jon Hilbert & Joseph Sirosh - 1998 - Behavioral and Brain Sciences 21 (2):241-259.
    Neuroethological investigations of mammalian and avian auditory systems have documented species-specific specializations for processing complex acoustic signals that could, if viewed in abstract terms, have an intriguing and striking relevance for human speech sound categorization and representation. Each species forms biologically relevant categories based on combinatorial analysis of information-bearing parameters within the complex input signal. This target article uses known neural models from the mustached bat and barn owl to develop, by analogy, a conceptualization of human processing of (...)
    6 citations
  2.
    Linear correlates in the speech signal: Consequences of the specific use of an acoustic tube? René Carré - 1998 - Behavioral and Brain Sciences 21 (2):261-262.
    The debate on the origin of the locus equation is circular. In this commentary the locus equation is obtained by way of a theoretical model based on acoustics without recourse to articulatory knowledge or perceptual constraints. The proposed model is driven by criteria of minimum energy and maximum simplicity.
  3.
    Time-Order Representation Based Method for Epoch Detection from Speech Signals. Ram Bilas Pachori & Pooja Jain - 2012 - Journal of Intelligent Systems 21 (1):79-95.
    Epochs present in voiced speech are defined as time instants of significant excitation of the vocal tract system during the production of speech. The nonstationary nature of the excitation source and vocal tract system makes accurate identification of epochs a difficult task. Most of the existing methods for epoch detection require prior knowledge of voiced regions and a rough estimation of pitch frequency. In this paper, we propose a novel method that relies on time-order representation based on short-time (...)
  4.
    What information is necessary for speech categorization? Harnessing variability in the speech signal by integrating cues computed relative to expectations. Bob McMurray & Allard Jongman - 2011 - Psychological Review 118 (2):219-246.
  5.
    Correlates of linguistic rhythm in the speech signal. Franck Ramus, Marina Nespor & Jacques Mehler - 2000 - Cognition 75 (1):AD3-AD30.
    10 citations
  6.
    Correlates of linguistic rhythm in the speech signal. Franck Ramus, Marina Nespor & Jacques Mehler - 1999 - Cognition 73 (3):265-292.
  7.
    When to simulate and when to associate? Accounting for inter-talker variability in the speech signal. Alison M. Trude - 2013 - Behavioral and Brain Sciences 36 (4):375-376.
    Pickering & Garrod's (P&G's) theory could be modified to describe how listeners rapidly incorporate context to generate predictions about speech despite inter-talker variability. However, in order to do so, the content of predicted percepts must be expanded to include phonetic information. Further, the way listeners identify and represent inter-talker differences and subsequently determine which prediction method to use would require further specification.
    1 citation
  8. Do the laws of form apply to speech signals? R. E. Remez & P. E. Rubin - 1989 - Bulletin of the Psychonomic Society 27 (6):497-497.
  9. Principles of auditory organization versus the speech signal. R. E. Remez, S. M. Berns & P. E. Rubin - 1990 - Bulletin of the Psychonomic Society 28 (6):525-525.
     
  10.
    Visual signal detection as a function of sequential variability of simultaneous speech. John S. Antrobus & Jerome L. Singer - 1964 - Journal of Experimental Psychology 68 (6):603.
  11.
    Speech intelligibility and recall of first and second language words heard at different signal-to-noise ratios. Staffan Hygge, Anders Kjellberg & Anatole Nöstl - 2015 - Frontiers in Psychology 6.
    3 citations
  12.
    The Learning Signal in Perceptual Tuning of Speech: Bottom Up Versus Top‐Down Information. Xujin Zhang, Yunan Charles Wu & Lori L. Holt - 2021 - Cognitive Science 45 (3):e12947.
    Cognitive systems face a tension between stability and plasticity. The maintenance of long‐term representations that reflect the global regularities of the environment is often at odds with pressure to flexibly adjust to short‐term input regularities that may deviate from the norm. This tension is abundantly clear in speech communication when talkers with accents or dialects produce input that deviates from a listener's language community norms. Prior research demonstrates that when bottom‐up acoustic information or top‐down word knowledge is available to (...)
    1 citation
  13. Inner Speech. Peter Langland-Hassan - forthcoming - WIREs Cognitive Science.
    Inner speech travels under many aliases: the inner voice, verbal thought, thinking in words, internal verbalization, “talking in your head,” the “little voice in the head,” and so on. It is both a familiar element of first-person experience and a psychological phenomenon whose complex cognitive components and distributed neural bases are increasingly well understood. There is evidence that inner speech plays a variety of cognitive roles, from enabling abstract thought, to supporting metacognition, memory, and executive function. One active (...)
    10 citations
  14.
    Neutrosophic speech recognition Algorithm for speech under stress by Machine learning. Florentin Smarandache, D. Nagarajan & Said Broumi - 2023 - Neutrosophic Sets and Systems 53.
    It is well known that the unpredictable speech production brought on by stress from the task at hand has a significant negative impact on the performance of speech processing algorithms. Speech therapy benefits from being able to detect stress in speech. Speech processing performance suffers noticeably when perceptually produced stress causes variations in speech production. Using the acoustic speech signal to objectively characterize speaker stress is one method for assessing production variances brought (...)
  15.
    Why Early Tactile Speech Aids May Have Failed: No Perceptual Integration of Tactile and Auditory Signals. Aurora Rizza, Alexander V. Terekhov, Guglielmo Montone, Marta Olivetti-Belardinelli & J. Kevin O’Regan - 2018 - Frontiers in Psychology 9.
  16.
    Talker adaptation in speech perception: Adjusting the signal or the representations? Delphine Dahan, Sarah J. Drucker & Rebecca A. Scarborough - 2008 - Cognition 108 (3):710-718.
    10 citations
  17.
    Talker adaptation in speech perception: Adjusting the signal or the representations? Delphine Dahan, Sarah J. Drucker & Rebecca A. Scarborough - 2008 - Cognition 108 (3):710.
    7 citations
  18.
    Conventionalisation and discrimination as competing pressures on continuous speech-like signals. Hannah Little, Kerem Eryilmaz & Bart de Boer - 2017 - Interaction Studies 18 (3):352-375.
    Arbitrary communication systems can emerge from iconic beginnings through processes of conventionalisation via interaction. Here, we explore whether this process of conventionalisation occurs with continuous, auditory signals. We conducted an artificial signalling experiment. Participants either created signals for themselves, or for a partner in a communication game. We found no evidence that the speech-like signals in our experiment became less iconic or simpler through interaction. We hypothesise that the reason for our results is that when it is difficult to (...)
    2 citations
  19.
    “Motherese” Prosody in Fetal-Directed Speech: An Exploratory Study Using Automatic Social Signal Processing. Erika Parlato-Oliveira, Catherine Saint-Georges, David Cohen, Hugues Pellerin, Isabella Marques Pereira, Catherine Fouillet, Mohamed Chetouani, Marc Dommergues & Sylvie Viaux-Savelon - 2021 - Frontiers in Psychology 12.
    Introduction: Motherese, or emotional infant directed speech, is the specific form of speech used by parents to address their infants. The prosody of IDS has affective properties, expresses caregiver involvement, is a marker of caregiver-infant interaction quality. IDS prosodic characteristics can be detected with automatic analysis. We aimed to explore whether pregnant women “speak” to their unborn baby, whether they use motherese while speaking and whether anxio-depressive or obstetrical status impacts speaking to the fetus.Participants and Methods: We conducted (...)
    1 citation
  20.
    A hidden Markov optimization model for processing and recognition of English speech feature signals. Yinchun Chen - 2022 - Journal of Intelligent Systems 31 (1):716-725.
    Speech recognition plays an important role in human–computer interaction. The higher the accuracy and efficiency of speech recognition, the greater the improvement in human–computer interaction performance. This article briefly introduced the hidden Markov model (HMM)-based English speech recognition algorithm and combined it with a back-propagation neural network (BPNN) to further improve the recognition accuracy and reduce the recognition time of English speech. Then, the BPNN-combined HMM algorithm was simulated and compared with the HMM algorithm and the (...)
  21.
    Seeking Temporal Predictability in Speech: Comparing Statistical Approaches on 18 World Languages. Yannick Jadoul, Andrea Ravignani, Bill Thompson, Piera Filippi & Bart de Boer - 2016 - Frontiers in Human Neuroscience 10:196337.
    Temporal regularities in speech, such as interdependencies in the timing of speech events, are thought to scaffold early acquisition of the building blocks in speech. By providing on-line clues to the location and duration of upcoming syllables, temporal structure may aid segmentation and clustering of continuous speech into separable units. This hypothesis tacitly assumes that learners exploit predictability in the temporal structure of speech. Existing measures of speech timing tend to focus on first-order regularities (...)
    3 citations
  22. Linguistic Intuitions: Error Signals and the Voice of Competence. Steven Gross - 2020 - In Samuel Schindler, Anna Drożdżowicz & Karen Brøcker (eds.), Linguistic Intuitions: Evidence and Method. Oxford, UK: Oxford University Press.
    Linguistic intuitions are a central source of evidence across a variety of linguistic domains. They have also long been a source of controversy. This chapter aims to illuminate the etiology and evidential status of at least some linguistic intuitions by relating them to error signals of the sort posited by accounts of on-line monitoring of speech production and comprehension. The suggestion is framed as a novel reply to Michael Devitt’s claim that linguistic intuitions are theory-laden “central systems” responses, rather (...)
    4 citations
  23.
    Can you hear my age? Influences of speech rate and speech spontaneity on estimation of speaker age. Sara Skoog Waller, Mårten Eriksson & Patrik Sörqvist - 2015 - Frontiers in Psychology 6:144456.
    Cognitive hearing science is mainly about the study of how cognitive factors contribute to speech comprehension, but cognitive factors also partake in speech processing to infer non-linguistic information from speech signals, such as the intentions of the talker and the speaker’s age. Here, we report two experiments on age estimation by “naïve” listeners. The aim was to study how speech rate influences estimation of speaker age by comparing the speakers’ natural speech rate with increased or (...)
    1 citation
  24.
    23 A voice is worth a thousand words: the implications of the micro-coding of social signals in speech for trust research. Benjamin Waber & Michele Williams - 2012 - In Fergus Lyon, Guido Möllering & Mark Saunders (eds.), Handbook of research methods on trust. Northampton, Mass.: Edward Elgar. pp. 249.
  25.
    Close Reading with Computers: Genre Signals, Parts of Speech, and David Mitchell’s Cloud Atlas. Martin Paul Eve - 2017 - Substance 46 (3):76-104.
    Reading literature with the aid of computational techniques is controversial. For some, digital approaches apparently fetishize the curation of textual archives, lack interpretative rigor, and are thoroughly ’neoliberal’ in their pursuit of Silicon Valley-esque software-tool production. For others, the potential benefits of amplifying reading-labor-power through non-consumptive use of book corpora fulfills the dreams of early twentieth-century Russian formalism and yields new, distant ways in which we can consider textual pattern-making (Jockers; Moretti, Distant Reading; Moretti...
    1 citation
  26.
    Pure word deafness and the bilateral processing of the speech code. David Poeppel - 2001 - Cognitive Science 25 (5):679-693.
    The analysis of pure word deafness (PWD) suggests that speech perception, construed as the integration of acoustic information to yield representations that enter into the linguistic computational system, (i) is separable in a modular sense from other aspects of auditory cognition and (ii) is mediated by the posterior superior temporal cortex in both hemispheres. PWD data are consistent with neuropsychological and neuroimaging evidence in a manner that suggests that the speech code is analyzed bilaterally. The typical lateralization associated (...)
    13 citations
  27.
    Identifying Selective Auditory Attention to Speech from Electrocorticographic Signals. Karen Dijkstra, Peter Brunner, Aysegul Gunduz, William Coon, Anthony Ritaccio, Jason Farquhar & Gerwin Schalk - 2015 - Frontiers in Human Neuroscience 9.
  28.
    Single-Channel Speech Enhancement Techniques for Distant Speech Recognition. Ramaswamy Kumaraswamy & Jaya Kumar Ashwini - 2013 - Journal of Intelligent Systems 22 (2):81-93.
    This article presents an overview of the single-channel dereverberation methods suitable for distant speech recognition application. The dereverberation methods are mainly classified based on the domain of enhancement of speech signal captured by a distant microphone. Many single-channel speech enhancement methods focus on either denoising or dereverberating the distorted speech signal. There are very few methods that consider both noise and reverberation effects. Such methods are discussed under a multistage approach in this article. The (...)
  29.
    Perceptual Restoration of Temporally Distorted Speech in L1 vs. L2: Local Time Reversal and Modulation Filtering. Mako Ishida, Takayuki Arai & Makio Kashino - 2018 - Frontiers in Psychology 9.
    Speech is intelligible even when the temporal envelope of speech is distorted. The current study investigates how native and non-native speakers perceptually restore temporally distorted speech. Participants were native English speakers (NS), and native Japanese speakers who spoke English as a second language (NNS). In Experiment 1, participants listened to “locally time-reversed speech” where every x-ms of speech signal was reversed on the temporal axis. Here, the local time reversal shifted the constituents of the (...)
  30.
    Bringing back the voice: on the auditory objects of speech perception. Anna Drożdżowicz - 2020 - Synthese (x):1-27.
    When you hear a person speaking in a familiar language you perceive the speech sounds uttered and the voice that produces them. How are speech sounds and voice related in a typical auditory experience of hearing speech in a particular voice? And how to conceive of the objects of such experiences? I propose a conception of auditory objects of speech perception as temporally structured mereologically complex individuals. A common experience is that speech sounds and the (...)
    3 citations
  31. Influence of lipreading on detection of speech in signal-correlated noise. B. H. Repp & R. Frost - 1990 - Bulletin of the Psychonomic Society 28 (6):526-526.
     
  32.
    The ConDialInt Model: Condensation, Dialogality, and Intentionality Dimensions of Inner Speech Within a Hierarchical Predictive Control Framework. Romain Grandchamp, Lucile Rapin, Marcela Perrone-Bertolotti, Cédric Pichat, Célise Haldin, Emilie Cousin, Jean-Philippe Lachaux, Marion Dohen, Pascal Perrier, Maëva Garnier, Monica Baciu & Hélène Lœvenbruck - 2019 - Frontiers in Psychology 10.
    Inner speech has been shown to vary in form along several dimensions. Along condensation, condensed inner speech forms have been described, that are supposed to be deprived of acoustic, phonological and even syntactic qualities. Expanded forms, on the other extreme, display articulatory and auditory properties. Along dialogality, inner speech can be monologal, when we engage in internal soliloquy, or dialogal, when we recall past conversations or imagine future dialogues involving our own voice as well as that of (...)
    11 citations
  33.
    Integrating cues in speech perception. Dominic W. Massaro - 1998 - Behavioral and Brain Sciences 21 (2):275-275.
    Sussman et al. describe an ecological property of the speech signal that is putatively functional in perception. An important issue, however, is whether their putative cue is an emerging feature or whether the second formant (F2) onset and the F2 vowel actually provide independent cues to perceptual categorization. Regardless of the outcome of this issue, an important goal of speech research is to understand how multiple cues are evaluated and integrated to achieve categorization.
  34.
    On Dynamic Pitch Benefit for Speech Recognition in Speech Masker. Jing Shen & Pamela E. Souza - 2018 - Frontiers in Psychology 9.
    Previous work demonstrated that dynamic pitch (i.e., pitch variation in speech) aids speech recognition in various types of noises. While this finding suggests dynamic pitch enhancement in target speech can benefit speech recognition in noise, it is of importance to know what noise characteristics affect dynamic pitch benefit and who will benefit from enhanced dynamic pitch cues. Following our recent finding that temporal modulation in noise influences dynamic pitch benefit, we examined the effect of speech (...)
  35.
    Realistic Speech-Driven Talking Video Generation with Personalized Pose. Xu Zhang & Liguo Weng - 2020 - Complexity 2020:1-8.
    In this work, we propose a method to transform a speaker’s speech information into a target character’s talking video; the method could make the mouth shape synchronization, expression, and body posture more realistic in the synthesized speaker video. This is a challenging task because changes of mouth shape and posture are coupled with audio semantic information. The model training is difficult to converge, and the model effect is unstable in complex scenes. Existing speech-driven speaker methods cannot solve this (...)
  36.
    A Hybrid of Deep CNN and Bidirectional LSTM for Automatic Speech Recognition. Rajesh Kumar Aggarwal & Vishal Passricha - 2019 - Journal of Intelligent Systems 29 (1):1261-1274.
    Deep neural networks (DNNs) have been playing a significant role in acoustic modeling. Convolutional neural networks (CNNs) are the advanced version of DNNs that achieve 4–12% relative gain in the word error rate (WER) over DNNs. Existence of spectral variations and local correlations in speech signal makes CNNs more capable of speech recognition. Recently, it has been demonstrated that bidirectional long short-term memory (BLSTM) produces higher recognition rate in acoustic modeling because they are adequate to reinforce higher-level (...)
  37.
    Hearing a Voice as one’s own: Two Views of Inner Speech Self-Monitoring Deficits in Schizophrenia. Peter Langland-Hassan - 2016 - Review of Philosophy and Psychology 7 (3):675-699.
    Many philosophers and psychologists have sought to explain experiences of auditory verbal hallucinations and “inserted thoughts” in schizophrenia in terms of a failure on the part of patients to appropriately monitor their own inner speech. These self-monitoring accounts have recently been challenged by some who argue that AVHs are better explained in terms of the spontaneous activation of auditory-verbal representations. This paper defends two kinds of self-monitoring approach against the spontaneous activation account. The defense requires first making some important (...)
    6 citations
  38.
    Detection and Recognition of Asynchronous Auditory/Visual Speech: Effects of Age, Hearing Loss, and Talker Accent. Sandra Gordon-Salant, Maya S. Schwartz, Kelsey A. Oppler & Grace H. Yeni-Komshian - 2022 - Frontiers in Psychology 12.
    This investigation examined age-related differences in auditory-visual integration as reflected on perceptual judgments of temporally misaligned AV English sentences spoken by native English and native Spanish talkers. In the detection task, it was expected that slowed auditory temporal processing of older participants, relative to younger participants, would be manifest as a shift in the range over which participants would judge asynchronous stimuli as synchronous. The older participants were also expected to exhibit greater declines in speech recognition for asynchronous AV (...)
  39.
    Automatic phonetic segmentation of Hindi speech using hidden Markov model. Archana Balyan, S. S. Agrawal & Amita Dev - 2012 - AI and Society 27 (4):543-549.
    In this paper, we study the performance of baseline hidden Markov model (HMM) for segmentation of speech signals. It is applied on single-speaker segmentation task, using Hindi speech database. The automatic phoneme segmentation framework evolved imitates the human phoneme segmentation process. A set of 44 Hindi phonemes were chosen for the segmentation experiment, wherein we used continuous density hidden Markov model (CDHMM) with a mixture of Gaussian distribution. The left-to-right topology with no skip states has been selected as (...)
  40.
    Statistical learning of social signals and its implications for the social brain hypothesis. Hjalmar K. Turesson & Asif A. Ghazanfar - 2011 - Interaction Studies 12 (3):397-417.
    The social brain hypothesis implies that humans and other primates evolved “modules” for representing social knowledge. Alternatively, no such cognitive specializations are needed because social knowledge is already present in the world — we can simply monitor the dynamics of social interactions. Given the latter idea, what mechanism could account for coalition formation? We propose that statistical learning can provide a mechanism for fast and implicit learning of social signals. Using human participants, we compared learning of social signals with arbitrary (...)
  41.
    Dimension‐Based Statistical Learning Affects Both Speech Perception and Production. Matthew Lehet & Lori L. Holt - 2017 - Cognitive Science 41 (S4):885-912.
    Multiple acoustic dimensions signal speech categories. However, dimensions vary in their informativeness; some are more diagnostic of category membership than others. Speech categorization reflects these dimensional regularities such that diagnostic dimensions carry more “perceptual weight” and more effectively signal category membership to native listeners. Yet perceptual weights are malleable. When short-term experience deviates from long-term language norms, such as in a foreign accent, the perceptual weight of acoustic dimensions in signaling speech category membership rapidly adjusts. (...)
    4 citations
  42.
    Multi-Talker Speech Promotes Greater Knowledge-Based Spoken Mandarin Word Recognition in First and Second Language Listeners. Seth Wiener & Chao-Yang Lee - 2020 - Frontiers in Psychology 11.
    Spoken word recognition involves a perceptual tradeoff between the reliance on the incoming acoustic signal and knowledge about likely sound categories and their co-occurrences as words. This study examined how adult second language (L2) learners navigate between acoustic-based and knowledge-based spoken word recognition when listening to highly variable, multi-talker truncated speech, and whether this perceptual tradeoff changes as L2 listeners gradually become more proficient in their L2 after multiple months of structured classroom learning. First language (L1) Mandarin Chinese (...)
    2 citations
  43.
    Realization of Self-Adaptive Higher Teaching Management Based Upon Expression and Speech Multimodal Emotion Recognition. Huihui Zhou & Zheng Liu - 2022 - Frontiers in Psychology 13.
    In the process of communication between people, everyone will have emotions, and different emotions will have different effects on communication. With the help of external performance information accompanied by emotional expression, such as emotional speech signals or facial expressions, people can easily communicate with each other and understand each other. Emotion recognition is an important network of affective computers and research centers for signal processing, pattern detection, artificial intelligence, and human-computer interaction. Emotions convey important information in human communication (...)
  44.
    Lexical Effects on the Perceived Clarity of Noise-Vocoded Speech in Younger and Older Listeners. Terrin N. Tamati, Victoria A. Sevich, Emily M. Clausing & Aaron C. Moberly - 2022 - Frontiers in Psychology 13.
    When listening to degraded speech, such as speech delivered by a cochlear implant, listeners make use of top-down linguistic knowledge to facilitate speech recognition. Lexical knowledge supports speech recognition and enhances the perceived clarity of speech. Yet, the extent to which lexical knowledge can be used to effectively compensate for degraded input may depend on the degree of degradation and the listener’s age. The current study investigated lexical effects in the compensation for speech that (...)
  45.
    The Principle of Inverse Effectiveness in Audiovisual Speech Perception. Luuk P. H. van de Rijt, Anja Roye, Emmanuel A. M. Mylanus, A. John van Opstal & Marc M. van Wanrooij - 2019 - Frontiers in Human Neuroscience 13:468577.
    We assessed how synchronous speech listening and lipreading affects speech recognition in acoustic noise. In simple audiovisual perceptual tasks, inverse effectiveness is often observed, which holds that the weaker the unimodal stimuli, or the poorer their signal-to-noise ratio, the stronger the audiovisual benefit. So far, however, inverse effectiveness has not been demonstrated for complex audiovisual speech stimuli. Here we assess whether this multisensory integration effect can also be observed for the recognizability of spoken words. To that (...)
    3 citations
  46.
    Temporal Cortex Activation to Audiovisual Speech in Normal-Hearing and Cochlear Implant Users Measured with Functional Near-Infrared Spectroscopy. Luuk P. H. van de Rijt, A. John van Opstal, Emmanuel A. M. Mylanus, Louise V. Straatman, Hai Yin Hu, Ad F. M. Snik & Marc M. van Wanrooij - 2016 - Frontiers in Human Neuroscience 10:173204.
    Background Speech understanding may rely not only on auditory, but also on visual information. Non-invasive functional neuroimaging techniques can expose the neural processes underlying the integration of multisensory processes required for speech understanding in humans. Nevertheless, noise (from fMRI) limits the usefulness in auditory experiments, and electromagnetic artefacts caused by electronic implants worn by subjects can severely distort the scans (EEG, fMRI). Therefore, we assessed audio-visual activation of temporal cortex with a silent, optical neuroimaging technique: functional near-infrared spectroscopy (...)
    1 citation
  47. Emotivity in the Voice: Prosodic, Lexical, and Cultural Appraisal of Complaining Speech. Maël Mauchand & Marc D. Pell - 2021 - Frontiers in Psychology 11.
    Emotive speech is a social act in which a speaker displays emotional signals with a specific intention; in the case of third-party complaints, this intention is to elicit empathy in the listener. The present study assessed how the emotivity of complaints was perceived in various conditions. Participants listened to short statements describing painful or neutral situations, spoken with a complaining or neutral prosody, and evaluated how complaining the speaker sounded. In addition to manipulating features of the message, social-affiliative factors (...)
    1 citation
  48.
    Age-Related Differences in Lexical Access Relate to Speech Recognition in Noise. Rebecca Carroll, Anna Warzybok, Birger Kollmeier & Esther Ruigendijk - 2016 - Frontiers in Psychology 7:170619.
    Vocabulary size has been suggested as a useful measure of “verbal abilities” that correlates with speech recognition scores. Knowing more words is linked to better speech recognition. How vocabulary knowledge translates to general speech recognition mechanisms, how these mechanisms relate to offline speech recognition scores, and how they may be modulated by acoustical distortion or age, is less clear. Age-related differences in linguistic measures may predict age-related differences in speech recognition in noise performance. We hypothesized (...)
    3 citations
  49.
    Statistical learning of social signals and its implications for the social brain hypothesis. Hjalmar K. Turesson & Asif A. Ghazanfar - 2011 - Interaction Studies 12 (3):397-417.
    The social brain hypothesis implies that humans and other primates evolved “modules” for representing social knowledge. Alternatively, no such cognitive specializations are needed because social knowledge is already present in the world — we can simply monitor the dynamics of social interactions. Given the latter idea, what mechanism could account for coalition formation? We propose that statistical learning can provide a mechanism for fast and implicit learning of social signals. Using human participants, we compared learning of social signals with arbitrary (...)
    1 citation
  50.
    A State-of-the-Art Review of EEG-Based Imagined Speech Decoding. Diego Lopez-Bernal, David Balderas, Pedro Ponce & Arturo Molina - 2022 - Frontiers in Human Neuroscience 16:867281.
    Currently, the most used method to measure brain activity under a non-invasive procedure is the electroencephalogram (EEG). This is because of its high temporal resolution, ease of use, and safety. These signals can be used under a Brain Computer Interface (BCI) framework, which can be implemented to provide a new communication channel to people that are unable to speak due to motor disabilities or other neurological diseases. Nevertheless, EEG-based BCI systems have presented challenges to be implemented in real life situations (...)
1 — 50 / 992