Results for 'speech recognition'

980 found
  1. Neutrosophic speech recognition Algorithm for speech under stress by Machine learning. Florentin Smarandache, D. Nagarajan & Said Broumi - 2023 - Neutrosophic Sets and Systems 53.
    It is well known that the unpredictable speech production brought on by stress from the task at hand has a significant negative impact on the performance of speech processing algorithms. Speech therapy benefits from being able to detect stress in speech. Speech processing performance suffers noticeably when perceptually produced stress causes variations in speech production. Using the acoustic speech signal to objectively characterize speaker stress is one method for assessing production variances brought on (...)
  2. Merging information in speech recognition: Feedback is never necessary. Dennis Norris, James M. McQueen & Anne Cutler - 2000 - Behavioral and Brain Sciences 23 (3):299-325.
    Top-down feedback does not benefit speech recognition; on the contrary, it can hinder it. No experimental data imply that feedback loops are required for speech recognition. Feedback is accordingly unnecessary and spoken word recognition is modular. To defend this thesis, we analyse lexical involvement in phonemic decision making. TRACE (McClelland & Elman 1986), a model with feedback from the lexicon to prelexical processes, is unable to account for all the available data on phonemic decision making. (...)
    47 citations
  3. Automatic Speech Recognition: A Comprehensive Survey. Arbana Kadriu & Amarildo Rista - 2020 - Seeu Review 15 (2):86-112.
    Speech recognition is an interdisciplinary subfield of natural language processing (NLP) that facilitates the recognition and translation of spoken language into text by machine. Speech recognition plays an important role in digital transformation. It is widely used in different areas such as education, industry, and healthcare and has recently been used in many Internet of Things and Machine Learning applications. The process of speech recognition is one of the most difficult processes in computer (...)
  4. Restricted Speech Recognition in Noise and Quality of Life of Hearing-Impaired Children and Adolescents With Cochlear Implants – Need for Studies Addressing This Topic With Valid Pediatric Quality of Life Instruments. Maria Huber & Clara Havas - 2019 - Frontiers in Psychology 10.
    Cochlear implants (CI) support the development of oral language in hearing-impaired children. However, even with CI, speech recognition in noise (SRiN) is limited. This raised the question, whether these restrictions are related to the quality of life (QoL) of children and adolescents with CI and how SRiN and QoL are related to each other. As a result of a systematic literature research only three studies were found, indicating positive moderating effects between SRiN and QoL of young CI users. (...)
  5. Longitudinal Speech Recognition in Noise in Children: Effects of Hearing Status and Vocabulary. Elizabeth A. Walker, Caitlin Sapp, Jacob J. Oleson & Ryan W. McCreery - 2019 - Frontiers in Psychology 10.
  6. Discriminatively trained continuous Hindi speech recognition using integrated acoustic features and recurrent neural network language modeling. R. K. Aggarwal & A. Kumar - 2020 - Journal of Intelligent Systems 30 (1):165-179.
    This paper implements a continuous Hindi Automatic Speech Recognition (ASR) system using the proposed integrated feature vector with Recurrent Neural Network (RNN) based Language Modeling (LM). The proposed system also implements speaker adaptation using Maximum-Likelihood Linear Regression (MLLR) and Constrained Maximum-Likelihood Linear Regression (C-MLLR). This system is discriminatively trained by Maximum Mutual Information (MMI) and Minimum Phone Error (MPE) techniques with 256 Gaussian mixtures per Hidden Markov Model (HMM) state. The training of the baseline system has been (...)
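    The phrase "256 Gaussian mixtures per HMM state" refers to the emission model attached to each state of a GMM-HMM system. As a minimal sketch of that one piece only (not code from the paper; the names, dimensions, and random parameters below are invented, and MLLR adaptation and MMI/MPE training are omitted):

      # Log-likelihood of one acoustic feature vector under a 256-component
      # diagonal-covariance GMM, i.e. the emission model of a single HMM state.
      import numpy as np
      from scipy.special import logsumexp

      def gmm_state_loglik(x, weights, means, variances):
          """x: (D,) feature vector; weights: (M,); means, variances: (M, D)."""
          log_det = np.sum(np.log(variances), axis=1)                 # (M,)
          quad = np.sum((x - means) ** 2 / variances, axis=1)         # (M,)
          d = x.shape[0]
          log_comp = -0.5 * (d * np.log(2 * np.pi) + log_det + quad)  # per-component log N(x)
          return logsumexp(np.log(weights) + log_comp)                # log sum_m w_m N_m(x)

      # Toy usage: M = 256 mixtures per state, 39-dimensional MFCC-style features.
      rng = np.random.default_rng(0)
      M, D = 256, 39
      w = rng.dirichlet(np.ones(M))
      mu = rng.normal(size=(M, D))
      var = rng.uniform(0.5, 2.0, size=(M, D))
      print(gmm_state_loglik(rng.normal(size=D), w, mu, var))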
  7. Modelling asynchrony in automatic speech recognition using loosely coupled hidden Markov models. H. J. Nock & S. J. Young - 2002 - Cognitive Science 26 (3):283-301.
    Hidden Markov models (HMMs) have been successful for modelling the dynamics of carefully dictated speech, but their performance degrades severely when used to model conversational speech. Since speech is produced by a system of loosely coupled articulators, stochastic models explicitly representing this parallelism may have advantages for automatic speech recognition (ASR), particularly when trying to model the phonological effects inherent in casual spontaneous speech. This paper presents a preliminary feasibility study of one such model (...)
  8. Speech recognition technology. F. Beaufays, H. Bourlard, Horacio Franco & Nelson Morgan - 2002 - In M. Arbib (ed.), The Handbook of Brain Theory and Neural Networks. MIT Press.
  9. Speech recognition: Statistical methods. L. R. Rabiner & B. H. Juang - 2006 - In Keith Brown (ed.), Encyclopedia of Language and Linguistics. Elsevier. pp. 1–18.
  10. Masked Speech Recognition in School-Age Children. Lori J. Leibold & Emily Buss - 2019 - Frontiers in Psychology 10.
  11. DLD: An Optimized Chinese Speech Recognition Model Based on Deep Learning. Hong Lei, Yue Xiao, Yanchun Liang, Dalin Li & Heow Pueh Lee - 2022 - Complexity 2022:1-8.
    Speech recognition technology has played an indispensable role in realizing human-computer intelligent interaction. However, most of the current Chinese speech recognition systems are provided online or offline models with low accuracy and poor performance. To improve the performance of offline Chinese speech recognition, we propose a hybrid acoustic model of deep convolutional neural network, long short-term memory, and deep neural network. This model utilizes DCNN to reduce frequency variation and adds a batch normalization layer (...)
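    As a rough illustration of the hybrid the abstract names (a deep CNN front end, an LSTM, and a DNN back end with a batch normalization layer), the following PyTorch sketch stacks those pieces in that order. It is not the authors' DLD model; the layer sizes and the 80-dimensional log-mel input are assumptions:

      import torch
      import torch.nn as nn

      class HybridAcousticModel(nn.Module):
          def __init__(self, n_mels=80, n_outputs=1000):
              super().__init__()
              self.conv = nn.Sequential(                 # DCNN front end
                  nn.Conv2d(1, 32, kernel_size=3, padding=1),
                  nn.BatchNorm2d(32),                    # batch normalization layer
                  nn.ReLU(),
                  nn.MaxPool2d((1, 2)),                  # pool along frequency only
              )
              self.lstm = nn.LSTM(32 * (n_mels // 2), 512, batch_first=True)
              self.dnn = nn.Sequential(                  # DNN back end
                  nn.Linear(512, 512), nn.ReLU(),
                  nn.Linear(512, n_outputs),
              )

          def forward(self, feats):                      # feats: (batch, time, n_mels)
              x = self.conv(feats.unsqueeze(1))          # (batch, 32, time, n_mels // 2)
              x = x.permute(0, 2, 1, 3).flatten(2)       # (batch, time, 32 * n_mels // 2)
              x, _ = self.lstm(x)                        # (batch, time, 512)
              return self.dnn(x)                         # (batch, time, n_outputs)

      scores = HybridAcousticModel()(torch.randn(4, 200, 80))
      print(scores.shape)                                # torch.Size([4, 200, 1000])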
  12. Effects of Semantic Context and Fundamental Frequency Contours on Mandarin Speech Recognition by Second Language Learners. Linjun Zhang, Yu Li, Han Wu, Xin Li, Hua Shu, Yang Zhang & Ping Li - 2016 - Frontiers in Psychology 7:189783.
    Speech recognition by second language (L2) learners in optimal and suboptimal conditions has been examined extensively with English as the target language in most previous studies. This study extended existing experimental protocols ( Wang et al., 2013 ) to investigate Mandarin speech recognition by Japanese learners of Mandarin at two different levels (elementary vs. intermediate) of proficiency. The overall results showed that in addition to L2 proficiency, semantic context, F0 contours, and listening condition all affected the (...)
    3 citations
  13. Perceptual units in speech recognition. Dominic W. Massaro - 1974 - Journal of Experimental Psychology 102 (2):199.
  14. EARSHOT: A Minimal Neural Network Model of Incremental Human Speech Recognition. James S. Magnuson, Heejo You, Sahil Luthra, Monica Li, Hosung Nam, Monty Escabí, Kevin Brown, Paul D. Allopenna, Rachel M. Theodore, Nicholas Monto & Jay G. Rueckl - 2020 - Cognitive Science 44 (4):e12823.
    Despite the lack of invariance problem (the many‐to‐many mapping between acoustics and percepts), human listeners experience phonetic constancy and typically perceive what a speaker intends. Most models of human speech recognition (HSR) have side‐stepped this problem, working with abstract, idealized inputs and deferring the challenge of working with real speech. In contrast, carefully engineered deep learning networks allow robust, real‐world automatic speech recognition (ASR). However, the complexities of deep learning architectures and training regimens make it (...)
    4 citations
  15. Merging information versus speech recognition. Irene Appelbaum - 2000 - Behavioral and Brain Sciences 23 (3):325-326.
    Norris, McQueen & Cutler claim that all known speech recognition data can be accounted for with their autonomous model, “Merge.” But this claim is doubly misleading. (1) Although speech recognition is autonomous in their view, the Merge model is not. (2) The body of data which the Merge model accounts for, is not, in their view, speech recognition data. Footnote 1: Author is also affiliated with the Center for the Study of Language and Information, Stanford (...)
  16. A Hybrid of Deep CNN and Bidirectional LSTM for Automatic Speech Recognition. Rajesh Kumar Aggarwal & Vishal Passricha - 2019 - Journal of Intelligent Systems 29 (1):1261-1274.
    Deep neural networks (DNNs) have been playing a significant role in acoustic modeling. Convolutional neural networks (CNNs) are the advanced version of DNNs that achieve 4–12% relative gain in the word error rate (WER) over DNNs. Existence of spectral variations and local correlations in speech signal makes CNNs more capable of speech recognition. Recently, it has been demonstrated that bidirectional long short-term memory (BLSTM) produces higher recognition rate in acoustic modeling because they are adequate to reinforce (...)
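    The sketch below shows only the bidirectional-LSTM idea from this entry: a small convolutional front end feeding a BLSTM whose per-frame output is twice the hidden size, because the forward and backward passes are concatenated. Sizes and the output layer are hypothetical, not the paper's configuration:

      import torch
      import torch.nn as nn

      conv = nn.Conv1d(in_channels=40, out_channels=64, kernel_size=5, padding=2)
      blstm = nn.LSTM(input_size=64, hidden_size=128, bidirectional=True, batch_first=True)
      out_layer = nn.Linear(2 * 128, 46)               # e.g. 46 phone classes (made up)

      feats = torch.randn(2, 300, 40)                  # (batch, time, filterbank dim)
      x = conv(feats.transpose(1, 2)).transpose(1, 2)  # Conv1d wants (batch, dim, time)
      x, _ = blstm(x)                                  # (batch, time, 256) = 2 * hidden per frame
      print(out_layer(x).shape)                        # torch.Size([2, 300, 46])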
  17. English Phrase Speech Recognition Based on Continuous Speech Recognition Algorithm and Word Tree Constraints. Haifan Du & Haiwen Duan - 2021 - Complexity 2021:1-11.
    This paper combines domestic and international research results to analyze and study the difference between the attribute features of English phrase speech and noise to enhance the short-time energy, which is used to improve the threshold judgment sensitivity; noise addition to the discrepancy data set is used to enhance the recognition robustness. The backpropagation algorithm is improved to constrain the range of weight variation, avoid oscillation phenomenon, and shorten the training time. In the real English phrase sound (...) system, there are problems such as massive training data and low training efficiency caused by the super large-scale model parameters of the convolutional neural network. To address these problems, the NWBP algorithm is based on the oscillation phenomenon that tends to occur when searching for the minimum error value in the late training period of the network parameters, using the K-MEANS algorithm to obtain the seed nodes that approach the minimal error value, and using the boundary value rule to reduce the range of weight change to reduce the oscillation phenomenon so that the network error converges as soon as possible and improve the training efficiency. Through simulation experiments, the NWBP algorithm improves the degree of fitting and convergence speed in the training of complex convolutional neural networks compared with other algorithms, reduces the redundant computation, and shortens the training time to a certain extent, and the algorithm has the advantage of accelerating the convergence of the network compared with simple networks. The word tree constraint and its efficient storage structure are introduced, which improves the storage efficiency of the word tree constraint and the retrieval efficiency in the English phrase recognition search.
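    One way to read the "boundary value rule to reduce the range of weight change" is as clamping each per-step weight update. The toy function below illustrates that reading only; it is not the paper's NWBP implementation, and the learning rate, bound, and numbers are invented:

      import numpy as np

      def bounded_update(weights, grad, lr=0.1, bound=0.01):
          """One gradient step with each weight's change clipped to +/- bound."""
          delta = np.clip(-lr * grad, -bound, bound)   # boundary rule damps large steps
          return weights + delta

      w = np.array([0.5, -1.2, 3.0])
      g = np.array([4.0, -0.003, 0.2])     # a large gradient would otherwise overshoot
      print(bounded_update(w, g))          # ~ [0.49, -1.1997, 2.99]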
  18. Techno-Telepathy & Silent Subvocal Speech-Recognition Robotics. Virgil W. Brower - 2021 - HORIZON. Studies in Phenomenology 10 (1):232-257.
    The primary focus of this project is the silent and subvocal speech-recognition interface unveiled in 2018 as an ambulatory device wearable on the neck that detects a myoelectrical signature by electrodes worn on the surface of the face, throat, and neck. These emerge from an alleged “intending to speak” by the wearer silently-saying-something-to-oneself. This inner voice is believed to occur while one reads in silence or mentally talks to oneself. The artifice does not require spoken sounds, opening the (...)
  19. On Dynamic Pitch Benefit for Speech Recognition in Speech Masker. Jing Shen & Pamela E. Souza - 2018 - Frontiers in Psychology 9.
    Previous work demonstrated that dynamic pitch (i.e., pitch variation in speech) aids speech recognition in various types of noises. While this finding suggests dynamic pitch enhancement in target speech can benefit speech recognition in noise, it is of importance to know what noise characteristics affect dynamic pitch benefit and who will benefit from enhanced dynamic pitch cues. Following our recent finding that temporal modulation in noise influences dynamic pitch benefit, we examined the effect of (...)
  20. Audio-visual speech recognition. G. Potamianos & J. Luettin - 2005 - In Alex Barber (ed.), Encyclopedia of Language and Linguistics. Elsevier.
  21. Mandarin-Speaking Children’s Speech Recognition: Developmental Changes in the Influences of Semantic Context and F0 Contours. Zhou Hong, Li Yu, Liang Meng, Guan Connie Qun, Zhang Linjun, Shu Hua & Zhang Yang - 2017 - Frontiers in Psychology 8.
    1 citation
  22. Age-Related Differences in Lexical Access Relate to Speech Recognition in Noise. Rebecca Carroll, Anna Warzybok, Birger Kollmeier & Esther Ruigendijk - 2016 - Frontiers in Psychology 7:170619.
    Vocabulary size has been suggested as a useful measure of “verbal abilities” that correlates with speech recognition scores. Knowing more words is linked to better speech recognition. How vocabulary knowledge translates to general speech recognition mechanisms, how these mechanisms relate to offline speech recognition scores, and how they may be modulated by acoustical distortion or age, is less clear. Age-related differences in linguistic measures may predict age-related differences in speech recognition (...)
    3 citations
  23. Differences in Speech Recognition Between Children with Attention Deficits and Typically Developed Children Disappear When Exposed to 65 dB of Auditory Noise. Göran B. W. Söderlund & Elisabeth Nilsson Jobs - 2016 - Frontiers in Psychology 7.
    1 citation
  24. Temporal cortex activation during speech recognition: an optical topography study. Hiroki Sato, Tatsuya Takeuchi & Kuniyoshi L. Sakai - 1999 - Cognition 73 (3):B55-B66.
  25. Shortlist B: A Bayesian model of continuous speech recognition. Dennis Norris & James M. McQueen - 2008 - Psychological Review 115 (2):357-395.
    66 citations
  26. Speech and Language Processing: An Introduction to Natural Language Processing, Computational Linguistics, and Speech Recognition. Dan Jurafsky & James H. Martin - 2000 - Prentice-Hall.
    The first of its kind to thoroughly cover language technology at all levels and with all modern technologies, this book takes an empirical approach to the ...
    32 citations
  27. Shortlist: a connectionist model of continuous speech recognition. Dennis Norris - 1994 - Cognition 52 (3):189-234.
  28. Single-Channel Speech Enhancement Techniques for Distant Speech Recognition. Ramaswamy Kumaraswamy & Jaya Kumar Ashwini - 2013 - Journal of Intelligent Systems 22 (2):81-93.
    This article presents an overview of the single-channel dereverberation methods suitable for distant speech recognition application. The dereverberation methods are mainly classified based on the domain of enhancement of speech signal captured by a distant microphone. Many single-channel speech enhancement methods focus on either denoising or dereverberating the distorted speech signal. There are very few methods that consider both noise and reverberation effects. Such methods are discussed under a multistage approach in this article. The article (...)
  29. VLSI architecture design for BNN speech recognition. Jia-Ching Wang, Jhing-Fa Wang & Fan-Min Li - 2003 - Signal Processing, Pattern Recognition, and Applications.
  30. Multitask Learning with Local Attention for Tibetan Speech Recognition. Hui Wang, Fei Gao, Yue Zhao, Li Yang, Jianjian Yue & Huilin Ma - 2020 - Complexity 2020:1-10.
    In this paper, we propose to incorporate the local attention in WaveNet-CTC to improve the performance of Tibetan speech recognition in multitask learning. With an increase in task number, such as simultaneous Tibetan speech content recognition, dialect identification, and speaker recognition, the accuracy rate of a single WaveNet-CTC decreases on speech recognition. Inspired by the attention mechanism, we introduce the local attention to automatically tune the weights of feature frames in a window and (...)
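    A simplified sketch of what "local attention to automatically tune the weights of feature frames in a window" could look like: each frame is replaced by a softmax-weighted average of its neighbours within a fixed window. This is an assumption-based toy, not the authors' WaveNet-CTC implementation; the window size, feature dimension, and per-frame scores are arbitrary:

      import torch
      import torch.nn.functional as F

      def local_attention(frames, scores, window=5):
          """frames: (T, D) features; scores: (T,) salience score per frame."""
          T, _ = frames.shape
          half = window // 2
          out = torch.zeros_like(frames)
          for t in range(T):
              lo, hi = max(0, t - half), min(T, t + half + 1)
              w = F.softmax(scores[lo:hi], dim=0)                   # weights over the local window
              out[t] = (w.unsqueeze(1) * frames[lo:hi]).sum(dim=0)  # reweighted frame
          return out

      print(local_attention(torch.randn(100, 40), torch.randn(100)).shape)  # (100, 40)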
  31. Do Age and Linguistic Status Alter the Effect of Sound Source Diffuseness on Speech Recognition in Noise? Meital Avivi-Reich, Rupinder Kaur Sran & Bruce A. Schneider - 2022 - Frontiers in Psychology 13.
    One aspect of auditory scenes that has received very little attention is the level of diffuseness of sound sources. This aspect has increasing importance due to growing use of amplification systems. When an auditory stimulus is amplified and presented over multiple, spatially-separated loudspeakers, the signal’s timbre is altered due to comb filtering. In a previous study we examined how increasing the diffuseness of the sound sources might affect listeners’ ability to recognize speech presented in different types of background noise. (...)
  32. Recognition of English speech – using a deep learning algorithm. Shuyan Wang - 2023 - Journal of Intelligent Systems 32 (1).
    The accurate recognition of speech is beneficial to the fields of machine translation and intelligent human–computer interaction. After briefly introducing speech recognition algorithms, this study proposed to recognize speech with a recurrent neural network (RNN) and adopted the connectionist temporal classification (CTC) algorithm to align input speech sequences and output text sequences forcibly. Simulation experiments compared the RNN-CTC algorithm with the Gaussian mixture model–hidden Markov model and convolutional neural network-CTC algorithms. The results demonstrated that (...)
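    To make the RNN-CTC pairing concrete, here is a minimal PyTorch sketch under toy assumptions (not the paper's system): a recurrent network produces per-frame log-probabilities over characters, and the CTC loss performs the forced alignment between the long frame sequence and the shorter text sequence. A GRU stands in for the paper's recurrent network:

      import torch
      import torch.nn as nn

      n_mels, hidden, n_chars = 80, 256, 29            # 28 characters + CTC blank (id 0)
      rnn = nn.GRU(n_mels, hidden, batch_first=True)
      proj = nn.Linear(hidden, n_chars)
      ctc = nn.CTCLoss(blank=0)

      feats = torch.randn(4, 200, n_mels)              # (batch, time, feature) toy batch
      targets = torch.randint(1, n_chars, (4, 30))     # character ids; 0 is reserved for blank
      input_lengths = torch.full((4,), 200)
      target_lengths = torch.full((4,), 30)

      h, _ = rnn(feats)                                # (batch, time, hidden)
      log_probs = proj(h).log_softmax(-1)              # (batch, time, n_chars)
      loss = ctc(log_probs.transpose(0, 1),            # CTCLoss expects (time, batch, chars)
                 targets, input_lengths, target_lengths)
      loss.backward()                                  # gradients flow through RNN and projection
      print(float(loss))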
  33. Effects of Hearing Loss and Cognitive Load on Speech Recognition with Competing Talkers. Hartmut Meister, Stefan Schreitmüller, Magdalene Ortmann, Sebastian Rählmann & Martin Walger - 2016 - Frontiers in Psychology 7.
    2 citations
  34. A salience-driven approach to speech recognition for human-robot interaction. Pierre Lison - 2010 - In T. Icard & R. Muskens (eds.), Interfaces: Explorations in Logic, Language and Computation. Springer Berlin. pp. 102–113.
  35. Exploring the Link Between Cognitive Abilities and Speech Recognition in the Elderly Under Different Listening Conditions. Theresa Nuesse, Rike Steenken, Tobias Neher & Inga Holube - 2018 - Frontiers in Psychology 9.
  36. The development of the orthographic consistency effect in speech recognition: From sublexical to lexical involvement. Paulo Ventura, José Morais & Régine Kolinsky - 2007 - Cognition 105 (3):547-576.
  37. Recognition of continuous speech requires top-down processing. Kenneth N. Stevens - 2000 - Behavioral and Brain Sciences 23 (3):348-348.
    The proposition that feedback is never necessary in speech recognition is examined for utterances consisting of sequences of words. In running speech the features near word boundaries are often modified according to language-dependent rules. Application of these rules during word recognition requires top-down processing. Because isolated words are not usually modified by rules, their recognition could be achieved by bottom-up processing only.
  38. How does susceptibility to proactive interference relate to speech recognition in aided and unaided conditions? Rachel J. Ellis & Jerker Rönnberg - 2015 - Frontiers in Psychology 6.
  39. Recognition, Authority Relations, and Rejecting Hate Speech. Suzanne Whitten - 2019 - Ethical Theory and Moral Practice 22 (3):555-571.
    A key focus in many debates surrounding the harm in hate speech centres on the subordinating impact hate speech has on its victims. Under such a view, and provided there exists a requisite level of speaker authority in a particular speech situation, hate speech can be conceived as something which directly impacts the victim’s status, and can be contrasted to the view that such speech merely expresses hateful ideas. Missing from these conceptions, however, are the ways (...)
    5 citations
  40. Speech act theory and the rule of recognition. Marcin Matczak - 2019 - Jurisprudence 10 (4):552-581.
    In this paper, I re-interpret Hart’s concept of the rule of recognition using the theoretical framework of J. L. Austin’s speech act theory, in particular by treating recognition, change and adjudi...
    1 citation
  41. Detection and Recognition of Asynchronous Auditory/Visual Speech: Effects of Age, Hearing Loss, and Talker Accent. Sandra Gordon-Salant, Maya S. Schwartz, Kelsey A. Oppler & Grace H. Yeni-Komshian - 2022 - Frontiers in Psychology 12.
    This investigation examined age-related differences in auditory-visual integration as reflected on perceptual judgments of temporally misaligned AV English sentences spoken by native English and native Spanish talkers. In the detection task, it was expected that slowed auditory temporal processing of older participants, relative to younger participants, would be manifest as a shift in the range over which participants would judge asynchronous stimuli as synchronous. The older participants were also expected to exhibit greater declines in speech recognition for asynchronous (...)
  42. A hidden Markov optimization model for processing and recognition of English speech feature signals. Yinchun Chen - 2022 - Journal of Intelligent Systems 31 (1):716-725.
    Speech recognition plays an important role in human–computer interaction. The higher the accuracy and efficiency of speech recognition, the greater the improvement in human–computer interaction performance. This article briefly introduced the hidden Markov model (HMM)-based English speech recognition algorithm and combined it with a back-propagation neural network (BPNN) to further improve the recognition accuracy and reduce the recognition time of English speech. Then, the BPNN-combined HMM algorithm was simulated and compared with (...)
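    The abstract does not say how the back-propagation network and the HMM are combined. A common hybrid recipe, shown below purely as an illustration under that assumption, converts the network's per-frame state posteriors into scaled emission likelihoods (posteriors divided by state priors) and decodes with Viterbi over the HMM transitions. All numbers are toy values:

      import numpy as np

      def viterbi(log_emit, log_trans, log_init):
          """log_emit: (T, S); log_trans: (S, S); log_init: (S,). Returns the best state path."""
          T, S = log_emit.shape
          score = log_init + log_emit[0]
          back = np.zeros((T, S), dtype=int)
          for t in range(1, T):
              cand = score[:, None] + log_trans        # cand[i, j] = score_i + trans(i -> j)
              back[t] = cand.argmax(axis=0)
              score = cand.max(axis=0) + log_emit[t]
          path = [int(score.argmax())]
          for t in range(T - 1, 0, -1):
              path.append(int(back[t, path[-1]]))
          return path[::-1]

      rng = np.random.default_rng(1)
      posteriors = rng.dirichlet(np.ones(3), size=10)  # stand-in for BPNN output: (T=10, S=3)
      priors = np.array([0.5, 0.3, 0.2])
      log_emit = np.log(posteriors) - np.log(priors)   # scaled likelihoods
      log_trans = np.log(np.full((3, 3), 1 / 3))
      log_init = np.log(np.full(3, 1 / 3))
      print(viterbi(log_emit, log_trans, log_init))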
  43. Multi-Talker Speech Promotes Greater Knowledge-Based Spoken Mandarin Word Recognition in First and Second Language Listeners. Seth Wiener & Chao-Yang Lee - 2020 - Frontiers in Psychology 11.
    Spoken word recognition involves a perceptual tradeoff between the reliance on the incoming acoustic signal and knowledge about likely sound categories and their co-occurrences as words. This study examined how adult second language (L2) learners navigate between acoustic-based and knowledge-based spoken word recognition when listening to highly variable, multi-talker truncated speech, and whether this perceptual tradeoff changes as L2 listeners gradually become more proficient in their L2 after multiple months of structured classroom learning. First language (L1) Mandarin (...)
    2 citations
  44. How Should a Speech Recognizer Work? Odette Scharenborg, Dennis Norris, Louis ten Bosch & James M. McQueen - 2005 - Cognitive Science 29 (6):867-918.
    Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and implementational (...)
    7 citations
  45. Recognition of difference in the political action - the relation of speech-hearing. 윤은주 - 2013 - Korean Feminist Philosophy 20:181-206.
  46. Understanding Miscommunication: Speech Act Recognition in Digital Contexts. Thomas Holtgraves - 2021 - Cognitive Science 45 (10):e13023.
    Successful language use requires accurate intention recognition. However, sometimes this can be undermined because communication occurs within an interpersonal context. In this research, I used a relatively large set of speech acts (n = 32) and explored how variability in their inherent face‐threat influences the extent to which they are successfully recognized by a recipient, as well as the confidence of senders and receivers in their communicative success. Participants in two experiments either created text messages (senders) designed to (...)
    1 citation
  47. How Should a Speech Recognizer Work? Odette Scharenborg, Dennis Norris, Louis ten Bosch & James M. McQueen - 2005 - Cognitive Science 29 (6):867-918.
    Although researchers studying human speech recognition (HSR) and automatic speech recognition (ASR) share a common interest in how information processing systems (human or machine) recognize spoken language, there is little communication between the two disciplines. We suggest that this lack of communication follows largely from the fact that research in these related fields has focused on the mechanics of how speech can be recognized. In Marr's (1982) terms, emphasis has been on the algorithmic and implementational (...)
    5 citations
  48. Emotion Recognition from speech Support for WEB Lectures. Dragos Datcu & Léon Rothkrantz - 2007 - Communication and Cognition. Monographies 40 (3-4):203-214.
  49. Speech and spelling interaction: the interdependence of visual and auditory word recognition. Ram Frost & Johannes C. Ziegler - 2009 - In Gareth Gaskell (ed.), Oxford Handbook of Psycholinguistics. Oxford University Press.
  50. A recognition-sensitive phenomenology of hate speech. Suzanne Whitten - 2018 - Critical Review of International Social and Political Philosophy 23 (7):1-21.
Showing results 1–50 of 980