Results for 'Modelling Language'

995 found
  1. What Model-Theoretic Semantics Cannot Do.Ernest Lepore - 1997 - In Peter Ludlow (ed.), Readings in the Philosophy of Language. MIT Press.
    1 citation
  2. Developing model language for disclosing financial interests to potential clinical research participants.K. P. Weinfurt, J. S. Allsbrook, J. Y. Friedman, M. A. Dinan, M. A. Hall, K. A. Schulman & J. Sugarman - 2006 - IRB: Ethics & Human Research 29 (1):1-5.
    As part of a larger research study, we present model language for disclosing financial interests in clinical research to potential research participants, and we describe the empirical basis and theoretical assumptions used in developing the language. The empirical process for creating appropriate disclosure language resulted in a generic disclosure statement for cases in which no risk to participants’ welfare or the scientific integrity of the research is expected, and nine more specific disclosure statements for cases in which (...)
  3. Language Models as Critical Thinking Tools: A Case Study of Philosophers.Andre Ye, Jared Moore, Rose Novick & Amy Zhang - manuscript
    Current work in language models (LMs) helps us speed up or even skip thinking by accelerating and automating cognitive work. But can LMs help us with critical thinking -- thinking in deeper, more reflective ways which challenge assumptions, clarify ideas, and engineer new concepts? We treat philosophy as a case study in critical thinking, and interview 21 professional philosophers about how they engage in critical thinking and about their experiences with LMs. We find that philosophers do not find LMs (...)
  4. Models, languages and representations: philosophical reflections driven from a research on teaching and learning about cellular respiration.Martín Pérgola & Lydia Galagovsky - 2022 - Foundations of Chemistry 25 (1):151-166.
    Mental model construction is supposed to be a useful cognitive device for learning. Beyond the human capacity for constructing mental models, scientists construct complex explanations about phenomena, named scientific or theoretical models. In this work we revisit three visions: the first one concerns the polysemic term “model”. Our proposal is to discriminate between “mental models” and “explicit models”, the former being those “imaginistic” ideas constructed in scientists’—or teachers’—minds, and the latter those teaching devices expressed in different languages that tend to (...)
  5. The Problem of the Model Language-Game in Wittgenstein's Later Philosophy.Helen Hervey - 1961 - Philosophy 36 (138):333-351.
    In his Memoir of Wittgenstein Professor Malcolm describes the occasion on which, as far as he knows, the idea that as an activity language is a game, or that ‘games are played with words’, first occurred to Wittgenstein. Wittgenstein was passing a playing field where there was a game of football in progress. As he watched the game, the thought suddenly flashed into his mind, ‘We play games with words!’ This account may be compared with that given by (...)
    1 citation
  6. Large Language Models and Biorisk.William D’Alessandro, Harry R. Lloyd & Nathaniel Sharadin - 2023 - American Journal of Bioethics 23 (10):115-118.
    We discuss potential biorisks from large language models (LLMs). AI assistants based on LLMs such as ChatGPT have been shown to significantly reduce barriers to entry for actors wishing to synthesize dangerous, potentially novel pathogens and chemical weapons. The harms from deploying such bioagents could be further magnified by AI-assisted misinformation. We endorse several policy responses to these dangers, including prerelease evaluations of biomedical AIs by subject-matter experts, enhanced surveillance and lab screening procedures, restrictions on AI training data, and (...)
    1 citation
  7. Language: A Biological Model.Ruth Millikan - 2005 - Oxford, GB: Clarendon Press.
    Ruth Millikan is well known for having developed a strikingly original way for philosophers to seek understanding of mind and language, which she sees as biological phenomena. She now draws together a series of groundbreaking essays which set out her approach to language. Guiding the work of most linguists and philosophers of language today is the assumption that language is governed by prescriptive normative rules. Millikan offers a fundamentally different way of viewing the partial regularities that (...)
  8. HELEN: Using Brain Regions and Mechanisms for Story Understanding to Model Language as Human Behavior.Robert Swaine & C. T. O. Bioware - 2009 - In B. Goertzel, P. Hitzler & M. Hutter (eds.), Proceedings of the Second Conference on Artificial General Intelligence. Atlantis Press.
     
  9. Modern computational models of semantic discovery in natural language.Jan Žižka & Frantisek Darena (eds.) - 2015 - Hershey, PA: Information Science Reference.
    This book compiles and reviews the most prominent linguistic theories into a single source that serves as an essential reference for future solutions to one of the most important challenges of our age.
  10. Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs.Harvey Lederman & Kyle Mahowald - forthcoming - Transactions of the Association for Computational Linguistics.
    Are LLMs cultural technologies like photocopiers or printing presses, which transmit information but cannot create new content? A challenge for this idea, which we call bibliotechnism, is that LLMs generate novel text. We begin with a defense of bibliotechnism, showing how even novel text may inherit its meaning from original human-generated text. We then argue that bibliotechnism faces an independent challenge from examples in which LLMs generate novel reference, using new names to refer to new entities. Such examples could be (...)
  11. AI Language Models Cannot Replace Human Research Participants.Jacqueline Harding, William D’Alessandro, N. G. Laskowski & Robert Long - forthcoming - AI and Society:1-3.
    In a recent letter, Dillion et al. (2023) make various suggestions regarding the idea of artificially intelligent systems, such as large language models, replacing human subjects in empirical moral psychology. We argue that human subjects are in various ways indispensable.
    1 citation
  12. Chapter Thirteen Philosophical Foundations for A Unified Enterprise Modelling Language.Gerald R. Khoury & Simeon J. Simoff - 2007 - In Soraj Hongladarom (ed.), Computing and Philosophy in Asia. Cambridge Scholars Press. pp. 191.
  13. Large Language Models and the Reverse Turing Test.Terrence Sejnowski - 2023 - Neural Computation 35 (3):309–342.
    Large Language Models (LLMs) have been transformative. They are pre-trained foundational models that are self-supervised and can be adapted with fine tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and more recently LaMDA can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there has been (...)
    1 citation
  14. Could a large language model be conscious?David J. Chalmers - 2023 - Boston Review 1.
    [This is an edited version of a keynote talk at the conference on Neural Information Processing Systems (NeurIPS) on November 28, 2022, with some minor additions and subtractions.] -/- There has recently been widespread discussion of whether large language models might be sentient or conscious. Should we take this idea seriously? I will break down the strongest reasons for and against. Given mainstream assumptions in the science of consciousness, there are significant obstacles to consciousness in current models: for example, (...)
    14 citations
  15. Connectionist Models of Language Production: Lexical Access and Grammatical Encoding.Gary S. Dell, Franklin Chang & Zenzi M. Griffin - 1999 - Cognitive Science 23 (4):517-542.
    Theories of language production have long been expressed as connectionist models. We outline the issues and challenges that must be addressed by connectionist models of lexical access and grammatical encoding, and review three recent models. The models illustrate the value of an interactive activation approach to lexical access in production, the need for sequential output in both phonological and grammatical encoding, and the potential for accounting for structural effects on errors and structural priming from learning.
    17 citations
  16. On language and connectionism: Analysis of a parallel distributed processing model of language acquisition.Steven Pinker & Alan Prince - 1988 - Cognition 28 (1-2):73-193.
  17. Large Language Models Demonstrate the Potential of Statistical Learning in Language.Pablo Contreras Kallens, Ross Deans Kristensen-McLachlan & Morten H. Christiansen - 2023 - Cognitive Science 47 (3):e13256.
    To what degree can language be acquired from linguistic input alone? This question has vexed scholars for millennia and is still a major focus of debate in the cognitive science of language. The complexity of human language has hampered progress because studies of language–especially those involving computational modeling–have only been able to deal with small fragments of our linguistic skills. We suggest that the most recent generation of Large Language Models (LLMs) might finally provide the (...)
    4 citations
  18. A context-based computational model of language acquisition by infants and children.Steven Walczak - 2002 - Foundations of Science 7 (4):393-411.
    This research attempts to understand how children learn to use language. Instead of using syntax-based grammar rules to model the differences between children's language and adult language, as has been done in the past, a new model is proposed. In the new research model, children acquire language by listening to the examples of speech that they hear in their environment and subsequently use the speech examples that have been previously heard in similar contextual situations. A computer model is generated to simulate this new model of language acquisition. (...)
    1 citation
  19. Spatial Language and the Embedded Listener Model in Parents’ Input to Children.Katrina Ferrara, Malena Silva, Colin Wilson & Barbara Landau - 2016 - Cognitive Science 40 (8):1877-1910.
    Language is a collaborative act: To communicate successfully, speakers must generate utterances that are not only semantically valid but also sensitive to the knowledge state of the listener. Such sensitivity could reflect the use of an “embedded listener model,” where speakers choose utterances on the basis of an internal model of the listener's conceptual and linguistic knowledge. In this study, we ask whether parents’ spatial descriptions incorporate an embedded listener model that reflects their children's understanding of spatial relations and (...)
  20. Language, Models, and Reality: Weak existence and a threefold correspondence.Neil Barton & Giorgio Venturi - manuscript
    How does our language relate to reality? This is a question that is especially pertinent in set theory, where we seem to talk of large infinite entities. Based on an analogy with the use of models in the natural sciences, we argue for a threefold correspondence between our language, models, and reality. We argue that so conceived, the existence of models can be underwritten by a weak notion of existence, where weak existence is to be understood as existing (...)
  21. Language and its Models: Is Model Theory a Theory of Semantics?Jaroslav Peregrin - 1997 - Nordic Journal of Philosophical Logic 2 (1):1-23.
    Tarskian model theory is almost universally understood as a formal counterpart of the preformal notion of semantics, of the “linkage between words and things”. The widespread opinion is that to account for the semantics of natural language is to furnish its set-theoretic interpretation in a suitable model structure, as exemplified by Montague 1974.
    14 citations
  22. Natural Language Grammar Induction using a Constituent-Context Model.Dan Klein & Christopher D. Manning - unknown
    This paper presents a novel approach to the unsupervised learning of syntactic analyses of natural language text. Most previous work has focused on maximizing likelihood according to generative PCFG models. In contrast, we employ a simpler probabilistic model over trees based directly on constituent identity and linear context, and use an EM-like iterative procedure to induce structure. This method produces much higher quality analyses, giving the best published results on the ATIS dataset.
    5 citations
  23. Finitary models of language users.George A. Miller & Noam Chomsky - 1963 - In D. Luce (ed.), Handbook of Mathematical Psychology. John Wiley & Sons. pp. 2--419.
     
    102 citations
  24. Language learning as language use: A cross-linguistic model of child language development.Stewart M. McCauley & Morten H. Christiansen - 2019 - Psychological Review 126 (1):1-51.
    17 citations
  25. The language of worry: Examining linguistic elements of worry models.Elena M. C. Geronimi & Janet Woodruff-Borden - 2015 - Cognition and Emotion 29 (2):311-318.
    Despite strong evidence that worry is a verbal process, studies examining linguistic features in individuals with generalised anxiety disorder (GAD) are lacking. The aim of the present study is to investigate language use in individuals with GAD and controls based on GAD and worry theoretical models. More specifically, the degree to which linguistic elements of the avoidance and intolerance of uncertainty worry models can predict diagnostic status was analysed. Participants were 19 women diagnosed with GAD and 22 control women (...)
  26. Understanding models understanding language.Anders Søgaard - 2022 - Synthese 200 (6):1-16.
    Landgrebe and Smith (2021: 2061–2081) present an unflattering diagnosis of recent advances in what they call language-centric artificial intelligence—perhaps more widely known as natural language processing: The models that are currently employed do not have sufficient expressivity, will not generalize, and are fundamentally unable to induce linguistic semantics, they say. The diagnosis is mainly derived from an analysis of the widely used Transformer architecture. Here I address a number of misunderstandings in their analysis, and present what I take (...)
    1 citation
  27. Large Language Models: A Historical and Sociocultural Perspective.Eugene Yu Ji - 2024 - Cognitive Science 48 (3):e13430.
    This letter explores the intricate historical and contemporary links between large language models (LLMs) and cognitive science through the lens of information theory, statistical language models, and socioanthropological linguistic theories. The emergence of LLMs highlights the enduring significance of information‐based and statistical learning theories in understanding human communication. These theories, initially proposed in the mid‐20th century, offered a visionary framework for integrating computational science, social sciences, and humanities, which nonetheless was not fully fulfilled at that time. The subsequent (...)
  28. A Model of Language Processing as Hierarchic Sequential Prediction.Marten van Schijndel, Andy Exley & William Schuler - 2013 - Topics in Cognitive Science 5 (3):522-540.
    Computational models of memory are often expressed as hierarchic sequence models, but the hierarchies in these models are typically fairly shallow, reflecting the tendency for memories of superordinate sequence states to become increasingly conflated. This article describes a broad-coverage probabilistic sentence processing model that uses a variant of a left-corner parsing strategy to flatten sentence processing operations in parsing into a similarly shallow hierarchy of learned sequences. The main result of this article is that a broad-coverage model with constraints on (...)
    4 citations
  29. Large language models in medical ethics: useful but not expert.Andrea Ferrario & Nikola Biller-Andorno - forthcoming - Journal of Medical Ethics.
    Large language models (LLMs) have now entered the realm of medical ethics. In a recent study, Balas et al. examined the performance of GPT-4, a commercially available LLM, assessing its performance in generating responses to diverse medical ethics cases. Their findings reveal that GPT-4 demonstrates an ability to identify and articulate complex medical ethical issues, although its proficiency in encoding the depth of real-world ethical dilemmas remains an avenue for improvement. Investigating the integration of LLMs into medical ethics decision-making appears to (...)
    1 citation
  30. Model theory of infinitary languages.M. A. Dickmann - 1970 - Aarhus, Denmark: Universitet, Matematisk Institut.
     
  31. Language: A biological model.Emma Borg - manuscript
    Ruth Garrett Millikan is one of the most important thinkers in philosophy of mind and language of the current generation. Across a number of seminal books, and in the company of theorists such as Jerry Fodor and Fred Dretske, she has championed a wholly naturalistic, scientific understanding of content, whether of thought or words. Many think that naturalism about meaning has found its most defensible form in her distinctively “teleological” approach, and in Language: A Biological Model she continues (...)
     
    2 citations
  32. The languages of relevant logic: a model-theoretic perspective.Guillermo Badia Hernandez - unknown
    A traditional aspect of model theory has been the interplay between formal languages and mathematical structures. This dissertation is concerned, in particular, with the relationship between the languages of relevant logic and Routley-Meyer models. One fundamental question is treated: what is the expressive power of relevant languages in the Routley-Meyer framework? In the case of finitary relevant propositional languages, two answers are provided. The first is that finitary propositional relevant languages are the fragments of first order logic preserved under relevant (...)
  33. Language: A Biological Model.Ruth Garrett Millikan - 2005 - Oxford, GB: Clarendon Press.
    Guiding the work of most linguists and philosophers of language today is the assumption that language is governed by rules. This volume presents a different way of viewing the partial regularities that language displays, the way they express norms and conventions. It argues that the central norms applying to language are non-evaluative; they are more like those norms of function and behavior that account for the survival and proliferation of biological species. Specific linguistic forms survive and (...)
  34. Probabilistic models of language processing and acquisition.Nick Chater & Christopher D. Manning - 2006 - Trends in Cognitive Sciences 10 (7):335–344.
    Probabilistic methods are providing new explanatory approaches to fundamental cognitive science questions of how humans structure, process and acquire language. This review examines probabilistic models defined over traditional symbolic structures. Language comprehension and production involve probabilistic inference in such models; and acquisition involves choosing the best model, given innate constraints and linguistic and other input. Probabilistic models can account for the learning and processing of language, while maintaining the sophistication of symbolic models. A recent burgeoning of theoretical (...)
    45 citations
  35. Models of computation and formal languages.Ralph Gregory Taylor - 1998 - New York: Oxford University Press.
    This unique book presents a comprehensive and rigorous treatment of the theory of computability which is introductory yet self-contained. It takes a novel approach by looking at the subject using computation models rather than a limitation orientation, and is the first book of its kind to include software. Accompanying software simulations of almost all computational models are available for use in conjunction with the text, and numerous examples are provided on disk in a user-friendly format. Its applications to computer science (...)
    3 citations
  36. SDML: A multi-agent language for organizational modelling.Bruce Edmonds - manuscript
    The SDML programming language, which is optimized for modelling multi-agent interaction within articulated social structures such as organizations, is described with several examples of its functionality. SDML is a strictly declarative modelling language which has object-oriented features and corresponds to a fragment of strongly grounded autoepistemic logic. The virtues of SDML include the ease of building complex models and the facility for representing agents flexibly as models of cognition, as well as modularity and code reusability.
    5 citations
  37. Language polygenesis: A probabilistic model.David A. Freedman & William Wang - unknown
    Monogenesis of language is widely accepted, but the conventional argument seems to be mistaken; a simple probabilistic model shows that polygenesis is likely. Other prehistoric inventions are discussed, as are problems in tracing linguistic lineages. Language is a system of representations; within such a system, words can evoke complex and systematic responses. Along with its social functions, language is important to humans as a mental instrument. Indeed, the invention of language, that is the accumulation of symbols to (...)
    1 citation
  38. Machine Advisors: Integrating Large Language Models into Democratic Assemblies.Petr Špecián - manuscript
    Large language models (LLMs) represent the currently most relevant incarnation of artificial intelligence with respect to the future fate of democratic governance. Considering their potential, this paper seeks to answer a pressing question: Could LLMs outperform humans as expert advisors to democratic assemblies? While bearing the promise of enhanced expertise availability and accessibility, they also present challenges of hallucinations, misalignment, or value imposition. Weighing LLMs’ benefits and drawbacks compared to their human counterparts, I argue for their careful integration to (...)
  39. Models and Metaphors: Studies in Language and Philosophy.William Sacksteder - 1962 - Philosophy and Phenomenological Research 23 (2):289-290.
    125 citations
  40. Formal models of language learning.Steven Pinker - 1979 - Cognition 7 (3):217-283.
  41. Language mediated mentalization: A proposed model.Yair Neuman - 2019 - Semiotica 2019 (227):261-272.
    Mentalization describes the process through which we understand the mental states of oneself and others. In this paper, I present a computational semiotic model of mentalization and illustrate it through a worked-out example. The model draws on classical semiotic ideas, such as abductive inference and hypostatic abstraction, but pours them into new ideas and tools from natural language processing, machine learning, and neural networks, to form a novel model of language-mediated-mentalization.
  42. AUTOGEN: A Personalized Large Language Model for Academic Enhancement—Ethics and Proof of Principle.Sebastian Porsdam Mann, Brian D. Earp, Nikolaj Møller, Suren Vynn & Julian Savulescu - 2023 - American Journal of Bioethics 23 (10):28-41.
    Large language models (LLMs) such as ChatGPT or Google’s Bard have shown significant performance on a variety of text-based tasks, such as summarization, translation, and even the generation of new...
    19 citations
  43. Models, theories, and language.Jan Faye - 2007 - In Filosofia, scienza e bioetica nel dibattito contemporaneo. Rome: Poligrafico e Zecca dello Stato. pp. 823-838.
    The semantic view of theories has been much in vogue for over four decades as the successor of the syntactic view. In the present paper, I take issue with this approach by arguing that theories and models must be separated and that a theory should be considered to be a linguistic system consisting of a vocabulary and a set of rules for the use of that vocabulary.
    1 citation
  44. New Models for Language Understanding and the Cognitive Approach to Legal Metaphors.Lucia Morra - 2010 - International Journal for the Semiotics of Law - Revue Internationale de Sémiotique Juridique 23 (4):387-405.
    The essay deals with the mechanism of interpretation for legal metaphorical expressions. Firstly, it points out the perspective the cognitive approach induced about legal metaphors; then it suggests that this perspective gains in plausibility when a new bilateral model of language understanding is endorsed. A possible sketch of the meaning-making procedure for legal metaphors, compatible with this new model, is then proposed, and illustrated with some examples built on concepts belonging to the Italian Civil Code. The insights the bilateral (...)
    4 citations
  45. Holding Large Language Models to Account.Ryan Miller - 2023 - In Berndt Müller (ed.), Proceedings of the AISB Convention. Society for the Study of Artificial Intelligence and the Simulation of Behaviour. pp. 7-14.
    If Large Language Models can make real scientific contributions, then they can genuinely use language, be systematically wrong, and be held responsible for their errors. AI models which can make scientific contributions thereby meet the criteria for scientific authorship.
  46. Model and Simulation of Maximum Entropy Phrase Reordering of English Text in Language Learning Machine.Weifang Wu - 2020 - Complexity 2020:1-9.
    This paper proposes a feature extraction algorithm based on the maximum entropy phrase reordering model in statistical machine translation in language learning machines. The algorithm can extract more accurate phrase reordering information, especially the feature information of reversed phrases, which solves the problem of imbalance of feature data during maximum entropy training in the original algorithm, and improves the accuracy of phrase reordering in translation. In the experiment, they were combined with linguistic features such as parts of speech, words, (...)
  47. From Models of God to a Model of Gods: How Whiteheadian Metaphysics Facilitates Western Language Discussion of Divine Multiplicity.Monica A. Coleman - 2007 - Philosophia 35 (3-4):329-340.
    In today’s society, models of God are challenged to account for more than the postmodern context in which Western Christianity finds itself; they should also consider the reality of religious pluralism. Non-monotheistic religions present a particular challenge to Western theological and philosophical God-modeling because they require a model of Gods. This paper uses an African traditional religion as a case study to problematize the effects of monotheism on philosophical models of God. The desire to uphold the image of a singular (...)
  48. Connectionist Models and Linguistic Theory: Investigations of Stress Systems in Language.Prahlad Gupta & David S. Touretzky - 1994 - Cognitive Science 18 (1):1-50.
    We question the widespread assumption that linguistic theory should guide the formulation of mechanistic accounts of human language processing. We develop a pseudo‐linguistic theory for the domain of linguistic stress, based on observation of the learning behavior of a perceptron exposed to a variety of stress patterns. There are significant similarities between our analysis of perceptron stress learning and metrical phonology, the linguistic theory of human stress. Both approaches attempt to identify salient characteristics of the stress systems under examination (...)
    6 citations
  49. A model of language processing and spatial reasoning using skill acquisition to situate action.Scott A. Douglass & John R. Anderson - 2008 - In B. C. Love, K. McRae & V. M. Sloutsky (eds.), Proceedings of the 30th Annual Conference of the Cognitive Science Society. Cognitive Science Society. pp. 2281-2286.
  50. Models of language learning and their implications for social constructionist analyses of scientific belief.Donald T. Campbell - 1989 - In Steve Fuller (ed.), The Cognitive Turn: Sociological and Psychological Perspectives on Science. Kluwer Academic Publishers.
1 — 50 / 995