Results for 'Computational language models'

995 found
  1. Large Language Models and the Reverse Turing Test.Terrence Sejnowski - 2023 - Neural Computation 35 (3):309–342.
    Large Language Models (LLMs) have been transformative. They are pre-trained foundational models that are self-supervised and can be adapted with fine tuning to a wide range of natural language tasks, each of which previously would have required a separate network model. This is one step closer to the extraordinary versatility of human language. GPT-3 and more recently LaMDA can carry on dialogs with humans on many topics after minimal priming with a few examples. However, there (...)
    1 citation
  2. Large Language Models Demonstrate the Potential of Statistical Learning in Language.Pablo Contreras Kallens, Ross Deans Kristensen-McLachlan & Morten H. Christiansen - 2023 - Cognitive Science 47 (3):e13256.
    To what degree can language be acquired from linguistic input alone? This question has vexed scholars for millennia and is still a major focus of debate in the cognitive science of language. The complexity of human language has hampered progress because studies of language–especially those involving computational modeling–have only been able to deal with small fragments of our linguistic skills. We suggest that the most recent generation of Large Language Models (LLMs) might finally (...)
    4 citations
  3. Computational Topic Models for Theological Investigations.Mark Graves - 2022 - Theology and Science 20 (1):69-84.
    Sallie McFague’s theological models construct a tensive relationship between conceptual structures and symbolic, metaphorical language to interpret the defining and elusive aspects of theological phenomena and loci. Computational models of language can extend and formalize the conceptual structures of theological models to develop computer-augmented interpretations of theological texts. Previously unclear is whether computational models can retain the tensive symbolism essential for theological investigation. I demonstrate affirmatively by constructing a computational topic model (...)
  4. Are Language Models More Like Libraries or Like Librarians? Bibliotechnism, the Novel Reference Problem, and the Attitudes of LLMs.Harvey Lederman & Kyle Mahowald - forthcoming - Transactions of the Association for Computational Linguistics.
    Are LLMs cultural technologies like photocopiers or printing presses, which transmit information but cannot create new content? A challenge for this idea, which we call bibliotechnism, is that LLMs generate novel text. We begin with a defense of bibliotechnism, showing how even novel text may inherit its meaning from original human-generated text. We then argue that bibliotechnism faces an independent challenge from examples in which LLMs generate novel reference, using new names to refer to new entities. Such examples could be (...)
  5. Large Language Models: A Historical and Sociocultural Perspective.Eugene Yu Ji - 2024 - Cognitive Science 48 (3):e13430.
    This letter explores the intricate historical and contemporary links between large language models (LLMs) and cognitive science through the lens of information theory, statistical language models, and socioanthropological linguistic theories. The emergence of LLMs highlights the enduring significance of information‐based and statistical learning theories in understanding human communication. These theories, initially proposed in the mid‐20th century, offered a visionary framework for integrating computational science, social sciences, and humanities, which nonetheless was not fully fulfilled at that (...)
  6. Evaluating large language models’ ability to generate interpretive arguments.Zaid Marji & John Licato - forthcoming - Argument and Computation.
    In natural language understanding, a crucial goal is correctly interpreting open-textured phrases. In practice, disagreements over the meanings of open-textured phrases are often resolved through the generation and evaluation of interpretive arguments, arguments designed to support or attack a specific interpretation of an expression within a document. In this paper, we discuss some of our work towards the goal of automatically generating and evaluating interpretive arguments. We have curated a set of rules from the code of ethics of various (...)
  7. AI Enters Public Discourse: a Habermasian Assessment of the Moral Status of Large Language Models.Paolo Monti - 2024 - Ethics and Politics 61 (1):61-80.
    Large Language Models (LLMs) are generative AI systems capable of producing original texts based on inputs about topic and style provided in the form of prompts or questions. The introduction of the outputs of these systems into human discursive practices poses unprecedented moral and political questions. The article articulates an analysis of the moral status of these systems and their interactions with human interlocutors based on the Habermasian theory of communicative action. The analysis explores, among other things, Habermas's (...)
  8. Modern computational models of semantic discovery in natural language.Jan Žižka & Frantisek Darena (eds.) - 2015 - Hershey, PA: Information Science Reference.
    This book compiles and reviews the most prominent linguistic theories into a single source that serves as an essential reference for future solutions to one of the most important challenges of our age.
  9. Large language models in cryptocurrency securities cases: can a GPT model meaningfully assist lawyers?Arianna Trozze, Toby Davies & Bennett Kleinberg - forthcoming - Artificial Intelligence and Law:1-47.
    Large Language Models (LLMs) could be a useful tool for lawyers. However, empirical research on their effectiveness in conducting legal tasks is scant. We study securities cases involving cryptocurrencies as one of numerous contexts where AI could support the legal process, studying GPT-3.5’s legal reasoning and ChatGPT’s legal drafting capabilities. We examine whether a) GPT-3.5 can accurately determine which laws are potentially being violated from a fact pattern, and b) whether there is a difference in juror decision-making based (...)
  10. Getting it right: the limits of fine-tuning large language models.Jacob Browning - 2024 - Ethics and Information Technology 26 (2):1-9.
    The surge in interest in natural language processing in artificial intelligence has led to an explosion of new language models capable of engaging in plausible language use. But ensuring these language models produce honest, helpful, and inoffensive outputs has proved difficult. In this paper, I argue problems of inappropriate content in current, autoregressive language models—such as ChatGPT and Gemini—are inescapable; merely predicting the next word is incompatible with reliably providing appropriate outputs. The (...)
  11. A phenomenology and epistemology of large language models: Transparency, trust, and trustworthiness.Richard Heersmink, Barend de Rooij, María Jimena Clavel Vázquez & Matteo Colombo - forthcoming - Ethics and Information Technology.
    This paper analyses the phenomenology and epistemology of chatbots such as ChatGPT and Bard. The computational architectures underpinning these chatbots are large language models (LLMs), which are generative AI (Artificial Intelligence) systems trained on a massive dataset of text extracted from the Web. We conceptualise these LLMs as multifunctional computational cognitive artifacts, used for various cognitive tasks such as translating, summarizing, answering questions, information-seeking, and much more. Phenomenologically, LLMs can be experienced as a “quasi-other”; when that (...)
  12. A Computational Cognitive Model of Syntactic Priming.David Reitter, Frank Keller & Johanna D. Moore - 2011 - Cognitive Science 35 (4):587-637.
    The psycholinguistic literature has identified two syntactic adaptation effects in language production: rapidly decaying short-term priming and long-lasting adaptation. To explain both effects, we present an ACT-R model of syntactic priming based on a wide-coverage, lexicalized syntactic theory that explains priming as facilitation of lexical access. In this model, two well-established ACT-R mechanisms, base-level learning and spreading activation, account for long-term adaptation and short-term priming, respectively. Our model simulates incremental language production and in a series of modeling studies, (...)
    34 citations
  13. Combining prompt-based language models and weak supervision for labeling named entity recognition on legal documents.Vitor Oliveira, Gabriel Nogueira, Thiago Faleiros & Ricardo Marcacini - forthcoming - Artificial Intelligence and Law:1-21.
    Named entity recognition (NER) is a very relevant task for text information retrieval in natural language processing (NLP) problems. Most recent state-of-the-art NER methods require humans to annotate and provide useful data for model training. However, using human power to identify, circumscribe and label entities manually can be very expensive in terms of time, money, and effort. This paper investigates the use of prompt-based language models (OpenAI’s GPT-3) and weak supervision in the legal domain. We apply both (...)
  14. Bigger Isn’t Better: The Ethical and Scientific Vices of Extra-Large Datasets in Language Models.Trystan S. Goetze & Darren Abramson - 2021 - WebSci '21: Proceedings of the 13th Annual ACM Web Science Conference (Companion Volume).
    The use of language models in Web applications and other areas of computing and business has grown significantly over the last five years. One reason for this growth is the improvement in performance of language models on a number of benchmarks — but a side effect of these advances has been the adoption of a “bigger is always better” paradigm when it comes to the size of training, testing, and challenge datasets. Drawing on previous criticisms of (...)
  15. Can AI Language Models Improve Human Sciences Research? A Phenomenological Analysis and Future Directions.Marika D'Oria - 2023 - ENCYCLOPAIDEIA 27 (66):77-92.
    The article explores the use of the “ChatGPT” artificial intelligence language model in the Human Sciences field. ChatGPT uses natural language processing techniques to imitate human language and engage in artificial conversations. While the platform has gained attention from the scientific community, opinions on its usage are divided. The article presents some conversations with ChatGPT to examine ethical, relational and linguistic issues related to human-computer interaction (HCI) and assess its potential for Human Sciences research. The interaction with (...)
  16. Embodied human language models vs. Large Language Models, or why Artificial Intelligence cannot explain the modal be able to.Sergio Torres-Martínez - 2024 - Biosemiotics 17 (1):185-209.
    This paper explores the challenges posed by the rapid advancement of artificial intelligence specifically Large Language Models (LLMs). I show that traditional linguistic theories and corpus studies are being outpaced by LLMs’ computational sophistication and low perplexity levels. In order to address these challenges, I suggest a focus on language as a cognitive tool shaped by embodied-environmental imperatives in the context of Agentive Cognitive Construction Grammar. To that end, I introduce an Embodied Human Language Model (...)
  17. Models of computation and formal languages.Ralph Gregory Taylor - 1998 - New York: Oxford University Press.
    This unique book presents a comprehensive and rigorous treatment of the theory of computability which is introductory yet self-contained. It takes a novel approach by looking at the subject using computation models rather than a limitation orientation, and is the first book of its kind to include software. Accompanying software simulations of almost all computational models are available for use in conjunction with the text, and numerous examples are provided on disk in a user-friendly format. Its applications (...)
    3 citations
  18. Personhood and AI: Why large language models don’t understand us.Jacob Browning - forthcoming - AI and Society:1-8.
    Recent artificial intelligence advances, especially those of large language models (LLMs), have increasingly shown glimpses of human-like intelligence. This has led to bold claims that these systems are no longer a mere “it” but now a “who,” a kind of person deserving respect. In this paper, I argue that this view depends on a Cartesian account of personhood, on which identifying someone as a person is based on their cognitive sophistication and ability to address common-sense reasoning problems. I (...)
    1 citation
  19. Scrutinizing the foundations: could large language models be solipsistic?Andreea Esanu - 2024 - Synthese 203 (5):1-20.
    In artificial intelligence literature, “delusions” are characterized as the generation of unfaithful output from reliable source content. There is an extensive literature on computer-generated delusions, ranging from visual hallucinations, like the production of nonsensical images in Computer Vision, to nonsensical text generated by (natural) language models, but this literature is predominantly taxonomic. In a recent research paper, however, a group of scientists from DeepMind successfully presented a formal treatment of an entire class of delusions in generative AI (i.e., models based on a transformer architecture, both with and without RLHF—reinforcement learning with human feedback, such as BERT, GPT-3 or the more recent GPT-3.5), referred to as auto-suggestive delusions. Auto-suggestive delusions are not mere unfaithful output, but are self-induced by the transformer models themselves. Typically, these delusions have been subsumed under the concept of exposure bias, but exposure bias alone does not elucidate their nature. In order to address their nature, I will introduce a formal framework that clarifies the probabilistic delusions capable of explaining exposure bias in a broad manner. This will serve as the foundation for exploring auto-suggestive delusions in language models. Next, an examination of self- or auto-suggestive delusions will be undertaken, by drawing an analogy with the rule-following problematic from the philosophy of mind and language. Finally, I will argue that this comprehensive approach leads to the suggestion that transformers, large language models in particular, may develop in a manner that touches upon solipsism and the emergence of a private language, in a weak sense.
  20. Predicting Age of Acquisition for Children's Early Vocabulary in Five Languages Using Language Model Surprisal.Eva Portelance, Yuguang Duan, Michael C. Frank & Gary Lupyan - 2023 - Cognitive Science 47 (9):e13334.
    What makes a word easy to learn? Early‐learned words are frequent and tend to name concrete referents. But words typically do not occur in isolation. Some words are predictable from their contexts; others are less so. Here, we investigate whether predictability relates to when children start producing different words (age of acquisition; AoA). We operationalized predictability in terms of a word's surprisal in child‐directed speech, computed using n‐gram and long short‐term memory (LSTM) language models. Predictability derived from LSTMs was generally (...)
    1 citation
  21. A context-based computational model of language acquisition by infants and children.Steven Walczak - 2002 - Foundations of Science 7 (4):393-411.
    This research attempts to understand how children learn to use language. Instead of using syntax-based grammar rules to model the differences between children's language and adult language, as has been done in the past, a new model is proposed. In the new research model, children acquire language by listening to the examples of speech that they hear in their environment and subsequently use the speech examples that have been previously heard in similar contextual situations. A computer model is generated to simulate this new model of language acquisition. (...)
    1 citation
  22. A computational model of the cultural co-evolution of language and mindreading.Marieke Woensdregt, Chris Cummins & Kenny Smith - 2020 - Synthese 199 (1-2):1347-1385.
    Several evolutionary accounts of human social cognition posit that language has co-evolved with the sophisticated mindreading abilities of modern humans. It has also been argued that these mindreading abilities are the product of cultural, rather than biological, evolution. Taken together, these claims suggest that the evolution of language has played an important role in the cultural evolution of human social cognition. Here we present a new computational model which formalises the assumptions that underlie this hypothesis, in order (...)
    3 citations
  23. The Importance of Understanding Language in Large Language Models.Alaa Youssef, Samantha Stein, Justin Clapp & David Magnus - 2023 - American Journal of Bioethics 23 (10):6-7.
    Recent advancements in large language models (LLMs) have ushered in a transformative phase in artificial intelligence (AI). Unlike conventional AI, LLMs excel in facilitating fluid human–computer d...
  24. InstructPatentGPT: training patent language models to follow instructions with human feedback.Jieh-Sheng Lee - forthcoming - Artificial Intelligence and Law:1-44.
    In this research, patent prosecution is conceptualized as a system of reinforcement learning from human feedback. The objective of the system is to increase the likelihood for a language model to generate patent claims that have a higher chance of being granted. To showcase the controllability of the language model, the system learns from granted patents and pre-grant applications with different rewards. The status of “granted” and “pre-grant” are perceived as labeled human feedback implicitly. In addition, specific to (...)
  25. Canalization of Language Structure From Environmental Constraints: A Computational Model of Word Learning From Multiple Cues.Padraic Monaghan - 2016 - Topics in Cognitive Science 8 (4).
    There is substantial variation in language experience, yet there is surprising similarity in the language structure acquired. Constraints on language structure may be external modulators that result in this canalization of language structure, or else they may derive from the broader, communicative environment in which language is acquired. In this paper, the latter perspective is tested for its adequacy in explaining robustness of language learning to environmental variation. A computational model of word learning (...)
    4 citations
  26. Cross-genre argument mining: Can language models automatically fill in missing discourse markers?Gil Rocha, Henrique Lopes Cardoso, Jonas Belouadi & Steffen Eger - forthcoming - Argument and Computation:1-41.
    Available corpora for Argument Mining differ along several axes, and one of the key differences is the presence (or absence) of discourse markers to signal argumentative content. Exploring effective ways to use discourse markers has received wide attention in various discourse parsing tasks, from which it is well-known that discourse markers are strong indicators of discourse relations. To improve the robustness of Argument Mining systems across different genres, we propose to automatically augment a given text with discourse markers such that (...)
  27. Canalization of Language Structure From Environmental Constraints: A Computational Model of Word Learning From Multiple Cues.Padraic Monaghan - 2017 - Topics in Cognitive Science 9 (1):21-34.
    There is substantial variation in language experience, yet there is surprising similarity in the language structure acquired. Constraints on language structure may be external modulators that result in this canalization of language structure, or else they may derive from the broader, communicative environment in which language is acquired. In this paper, the latter perspective is tested for its adequacy in explaining robustness of language learning to environmental variation. A computational model of word learning (...)
    4 citations
  28. Playing Games with Ais: The Limits of GPT-3 and Similar Large Language Models.Adam Sobieszek & Tadeusz Price - 2022 - Minds and Machines 32 (2):341-364.
    This article contributes to the debate around the abilities of large language models such as GPT-3, dealing with: firstly, evaluating how well GPT does in the Turing Test, secondly the limits of such models, especially their tendency to generate falsehoods, and thirdly the social consequences of the problems these models have with truth-telling. We start by formalising the recently proposed notion of reversible questions, which Floridi & Chiriatti propose allow one to ‘identify the nature of the (...)
    5 citations
  29. Computer Models of Mind: Computational Approaches in Theoretical Psychology.Margaret A. Boden - 1988 - Cambridge University Press.
    What is the mind? How does it work? How does it influence behavior? Some psychologists hope to answer such questions in terms of concepts drawn from computer science and artificial intelligence. They test their theories by modeling mental processes in computers. This book shows how computer models are used to study many psychological phenomena, including vision, language, reasoning, and learning. It also shows that computer modeling involves differing theoretical approaches. Computational psychologists disagree about some basic questions. For instance, (...)
    67 citations
  30. Automatic semantic interpretation: a computer model of understanding natural language.Jan van Bakel - 1984 - Cinnaminson, U.S.A.: Foris Publications.
  31. The great Transformer: Examining the role of large language models in the political economy of AI.Wiebke Denkena & Dieuwertje Luitse - 2021 - Big Data and Society 8 (2).
    In recent years, AI research has become more and more computationally demanding. In natural language processing, this tendency is reflected in the emergence of large language models like GPT-3. These powerful neural network-based models can be used for a range of NLP tasks and their language generation capacities have become so sophisticated that it can be very difficult to distinguish their outputs from human language. LLMs have raised concerns over their demonstrable biases, heavy environmental (...)
    1 citation
  32. Truth machines: synthesizing veracity in AI language models.Luke Munn, Liam Magee & Vanicka Arora - forthcoming - AI and Society:1-15.
    As AI technologies are rolled out into healthcare, academia, human resources, law, and a multitude of other domains, they become de-facto arbiters of truth. But truth is highly contested, with many different definitions and approaches. This article discusses the struggle for truth in AI systems and the general responses to date. It then investigates the production of truth in InstructGPT, a large language model, highlighting how data harvesting, model architectures, and social feedback mechanisms weave together disparate understandings of veracity. (...)
    1 citation
  33. Friend or foe? Exploring the implications of large language models on the science system.Benedikt Fecher, Marcel Hebing, Melissa Laufer, Jörg Pohle & Fabian Sofsky - forthcoming - AI and Society:1-13.
    The advent of ChatGPT by OpenAI has prompted extensive discourse on its potential implications for science and higher education. While the impact on education has been a primary focus, there is limited empirical research on the effects of large language models (LLMs) and LLM-based chatbots on science and scientific practice. To investigate this further, we conducted a Delphi study involving 72 researchers specializing in AI and digitization. The study focused on applications and limitations of LLMs, their effects on (...)
    1 citation
  34. Computational Investigations of Multiword Chunks in Language Learning.Stewart M. McCauley & Morten H. Christiansen - 2017 - Topics in Cognitive Science 9 (3):637-652.
    Second-language learners rarely arrive at native proficiency in a number of linguistic domains, including morphological and syntactic processing. Previous approaches to understanding the different outcomes of first- versus second-language learning have focused on cognitive and neural factors. In contrast, we explore the possibility that children and adults may rely on different linguistic units throughout the course of language learning, with specific focus on the granularity of those units. Following recent psycholinguistic evidence for the role of multiword chunks (...)
    8 citations
  35. Model-Based Reasoning in Science and Technology: Inferential Models for Logic, Language, Cognition and Computation.Matthieu Fontaine, Cristina Barés-Gómez, Francisco Salguero-Lamillar, Lorenzo Magnani & Ángel Nepomuceno-Fernández (eds.) - 2019 - Springer Verlag.
    This book discusses how scientific and other types of cognition make use of models, abduction, and explanatory reasoning in order to produce important and innovative changes in theories and concepts. Gathering revised contributions presented at the international conference on Model-Based Reasoning, held on October 24–26 2018 in Seville, Spain, the book is divided into three main parts. The first focuses on models, reasoning, and representation. It highlights key theoretical concepts from an applied perspective, and addresses issues concerning information (...)
  36. Bringing legal knowledge to the public by constructing a legal question bank using large-scale pre-trained language model.Mingruo Yuan, Ben Kao, Tien-Hsuan Wu, Michael M. K. Cheung, Henry W. H. Chan, Anne S. Y. Cheung, Felix W. H. Chan & Yongxi Chen - forthcoming - Artificial Intelligence and Law:1-37.
    Access to legal information is fundamental to access to justice. Yet accessibility refers not only to making legal documents available to the public, but also rendering legal information comprehensible to them. A vexing problem in bringing legal information to the public is how to turn formal legal documents such as legislation and judgments, which are often highly technical, to easily navigable and comprehensible knowledge to those without legal education. In this study, we formulate a three-step approach for bringing legal knowledge (...)
  37. The extimate core of understanding: absolute metaphors, psychosis and large language models.Marc Heimann & Anne-Friederike Hübener - forthcoming - AI and Society:1-12.
    This paper delves into the striking parallels between the linguistic patterns of Large Language Models (LLMs) and the concepts of psychosis in Lacanian psychoanalysis. Lacanian theory, with its focus on the formal and logical underpinnings of psychosis, provides a compelling lens to juxtapose human cognition and AI mechanisms. LLMs, such as GPT-4, appear to replicate the intricate metaphorical and metonymical frameworks inherent in human language. Although grounded in mathematical logic and probabilistic analysis, the outputs of LLMs echo (...)
  38. Psychological and Computational Models of Language Comprehension: In Defense of the Psychological Reality of Syntax.David Pereplyotchik - 2011 - Croatian Journal of Philosophy 11 (1):31-72.
    In this paper, I argue for a modified version of what Devitt calls the Representational Thesis. According to RT, syntactic rules or principles are psychologically real, in the sense that they are represented in the mind/brain of every linguistically competent speaker/hearer. I present a range of behavioral and neurophysiological evidence for the claim that the human sentence processing mechanism constructs mental representations of the syntactic properties of linguistic stimuli. I then survey a range of psychologically plausible computational models (...)
    4 citations
  39. Bringing order into the realm of Transformer-based language models for artificial intelligence and law.Candida M. Greco & Andrea Tagarelli - forthcoming - Artificial Intelligence and Law:1-148.
    Transformer-based language models (TLMs) have widely been recognized to be a cutting-edge technology for the successful development of deep-learning-based solutions to problems and applications that require natural language processing and understanding. Like for other textual domains, TLMs have indeed pushed the state-of-the-art of AI approaches for many tasks of interest in the legal domain. Despite the first Transformer model being proposed about six years ago, there has been a rapid progress of this technology at an unprecedented rate, (...)
  40. Computer models of thought and language.Leonard Uhr - 1975 - Artificial Intelligence 6 (3):289-292.
  41. Language mediated mentalization: A proposed model.Yair Neuman - 2019 - Semiotica 2019 (227):261-272.
    Mentalization describes the process through which we understand the mental states of oneself and others. In this paper, I present a computational semiotic model of mentalization and illustrate it through a worked-out example. The model draws on classical semiotic ideas, such as abductive inference and hypostatic abstraction, but pours them into new ideas and tools from natural language processing, machine learning, and neural networks, to form a novel model of language-mediated-mentalization.
  42. A computer model of child language learning.Mallory Selfridge - 1986 - Artificial Intelligence 29 (2):171-216.
    1 citation
  43. Computational models of language processing.Edward P. Stabler - 1986 - Behavioral and Brain Sciences 9 (3):550-551.
  44. Computational Learning Theory and Language Acquisition.Alexander Clark - unknown
    Computational learning theory explores the limits of learnability. Studying language acquisition from this perspective involves identifying classes of languages that are learnable from the available data, within the limits of time and computational resources available to the learner. Different models of learning can yield radically different learnability results, where these depend on the assumptions of the model about the nature of the learning process, and the data, time, and resources that learners have access to. To the (...)
     
    2 citations
  45. Computational complexity of some Ramsey quantifiers in finite models.Marcin Mostowski & Jakub Szymanik - 2007 - Bulletin of Symbolic Logic 13:281-282.
    The problem of computational complexity of semantics for some natural language constructions – considered in [M. Mostowski, D. Wojtyniak 2004] – motivates an interest in complexity of Ramsey quantifiers in finite models. In general, a sentence with a Ramsey quantifier R of the form Rx, y H(x, y) is interpreted as ∃A(A is big relative to the universe ∧ A² ⊆ H). In the paper cited the problem of the complexity of the Hintikka sentence is reduced to the (...)
    3 citations
  46. Bridging computational, formal and psycholinguistic approaches to language.Shimon Edelman - unknown
    We compare our model of unsupervised learning of linguistic structures, ADIOS [1, 2, 3], to some recent work in computational linguistics and in grammar theory. Our approach resembles the Construction Grammar in its general philosophy (e.g., in its reliance on structural generalizations rather than on syntax projected by the lexicon, as in the current generative theories), and the Tree Adjoining Grammar in its computational characteristics (e.g., in its apparent affinity with Mildly Context Sensitive Languages). The representations learned by (...)
    5 citations
  47. A Probabilistic Computational Model of Cross-Situational Word Learning.Afsaneh Fazly, Afra Alishahi & Suzanne Stevenson - 2010 - Cognitive Science 34 (6):1017-1063.
    Words are the essence of communication: They are the building blocks of any language. Learning the meaning of words is thus one of the most important aspects of language acquisition: Children must first learn words before they can combine them into complex utterances. Many theories have been developed to explain the impressive efficiency of young children in acquiring the vocabulary of their language, as well as the developmental patterns observed in the course of lexical acquisition. A major (...)
    26 citations
  48. Computational Models and Virtual Reality. New Perspectives of Research in Chemistry.Klaus Mainzer - 1999 - Hyle 5 (2):135-144.
    Molecular models are typical topics of chemical research depending on the technical standards of observation, computation, and representation. Mathematically, molecular structures have been represented by means of graph theory, topology, differential equations, and numerical procedures. With the increasing capabilities of computer networks, computational models and computer-assisted visualization become an essential part of chemical research. Object-oriented programming languages create a virtual reality of chemical structures opening new avenues of exploration and collaboration in chemistry. From an epistemic point of (...)
  49. Introducing Meta‐analysis in the Evaluation of Computational Models of Infant Language Development.María Andrea Cruz Blandón, Alejandrina Cristia & Okko Räsänen - 2023 - Cognitive Science 47 (7):e13307.
    Computational models of child language development can help us understand the cognitive underpinnings of the language learning process, which occurs along several linguistic levels at once (e.g., prosodic and phonological). However, in light of the replication crisis, modelers face the challenge of selecting representative and consolidated infant data. Thus, it is desirable to have evaluation methodologies that could account for robust empirical reference data, across multiple infant capabilities. Moreover, there is a need for practices that can (...)
  50. The Computer Revolution in Philosophy: Philosophy, Science, and Models of Mind.Aaron Sloman - 1978 - Hassocks UK: Harvester Press.
    Extract from Hofstadter's review in the Bulletin of the American Mathematical Society: http://www.ams.org/journals/bull/1980-02-02/S0273-0979-1980-14752-7/S0273-0979-1980-14752-7.pdf "Aaron Sloman is a man who is convinced that most philosophers and many other students of mind are in dire need of being convinced that there has been a revolution in that field happening right under their noses, and that they had better quickly inform themselves. The revolution is called "Artificial Intelligence" (AI), and Sloman attempts to impart to others the "enlightenment" which he clearly regrets not having (...)
    141 citations
1 — 50 / 995