Results for 'Artificial language learning'

999 found
  1. Sentence processing in an artificial language: Learning and using combinatorial constraints. Michael S. Amato & Maryellen C. MacDonald - 2010 - Cognition 116 (1):143-148.
    11 citations
  2. A Bayesian Model of Biases in Artificial Language Learning: The Case of a Word‐Order Universal. Jennifer Culbertson & Paul Smolensky - 2012 - Cognitive Science 36 (8):1468-1498.
    In this article, we develop a hierarchical Bayesian model of learning in a general type of artificial language learning experiment in which learners are exposed to a mixture of grammars representing the variation present in real learners’ input, particularly at times of language change. The modeling goal is to formalize and quantify hypothesized learning biases. The test case is an experiment (Culbertson, Smolensky, & Legendre, 2012) targeting the learning of word‐order patterns in the nominal (...)
    11 citations
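The kind of model this abstract describes can be sketched in miniature. The snippet below is our own illustration, not Culbertson and Smolensky's actual hierarchical model: it assumes a learner who tracks the probability of one word order under a Beta prior, so that a flat prior yields probability matching while a prior skewed toward the favored order pulls production away from a 70% input mixture, mimicking a learning bias.

```python
def posterior_mean(k: int, n: int, alpha: float, beta: float) -> float:
    """Posterior mean of the rate of one word order under a Beta(alpha, beta)
    prior, after observing k uses of that order in n exposures."""
    return (k + alpha) / (n + alpha + beta)

# A flat prior roughly probability-matches a 70% input mixture...
matching = posterior_mean(k=7, n=10, alpha=1.0, beta=1.0)
# ...while a prior skewed toward the favored order pulls the estimate
# beyond the input proportion, a toy stand-in for a typological bias.
biased = posterior_mean(k=7, n=10, alpha=8.0, beta=2.0)
print(round(matching, 3), round(biased, 3))
# → 0.667 0.75
```

The prior's shape parameters (here 8.0 and 2.0, invented for the example) play the role of the hypothesized bias that the real model formalizes and fits to behavioral data.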
  3. The metamorphosis of the statistical segmentation output: Lexicalization during artificial language learning. Tânia Fernandes, Régine Kolinsky & Paulo Ventura - 2009 - Cognition 112 (3):349-366.
    5 citations
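Statistical segmentation of the kind probed here is usually modeled with forward transitional probabilities between syllables. The sketch below is a generic textbook illustration, not this paper's model, using invented trisyllabic nonsense words: boundaries are posited wherever the transitional probability dips below a threshold.

```python
from collections import Counter

def transitional_probs(syllables):
    """Forward TP(x -> y) = count(x followed by y) / count(x) over the stream."""
    pair_counts = Counter(zip(syllables, syllables[1:]))
    first_counts = Counter(syllables[:-1])
    return {pair: n / first_counts[pair[0]] for pair, n in pair_counts.items()}

def segment(syllables, threshold=0.75):
    """Posit a word boundary wherever the forward TP dips below `threshold`."""
    tps = transitional_probs(syllables)
    words, current = [], [syllables[0]]
    for a, b in zip(syllables, syllables[1:]):
        if tps[(a, b)] < threshold:
            words.append("".join(current))
            current = []
        current.append(b)
    words.append("".join(current))
    return words

# Three invented nonsense words, concatenated in varied order without pauses:
# within-word TPs are 1.0, between-word TPs at most 2/3.
lexicon = [["tu", "pi", "ro"], ["go", "la", "bu"], ["bi", "da", "ku"]]
order = [1, 2, 0, 1, 0, 2, 1, 2, 0]
stream = [syl for i in order for syl in lexicon[i]]
print(segment(stream))
# → ['golabu', 'bidaku', 'tupiro', 'golabu', 'tupiro', 'bidaku', 'golabu', 'bidaku', 'tupiro']
```

The paper's question is what happens *after* this output stage, i.e., whether the extracted chunks become lexicalized; the sketch only covers the segmentation step itself.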
  4. Cross-linguistic frequency and the learnability of semantics: Artificial language learning studies of evidentiality. Dionysia Saratsli, Stefan Bartell & Anna Papafragou - 2020 - Cognition 197 (C):104194.
    1 citation
  5. 12. The use of formal language theory in studies of artificial language learning: A proposal for distinguishing the differences between human and nonhuman animal learners. James Rogers & Marc D. Hauser - 2010 - In Harry van der Hulst (ed.), Recursion and Human Language. De Gruyter Mouton. pp. 213-232.
    3 citations
  6. The Relationship Between Artificial and Second Language Learning. Marc Ettlinger, Kara Morgan-Short, Mandy Faretta-Stutenberg & Patrick C. M. Wong - 2016 - Cognitive Science 40 (4):822-847.
    Artificial language learning (ALL) experiments have become an important tool in exploring principles of language and language learning. A persistent question in all of this work, however, is whether ALL engages the linguistic system and whether ALL studies are ecologically valid assessments of natural language ability. In the present study, we considered these questions by examining the relationship between performance in an ALL task and second language learning ability. Participants enrolled in a (...)
    5 citations
  7. Input Complexity Affects Long-Term Retention of Statistically Learned Regularities in an Artificial Language Learning Task. Ethan Jost, Katherine Brill-Schuetz, Kara Morgan-Short & Morten H. Christiansen - 2019 - Frontiers in Human Neuroscience 13.
  8. Influence of Perceptual Saliency Hierarchy on Learning of Language Structures: An Artificial Language Learning Experiment. Tao Gong, Yau W. Lam & Lan Shuai - 2016 - Frontiers in Psychology 7.
  9. All Together Now: Concurrent Learning of Multiple Structures in an Artificial Language. Alexa R. Romberg & Jenny R. Saffran - 2013 - Cognitive Science 37 (7):1290-1320.
    Natural languages contain many layers of sequential structure, from the distribution of phonemes within words to the distribution of phrases within utterances. However, most research modeling language acquisition using artificial languages has focused on only one type of distributional structure at a time. In two experiments, we investigated adult learning of an artificial language that contains dependencies between both adjacent and non-adjacent words. We found that learners rapidly acquired both types of regularities and that the (...)
    14 citations
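The two kinds of regularity this abstract mentions can be illustrated with conditional probabilities computed at different lags. This is our own toy sketch with invented aXb items, not the study's materials, and concatenating utterances into one flat token stream is a deliberate simplification.

```python
from collections import Counter

def conditional_probs(tokens, lag=1):
    """P(tokens[i + lag] | tokens[i]), estimated from co-occurrence counts."""
    pairs = Counter(zip(tokens, tokens[lag:]))
    firsts = Counter(tokens[:-lag])
    return {pair: n / firsts[pair[0]] for pair, n in pairs.items()}

# Invented aXb frames: "pel" predicts "rud" two positions later,
# whatever the middle word is, while adjacent pairs vary.
utterances = [["pel", "wadim", "rud"], ["pel", "kicey", "rud"],
              ["vot", "wadim", "jic"], ["vot", "kicey", "jic"]]
tokens = [w for u in utterances for w in u]

adjacent = conditional_probs(tokens, lag=1)      # adjacent dependencies
nonadjacent = conditional_probs(tokens, lag=2)   # non-adjacent dependencies
print(adjacent[("pel", "wadim")], nonadjacent[("pel", "rud")])
# → 0.5 1.0
```

The point of the contrast: the lag-1 statistic is uninformative here (0.5) while the lag-2 statistic is perfect (1.0), so a learner tracking only adjacent co-occurrences would miss the frame.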
  10. Language Learning and Control in Monolinguals and Bilinguals. James Bartolotti & Viorica Marian - 2012 - Cognitive Science 36 (6):1129-1147.
    Parallel language activation in bilinguals leads to competition between languages. Experience managing this interference may aid novel language learning by improving the ability to suppress competition from known languages. To investigate the effect of bilingualism on the ability to control native-language interference, monolinguals and bilinguals were taught an artificial language designed to elicit between-language competition. Partial activation of interlingual competitors was assessed with eye-tracking and mouse-tracking during a word recognition task in the novel (...)
    8 citations
  11. How Many Mechanisms Are Needed to Analyze Speech? A Connectionist Simulation of Structural Rule Learning in Artificial Language Acquisition. Aarre Laakso & Paco Calvo - 2011 - Cognitive Science 35 (7):1243-1281.
    Some empirical evidence in the artificial language acquisition literature has been taken to suggest that statistical learning mechanisms are insufficient for extracting structural information from an artificial language. According to the more than one mechanism (MOM) hypothesis, at least two mechanisms are required in order to acquire language from speech: (a) a statistical mechanism for speech segmentation; and (b) an additional rule-following mechanism in order to induce grammatical regularities. In this article, we present a (...)
    2 citations
  12. Exploiting Multiple Sources of Information in Learning an Artificial Language: Human Data and Modeling. Pierre Perruchet & Barbara Tillmann - 2010 - Cognitive Science 34 (2):255-285.
    This study investigates the joint influences of three factors on the discovery of new word‐like units in a continuous artificial speech stream: the statistical structure of the ongoing input, the initial word‐likeness of parts of the speech flow, and the contextual information provided by the earlier emergence of other word‐like units. Results of an experiment conducted with adult participants show that these sources of information have strong and interactive influences on word discovery. The authors then examine the ability of (...)
    5 citations
  13. Rules and similarity processes in artificial grammar and natural second language learning: What is the “default”? Peter Robinson - 2005 - Behavioral and Brain Sciences 28 (1):32-33.
    Are rules processes or similarity processes the default for acquisition of grammatical knowledge during natural second language acquisition? Whereas Pothos argues similarity processes are the default in the many areas he reviews, including artificial grammar learning and first language development, I suggest, citing evidence, that in second language acquisition of grammatical morphology “rules processes” may be the default.
    1 citation
  14. Co‐Occurrence, Extension, and Social Salience: The Emergence of Indexicality in an Artificial Language. Aini Li & Gareth Roberts - 2023 - Cognitive Science 47 (5):e13290.
    We investigated the emergence of sociolinguistic indexicality using an artificial-language-learning paradigm. Sociolinguistic indexicality involves the association of linguistic variants with nonlinguistic social or contextual features. Any linguistic variant can acquire “constellations” of such indexical meanings, though they also exhibit an ordering, with first-order indices associated with particular speaker groups and higher-order indices targeting stereotypical attributes of those speakers. Much natural-language research has been conducted on this phenomenon, but little experimental work has focused on how indexicality emerges. (...)
    1 citation
  15. Experience With a Linguistic Variant Affects the Acquisition of Its Sociolinguistic Meaning: An Alien‐Language Learning Experiment. Wei Lai, Péter Rácz & Gareth Roberts - 2020 - Cognitive Science 44 (4):e12832.
    How do speakers learn the social meaning of different linguistic variants, and what factors influence how likely a particular social–linguistic association is to be learned? It has been argued that the social meaning of more salient variants should be learned faster, and that learners' pre‐existing experience of a variant will influence its salience. In this paper, we report two artificial language learning experiments investigating this. Each experiment involved two language learning stages followed by a test. The first stage (...)
    2 citations
  16. Under What Conditions Can Recursion Be Learned? Effects of Starting Small in Artificial Grammar Learning of Center‐Embedded Structure. Fenna H. Poletiek, Christopher M. Conway, Michelle R. Ellefson, Jun Lai, Bruno R. Bocanegra & Morten H. Christiansen - 2018 - Cognitive Science 42 (8):2855-2889.
    It has been suggested that external and/or internal limitations paradoxically may lead to superior learning, that is, the concepts of starting small and less is more (Elman; Newport). In this paper, we explore the type of incremental ordering during training that might help learning, and what mechanism explains this facilitation. We report four artificial grammar learning experiments with human participants. In Experiments 1a and 1b we found a beneficial effect of starting small using two (...)
    2 citations
  17. Evolutionary consequences of language learning. Partha Niyogi & Robert C. Berwick - 1997 - Linguistics and Philosophy 20 (6):697-719.
    Linguists' intuitions about language change can be captured by a dynamical systems model derived from the dynamics of language acquisition. Rather than having to posit a separate model for diachronic change, as has sometimes been done by drawing on assumptions from population biology (cf. Cavalli-Sforza and Feldman, 1973; 1981; Kroch, 1990), this new model dispenses with these independent assumptions by showing how the behavior of individual language learners leads to emergent, global population characteristics of linguistic communities over several generations. As the simplest case, we formalize (...)
    8 citations
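The dynamical-systems idea can be made concrete with a one-dimensional toy map. This is our own illustration, loosely in the spirit of such models rather than Niyogi and Berwick's actual formalization: assume each child samples n utterances from the parental population and adopts grammar G1 on hearing a majority of G1 utterances, so the population fraction p of G1 speakers is iterated across generations.

```python
from math import comb

def next_fraction(p, n=5):
    """Expected share of learners adopting G1 when each learner samples n
    utterances and adopts G1 on a majority of G1 utterances (n odd)."""
    return sum(comb(n, k) * p**k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

# A population starting slightly above the unstable fixed point at 0.5
# is driven to all-G1; below 0.5 it would be driven to extinction of G1.
p = 0.6
for _ in range(20):
    p = next_fraction(p)
print(round(p, 6))
# → 1.0
```

The map has stable fixed points at 0 and 1 and an unstable one at 0.5, which is the kind of emergent population-level behavior the abstract refers to: individual learning dynamics yielding global trajectories of change.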
  18. Predictability and Variation in Language Are Differentially Affected by Learning and Production. Aislinn Keogh, Simon Kirby & Jennifer Culbertson - 2024 - Cognitive Science 48 (4):e13435.
    General principles of human cognition can help to explain why languages are more likely to have certain characteristics than others: structures that are difficult to process or produce will tend to be lost over time. One aspect of cognition that is implicated in language use is working memory—the component of short‐term memory used for temporary storage and manipulation of information. In this study, we consider the relationship between working memory and regularization of linguistic variation. Regularization is a well‐documented process (...)
  19. Relationships Between Language Structure and Language Learning: The Suffixing Preference and Grammatical Categorization. Michelle C. St Clair, Padraic Monaghan & Michael Ramscar - 2009 - Cognitive Science 33 (7):1317-1329.
    It is a reasonable assumption that universal properties of natural languages are not accidental. They occur either because they are underwritten by genetic code, because they assist in language processing or language learning, or due to some combination of the two. In this paper we investigate one such language universal: the suffixing preference across the world’s languages, whereby inflections tend to be added to the end of words. A corpus analysis of child‐directed speech in English found (...)
    23 citations
  20. Drift as a Driver of Language Change: An Artificial Language Experiment. Rafael Ventura, Joshua B. Plotkin & Gareth Roberts - 2022 - Cognitive Science 46 (9):e13197.
    Over half a century ago, George Zipf observed that more frequent words tend to be older. Corpus studies since then have confirmed this pattern, with more frequent words being replaced and regularized less often than less frequent words. Two main hypotheses have been proposed to explain this: that frequent words change less because selection against innovation is stronger at higher frequencies, or that they change less because stochastic drift is stronger at lower frequencies. Here, we report the first experimental test (...)
  21. Behavioral and Imaging Studies of Infant Artificial Grammar Learning. Judit Gervain, Irene de la Cruz-Pavía & LouAnn Gerken - 2020 - Topics in Cognitive Science 12 (3):815-827.
    Gervain et al. discuss both behavioral and neurophysiological AGL studies that investigate rule and structure learning processes in infants. The paper provides an overview of all the major AGL paradigms used to date to investigate infant learning abilities at the level of morpho‐phonology and syntax from a very early age onwards. Gervain et al. also discuss the implications of the results for a general theory of natural language acquisition.
    2 citations
  22. Five Ways in Which Computational Modeling Can Help Advance Cognitive Science: Lessons From Artificial Grammar Learning. Willem Zuidema, Robert M. French, Raquel G. Alhama, Kevin Ellis, Timothy J. O'Donnell, Tim Sainburg & Timothy Q. Gentner - 2020 - Topics in Cognitive Science 12 (3):925-941.
    Zuidema et al. illustrate how empirical AGL studies can benefit from computational models and techniques. Computational models can help in clarifying theories, and thus in delineating research questions, but also in facilitating experimental design, stimulus generation, and data analysis. The authors show, with a series of examples, how computational modeling can be integrated with empirical AGL approaches, and how model selection techniques can indicate the most likely model to explain experimental outcomes.
    1 citation
  23. Behavioral and Imaging Studies of Infant Artificial Grammar Learning. Judit Gervain, Irene de la Cruz-Pavía & LouAnn Gerken - 2018 - Topics in Cognitive Science 12 (3):815-827.
    Gervain et al. discuss both behavioral and neurophysiological AGL studies that investigate rule and structure learning processes in infants. The paper provides an overview of all the major AGL paradigms used to date to investigate infant learning abilities at the level of morpho‐phonology and syntax from a very early age onwards. Gervain et al. also discuss the implications of the results for a general theory of natural language acquisition.
    2 citations
  24. Individual Differences in Learning Abilities Impact Structure Addition: Better Learners Create More Structured Languages. Tamar Johnson, Noam Siegelman & Inbal Arnon - 2020 - Cognitive Science 44 (8):e12877.
    Over the last decade, iterated learning studies have provided compelling evidence for the claim that linguistic structure can emerge from non‐structured input, through the process of transmission. However, it is unclear whether individuals differ in their tendency to add structure, an issue with implications for understanding who are the agents of change. Here, we identify and test two contrasting predictions: The first sees learning as a pre‐requisite for structure addition, and predicts a positive correlation between learning accuracy (...)
    5 citations
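A minimal transmission chain of the sort used in iterated-learning work can be sketched as follows. This is our own toy illustration, not the study's paradigm: each simulated learner estimates a variant's probability from the previous generation's productions and then nudges it toward the nearer extreme, a crude stand-in for a regularization bias (the study's question is whether stronger learners add more such structure).

```python
import random

def next_generation(p_prev, n_samples, bias=0.1, rng=random):
    """Estimate the variant's probability from n_samples productions of the
    previous generation, then shift the estimate toward the nearer extreme."""
    observed = sum(rng.random() < p_prev for _ in range(n_samples)) / n_samples
    if observed >= 0.5:
        return min(1.0, observed + bias)
    return max(0.0, observed - bias)

rng = random.Random(0)  # seeded so the chain is reproducible
p = 0.6  # initial proportion of the majority variant
for generation in range(10):
    p = next_generation(p, n_samples=20, rng=rng)
print(p)  # final proportion; sampling noise plus bias erodes variation
```

Varying `bias` per simulated learner would be one way to explore the paper's contrast between better and worse learners as agents of structure addition.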
  25. A model of language learning with semantics and meaning-preserving corrections. Dana Angluin & Leonor Becerra-Bonache - 2017 - Artificial Intelligence 242:23-51.
  26. Adult Learning and Language Simplification. Mark Atkinson, Kenny Smith & Simon Kirby - 2018 - Cognitive Science 42 (8):2818-2854.
    Languages spoken in larger populations are relatively simple. A possible explanation for this is that languages with a greater number of speakers tend to also be those with higher proportions of non‐native speakers, who may simplify language during learning. We assess this explanation for the negative correlation between population size and linguistic complexity in three experiments, using artificial language learning techniques to investigate both the simplifications made by individual adult learners and the potential for such (...)
    3 citations
  27. Reinforcement learning and artificial agency. Patrick Butlin - 2024 - Mind and Language 39 (1):22-38.
    There is an apparent connection between reinforcement learning and agency. Artificial entities controlled by reinforcement learning algorithms are standardly referred to as agents, and the mainstream view in the psychology and neuroscience of agency is that humans and other animals are reinforcement learners. This article examines this connection, focusing on artificial reinforcement learning systems and assuming that there are various forms of agency. Artificial reinforcement learning systems satisfy plausible conditions for minimal agency, and (...)
    1 citation
  28. Learning a Generative Probabilistic Grammar of Experience: A Process‐Level Model of Language Acquisition. Oren Kolodny, Arnon Lotem & Shimon Edelman - 2014 - Cognitive Science 38 (4):227-267.
    We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The (...)
    5 citations
  29. Structured Sequence Learning: Animal Abilities, Cognitive Operations, and Language Evolution. Christopher I. Petkov & Carel ten Cate - 2020 - Topics in Cognitive Science 12 (3):828-842.
    Human language is a salient example of a neurocognitive system that is specialized to process complex dependencies between sensory events distributed in time, yet how this system evolved and specialized remains unclear. Artificial Grammar Learning (AGL) studies have generated a wealth of insights into how human adults and infants process different types of sequencing dependencies of varying complexity. The AGL paradigm has also been adopted to examine the sequence processing abilities of nonhuman animals. We critically evaluate this (...)
    6 citations
  30. Second Language Experience Facilitates Statistical Learning of Novel Linguistic Materials. Christine E. Potter, Tianlin Wang & Jenny R. Saffran - 2017 - Cognitive Science 41 (S4):913-927.
    Recent research has begun to explore individual differences in statistical learning, and how those differences may be related to other cognitive abilities, particularly their effects on language learning. In this research, we explored a different type of relationship between language learning and statistical learning: the possibility that learning a new language may also influence statistical learning by changing the regularities to which learners are sensitive. We tested two groups of participants, Mandarin (...)
    5 citations
  31. Individual differences in artificial and natural language statistical learning. Erin S. Isbilen, Stewart M. McCauley & Morten H. Christiansen - 2022 - Cognition 225 (C):105123.
    3 citations
  32. The Role of Feedback in the Statistical Learning of Language‐Like Regularities. Felicity F. Frinsel, Fabio Trecca & Morten H. Christiansen - 2024 - Cognitive Science 48 (3):e13419.
    In language learning, learners engage with their environment, incorporating cues from different sources. However, in lab‐based experiments, using artificial languages, many of the cues and features that are part of real‐world language learning are stripped away. In three experiments, we investigated the role of positive, negative, and mixed feedback on the gradual learning of language‐like statistical regularities within an active guessing game paradigm. In Experiment 1, participants received deterministic feedback (100%), whereas probabilistic feedback (...)
  33. The role of psycholinguistics for language learning in teaching based on formulaic sequence use and oral fluency. Yue Yu - 2022 - Frontiers in Psychology 13.
    Psycholinguistics has provided numerous theories that explain how a person acquires a language, produces and perceives both spoken and written language, including theories of proceduralization. Learners of English as a foreign language often find it difficult to achieve oral fluency, a key construct closely related to the mental state or even mental health of learners. According to previous research, this problem could be addressed by the mastery of formulaic sequences, since the employment of formulaic sequences could often (...)
  34. Language and Learning: The Debate Between Jean Piaget and Noam Chomsky. Massimo Piattelli-Palmarini (ed.) - 1980 - Harvard University Press.
    Introduction: How hard is the "hard core" of a scientific program? / Massimo Piattelli-Palmarini -- pt. 1. The debate: 1. Opening the debate: The psychogenesis of knowledge and its epistemological significance / Jean Piaget -- On cognitive structures and their development: a reply to Piaget / Noam Chomsky -- 2. About the fixed nucleus and its innateness: Introductory remarks / Jean Piaget -- Cognitive strategies in problem solving / Guy Cellerier -- Some clarifications on innatism and constructivism / Guy Cellerier (...)
    151 citations
  35. Large Language Models Demonstrate the Potential of Statistical Learning in Language. Pablo Contreras Kallens, Ross Deans Kristensen-McLachlan & Morten H. Christiansen - 2023 - Cognitive Science 47 (3):e13256.
    To what degree can language be acquired from linguistic input alone? This question has vexed scholars for millennia and is still a major focus of debate in the cognitive science of language. The complexity of human language has hampered progress because studies of language–especially those involving computational modeling–have only been able to deal with small fragments of our linguistic skills. We suggest that the most recent generation of Large Language Models (LLMs) might finally provide the (...)
    3 citations
  36. Learning a Generative Probabilistic Grammar of Experience: A Process-Level Model of Language Acquisition. Oren Kolodny, Arnon Lotem & Shimon Edelman - 2015 - Cognitive Science 39 (2):227-267.
    We introduce a set of biologically and computationally motivated design choices for modeling the learning of language, or of other types of sequential, hierarchically structured experience and behavior, and describe an implemented system that conforms to these choices and is capable of unsupervised learning from raw natural-language corpora. Given a stream of linguistic input, our model incrementally learns a grammar that captures its statistical patterns, which can then be used to parse or generate new data. The (...)
    2 citations
  37. A computer model of child language learning. Mallory Selfridge - 1986 - Artificial Intelligence 29 (2):171-216.
    1 citation
  38. Does Artificial Intelligence Use Private Language? Ryan Miller - forthcoming - In Proceedings of the International Ludwig Wittgenstein Symposium 2021. Vienna: Lit Verlag.
    Wittgenstein’s Private Language Argument holds that language requires rule-following, rule following requires the possibility of error, error is precluded in pure introspection, and inner mental life is known only by pure introspection, thus language cannot exist entirely within inner mental life. Fodor defends his Language of Thought program against the Private Language Argument with a dilemma: either privacy is so narrow that internal mental life can be known outside of introspection, or so broad that computer (...)
  39. Learning Sensory-Motor Coordination Experimentation. Christian Mannes - 1990 - In G. Dorffner (ed.), Konnektionismus in Artificial Intelligence Und Kognitionsforschung. Berlin: Springer-Verlag. pp. 95.
  40. Sequential learning and the interaction between biological and linguistic adaptation in language evolution. Florencia Reali & Morten H. Christiansen - 2009 - Interaction Studies 10 (1):5-30.
    It is widely assumed that language in some form or other originated by piggybacking on pre-existing learning mechanism not dedicated to language. Using evolutionary connectionist simulations, we explore the implications of such assumptions by determining the effect of constraints derived from an earlier evolved mechanism for sequential learning on the interaction between biological and linguistic adaptation across generations of language learners. Artificial neural networks were initially allowed to evolve “biologically” to improve their sequential (...) abilities, after which language was introduced into the population. We compared the relative contribution of biological and linguistic adaptation by allowing both networks and language to change over time. The simulation results support two main conclusions: First, over generations, a consistent head-ordering emerged due to linguistic adaptation. This is consistent with previous studies suggesting that some apparently arbitrary aspects of linguistic structure may arise from cognitive constraints on sequential learning. Second, when networks were selected to maintain a good level of performance on the sequential learning task, language learnability is significantly improved by linguistic adaptation but not by biological adaptation. Indeed, the pressure toward maintaining a high level of sequential learning performance prevented biological assimilation of linguistic-specific knowledge from occurring.
    9 citations
  41. Order Matters! Influences of Linear Order on Linguistic Category Learning. Dorothée B. Hoppe, Jacolien van Rij, Petra Hendriks & Michael Ramscar - 2020 - Cognitive Science 44 (11):e12910.
    Linguistic category learning has been shown to be highly sensitive to linear order, and depending on the task, differentially sensitive to the information provided by preceding category markers (premarkers, e.g., gendered articles) or succeeding category markers (postmarkers, e.g., gendered suffixes). Given that numerous systems for marking grammatical categories exist in natural languages, it follows that a better understanding of these findings can shed light on the factors underlying this diversity. In two discriminative learning simulations and an artificial (...)
    4 citations
  42. Order Matters! Influences of Linear Order on Linguistic Category Learning. Dorothée B. Hoppe, Jacolien van Rij, Petra Hendriks & Michael Ramscar - 2020 - Cognitive Science 44 (11):e12910.
    Linguistic category learning has been shown to be highly sensitive to linear order, and depending on the task, differentially sensitive to the information provided by preceding category markers (premarkers, e.g., gendered articles) or succeeding category markers (postmarkers, e.g., gendered suffixes). Given that numerous systems for marking grammatical categories exist in natural languages, it follows that a better understanding of these findings can shed light on the factors underlying this diversity. In two discriminative learning simulations and an artificial (...)
    4 citations
  43. Artificial Intelligence, Language, and the Study of Knowledge. Ira Goldstein & Seymour Papert - 1977 - Cognitive Science 1 (1):84-123.
    This paper studies the relationship of Artificial Intelligence to the study of language and the representation of the underlying knowledge which supports the comprehension process. It develops the view that intelligence is based on the ability to use large amounts of diverse kinds of knowledge in procedural ways, rather than on the possession of a few general and uniform principles. The paper also provides a unifying thread to a variety of recent approaches to natural language comprehension. We (...)
    6 citations
  44. Sequential learning and the interaction between biological and linguistic adaptation in language evolution. Florencia Reali & Morten H. Christiansen - 2009 - Interaction Studies: Social Behaviour and Communication in Biological and Artificial Systems 10 (1):5-30.
    It is widely assumed that language in some form or other originated by piggybacking on pre-existing learning mechanism not dedicated to language. Using evolutionary connectionist simulations, we explore the implications of such assumptions by determining the effect of constraints derived from an earlier evolved mechanism for sequential learning on the interaction between biological and linguistic adaptation across generations of language learners. Artificial neural networks were initially allowed to evolve “biologically” to improve their sequential (...) abilities, after which language was introduced into the population. We compared the relative contribution of biological and linguistic adaptation by allowing both networks and language to change over time. The simulation results support two main conclusions: First, over generations, a consistent head-ordering emerged due to linguistic adaptation. This is consistent with previous studies suggesting that some apparently arbitrary aspects of linguistic structure may arise from cognitive constraints on sequential learning. Second, when networks were selected to maintain a good level of performance on the sequential learning task, language learnability is significantly improved by linguistic adaptation but not by biological adaptation. Indeed, the pressure toward maintaining a high level of sequential learning performance prevented biological assimilation of linguistic-specific knowledge from occurring.
    Direct download (2 more)  
     
    Export citation  
     
    Bookmark   6 citations  
  45.  15
    Investigation of the Influence of Artificial Intelligence Markup Language-Based LINE ChatBot in Contextual English Learning.Yu-Cheng Chien, Ting-Ting Wu, Chia-Hung Lai & Yueh-Min Huang - 2022 - Frontiers in Psychology 13.
    This study is intended to create an innovative contextual English learning environment making use of the widely used communication software, LINE ChatBot, based on the Artificial Intelligence Markup Language, in order to improve speaking and listening ability among learners. A total of 73 students were invited to participate in learning activities involving a 4-week English conversation exercise including both speaking and listening. Additionally, in order to explore the influence of competition on language acquisition, we added (...)
    Direct download (2 more)  
     
    Export citation  
     
    Bookmark  
  46.  24
    Spontaneous emergence of language-like and music-like vocalizations from an artificial protolanguage.Weiyi Ma, Anna Fiveash & William Forde Thompson - 2019 - Semiotica 2019 (229):1-23.
    How did human vocalizations come to acquire meaning in the evolution of our species? Charles Darwin proposed that language and music originated from a common emotional signal system based on the imitation and modification of sounds in nature. This protolanguage is thought to have diverged into two separate systems, with speech prioritizing referential functionality and music prioritizing emotional functionality. However, there has never been an attempt to empirically evaluate the hypothesis that a single communication system can split into two (...)
    Direct download (3 more)  
     
    Export citation  
     
    Bookmark   5 citations  
  47.  5
    Multi-language transfer learning for low-resource legal case summarization.Gianluca Moro, Nicola Piscaglia, Luca Ragazzi & Paolo Italiani - forthcoming - Artificial Intelligence and Law:1-29.
    Analyzing and evaluating legal case reports are labor-intensive tasks for judges and lawyers, who usually base their decisions on report abstracts, legal principles, and commonsense reasoning. Thus, summarizing legal documents is time-consuming and requires considerable human expertise. Moreover, public legal corpora in specific languages are almost unavailable. This paper proposes a transfer learning approach with extractive and abstractive techniques to cope with the lack of labeled legal summarization datasets, namely a low-resource scenario. In particular, we conducted extensive multi- and (...)
    Direct download (3 more)  
     
    Export citation  
     
    Bookmark  
  48.  54
    Evolutionary psychology, learning, and belief signaling: design for natural and artificial systems.Eric Funkhouser - 2021 - Synthese 199 (5-6):14097-14119.
    Recent work in the cognitive sciences has argued that beliefs sometimes acquire signaling functions in virtue of their ability to reveal information that manipulates “mindreaders.” This paper sketches some of the evolutionary and design considerations that could take agents from solipsistic goal pursuit to beliefs that serve as social signals. Such beliefs will be governed by norms besides just the traditional norms of epistemology. As agents become better at detecting the agency of others, either through evolutionary history or individual (...), the candidate pool for signaling expands. This logic holds for natural and artificial agents that find themselves in recurring social situations that reward the sharing of one’s thoughts.
    Direct download (3 more)  
     
    Export citation  
     
    Bookmark   1 citation  
  49.  23
    Do Humans Really Learn AⁿBⁿ Artificial Grammars From Exemplars?Jean-Rémy Hochmann, Mahan Azadpour & Jacques Mehler - 2008 - Cognitive Science 32 (6):1021-1036.
    Direct download  
     
    Export citation  
     
    Bookmark   6 citations  
  50.  25
    Incremental Bayesian Category Learning From Natural Language.Lea Frermann & Mirella Lapata - 2016 - Cognitive Science 40 (6):1333-1381.
    Models of category learning have been extensively studied in cognitive science and primarily tested on perceptual abstractions or artificial stimuli. In this paper, we focus on categories acquired from natural language stimuli, that is, words. We present a Bayesian model that, unlike previous work, learns both categories and their features in a single process. We model category induction as two interrelated subproblems: the acquisition of features that discriminate among categories, and the grouping of concepts into categories based (...)
    Direct download (4 more)  
     
    Export citation  
     
    Bookmark   3 citations  
1 — 50 / 999