Results for 'Biologically plausible spiking neural networks'

1000+ found
  1.
    Biologically Plausible, Human‐Scale Knowledge Representation.Eric Crawford, Matthew Gingerich & Chris Eliasmith - 2016 - Cognitive Science 40 (4):782-821.
    Several approaches to implementing symbol-like representations in neurally plausible models have been proposed. These approaches include binding through synchrony, “mesh” binding, and conjunctive binding. Recent theoretical work has suggested that most of these methods will not scale well, that is, that they cannot encode structured representations using any of the tens of thousands of terms in the adult lexicon without making implausible resource assumptions. Here, we empirically demonstrate that the biologically plausible structured representations employed in the Semantic (...)
  2.
    Connecting Biological Detail With Neural Computation: Application to the Cerebellar Granule–Golgi Microcircuit.Andreas Stöckel, Terrence C. Stewart & Chris Eliasmith - 2021 - Topics in Cognitive Science 13 (3):515-533.
    We present techniques for integrating low‐level neurobiological constraints into high‐level, functional cognitive models. In particular, we use these techniques to construct a model of eyeblink conditioning in the cerebellum based on temporal representations in the recurrent Granule‐Golgi microcircuit.
  3.
    On the biological plausibility of grandmother cells: Implications for neural network theories in psychology and neuroscience.Jeffrey S. Bowers - 2009 - Psychological Review 116 (1):220-251.
    A fundamental claim associated with parallel distributed processing theories of cognition is that knowledge is coded in a distributed manner in mind and brain. This approach rejects the claim that knowledge is coded in a localist fashion, with words, objects, and simple concepts, that is, coded with their own dedicated representations. One of the putative advantages of this approach is that the theories are biologically plausible. Indeed, advocates of the PDP approach often highlight the close parallels between distributed (...)
  4.
    Information integration based predictions about the conscious states of a spiking neural network.David Gamez - 2010 - Consciousness and Cognition 19 (1):294-310.
    This paper describes how Tononi’s information integration theory of consciousness was used to make detailed predictions about the distribution of phenomenal states in a spiking neural network. This network had approximately 18,000 neurons and 700,000 connections and it used models of emotion and imagination to control the eye movements of a virtual robot and avoid ‘negative’ stimuli. The first stage in the analysis was the development of a formal definition of Tononi’s theory of consciousness. The network was then (...)
  5.
    Deep problems with neural network models of human vision.Jeffrey S. Bowers, Gaurav Malhotra, Marin Dujmović, Milton Llera Montero, Christian Tsvetkov, Valerio Biscione, Guillermo Puebla, Federico Adolfi, John E. Hummel, Rachel F. Heaton, Benjamin D. Evans, Jeffrey Mitchell & Ryan Blything - 2023 - Behavioral and Brain Sciences 46:e385.
    Deep neural networks (DNNs) have had extraordinary successes in classifying photographic images of objects and are often described as the best models of biological vision. This conclusion is largely based on three sets of findings: (1) DNNs are more accurate than any other model in classifying images taken from various datasets, (2) DNNs do the best job in predicting the pattern of human errors in classifying objects taken from various behavioral datasets, and (3) DNNs do the best job (...)
  6.
    Dynamic thresholds for controlling encoding and retrieval operations in localist (or distributed) neural networks: The need for biologically plausible implementations.Alan D. Pickering - 2000 - Behavioral and Brain Sciences 23 (4):488-489.
    A dynamic threshold, which controls the nature and course of learning, is a pivotal concept in Page's general localist framework. This commentary addresses various issues surrounding biologically plausible implementations for such thresholds. Relevant previous research is noted and the particular difficulties relating to the creation of so-called instance representations are highlighted. It is stressed that these issues also apply to distributed models.
  7. Toward biologically plausible artificial vision.Mason Westfall - 2023 - Behavioral and Brain Sciences 46:e290.
    Quilty-Dunn et al. argue that deep convolutional neural networks (DCNNs) optimized for image classification exemplify structural disanalogies to human vision. A different kind of artificial vision – found in reinforcement-learning agents navigating artificial three-dimensional environments – can be expected to be more human-like. Recent work suggests that language-like representations substantially improve these agents’ performance, lending some indirect support to the language-of-thought hypothesis (LoTH).
  8.
    On bifurcations and chaos in random neural networks.B. Doyon, B. Cessac, M. Quoy & M. Samuelides - 1994 - Acta Biotheoretica 42 (2-3):215-225.
    Chaos in the nervous system is a fascinating but controversial field of investigation. To approach the role of chaos in the real brain, we theoretically and numerically investigate the occurrence of chaos in artificial neural networks. Most of the time, recurrent networks (with feedbacks) are fully connected. Since this architecture is not biologically plausible, the occurrence of chaos is studied here for a randomly diluted architecture. By normalizing the variance of synaptic weights, we produce a bifurcation parameter, dependent (...)
  9.
    Additional tests of Amit's attractor neural networks.Ralph E. Hoffman - 1995 - Behavioral and Brain Sciences 18 (4):634-635.
    Further tests of Amit's model are indicated. One strategy is to use the apparent coding sparseness of the model to make predictions about coding sparseness in Miyashita's network. A second approach is to use memory overload to induce false positive responses in modules and biological systems. In closing, the importance of temporal coding and timing requirements in developing biologically plausible attractor networks is mentioned.
  10.
    Improving With Practice: A Neural Model of Mathematical Development.Sean Aubin, Aaron R. Voelker & Chris Eliasmith - 2016 - Topics in Cognitive Science 9 (1):6-20.
    The ability to improve in speed and accuracy as a result of repeating some task is an important hallmark of intelligent biological systems. Although gradual behavioral improvements from practice have been modeled in spiking neural networks, few such models have attempted to explain cognitive development of a task as complex as addition. In this work, we model the progression from a counting-based strategy for addition to a recall-based strategy. The model consists of two networks working in (...)
  11.
    A Neural Model of Rule Generation in Inductive Reasoning.Daniel Rasmussen & Chris Eliasmith - 2011 - Topics in Cognitive Science 3 (1):140-153.
    Inductive reasoning is a fundamental and complex aspect of human intelligence. In particular, how do subjects, given a set of particular examples, generate general descriptions of the rules governing that set? We present a biologically plausible method for accomplishing this task and implement it in a spiking neuron model. We demonstrate the success of this model by applying it to the problem domain of Raven's Progressive Matrices, a widely used tool in the field of intelligence testing. The (...)
  12.
    Neural networks, nativism, and the plausibility of constructivism.Steven R. Quartz - 1993 - Cognition 48 (3):223-242.
  13.
    The emergence of polychronization and feature binding in a spiking neural network model of the primate ventral visual system.Akihiro Eguchi, James B. Isbister, Nasir Ahmad & Simon Stringer - 2018 - Psychological Review 125 (4):545-571.
  14.
    Artificial Neural Networks in Medicine and Biology.Helge Malmgren - unknown
    Artificial neural networks (ANNs) are new mathematical techniques which can be used for modelling real neural networks, but also for data categorisation and inference tasks in any empirical science. This means that they have a twofold interest for the philosopher. First, ANN theory could help us to understand the nature of mental phenomena such as perceiving, thinking, remembering, inferring, knowing, wanting and acting. Second, because ANNs are such powerful instruments for data classification and inference, their use (...)
  15.
    Vector subtraction implemented neurally: A neurocomputational model of some sequential cognitive and conscious processes.John Bickle, Cindy Worley & Marica Bernstein - 2000 - Consciousness and Cognition 9 (1):117-144.
    Although great progress in neuroanatomy and physiology has occurred lately, we still cannot go directly to those levels to discover the neural mechanisms of higher cognition and consciousness. But we can use neurocomputational methods based on these details to push this project forward. Here we describe vector subtraction as an operation that computes sequential paths through high-dimensional vector spaces. Vector-space interpretations of network activity patterns are a fruitful resource in recent computational neuroscience. Vector subtraction also appears to be implemented (...)
  16.
    Front Waves of Chemical Reactions and Travelling Waves of Neural Activity.Yidi Zhang, Shan Guo, Mingzhu Sun, Lucio Mariniello, Arturo Tozzi & Xin Zhao - 2022 - Journal of Neurophilosophy 1 (2).
    Travelling waves crossing the nervous networks at mesoscopic/macroscopic scales have been correlated with different brain functions, from long-term memory to visual stimuli. Here we investigate a feasible relationship between wave generation/propagation in recurrent nervous networks and a physical/chemical model, namely the Belousov–Zhabotinsky reaction. Since BZ’s nonlinear, chaotic chemical process generates concentric/intersecting waves that closely resemble the diffusive nonlinear/chaotic oscillatory patterns crossing the nervous tissue, we aimed to investigate whether wave propagation of brain oscillations could be described in terms (...)
  17. AISC 17 Talk: The Explanatory Problems of Deep Learning in Artificial Intelligence and Computational Cognitive Science: Two Possible Research Agendas.Antonio Lieto - 2018 - In Proceedings of AISC 2017.
    Endowing artificial systems with explanatory capacities about the reasons guiding their decisions represents a crucial challenge and research objective in the current fields of Artificial Intelligence (AI) and Computational Cognitive Science [Langley et al., 2017]. Current mainstream AI systems, in fact, despite the enormous progress achieved in specific tasks, mostly fail to provide a transparent account of the reasons determining their behavior (whether the output is successful or unsuccessful). This is due to the fact that the classical problem (...)
  18. Brain inspired cognitive systems (BICS).Ron Chrisley - unknown
    This Neurocomputing special issue is based on selected, expanded and significantly revised versions of papers presented at the Second International Conference on Brain Inspired Cognitive Systems (BICS 2006) held at Lesvos, Greece, from 10 to 14 October 2006. The aim of BICS 2006, which followed the very successful first BICS 2004 held at Stirling, Scotland, was to bring together leading scientists and engineers who use analytic, syntactic and computational methods both to understand the prodigious processing properties of biological systems and, (...)
     
  19.
    A Neural Network Framework for Cognitive Bias.Johan E. Korteling, Anne-Marie Brouwer & Alexander Toet - 2018 - Frontiers in Psychology 9:358644.
    Human decision making shows systematic simplifications and deviations from the tenets of rationality (‘heuristics’) that may lead to suboptimal decisional outcomes (‘cognitive biases’). There are currently three prevailing theoretical perspectives on the origin of heuristics and cognitive biases: a cognitive-psychological, an ecological and an evolutionary perspective. However, these perspectives are mainly descriptive and none of them provides an overall explanatory framework for the underlying mechanisms of cognitive biases. To enhance our understanding of cognitive heuristics and biases we propose a neural network framework for cognitive biases, which explains why our brain systematically tends to default to heuristic (‘Type 1’) decision making. We argue that many cognitive biases arise from intrinsic brain mechanisms that are fundamental for the working of biological neural networks. In order to substantiate our viewpoint, we discern and explain four basic neural network principles: (1) Association, (2) Compatibility, (3) Retainment, and (4) Focus. These principles are inherent to (all) neural networks which were originally optimized to perform concrete biological, perceptual, and motor functions. They form the basis for our inclinations to associate and combine (unrelated) information, to prioritize information that is compatible with our present state (such as knowledge, opinions and expectations), to retain given information that sometimes could better be ignored, and to focus on dominant information while ignoring relevant information that is not directly activated. The supposed mechanisms are complementary and not mutually exclusive. For different cognitive biases they may all contribute in varying degrees to distortion of information. The present viewpoint not only complements the earlier three viewpoints, but also provides a unifying and binding framework for many cognitive bias phenomena.
  20.
    A coupled attractor model of the rodent head direction system.Adam Elga - unknown
    Head direction (HD) cells, abundant in the rat postsubiculum and anterior thalamic nuclei, fire maximally when the rat’s head is facing a particular direction. The activity of a population of these cells forms a distributed representation of the animal’s current heading. We describe a neural network model that creates a stable, distributed representation of head direction and updates that representation in response to angular velocity information. In contrast to earlier models, our model of the head direction system accurately tracks (...)
  21.
    The Cognitive Philosophy of Communication.Trond A. Tjøstheim, Andreas Stephens, Andrey Anikin & Arthur Schwaninger - 2020 - Philosophies 5 (4):39.
    Numerous species use different forms of communication in order to successfully interact in their respective environment. This article seeks to elucidate limitations of the classical conduit metaphor by investigating communication from the perspectives of biology and artificial neural networks. First, communication is a biological natural phenomenon, found to be fruitfully grounded in an organism’s embodied structures and memory system, where specific abilities are tied to procedural, semantic, and episodic long-term memory as well as to working memory. Second, the (...)
  22. Biological neural networks in invertebrate neuroethology and robotics.Randall D. Beer, Roy E. Ritzmann & Thomas McKenna - 1994 - Bioessays 16 (11):857.
     
  23.
    Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.Courtney J. Spoerer, Patrick McClure & Nikolaus Kriegeskorte - 2017 - Frontiers in Psychology 8.
  24.
    A Biologically Inspired Neural Network Model to Gain Insight Into the Mechanisms of Post-Traumatic Stress Disorder and Eye Movement Desensitization and Reprocessing Therapy.Andrea Mattera, Alessia Cavallo, Giovanni Granato, Gianluca Baldassarre & Marco Pagani - 2022 - Frontiers in Psychology 13.
    Eye movement desensitization and reprocessing therapy is a well-established therapeutic method to treat post-traumatic stress disorder. However, how EMDR exerts its therapeutic action has been studied in many types of research but still needs to be completely understood. This is in part due to limited knowledge of the neurobiological mechanisms underlying EMDR, and in part to our incomplete understanding of PTSD. In order to model PTSD, we used a biologically inspired computational model based on firing rate units, encompassing the (...)
  25.
    Cultural Exaptation and Cultural Neural Reuse: A Mechanism for the Emergence of Modern Culture and Behavior.Francesco D’Errico & Ivan Colagè - 2018 - Biological Theory 13 (4):213-227.
    On the basis of recent advancements in both neuroscience and archaeology, we propose a plausible biocultural mechanism at the basis of cultural evolution. The proposed mechanism, which relies on the notions of cultural exaptation and cultural neural reuse, may account for the asynchronous, discontinuous, and patchy emergence of innovations around the globe. Cultural exaptation refers to the reuse of previously devised cultural features for new purposes. Cultural neural reuse refers to cases in which exposure to cultural practices (...)
  26.
    Mental Recognition of Objects via Ramsey Sentences.Arturo Tozzi - 2023 - Journal of Neurophilosophy 2 (2).
    Dogs display vast phenotypic diversity, including differences in height, skull shape, tail, etc. Yet, humans are almost always able to quickly recognize a dog, even though no single feature or group of features is critical to distinguish dogs from other objects/animals. In search of the mental activities leading human individuals to state “I see a dog”, we hypothesize that the brain might extract meaningful information from the environment using Ramsey-sentence-like procedures. To turn the proposition “I see a dog” in a (...)
  27.
    A Brief Review of Neural Networks Based Learning and Control and Their Applications for Robots.Yiming Jiang, Chenguang Yang, Jing Na, Guang Li, Yanan Li & Junpei Zhong - 2017 - Complexity:1-14.
    As an imitation of biological nervous systems, neural networks, which have been characterized as powerful learning tools, are employed in a wide range of applications, such as control of complex nonlinear systems, optimization, system identification, and pattern recognition. This article aims to provide a brief review of the state-of-the-art NNs for complex nonlinear systems by summarizing recent progress of NNs in both theory and practical applications. Specifically, this survey also reviews a number of NN based robot (...)
  28.
    Corrigendum: Recurrent Convolutional Neural Networks: A Better Model of Biological Object Recognition.Courtney J. Spoerer, Patrick McClure & Nikolaus Kriegeskorte - 2018 - Frontiers in Psychology 9.
  29.
    The brain, the artificial neural network and the snake: why we see what we see.Carloalberto Treccani - forthcoming - AI and Society:1-9.
    For millions of years, biological creatures have dealt with the world without being able to see it; however, the change in atmospheric conditions during the Cambrian period and the subsequent increase in light triggered the sudden evolution of vision and its consequent evolutionary benefits. Nevertheless, how animals, from simple organisms to more complex ones, have been able to generate meaning from the light that fell on their eyes and successfully engage the visual world remains unknown. As shown by many psychophysical (...)
  30.
    A solution to the tag-assignment problem for neural networks.Gary W. Strong & Bruce A. Whitehead - 1989 - Behavioral and Brain Sciences 12 (3):381-397.
    Purely parallel neural networks can model object recognition in brief displays – the same conditions under which illusory conjunctions have been demonstrated empirically. Correcting errors of illusory conjunction is the “tag-assignment” problem for a purely parallel processor: the problem of assigning a spatial tag to nonspatial features, feature combinations, and objects. This problem must be solved to model human object recognition over a longer time scale. Our model simulates both the parallel processes that may underlie illusory conjunctions and (...)
  31.
    Neural networks need real-world behavior.Aedan Y. Li & Marieke Mur - 2023 - Behavioral and Brain Sciences 46:e398.
    Bowers et al. propose to use controlled behavioral experiments when evaluating deep neural networks as models of biological vision. We agree with the sentiment and draw parallels to the notion that “neuroscience needs behavior.” As a promising path forward, we suggest complementing image recognition tasks with increasingly realistic and well-controlled task environments that engage real-world object recognition behavior.
  32. The Grossberg Code: Universal Neural Network Signatures of Perceptual Experience.Birgitta Dresp-Langley - 2023 - Information 14 (2):1-82.
    Two universal functional principles of Grossberg’s Adaptive Resonance Theory decipher the brain code of all biological learning and adaptive intelligence. Low-level representations of multisensory stimuli in their immediate environmental context are formed on the basis of bottom-up activation and under the control of top-down matching rules that integrate high-level, long-term traces of contextual configuration. These universal coding principles lead to the establishment of lasting brain signatures of perceptual experience in all living species, from aplysiae to primates. They are re-visited in (...)
  33.
    Natural Ethical Facts: Evolution, Connectionism, and Moral Cognition.William D. Casebeer - 2003 - Bradford.
    In Natural Ethical Facts William Casebeer argues that we can articulate a fully naturalized ethical theory using concepts from evolutionary biology and cognitive science, and that we can study moral cognition just as we study other forms of cognition. His goal is to show that we have "softly fixed" human natures, that these natures are evolved, and that our lives go well or badly depending on how we satisfy the functional demands of these natures. Natural Ethical Facts is a comprehensive (...)
  34.
    Natural Ethical Facts: Evolution, Connectionism, and Moral Cognition.William D. Casebeer - 2003 - Bradford.
    In Natural Ethical Facts William Casebeer argues that we can articulate a fully naturalized ethical theory using concepts from evolutionary biology and cognitive science, and that we can study moral cognition just as we study other forms of cognition. His goal is to show that we have "softly fixed" human natures, that these natures are evolved, and that our lives go well or badly depending on how we satisfy the functional demands of these natures. Natural Ethical Facts is a comprehensive (...)
  35.
    Biologically applied neural networks may foster the coevolution of neurobiology and Cognitive psychology.Bill Baird - 1987 - Behavioral and Brain Sciences 10 (3):436-437.
  36.
    Even deeper problems with neural network models of language.Thomas G. Bever, Noam Chomsky, Sandiway Fong & Massimo Piattelli-Palmarini - 2023 - Behavioral and Brain Sciences 46:e387.
    We recognize today's deep neural network (DNN) models of language behaviors as engineering achievements. However, what we know intuitively and scientifically about language shows that what DNNs are, and how they are trained on bare texts, makes them poor models of mind and brain for language organization as it interacts with infant biology, maturation, experience, unique principles, and natural law.
  37. The Grossberg Code: Universal Neural Network Signatures of Perceptual Experience.Birgitta Dresp-Langley - 2023 - Information 14 (2):e82, 1-17.
    Two universal functional principles of Grossberg’s Adaptive Resonance Theory [19] decipher the brain code of all biological learning and adaptive intelligence. Low-level representations of multisensory stimuli in their immediate environmental context are formed on the basis of bottom-up activation and under the control of top-down matching rules that integrate high-level long-term traces of contextual configuration. These universal coding principles lead to the establishment of lasting brain signatures of perceptual experience in all living species, from aplysiae to primates. They are re-visited (...)
  38.
    Evaluating Explanations in Law, Science, and Everyday Life.Paul Thagard - unknown
    This article reviews a theory of explanatory coherence that provides a psychologically plausible account of how people evaluate competing explanations. The theory is implemented in a computational model that uses simple artificial neural networks to simulate many important cases of scientific and legal reasoning. Current research directions include extensions to emotional thinking and implementation in more biologically realistic neural networks.
  39. Human Symmetry Uncertainty Detected by a Self-Organizing Neural Network Map.Birgitta Dresp-Langley - 2021 - Symmetry 13:299.
    Symmetry in biological and physical systems is a product of self-organization driven by evolutionary processes, or mechanical systems under constraints. Symmetry-based feature extraction or representation by neural networks may unravel the most informative contents in large image databases. Despite significant achievements of artificial intelligence in recognition and classification of regular patterns, the problem of uncertainty remains a major challenge in ambiguous data. In this study, we present an artificial neural network that detects symmetry uncertainty states in human (...)
  40.
    A general account of selection: Biology, immunology, and behavior-Open Peer Commentary-A neural-network interpretation of selection in learning and behavior.D. L. Hull, R. E. Langman, S. S. Glenn & J. E. Burgos - 2001 - Behavioral and Brain Sciences 24 (3):531-532.
    In their account of learning and behavior, the authors define an interactor as emitted behavior that operates on the environment, which excludes Pavlovian learning. A unified neural-network account of the operant-Pavlovian dichotomy favors interpreting neurons as interactors and synaptic efficacies as replicators. The latter interpretation implies that single-synapse change is inherently Lamarckian.
  41.
    Emergent Quantumness in Neural Networks.Mikhail I. Katsnelson & Vitaly Vanchurin - 2021 - Foundations of Physics 51 (5):1-20.
    It was recently shown that the Madelung equations, that is, a hydrodynamic form of the Schrödinger equation, can be derived from a canonical ensemble of neural networks where the quantum phase was identified with the free energy of hidden variables. We consider instead a grand canonical ensemble of neural networks, by allowing an exchange of neurons with an auxiliary subsystem, to show that the free energy must also be multivalued. By imposing the multivaluedness condition on the (...)
  42.
    Adaptive Orthogonal Characteristics of Bio-Inspired Neural Networks.Naohiro Ishii, Toshinori Deguchi, Masashi Kawaguchi, Hiroshi Sasaki & Tokuro Matsuo - 2022 - Logic Journal of the IGPL 30 (4):578-598.
    In recent years, neural networks have attracted much attention in machine learning and deep learning technologies. Bio-inspired functions and intelligence are also expected to process efficiently and to improve existing technologies. In the visual pathway, the prominent features consist of the nonlinear characteristics of squaring and rectification functions observed in the retinal and visual cortex networks, respectively. Further, adaptation is an important feature for activating biological systems efficiently. Recently, to overcome shortcomings of deep learning techniques, (...)
  43.
    Phenomenology, dynamical neural networks and brain function.Donald Borrett, Sean D. Kelly & Hon Kwan - 2000 - Philosophical Psychology 13 (2):213-228.
    Current cognitive science models of perception and action assume that the objects that we move toward and perceive are represented as determinate in our experience of them. A proper phenomenology of perception and action, however, shows that we experience objects indeterminately when we are perceiving them or moving toward them. This indeterminacy, as it relates to simple movement and perception, is captured in the proposed phenomenologically based recurrent network models of brain function. These models provide a possible foundation from which (...)
    3 citations
  44.
    The Handbook of Brain Theory and Neural Networks.Michael A. Arbib (ed.) - 1998 - MIT Press.
    Choice Outstanding Academic Title, 1996. In hundreds of articles by experts from around the world, and in overviews and "road maps" prepared by the editor, The Handbook of Brain Theory and Neural Networks charts the immense progress made in recent years in many specific areas related to great questions: How does the brain work? How can we build intelligent machines? While many books discuss limited aspects of one subfield or another of brain theory and neural networks, (...)
    16 citations
  45.
    Dynamical learning algorithms for neural networks and neural constructivism.Enrico Blanzieri - 1997 - Behavioral and Brain Sciences 20 (4):559-559.
    The present commentary addresses the Quartz & Sejnowski (Q&S) target article from the point of view of dynamical learning algorithms for neural networks. These techniques implicitly adopt Q&S's neural constructivist paradigm, and their approach hence receives support from the biological and psychological evidence. Limitations of constructive learning for neural networks are discussed, with an emphasis on grammar learning.
  46.
    Connecting Twenty-First Century Connectionism and Wittgenstein.Charles W. Lowney, Simon D. Levy, William Meroney & Ross W. Gayler - 2020 - Philosophia 48 (2):643-671.
    By pointing to deep philosophical confusions endemic to cognitive science, Wittgenstein might seem an enemy of computational approaches. We agree that while Wittgenstein would reject the classicist’s symbols and rules approach, his observations align well with connectionist or neural network approaches. While many connectionisms that dominated the later twentieth century could fall prey to criticisms of biological, pedagogical, and linguistic implausibility, current connectionist approaches can resolve those problems in a Wittgenstein-friendly manner. We present the basics of a Vector Symbolic (...)
  47.
    Localist representations are a desirable emergent property of neurologically plausible neural networks.Colin Martindale - 2000 - Behavioral and Brain Sciences 23 (4):485-486.
    Page has done connectionist researchers a valuable service in this target article. He points out that connectionist models using localized representations often work as well or better than models using distributed representations. I point out that models using distributed representations are difficult to understand and often lack parsimony and plausibility. In conclusion, I give an example – the case of the missing fundamental in music – that can easily be explained by a model using localist representations but can be explained (...)
  48. Neurobiological Modeling and Analysis-An Electromechanical Neural Network Robotic Model of the Human Body and Brain: Sensory-Motor Control by Reverse Engineering Biological Somatic Sensors.Alan Rosen & David B. Rosen - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes In Computer Science. Springer Verlag. pp. 4232--105.
  49.
    Psychophysics may be the game-changer for deep neural networks (DNNs) to imitate the human vision.Keerthi S. Chandran, Amrita Mukherjee Paul, Avijit Paul & Kuntal Ghosh - 2023 - Behavioral and Brain Sciences 46:e388.
    Psychologically faithful deep neural networks (DNNs) could be constructed by training with psychophysics data. Moreover, conventional DNNs are mostly based on monocular vision, whereas the human brain relies mainly on binocular vision. DNNs developed as smaller vision-agent networks associated with fundamental, less intelligent visual activities can be combined to simulate the more intelligent visual activities performed by the biological brain.
  50.
    Mu-desynchronization, N400 and corticospinal excitability during observation of natural and anatomically unnatural finger movements.Nikolay Syrov, Dimitri Bredikhin, Lev Yakovlev, Andrei Miroshnikov & Alexander Kaplan - 2022 - Frontiers in Human Neuroscience 16:973229.
    The action observation network (AON), or the mirror neuron system, is the neural underpinning of visuomotor integration and plays an important role in motor control. Moreover, one of the main functions of the human mirror neuron system is the recognition of observed actions and the prediction of their outcomes through comparison with internal mental motor representations. Previous studies focused on human mirror neuron (MN) activation during the observation of object-oriented movements; therefore, the effects of observing intransitive movements on MN activity (...)
1 — 50 / 1000