Results for 'chaotic neural network'

1000+ found
  1. Noisy chaotic neural networks for combinatorial optimization. Lipo Wang & Haixiang Shi - 2007 - In Wlodzislaw Duch & Jacek Mandziuk (eds.), Challenges for Computational Intelligence. Springer. pp. 467-487.
  2. Information processing, memories, and synchronization in chaotic neural network with the time delay. Vladimir E. Bondarenko - 2005 - Complexity 11 (2):39-52.
  3. Dynamic Analysis and FPGA Implementation of New Chaotic Neural Network and Optimization of Traveling Salesman Problem. Li Cui, Chaoyang Chen, Jie Jin & Fei Yu - 2021 - Complexity 2021:1-10.
    A neural network is a model of the brain’s cognitive process, with a highly interconnected multiprocessor architecture. Neural networks have great potential, given that artificial neural networks inherently have good learning capabilities and can learn different input features. On this basis, this paper proposes a new chaotic neuron model and a new chaotic neural network model. It includes a linear matrix, a sine function, and a (...)
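    The abstract above describes a chaotic neuron built from a linear matrix and a sine function, applied to traveling salesman optimization. The authors' exact model is not reproduced here; the Python sketch below is only a generic illustration of a sine-map chaotic network, in which the gain parameter g (an assumption of this sketch, not a parameter from the paper) shifts the dynamics from convergent to irregular, the property such networks exploit to escape local minima in combinatorial optimization.

    import numpy as np

    # Illustrative sine-map chaotic network; NOT the model proposed in the paper.
    rng = np.random.default_rng(0)
    n = 8                                                 # number of neurons
    W = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))   # random coupling matrix

    def step(x, g):
        # One update: linear coupling followed by a sine nonlinearity.
        return np.sin(g * (W @ x))

    for g in (0.5, 3.0):   # hypothetical gains: small tends to settle, large tends to wander
        x = rng.uniform(-1, 1, size=n)
        traj = []
        for _ in range(300):
            x = step(x, g)
            traj.append(x.copy())
        late = np.array(traj[-50:])
        print(f"g={g}: mean late-time fluctuation = {late.std(axis=0).mean():.3f}")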
  4. Fault Detection of the Power System Based on the Chaotic Neural Network and Wavelet Transform. Zuoxun Wang & Liqiang Xu - 2020 - Complexity 2020:1-15.
    The safety and stability of the power supply system are affected by faults that often occur in power systems. To address this problem, a criterion algorithm based on a chaotic neural network and a fault detection algorithm based on the discrete wavelet transform are proposed in this paper. MATLAB/Simulink is used to establish the system model and output fault signals and travelling wave signals. The db4 wavelet decomposes the travelling wave signals into detail signals and approximation signals, and (...)
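    The fault-detection pipeline sketched in the abstract above rests on a standard step: a multi-level discrete wavelet transform with the db4 wavelet that splits a travelling-wave signal into approximation and detail coefficients. The Python fragment below shows only that decomposition step with the PyWavelets library; the synthetic signal, the injected transient, and the choice of three levels are assumptions for illustration, not values from the paper.

    import numpy as np
    import pywt

    # Synthetic stand-in for a travelling-wave signal with a fault-like transient.
    t = np.linspace(0.0, 1.0, 1024)
    signal = np.sin(2 * np.pi * 50 * t)
    signal[512:520] += 2.0                    # hypothetical fault transient

    # Multi-level DWT with the Daubechies-4 ('db4') wavelet.
    # Returned list: [approximation at level 3, detail level 3, level 2, level 1].
    coeffs = pywt.wavedec(signal, "db4", level=3)
    approx, *details = coeffs

    print("approximation coefficients:", len(approx))
    for level, d in zip((3, 2, 1), details):
        print(f"detail level {level}: max |coef| = {np.abs(d).max():.3f}")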
  5. Fault-tolerant mixed H∞/passive synchronization for delayed chaotic neural networks with sampled-data control. Lei Su & Hao Shen - 2016 - Complexity 21 (6):246-259.
  6. Neural Network Models for Chaotic-Fuzzy Information Processing. Harold Szu, Joe Garcia, Lotfi Zadeh, Charles C. Hsu & Joseph DeWitte - 1994 - In Karl H. Pribram (ed.), Origins: Brain and Self-Organization. Lawrence Erlbaum.
  7. Synchronization in pth Moment for Stochastic Chaotic Neural Networks with Finite-Time Control. Yuhua Xu, Jinmeng Wang, Wuneng Zhou & Xin Wang - 2019 - Complexity 2019:1-8.
  8. Neural Optimization and Dynamic Programming - Algorithm Analysis and Application Based on Chaotic Neural Network for Cellular Channel Assignment. Xiaojin Zhu, Yanchun Chen, Hesheng Zhang & Jialin Cao - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 991-996.
  9. Neural Network Models for Chaotic-Fuzzy Information Processing, by Harold Szu, Joe Garcia, G. Rogers, Lotfi Zadeh, Charles C. Hsu, Joseph DeWitte Jr., Gyu Moon, Desa Gobovic & Mona Zaghloul. [REVIEW] Charles C. Hsu - 1994 - In Karl H. Pribram (ed.), Origins: Brain and Self-Organization. Lawrence Erlbaum. pp. 435.
  10. Prediction of multivariate chaotic time series via radial basis function neural network. Diyi Chen & Wenting Han - 2013 - Complexity 18 (4):55-66.
  11. A Memristive Hyperjerk Chaotic System: Amplitude Control, FPGA Design, and Prediction with Artificial Neural Network. Ran Wang, Chunbiao Li, Serdar Çiçek, Karthikeyan Rajagopal & Xin Zhang - 2021 - Complexity 2021:1-17.
    An amplitude-controllable hyperjerk system is constructed for producing chaos by introducing a memristive nonlinearity. In this case, amplitude control is realized through a single coefficient in the memristor. The hyperjerk system has a line of equilibria and also shows extreme multistability, indicated by the initial-value-associated bifurcation diagram. An FPGA-based circuit realization is also given for physical verification. Finally, the proposed memristive hyperjerk system is successfully predicted with artificial neural networks for AI-based engineering applications.
  12. Information processing in neural networks by means of controlled dynamic regimes. François Chapeau-Blondeau - 1995 - Acta Biotheoretica 43 (1-2):155-167.
    This paper is concerned with the modeling of neural systems regarded as information processing entities. I investigate the various dynamic regimes that are accessible in neural networks considered as nonlinear adaptive dynamic systems. The possibilities of obtaining steady, oscillatory or chaotic regimes are illustrated with different neural network models. Some aspects of the dependence of the dynamic regimes upon the synaptic couplings are examined. I emphasize the role that the various regimes may play to support (...)
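    The abstract above surveys how the synaptic couplings of a nonlinear network select between steady, oscillatory and chaotic regimes. As a rough illustration of that dependence (a textbook random recurrent rate network, not one of Chapeau-Blondeau's models), the sketch below integrates dx/dt = -x + g*J*tanh(x) for a weak and a strong coupling gain g; the standard expectation is that activity decays to a fixed point for small g and becomes self-sustained and irregular for large g.

    import numpy as np

    # Textbook random recurrent rate network (illustrative only):
    #   dx/dt = -x + g * J @ tanh(x)
    rng = np.random.default_rng(1)
    n = 100
    J = rng.normal(scale=1.0 / np.sqrt(n), size=(n, n))   # fixed random couplings

    def simulate(gain, steps=4000, dt=0.05):
        x = rng.normal(scale=0.1, size=n)
        trace = np.empty(steps)
        for i in range(steps):
            x = x + dt * (-x + gain * J @ np.tanh(x))      # Euler step
            trace[i] = x[0]                                 # follow one unit
        return trace

    for gain in (0.5, 2.0):
        trace = simulate(gain)
        print(f"g={gain}: late-time std of unit 0 = {trace[-1000:].std():.4f}")
    # Typically near zero for g=0.5 (steady regime) and clearly nonzero for g=2.0
    # (self-sustained, usually chaotic fluctuations).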
  13. Dynamics of the brain at global and microscopic scales: Neural networks and the EEG. J. J. Wright & D. T. J. Liley - 1996 - Behavioral and Brain Sciences 19 (2):285-295.
    There is some complementarity of models for the origin of the electroencephalogram (EEG) and neural network models for information storage in brainlike systems. From the EEG models of Freeman, of Nunez, and of the authors' group we argue that the wavelike processes revealed in the EEG exhibit linear and near-equilibrium dynamics at macroscopic scale, despite extremely nonlinear – probably chaotic – dynamics at microscopic scale. Simulations of cortical neuronal interactions at global and microscopic scales are then presented. (...)
  14. Toward an interpretation of dynamic neural activity in terms of chaotic dynamical systems - Open Peer Commentary - Network stabilization on unstable manifolds: Computing with middle layer transients. A. J. Mandell & K. A. Selz - 2001 - Behavioral and Brain Sciences 24 (5):822-822.
  15. Network stabilization on unstable manifolds: Computing with middle layer transients. Arnold J. Mandell & Karen A. Selz - 2001 - Behavioral and Brain Sciences 24 (5):822-823.
    Studies have failed to yield definitive evidence for the existence and/or role of well-defined chaotic attractors in real brain systems. Tsuda's transients, stabilized on unstable manifolds of unstable fixed points using mechanisms similar to Ott's algorithmic “control of chaos,” are demonstrable. Grebogi's order-preserving “strange nonchaotic” attractor, which has a fractal dimension but no positive Lyapunov exponent, is suggested for neural network tasks dependent on sequence.
  16. Précis of neural organization: Structure, function, and dynamics. Michael A. Arbib & Péter Érdi - 2000 - Behavioral and Brain Sciences 23 (4):513-533.
    Neural organization: Structure, function, and dynamics shows how theory and experiment can supplement each other in an integrated, evolving account of the brain's structure, function, and dynamics. (1) Structure: Studies of brain function and dynamics build on and contribute to an understanding of many brain regions, the neural circuits that constitute them, and their spatial relations. We emphasize Szentágothai's modular architectonics principle, but also stress the importance of the microcomplexes of cerebellar circuitry and the lamellae of hippocampus. (2) (...)
  17. Basin of Attraction Analysis of New Memristor-Based Fractional-Order Chaotic System. Long Ding, Li Cui, Fei Yu & Jie Jin - 2021 - Complexity 2021:1-9.
    The memristor is the fourth basic circuit element, alongside the resistor, capacitor, and inductor. It is a nonlinear device with memory, which can be used to realize chaotic, memory, neural network, and other similar circuits and systems. In this paper, a novel memristor-based fractional-order chaotic system is presented, and this chaotic system is taken as an example to analyze its dynamic characteristics. First, we used the Adomian algorithm to solve the proposed fractional-order chaotic (...)
  18. Structure, Function, and Dynamics: An Integrated Approach to Neural Organization. M. Arbib, P. Érdi & J. Szentagothai - 2000 - Behavioral and Brain Sciences 23 (4):513-571.
    Neural organization: Structure, function, and dynamics shows how theory and experiment can supplement each other in an integrated, evolving account of the brain's structure, function, and dynamics. Structure: Studies of brain function and dynamics build on and contribute to an understanding of many brain regions, the neural circuits that constitute them, and their spatial relations. We emphasize Szentágothai's modular architectonics principle, but also stress the importance of the microcomplexes of cerebellar circuitry and the lamellae of hippocampus. Function: Control (...)
  19. Artificial Neural Network for Forecasting Car Mileage per Gallon in the City. Mohsen Afana, Jomana Ahmed, Bayan Harb, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2018 - International Journal of Advanced Science and Technology 124:51-59.
    In this paper an Artificial Neural Network (ANN) model was used to help car dealers recognize the many characteristics of cars, including manufacturers, their location, and classification of cars according to several categories, including Make, Model, Type, Origin, DriveTrain, MSRP, Invoice, EngineSize, Cylinders, Horsepower, MPG_Highway, Weight, Wheelbase, and Length. The ANN was used to predict the number of miles per gallon when the car is driven in the city (MPG_City). The results showed that the ANN model was able to predict MPG_City (...)
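    The entry above fits a small feed-forward network to tabular car data; the abstract lists numeric features such as EngineSize, Cylinders, Horsepower, MPG_Highway, Weight, Wheelbase and Length, with MPG_City as the target. The sketch below shows one conventional way to set up such a regression with scikit-learn; the file name "cars.csv", the exact column names and the network size are assumptions for illustration, and the original work may have used different tooling and preprocessing.

    import pandas as pd
    from sklearn.model_selection import train_test_split
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Hypothetical file and columns mirroring the features listed in the abstract.
    df = pd.read_csv("cars.csv")
    features = ["EngineSize", "Cylinders", "Horsepower", "MPG_Highway",
                "Weight", "Wheelbase", "Length"]
    X, y = df[features], df["MPG_City"]
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                        random_state=0)

    # Small multilayer perceptron; the architecture is illustrative, not the paper's.
    model = make_pipeline(StandardScaler(),
                          MLPRegressor(hidden_layer_sizes=(16, 8),
                                       max_iter=2000, random_state=0))
    model.fit(X_train, y_train)
    print("R^2 on held-out cars:", model.score(X_test, y_test))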
  20. Artificial Neural Network for Predicting Car Performance Using JNN. Awni Ahmed Al-Mobayed, Youssef Mahmoud Al-Madhoun, Mohammed Nasser Al-Shuwaikh & Samy S. Abu-Naser - 2020 - International Journal of Engineering and Information Systems (IJEAIS) 4 (9):139-145.
    In this paper an Artificial Neural Network (ANN) model was used to help car dealers recognize the many characteristics of cars, including manufacturers, their location, and classification of cars according to several categories, including Buying, Maint, Doors, Persons, Lug_boot, Safety, and Overall. The ANN was used to forecast car acceptability. The results showed that the ANN model was able to predict car acceptability with 99.12% accuracy. The Safety factor has the most influence on car acceptability evaluation. Comparative study (...)
  21. Theorem proving in artificial neural networks: new frontiers in mathematical AI. Markus Pantsar - 2024 - European Journal for Philosophy of Science 14 (1):1-22.
    Computer assisted theorem proving is an increasingly important part of mathematical methodology, as well as a long-standing topic in artificial intelligence (AI) research. However, the current generation of theorem-proving software has limited functionality in terms of providing new proofs. Importantly, they are not able to discriminate interesting theorems and proofs from trivial ones. In order for computers to develop further in theorem proving, there would need to be a radical change in how the software functions. Recently, machine learning results (...)
  22. Some Neural Networks Compute, Others Don't. Gualtiero Piccinini - 2008 - Neural Networks 21 (2-3):311-321.
    I address whether neural networks perform computations in the sense of computability theory and computer science. I explicate and defend the following theses. (1) Many neural networks compute—they perform computations. (2) Some neural networks compute in a classical way. Ordinary digital computers, which are very large networks of logic gates, belong in this class of neural networks. (3) Other neural networks compute in a non-classical way. (4) Yet other neural networks do not perform computations. Brains may well (...)
  23. A neural network model of lexical organization. Michael D. Fortescue (ed.) - 2009 - London: Continuum Intl Pub Group.
    The subject matter of this book is the mental lexicon, that is, the way in which the form and meaning of words is stored by speakers of specific languages. This book attempts to narrow the gap between the results of experimental neurology and the concerns of theoretical linguistics in the area of lexical semantics. The prime goal as regards linguistic theory is to show how matters of lexical organization can be analysed and discussed within a neurologically informed framework that is (...)
  24. Neural networks, AI, and the goals of modeling. Walter Veit & Heather Browning - 2023 - Behavioral and Brain Sciences 46:e411.
    Deep neural networks (DNNs) have found many useful applications in recent years. Of particular interest have been those instances where their successes imitate human cognition and many consider artificial intelligences to offer a lens for understanding human intelligence. Here, we criticize the underlying conflation between the predictive and explanatory power of DNNs by examining the goals of modeling.
  25. A Neural Network Framework for Cognitive Bias. Johan E. Korteling, Anne-Marie Brouwer & Alexander Toet - 2018 - Frontiers in Psychology 9:358644.
    Human decision making shows systematic simplifications and deviations from the tenets of rationality (‘heuristics’) that may lead to suboptimal decisional outcomes (‘cognitive biases’). There are currently three prevailing theoretical perspectives on the origin of heuristics and cognitive biases: a cognitive-psychological, an ecological and an evolutionary perspective. However, these perspectives are mainly descriptive and none of them provides an overall explanatory framework for the underlying mechanisms of cognitive biases. To enhance our understanding of cognitive heuristics and biases we propose a neural network framework for cognitive biases, which explains why our brain systematically tends to default to heuristic (‘Type 1’) decision making. We argue that many cognitive biases arise from intrinsic brain mechanisms that are fundamental for the working of biological neural networks. In order to substantiate our viewpoint, we discern and explain four basic neural network principles: (1) Association, (2) Compatibility, (3) Retainment, and (4) Focus. These principles are inherent to (all) neural networks which were originally optimized to perform concrete biological, perceptual, and motor functions. They form the basis for our inclinations to associate and combine (unrelated) information, to prioritize information that is compatible with our present state (such as knowledge, opinions and expectations), to retain given information that sometimes could better be ignored, and to focus on dominant information while ignoring relevant information that is not directly activated. The supposed mechanisms are complementary and not mutually exclusive. For different cognitive biases they may all contribute in varying degrees to distortion of information. The present viewpoint not only complements the earlier three viewpoints, but also provides a unifying and binding framework for many cognitive bias phenomena.
  26. On the Opacity of Deep Neural Networks. Anders Søgaard - forthcoming - Canadian Journal of Philosophy:1-16.
    Deep neural networks are said to be opaque, impeding the development of safe and trustworthy artificial intelligence, but where this opacity stems from is less clear. What are the sufficient properties for neural network opacity? Here, I discuss five common properties of deep neural networks and two different kinds of opacity. Which of these properties are sufficient for what type of opacity? I show how each kind of opacity stems from only one of these five properties, (...)
  27. Diabetes Prediction Using Artificial Neural Network. Nesreen Samer El_Jerjawi & Samy S. Abu-Naser - 2018 - International Journal of Advanced Science and Technology 121:54-64.
    Diabetes is one of the most common diseases worldwide, and no cure has been found for it yet. Annually it costs a lot of money to care for people with diabetes. Thus the most important issue is for the prediction to be very accurate and to use a reliable method. One such method is the use of artificial intelligence systems, and in particular Artificial Neural Networks (ANN). So in this paper, we used artificial neural (...)
  28. Recurrent neural network-based models for recognizing requisite and effectuation parts in legal texts. Truong-Son Nguyen, Le-Minh Nguyen, Satoshi Tojo, Ken Satoh & Akira Shimazu - 2018 - Artificial Intelligence and Law 26 (2):169-199.
    This paper proposes several recurrent neural network-based models for recognizing requisite and effectuation parts in Legal Texts. Firstly, we propose a modification of the BiLSTM-CRF model that allows the use of external features to improve the performance of deep learning models when large annotated corpora are not available. However, this model can only recognize RE parts which do not overlap. Secondly, we propose two approaches for recognizing overlapping RE parts, including the cascading approach which uses the sequence of (...)
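    The paper above labels requisite and effectuation spans in legal sentences with BiLSTM-CRF models. The fragment below sketches only the recurrent core of such a tagger in PyTorch, a bidirectional LSTM with a per-token classification layer; the CRF layer, the external features and the cascading scheme described in the abstract are omitted, and every size (vocabulary, embedding, hidden units, tag set) is an illustrative assumption.

    import torch
    import torch.nn as nn

    class BiLSTMTagger(nn.Module):
        """Minimal BiLSTM token tagger (no CRF); all sizes are illustrative."""
        def __init__(self, vocab_size=5000, embed_dim=100, hidden_dim=128, num_tags=5):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True,
                                bidirectional=True)
            self.out = nn.Linear(2 * hidden_dim, num_tags)  # 2x: forward + backward

        def forward(self, token_ids):            # token_ids: (batch, seq_len)
            h, _ = self.lstm(self.embed(token_ids))
            return self.out(h)                    # per-token tag scores

    model = BiLSTMTagger()
    tokens = torch.randint(0, 5000, (2, 12))      # two toy 12-token "sentences"
    print(model(tokens).shape)                    # torch.Size([2, 12, 5])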
  29. Glass Classification Using Artificial Neural Network. Mohmmad Jamal El-Khatib, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2019 - International Journal of Academic Pedagogical Research (IJAPR) 3 (23):25-31.
    As a type of evidence, glass can be a very useful contact trace material in a wide range of offences, including burglaries and robberies, hit-and-run accidents, murders, assaults, ram-raids, criminal damage, and thefts of and from motor vehicles. All of these offer the potential for glass fragments to be transferred from anything made of glass which breaks to whoever or whatever was responsible. Variation in the manufacture of glass allows considerable discrimination even with tiny fragments. In this study, we worked on glass classification (...)
  30. Neural networks, nativism, and the plausibility of constructivism. Steven R. Quartz - 1993 - Cognition 48 (3):223-242.
  31. Ontology, neural networks, and the social sciences. David Strohmaier - 2020 - Synthese 199 (1-2):4775-4794.
    The ontology of social objects and facts remains a field of continued controversy. This situation complicates the life of social scientists who seek to make predictive models of social phenomena. For the purposes of modelling a social phenomenon, we would like to avoid having to make any controversial ontological commitments. The overwhelming majority of models in the social sciences, including statistical models, are built upon ontological assumptions that can be questioned. Recently, however, artificial neural networks have made their way (...)
  32. Deep problems with neural network models of human vision. Jeffrey S. Bowers, Gaurav Malhotra, Marin Dujmović, Milton Llera Montero, Christian Tsvetkov, Valerio Biscione, Guillermo Puebla, Federico Adolfi, John E. Hummel, Rachel F. Heaton, Benjamin D. Evans, Jeffrey Mitchell & Ryan Blything - 2023 - Behavioral and Brain Sciences 46:e385.
    Deep neural networks (DNNs) have had extraordinary successes in classifying photographic images of objects and are often described as the best models of biological vision. This conclusion is largely based on three sets of findings: (1) DNNs are more accurate than any other model in classifying images taken from various datasets, (2) DNNs do the best job in predicting the pattern of human errors in classifying objects taken from various behavioral datasets, and (3) DNNs do the best job in (...)
  33. Neural Networks and Psychopathology: Connectionist Models in Practice and Research. Dan J. Stein & Jacques Ludik (eds.) - 1998 - Cambridge University Press.
    Reviews the contribution of neural network models in psychiatry and psychopathology, including diagnosis, pharmacotherapy and psychotherapy.
  34. A neural network for creative serial order cognitive behavior. Steve Donaldson - 2008 - Minds and Machines 18 (1):53-91.
    If artificial neural networks are ever to form the foundation for higher level cognitive behaviors in machines or to realize their full potential as explanatory devices for human cognition, they must show signs of autonomy, multifunction operation, and intersystem integration that are absent in most existing models. This model begins to address these issues by integrating predictive learning, sequence interleaving, and sequence creation components to simulate a spectrum of higher-order cognitive behaviors which have eluded the grasp of simpler systems. (...)
  35. Antagonistic neural networks underlying differentiated leadership roles. Richard E. Boyatzis, Kylie Rochford & Anthony I. Jack - 2014 - Frontiers in Human Neuroscience 8.
  36. Neural network methods for vowel classification in the vocalic systems with the [ATR] (Advanced Tongue Root) contrast. Н. В Макеева - 2023 - Philosophical Problems of IT and Cyberspace (PhilIT&C) 2:49-60.
    The paper discusses the results of testing a neural network which classifies the vowels of a vocalic system with the [ATR] (Advanced Tongue Root) contrast, based on data from Akebu (Kwa family). The acoustic nature of the [ATR] feature is still understudied. The only reliable acoustic correlate of [ATR] is the magnitude of the first formant (F1), which can also be modulated by tongue height, resulting in significant overlap between high [-ATR] vowels and mid [+ATR] (...)
  37. Neural networks discover a near-identity relation to distinguish simple syntactic forms. Thomas R. Shultz & Alan C. Bale - 2006 - Minds and Machines 16 (2):107-139.
    Computer simulations show that an unstructured neural-network model [Shultz, T. R., & Bale, A. C. (2001). Infancy, 2, 501–536] covers the essential features of infant learning of simple grammars in an artificial language [Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Science, 283, 77–80], and generalizes to examples both outside and inside of the range of training sentences. Knowledge-representation analyses confirm that these networks discover that duplicate words in the sentences are nearly identical and (...)
  38. Adaptive Neural Network Control for Nonlinear Hydraulic Servo-System with Time-Varying State Constraints. Shu-Min Lu & Dong-Juan Li - 2017 - Complexity:1-11.
    An adaptive neural network control problem is addressed for a class of nonlinear hydraulic servo-systems with time-varying state constraints. In view of the low precision problem of the traditional hydraulic servo-system which is caused by the tracking errors surpassing appropriate bound, the previous works have shown that the constraint for the system is a good way to solve the low precision problem. Meanwhile, compared with constant constraints, the time-varying state constraints are more general in the actual systems. Therefore, (...)
  39. A neural network model of the structure and dynamics of human personality. Stephen J. Read, Brian M. Monroe, Aaron L. Brownstein, Yu Yang, Gurveen Chopra & Lynn C. Miller - 2010 - Psychological Review 117 (1):61-92.
  40. Front Waves of Chemical Reactions and Travelling Waves of Neural Activity. Yidi Zhang, Shan Guo, Mingzhu Sun, Lucio Mariniello, Arturo Tozzi & Xin Zhao - 2022 - Journal of Neurophilosophy 1 (2).
    Travelling waves crossing the nervous networks at mesoscopic/macroscopic scales have been correlated with different brain functions, from long-term memory to visual stimuli. Here we investigate a feasible relationship between wave generation/propagation in recurrent nervous networks and a physical/chemical model, namely the Belousov–Zhabotinsky reaction. Since BZ’s nonlinear, chaotic chemical process generates concentric/intersecting waves that closely resemble the diffusive nonlinear/chaotic oscillatory patterns crossing the nervous tissue, we aimed to investigate whether wave propagation of brain oscillations could be described in terms (...)
  41. Artificial Neural Networks in Medicine and Biology. Helge Malmgren - unknown
    Artificial neural networks (ANNs) are new mathematical techniques which can be used for modelling real neural networks, but also for data categorisation and inference tasks in any empirical science. This means that they have a twofold interest for the philosopher. First, ANN theory could help us to understand the nature of mental phenomena such as perceiving, thinking, remembering, inferring, knowing, wanting and acting. Second, because ANNs are such powerful instruments for data classification and inference, their use also leads (...)
  42. Artificial Neural Network Based Detection and Diagnosis of Plasma-Etch Faults. Shumeet Baluja & Roy A. Maxion - 1997 - Journal of Intelligent Systems 7 (1-2):57-82.
  43. Deep neural networks are not a single hypothesis but a language for expressing computational hypotheses. Tal Golan, JohnMark Taylor, Heiko Schütt, Benjamin Peters, Rowan P. Sommers, Katja Seeliger, Adrien Doerig, Paul Linton, Talia Konkle, Marcel van Gerven, Konrad Kording, Blake Richards, Tim C. Kietzmann, Grace W. Lindsay & Nikolaus Kriegeskorte - 2023 - Behavioral and Brain Sciences 46:e392.
    An ideal vision model accounts for behavior and neurophysiology in both naturalistic conditions and designed lab experiments. Unlike psychological theories, artificial neural networks (ANNs) actually perform visual tasks and generate testable predictions for arbitrary inputs. These advantages enable ANNs to engage the entire spectrum of the evidence. Failures of particular models drive progress in a vibrant ANN research program of human vision.
  44. Neural networks underlying contributions from semantics in reading aloud. Olga Boukrina & William W. Graves - 2013 - Frontiers in Human Neuroscience 7.
  45. A neural network model of retrieval-induced forgetting. Kenneth A. Norman, Ehren L. Newman & Greg Detre - 2007 - Psychological Review 114 (4):887-953.
  46. Differential neural network configuration during human path integration. Aiden E. G. F. Arnold, Ford Burles, Signe Bray, Richard M. Levy & Giuseppe Iaria - 2014 - Frontiers in Human Neuroscience 8.
  47. Using Neural Networks to Generate Inferential Roles for Natural Language. Peter Blouw & Chris Eliasmith - 2018 - Frontiers in Psychology 8.
  48. Neural networks learn highly selective representations in order to overcome the superposition catastrophe. Jeffrey S. Bowers, Ivan I. Vankov, Markus F. Damian & Colin J. Davis - 2014 - Psychological Review 121 (2):248-261.
  49. A neural-network interpretation of selection in learning and behavior. José E. Burgos - 2001 - Behavioral and Brain Sciences 24 (3):531-533.
    In their account of learning and behavior, the authors define an interactor as emitted behavior that operates on the environment, which excludes Pavlovian learning. A unified neural-network account of the operant-Pavlovian dichotomy favors interpreting neurons as interactors and synaptic efficacies as replicators. The latter interpretation implies that single-synapse change is inherently Lamarckian.
  50. Convolutional Neural Network Based Vehicle Classification in Adverse Illuminous Conditions for Intelligent Transportation Systems. Muhammad Atif Butt, Asad Masood Khattak, Sarmad Shafique, Bashir Hayat, Saima Abid, Ki-Il Kim, Muhammad Waqas Ayub, Ahthasham Sajid & Awais Adnan - 2021 - Complexity 2021:1-11.
    In step with rapid advancements in computer vision, vehicle classification demonstrates considerable potential to reshape intelligent transportation systems. In the last couple of decades, image processing and pattern recognition-based vehicle classification systems have been used to improve the effectiveness of automated highway toll collection and traffic monitoring systems. However, these methods are trained on limited handcrafted features extracted from small datasets, which do not cater to real-time road traffic conditions. Deep learning-based classification systems have been proposed to incorporate the (...)
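    The abstract above motivates a deep convolutional classifier for vehicles seen under poor illumination. To make the kind of architecture involved concrete, the sketch below defines a bare-bones CNN classifier in PyTorch; the 64x64 input size, the four vehicle classes and the layer widths are assumptions for illustration and do not reproduce the authors' network or their handling of low-light images.

    import torch
    import torch.nn as nn

    class TinyVehicleCNN(nn.Module):
        """Illustrative CNN for 64x64 RGB crops and a handful of vehicle classes."""
        def __init__(self, num_classes=4):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(3, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            )
            self.classifier = nn.Linear(64 * 8 * 8, num_classes)  # 64x64 input -> 8x8 maps

        def forward(self, x):                       # x: (batch, 3, 64, 64)
            h = self.features(x)
            return self.classifier(h.flatten(1))    # per-class scores

    model = TinyVehicleCNN()
    dummy = torch.randn(2, 3, 64, 64)               # two fake low-light crops
    print(model(dummy).shape)                       # torch.Size([2, 4])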
Showing results 1-50 of 1000+.