Results for 'neural networks'

1000+ found
  1. Artificial Neural Network for Forecasting Car Mileage per Gallon in the City. Mohsen Afana, Jomana Ahmed, Bayan Harb, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2018 - International Journal of Advanced Science and Technology 124:51-59.
    In this paper an Artificial Neural Network (ANN) model was used to help car dealers recognize the many characteristics of cars, including manufacturers, their location and the classification of cars according to several categories: Make, Model, Type, Origin, DriveTrain, MSRP, Invoice, EngineSize, Cylinders, Horsepower, MPG_Highway, Weight, Wheelbase, Length. The ANN was used to predict the number of miles per gallon when the car is driven in the city (MPG_City). The results showed that the ANN model was able to predict MPG_City with (...)
    (A minimal illustrative sketch of this kind of model follows this entry.)
    28 citations
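    The entry above describes a feed-forward regression model over tabular car attributes. A rough, hypothetical sketch of such a model (not the authors' code; the feature names are taken from the abstract and the data below are synthetic):

      import numpy as np
      from sklearn.neural_network import MLPRegressor
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      # Stand-in columns for Horsepower, Weight, EngineSize, Cylinders, Wheelbase, Length.
      X = rng.normal(size=(200, 6))
      # Synthetic MPG_City target; a real study would load the dealer dataset instead.
      y = 25.0 - 2.0 * X[:, 0] - 1.5 * X[:, 1] + rng.normal(scale=1.0, size=200)

      model = make_pipeline(
          StandardScaler(),
          MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
      )
      model.fit(X, y)
      print("training R^2:", model.score(X, y))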
  2. Artificial Neural Network for Predicting Car Performance Using JNN. Awni Ahmed Al-Mobayed, Youssef Mahmoud Al-Madhoun, Mohammed Nasser Al-Shuwaikh & Samy S. Abu-Naser - 2020 - International Journal of Engineering and Information Systems (IJEAIS) 4 (9):139-145.
    In this paper an Artificial Neural Network (ANN) model was used to help car dealers recognize the many characteristics of cars, including manufacturers, their location and the classification of cars according to several categories: Buying, Maint, Doors, Persons, Lug_boot, Safety, and Overall. The ANN was used to forecast car acceptability. The results showed that the ANN model was able to predict car acceptability with 99.12% accuracy. The factor of Safety has the most influence on car acceptability evaluation. Comparative study method (...)
    21 citations
  3. Some Neural Networks Compute, Others Don't. Gualtiero Piccinini - 2008 - Neural Networks 21 (2-3):311-321.
    I address whether neural networks perform computations in the sense of computability theory and computer science. I explicate and defend the following theses. (1) Many neural networks compute—they perform computations. (2) Some neural networks compute in a classical way. Ordinary digital computers, which are very large networks of logic gates, belong in this class of neural networks. (3) Other neural networks compute in a non-classical way. (4) Yet other neural networks (...)
     
    17 citations
  4. Evolving Self-taught Neural Networks: The Baldwin Effect and the Emergence of Intelligence. Nam Le - 2019 - In AISB Annual Convention 2019 -- 10th Symposium on AI & Games.
    The so-called Baldwin Effect generally says how learning, as a form of ontogenetic adaptation, can influence the process of phylogenetic adaptation, or evolution. This idea has also been taken into computation in which evolution and learning are used as computational metaphors, including evolving neural networks. This paper presents a technique called evolving self-taught neural networks: neural networks that can teach themselves without external supervision or reward. The self-taught neural network is intrinsically motivated. (...)
    1 citation
  5. Large neural networks for the resolution of lexical ambiguity. Jean Véronis & Nancy Ide - 1995 - In Patrick Saint-Dizier & Evelyne Viegas (eds.), Computational lexical semantics. New York: Cambridge University Press. pp. 251-269.
  6. Deep problems with neural network models of human vision. Jeffrey S. Bowers, Gaurav Malhotra, Marin Dujmović, Milton Llera Montero, Christian Tsvetkov, Valerio Biscione, Guillermo Puebla, Federico Adolfi, John E. Hummel, Rachel F. Heaton, Benjamin D. Evans, Jeffrey Mitchell & Ryan Blything - 2023 - Behavioral and Brain Sciences 46:e385.
    Deep neural networks (DNNs) have had extraordinary successes in classifying photographic images of objects and are often described as the best models of biological vision. This conclusion is largely based on three sets of findings: (1) DNNs are more accurate than any other model in classifying images taken from various datasets, (2) DNNs do the best job in predicting the pattern of human errors in classifying objects taken from various behavioral datasets, and (3) DNNs do the best job (...)
    6 citations
  7. A Neural Network Framework for Cognitive Bias. Johan E. Korteling, Anne-Marie Brouwer & Alexander Toet - 2018 - Frontiers in Psychology 9:358644.
    Human decision making shows systematic simplifications and deviations from the tenets of rationality (‘heuristics’) that may lead to suboptimal decisional outcomes (‘cognitive biases’). There are currently three prevailing theoretical perspectives on the origin of heuristics and cognitive biases: a cognitive-psychological, an ecological and an evolutionary perspective. However, these perspectives are mainly descriptive and none of them provides an overall explanatory framework for the underlying mechanisms of cognitive biases. To enhance our understanding of cognitive heuristics and biases we propose a neural network framework for cognitive biases, which explains why our brain systematically tends to default to heuristic (‘Type 1’) decision making. We argue that many cognitive biases arise from intrinsic brain mechanisms that are fundamental for the working of biological neural networks. In order to substantiate our viewpoint, we discern and explain four basic neural network principles: (1) Association, (2) Compatibility, (3) Retainment, and (4) Focus. These principles are inherent to (all) neural networks which were originally optimized to perform concrete biological, perceptual, and motor functions. They form the basis for our inclinations to associate and combine (unrelated) information, to prioritize information that is compatible with our present state (such as knowledge, opinions and expectations), to retain given information that sometimes could better be ignored, and to focus on dominant information while ignoring relevant information that is not directly activated. The supposed mechanisms are complementary and not mutually exclusive. For different cognitive biases they may all contribute in varying degrees to distortion of information. The present viewpoint not only complements the earlier three viewpoints, but also provides a unifying and binding framework for many cognitive bias phenomena.
    4 citations
  8. Neural networks, AI, and the goals of modeling. Walter Veit & Heather Browning - 2023 - Behavioral and Brain Sciences 46:e411.
    Deep neural networks (DNNs) have found many useful applications in recent years. Of particular interest have been those instances where their successes imitate human cognition and many consider artificial intelligences to offer a lens for understanding human intelligence. Here, we criticize the underlying conflation between the predictive and explanatory power of DNNs by examining the goals of modeling.
  9. Diabetes Prediction Using Artificial Neural Network. Nesreen Samer El_Jerjawi & Samy S. Abu-Naser - 2018 - International Journal of Advanced Science and Technology 121:54-64.
    Diabetes is one of the most common diseases worldwide, and no cure has yet been found for it. Caring for people with diabetes costs a great deal of money every year. It is therefore important that the prediction be very accurate and rely on a dependable method. One such method is the use of artificial intelligence systems, in particular Artificial Neural Networks (ANN). In this paper we used artificial neural networks to predict whether a person is diabetic or not. The criterion was to minimize the error function in neural network training using a neural network model. After training the ANN model, the average error function of the neural network was equal to 0.01 and the accuracy of the prediction of whether a person is diabetic or not was 87.3%.
    (A minimal illustrative sketch of such a classifier follows this entry.)
    23 citations
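    The entry above describes training a small feed-forward network on tabular patient data. A rough, hypothetical sketch (not the paper's code; the arrays below are synthetic placeholders for measurements such as those in the public Pima Indians diabetes dataset):

      import numpy as np
      from sklearn.model_selection import train_test_split
      from sklearn.neural_network import MLPClassifier
      from sklearn.pipeline import make_pipeline
      from sklearn.preprocessing import StandardScaler

      rng = np.random.default_rng(0)
      X = rng.normal(size=(768, 8))  # placeholder for 8 clinical features per patient
      y = (X[:, 1] + 0.5 * X[:, 5] + rng.normal(size=768) > 0).astype(int)  # placeholder labels

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
      clf = make_pipeline(
          StandardScaler(),
          MLPClassifier(hidden_layer_sizes=(12,), max_iter=1000, random_state=0),
      )
      clf.fit(X_tr, y_tr)
      print("held-out accuracy:", clf.score(X_te, y_te))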
  10. Antagonistic neural networks underlying differentiated leadership roles. Richard E. Boyatzis, Kylie Rochford & Anthony I. Jack - 2014 - Frontiers in Human Neuroscience 8.
  11. Recurrent neural network-based models for recognizing requisite and effectuation parts in legal texts. Truong-Son Nguyen, Le-Minh Nguyen, Satoshi Tojo, Ken Satoh & Akira Shimazu - 2018 - Artificial Intelligence and Law 26 (2):169-199.
    This paper proposes several recurrent neural network-based models for recognizing requisite and effectuation (RE) parts in legal texts. Firstly, we propose a modification of the BiLSTM-CRF model that allows the use of external features to improve the performance of deep learning models when large annotated corpora are not available. However, this model can only recognize RE parts that do not overlap. Secondly, we propose two approaches for recognizing overlapping RE parts, including the cascading approach which uses the sequence of BiLSTM-CRF (...)
    (A minimal illustrative sketch of a BiLSTM tagger follows this entry.)
    10 citations
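    The model family named in the entry above pairs a bidirectional LSTM encoder with a CRF output layer. A minimal, hypothetical PyTorch sketch of the BiLSTM part only (the CRF layer and the paper's external features are omitted; independent per-token scores stand in for the CRF):

      import torch
      import torch.nn as nn

      class BiLSTMTagger(nn.Module):
          def __init__(self, vocab_size, num_tags, emb_dim=100, hidden=128):
              super().__init__()
              self.embed = nn.Embedding(vocab_size, emb_dim)
              self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
              self.out = nn.Linear(2 * hidden, num_tags)

          def forward(self, token_ids):                # token_ids: (batch, seq_len)
              h, _ = self.lstm(self.embed(token_ids))  # h: (batch, seq_len, 2*hidden)
              return self.out(h)                       # per-token tag scores

      # Toy usage with made-up sizes; the tag set would encode requisite/effectuation spans (e.g. BIO labels).
      model = BiLSTMTagger(vocab_size=5000, num_tags=5)
      scores = model(torch.randint(0, 5000, (2, 30)))
      print(scores.shape)                              # torch.Size([2, 30, 5])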
  12. Glass Classification Using Artificial Neural Network. Mohmmad Jamal El-Khatib, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2019 - International Journal of Academic Pedagogical Research (IJAPR) 3 (23):25-31.
    As a type of evidence, glass can be a very useful contact trace material in a wide range of offences, including burglaries and robberies, hit-and-run accidents, murders, assaults, ram-raids, criminal damage and thefts of and from motor vehicles. All of these offer the potential for glass fragments to be transferred from anything made of glass which breaks, to whoever or whatever was responsible. Variation in the manufacture of glass allows considerable discrimination even with tiny fragments. In this study, we worked on glass classification (...)
    28 citations
  13. Differential neural network configuration during human path integration. Aiden E. G. F. Arnold, Ford Burles, Signe Bray, Richard M. Levy & Giuseppe Iaria - 2014 - Frontiers in Human Neuroscience 8.
  14. The Grossberg Code: Universal Neural Network Signatures of Perceptual Experience. Birgitta Dresp-Langley - 2023 - Information 14 (2):1-82.
    Two universal functional principles of Grossberg’s Adaptive Resonance Theory decipher the brain code of all biological learning and adaptive intelligence. Low-level representations of multisensory stimuli in their immediate environmental context are formed on the basis of bottom-up activation and under the control of top-down matching rules that integrate high-level, long-term traces of contextual configuration. These universal coding principles lead to the establishment of lasting brain signatures of perceptual experience in all living species, from aplysiae to primates. They are re-visited in (...)
    1 citation
  15. Neural networks, nativism, and the plausibility of constructivism. Steven R. Quartz - 1993 - Cognition 48 (3):223-242.
    42 citations
  16. Neural Networks Based Adaptive Consensus for a Class of Fractional-Order Uncertain Nonlinear Multiagent Systems. Jing Bai & Yongguang Yu - 2018 - Complexity 2018:1-10.
    Due to their excellent approximation ability, neural networks are used in a control method to achieve adaptive consensus of fractional-order uncertain nonlinear multiagent systems with external disturbance. The unknown nonlinear term and the external disturbance term in the systems are compensated using the radial basis function neural network method, a corresponding fractional-order adaptation law is designed to approach the ideal neural network weight matrix of the unknown nonlinear terms, and a control law is (...)
    (A minimal illustrative sketch of a radial basis function approximator follows this entry.)
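    The compensation scheme in the entry above rests on a radial basis function (RBF) network approximating an unknown nonlinearity. A rough, hypothetical sketch of that building block (an offline least-squares fit of the output weights; the paper instead updates them online with an adaptation law):

      import numpy as np

      def rbf_features(x, centers, width=0.5):
          # Gaussian basis functions: phi_i(x) = exp(-(x - c_i)^2 / (2 * width^2))
          return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

      f = lambda x: np.sin(3 * x) + 0.3 * x ** 2      # stand-in for the "unknown" nonlinearity
      x_train = np.linspace(-2.0, 2.0, 200)
      centers = np.linspace(-2.0, 2.0, 15)

      Phi = rbf_features(x_train, centers)
      W, *_ = np.linalg.lstsq(Phi, f(x_train), rcond=None)   # output weights
      print("max approximation error:", np.abs(Phi @ W - f(x_train)).max())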
  17. A neural-network interpretation of selection in learning and behavior. José E. Burgos - 2001 - Behavioral and Brain Sciences 24 (3):531-533.
    In their account of learning and behavior, the authors define an interactor as emitted behavior that operates on the environment, which excludes Pavlovian learning. A unified neural-network account of the operant-Pavlovian dichotomy favors interpreting neurons as interactors and synaptic efficacies as replicators. The latter interpretation implies that single-synapse change is inherently Lamarckian.
  18. Convolutional Neural Network Based Vehicle Classification in Adverse Illuminous Conditions for Intelligent Transportation Systems. Muhammad Atif Butt, Asad Masood Khattak, Sarmad Shafique, Bashir Hayat, Saima Abid, Ki-Il Kim, Muhammad Waqas Ayub, Ahthasham Sajid & Awais Adnan - 2021 - Complexity 2021:1-11.
    In step with rapid advancements in computer vision, vehicle classification demonstrates a considerable potential to reshape intelligent transportation systems. In the last couple of decades, image processing and pattern recognition-based vehicle classification systems have been used to improve the effectiveness of automated highway toll collection and traffic monitoring systems. However, these methods are trained on limited handcrafted features extracted from small datasets, which do not cater to real-time road traffic conditions. Deep learning-based classification systems have been proposed to incorporate the (...)
  19. Neural Networks in Legal Theory. Vadim Verenich - 2024 - Studia Humana 13 (3):41-51.
    This article explores the domain of legal analysis and its methodologies, emphasising the significance of generalisation in legal systems. It discusses the process of generalisation in relation to legal concepts and the development of ideal concepts that form the foundation of law. The article examines the role of logical induction and its similarities with semantic generalisation, highlighting their importance in legal decision-making. It also critiques the formal-deductive approach in legal practice and advocates for more adaptable models, incorporating fuzzy logic, non-monotonic (...)
  20. A neural network for creative serial order cognitive behavior. Steve Donaldson - 2008 - Minds and Machines 18 (1):53-91.
    If artificial neural networks are ever to form the foundation for higher level cognitive behaviors in machines or to realize their full potential as explanatory devices for human cognition, they must show signs of autonomy, multifunction operation, and intersystem integration that are absent in most existing models. This model begins to address these issues by integrating predictive learning, sequence interleaving, and sequence creation components to simulate a spectrum of higher-order cognitive behaviors which have eluded the grasp of simpler (...)
    1 citation
  21. Ontology, neural networks, and the social sciences. David Strohmaier - 2020 - Synthese 199 (1-2):4775-4794.
    The ontology of social objects and facts remains a field of continued controversy. This situation complicates the life of social scientists who seek to make predictive models of social phenomena. For the purposes of modelling a social phenomenon, we would like to avoid having to make any controversial ontological commitments. The overwhelming majority of models in the social sciences, including statistical models, are built upon ontological assumptions that can be questioned. Recently, however, artificial neural networks have made their (...)
  22. A neural network model of the structure and dynamics of human personality. Stephen J. Read, Brian M. Monroe, Aaron L. Brownstein, Yu Yang, Gurveen Chopra & Lynn C. Miller - 2010 - Psychological Review 117 (1):61-92.
    9 citations
  23. Using Neural Networks to Generate Inferential Roles for Natural Language. Peter Blouw & Chris Eliasmith - 2018 - Frontiers in Psychology 8.
  24. Discourseology of Linguistic Consciousness: Neural Network Modeling of Some Structural and Semantic Relationships. Vitalii Shymko - 2021 - Psycholinguistics 29 (1):193-207.
    Objective. Study of the validity and reliability of the discourse approach for the psycholinguistic understanding of the nature, structure, and features of the functioning of linguistic consciousness. Materials & Methods. This paper analyzes artificial neural network models built on a corpus of texts obtained in the course of experimental research on the coronavirus quarantine concept as a new category of linguistic consciousness. The methodology of feedforward artificial neural networks (multilayer perceptron) was used in order to (...)
  25. Dynamic Neural Network Reconfiguration During the Generation and Reinstatement of Mnemonic Representations. Aiden E. G. F. Arnold, Arne D. Ekstrom & Giuseppe Iaria - 2018 - Frontiers in Human Neuroscience 12.
  26. Neural Networks and Psychopathology: Connectionist Models in Practice and Research. Dan J. Stein & Jacques Ludik (eds.) - 1998 - Cambridge University Press.
    Reviews the contribution of neural network models in psychiatry and psychopathology, including diagnosis, pharmacotherapy and psychotherapy.
    4 citations
  27. Neural networks underlying contributions from semantics in reading aloud. Olga Boukrina & William W. Graves - 2013 - Frontiers in Human Neuroscience 7.
  28. Biological neural networks in invertebrate neuroethology and robotics. Randall D. Beer, Roy E. Ritzmann & Thomas McKenna - 1994 - Bioessays 16 (11):857.
     
  29. Neural networks learn highly selective representations in order to overcome the superposition catastrophe. Jeffrey S. Bowers, Ivan I. Vankov, Markus F. Damian & Colin J. Davis - 2014 - Psychological Review 121 (2):248-261.
  30. Neural network modeling. B. K. Chakrabarti & A. Basu - 2008 - In Rahul Banerjee & Bikas K. Chakrabarti (eds.), Models of brain and mind: physical, computational, and psychological approaches. Boston: Elsevier.
     
  31. Neural networks ensembles approach for simulation of solar arrays degradation process. Vladimir Bukhtoyarov, Eugene Semenkin & Andrey Shabalov - 2012 - In Emilio Corchado, Vaclav Snasel, Ajith Abraham, Michał Woźniak, Manuel Grana & Sung-Bae Cho (eds.), Hybrid Artificial Intelligent Systems. Springer. pp. 186-195.
  32. Neural network analysis of learning in autism. I. L. Cohen - 1998 - In Dan J. Stein & J. Ludick (eds.), Neural Networks and Psychopathology. Cambridge University Press. pp. 274-315.
  33. Neural network methods for vowel classification in the vocalic systems with the [ATR] (Advanced Tongue Root) contrast. Н. В. Макеева - 2023 - Philosophical Problems of IT and Cyberspace (PhilIT&C) 2:49-60.
    The paper aims to discuss the results of testing a neural network which classifies the vowels of a vocalic system with the [ATR] (Advanced Tongue Root) contrast, based on data from Akebu (Kwa family). The acoustic nature of the [ATR] feature is as yet understudied. The only reliable acoustic correlate of [ATR] is the magnitude of the first formant (F1), which can also be modulated by tongue height, resulting in significant overlap between high [-ATR] vowels and mid [+ATR] vowels. (...)
  34. Neural networks discover a near-identity relation to distinguish simple syntactic forms. Thomas R. Shultz & Alan C. Bale - 2006 - Minds and Machines 16 (2):107-139.
    Computer simulations show that an unstructured neural-network model [Shultz, T. R., & Bale, A. C. (2001). Infancy, 2, 501–536] covers the essential features of infant learning of simple grammars in an artificial language [Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Science, 283, 77–80], and generalizes to examples both outside and inside of the range of training sentences. Knowledge-representation analyses confirm that these networks discover that duplicate words in the sentences are nearly identical and (...)
    1 citation
  35. Stacked neural networks must emulate evolution's hierarchical complexity. Michael Lamport Commons - 2008 - World Futures 64 (5-7):444-451.
    The missing ingredients in efforts to develop neural networks and artificial intelligence (AI) that can emulate human intelligence have been the evolutionary processes of performing tasks at increased orders of hierarchical complexity. Stacked neural networks based on the Model of Hierarchical Complexity could emulate evolution's actual learning processes and behavioral reinforcement. Theoretically, this should result in stability and reduce certain programming demands. The eventual success of such methods begs questions of humans' survival in the face of (...)
  36. The Grossberg Code: Universal Neural Network Signatures of Perceptual Experience. Birgitta Dresp-Langley - 2023 - Information 14 (2):e82, 1-17.
    Two universal functional principles of Grossberg’s Adaptive Resonance Theory [19] decipher the brain code of all biological learning and adaptive intelligence. Low-level representations of multisensory stimuli in their immediate environmental context are formed on the basis of bottom-up activation and under the control of top-down matching rules that integrate high-level long-term traces of contextual configuration. These universal coding principles lead to the establishment of lasting brain signatures of perceptual experience in all living species, from aplysiae to primates. They are re-visited (...)
  37. Knowledge Bases and Neural Network Synthesis. Todd R. Davies - 1991 - In Hozumi Tanaka (ed.), Artificial Intelligence in the Pacific Rim: Proceedings of the Pacific Rim International Conference on Artificial Intelligence. IOS Press. pp. 717-722.
    We describe and try to motivate our project to build systems using both a knowledge-based and a neural network approach. These two approaches are used at different stages in the solution of a problem, instead of using knowledge bases exclusively on some problems and neural nets exclusively on others. The knowledge base (KB) is defined first in a declarative, symbolic language that is easy to use. It is then compiled into an efficient neural network (NN) representation, (...)
  38. Tic-Tac-Toe Learning Using Artificial Neural Networks. Mohaned Abu Dalffa, Bassem S. Abu-Nasser & Samy S. Abu-Naser - 2019 - International Journal of Engineering and Information Systems (IJEAIS) 3 (2):9-19.
    This research involves training an Artificial Neural Network (ANN) to play the tic-tac-toe board game, teaching the ANN the tic-tac-toe logic from the set of mathematical combinations of the sequences that could be played by the system, using both the Gradient Descent Algorithm explicitly and Elimination theory rules implicitly. The system should thus be able to produce suitable amalgamations to solve every state within the game course to make better of (...)
    (A minimal illustrative sketch follows this entry.)
    26 citations
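    The entry above trains a network on enumerated tic-tac-toe positions. A small, hypothetical sketch of the same flavour (not the authors' code): an MLP learns to recognise boards in which X has a winning line, with the full enumeration of 3^9 fillings standing in for real game sequences.

      import itertools
      import numpy as np
      from sklearn.neural_network import MLPClassifier

      LINES = [(0, 1, 2), (3, 4, 5), (6, 7, 8), (0, 3, 6),
               (1, 4, 7), (2, 5, 8), (0, 4, 8), (2, 4, 6)]

      def x_wins(board):
          # board: 9-vector with 1 = X, -1 = O, 0 = empty
          return any(all(board[i] == 1 for i in line) for line in LINES)

      boards = np.array(list(itertools.product([1, -1, 0], repeat=9)))
      labels = np.array([x_wins(b) for b in boards], dtype=int)

      clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
      clf.fit(boards, labels)
      print("training accuracy:", clf.score(boards, labels))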
  39. Adaptive Neural Network Control for Nonlinear Hydraulic Servo-System with Time-Varying State Constraints. Shu-Min Lu & Dong-Juan Li - 2017 - Complexity:1-11.
    An adaptive neural network control problem is addressed for a class of nonlinear hydraulic servo-systems with time-varying state constraints. In view of the low precision problem of the traditional hydraulic servo-system which is caused by the tracking errors surpassing appropriate bound, the previous works have shown that the constraint for the system is a good way to solve the low precision problem. Meanwhile, compared with constant constraints, the time-varying state constraints are more general in the actual systems. Therefore, when (...)
    3 citations
  40. A Neural Network Approach to Timbre Discrimination of Identical Pitch Signals. S. Sayegh, C. Pomalaza, M. Badie & K. B. Beer - 1997 - Journal of Intelligent Systems 7 (3-4):339-348.
  41. Neural Network Connectivity During Post-encoding Rest: Linking Episodic Memory Encoding and Retrieval. Okka J. Risius, Oezguer A. Onur, Julian Dronse, Boris von Reutern, Nils Richter, Gereon R. Fink & Juraj Kukolja - 2019 - Frontiers in Human Neuroscience 12:406602.
    Commonly, a switch between networks mediating memory encoding and those mediating retrieval is observed. This may not only be due to differential involvement of neural resources due to distinct cognitive processes but could also reflect the formation of new memory traces and their dynamic change during consolidation. We used resting state fMRI to measure functional connectivity (FC) changes during post-encoding rest, hypothesizing that during this phase, new functional connections between encoding- and retrieval-related regions are created. Interfering and reminding (...)
  42. RBF Neural Network Backstepping Sliding Mode Adaptive Control for Dynamic Pressure Cylinder Electrohydraulic Servo Pressure System. Pan Deng, Liangcai Zeng & Yang Liu - 2018 - Complexity 2018:1-16.
    1 citation
  43. Neural Networks: Test Tubes to Theorems. Leon N. Cooper, Mark F. Bear, Ford F. Ebner & Christopher Scofield - 1990 - In J. McGaugh, Jerry Weinberger & G. Lynch (eds.), Brain Organization and Memory. Guilford Press.
  44. Artificial Neural Networks in Medicine and Biology. Helge Malmgren - unknown
    Artificial neural networks (ANNs) are new mathematical techniques which can be used for modelling real neural networks, but also for data categorisation and inference tasks in any empirical science. This means that they have a twofold interest for the philosopher. First, ANN theory could help us to understand the nature of mental phenomena such as perceiving, thinking, remembering, inferring, knowing, wanting and acting. Second, because ANNs are such powerful instruments for data classification and inference, their use (...)
  45. Adaptive Neural Networks Control Using Barrier Lyapunov Functions for DC Motor System with Time-Varying State Constraints. Lei Ma & Dapeng Li - 2018 - Complexity 2018:1-9.
    This paper proposes an adaptive neural network control approach for a direct-current system with full state constraints. To guarantee that the state constraints always remain within the asymmetric time-varying constraint regions, an asymmetric time-varying Barrier Lyapunov Function is employed to structure an adaptive NN controller. Since the constant constraint is only a special case of the time-varying constraint, the proposed control method is more general for dealing with the constraint problem compared with the existing works (...)
    (A standard form of such a barrier function is recalled after this entry.)
    1 citation
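    For readers unfamiliar with the device named above: a commonly used log-type barrier Lyapunov function from the control literature (a standard textbook form, not necessarily the exact asymmetric one used in this paper) is, in LaTeX notation,

      V(z, t) = \frac{1}{2}\ln\frac{k_b(t)^2}{k_b(t)^2 - z^2}, \qquad |z| < k_b(t),

    where z is a tracking error and k_b(t) is the time-varying bound. V stays finite while |z| < k_b(t) and grows without bound as |z| approaches k_b(t), so a controller that keeps V bounded automatically keeps the constraint satisfied.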
  46. Neural networks and psychopathology: an introduction. Dan J. Stein & Jacques Ludik - 1998 - In Dan J. Stein & J. Ludick (eds.), Neural Networks and Psychopathology. Cambridge University Press.
     
  47. Deep neural networks are not a single hypothesis but a language for expressing computational hypotheses. Tal Golan, JohnMark Taylor, Heiko Schütt, Benjamin Peters, Rowan P. Sommers, Katja Seeliger, Adrien Doerig, Paul Linton, Talia Konkle, Marcel van Gerven, Konrad Kording, Blake Richards, Tim C. Kietzmann, Grace W. Lindsay & Nikolaus Kriegeskorte - 2023 - Behavioral and Brain Sciences 46:e392.
    An ideal vision model accounts for behavior and neurophysiology in both naturalistic conditions and designed lab experiments. Unlike psychological theories, artificial neural networks (ANNs) actually perform visual tasks and generate testable predictions for arbitrary inputs. These advantages enable ANNs to engage the entire spectrum of the evidence. Failures of particular models drive progress in a vibrant ANN research program of human vision.
  48. Neural Network Model for Predicting Student Failure in the Academic Leveling Course of Escuela Politécnica Nacional. Iván Sandoval-Palis, David Naranjo, Raquel Gilar-Corbi & Teresa Pozo-Rico - 2020 - Frontiers in Psychology 11.
    The purpose of this study is to train an artificial neural network model for predicting student failure in the academic leveling course of the Escuela Politécnica Nacional of Ecuador, based on academic and socioeconomic information. For this, 1308 higher education students participated, 69.0% of whom failed the academic leveling course; besides, 93.7% of the students self-identified as mestizo, 83.9% came from the province of Pichincha, and 92.4% belonged to general population. As a first approximation, a neural network model (...)
  49. Neither neural networks nor the language-of-thought alone make a complete game. Iris Oved, Nikhil Krishnaswamy, James Pustejovsky & Joshua K. Hartshorne - 2023 - Behavioral and Brain Sciences 46:e285.
    Cognitive science has evolved since early disputes between radical empiricism and radical nativism. The authors are reacting to the revival of radical empiricism spurred by recent successes in deep neural network (NN) models. We agree that language-like mental representations (language-of-thoughts [LoTs]) are part of the best game in town, but they cannot be understood independent of the other players.
  50. Neural Network Models of Conditionals. Hannes Leitgeb - 2012 - In Sven Ove Hansson & Vincent F. Hendricks (eds.), Introduction to Formal Philosophy. Cham: Springer. pp. 147-176.
    This chapter explains how artificial neural networks may be used as models for reasoning, conditionals, and conditional logic. It starts with the historical overlap between neural network research and logic, it discusses connectionism as a paradigm in cognitive science that opposes the traditional paradigm of symbolic computationalism, it mentions some recent accounts of how logic and neural networks may be combined, and it ends with a couple of open questions concerning the future of this area (...)
1 — 50 / 1000