Results for 'Neural Networks (Computer)'

489 found
  1. Some Neural Networks Compute, Others Don't. Gualtiero Piccinini - 2008 - Neural Networks 21 (2-3):311-321.
    I address whether neural networks perform computations in the sense of computability theory and computer science. I explicate and defend
    the following theses. (1) Many neural networks compute—they perform computations. (2) Some neural networks compute in a classical way.
    Ordinary digital computers, which are very large networks of logic gates, belong in this class of neural networks. (3) Other neural networks
    compute in a non-classical way. (4) Yet other neural networks do not perform computations. Brains may well fall into this last class.
    17 citations
  2. Deep neural networks are not a single hypothesis but a language for expressing computational hypotheses. Tal Golan, JohnMark Taylor, Heiko Schütt, Benjamin Peters, Rowan P. Sommers, Katja Seeliger, Adrien Doerig, Paul Linton, Talia Konkle, Marcel van Gerven, Konrad Kording, Blake Richards, Tim C. Kietzmann, Grace W. Lindsay & Nikolaus Kriegeskorte - 2023 - Behavioral and Brain Sciences 46:e392.
    An ideal vision model accounts for behavior and neurophysiology in both naturalistic conditions and designed lab experiments. Unlike psychological theories, artificial neural networks (ANNs) actually perform visual tasks and generate testable predictions for arbitrary inputs. These advantages enable ANNs to engage the entire spectrum of the evidence. Failures of particular models drive progress in a vibrant ANN research program of human vision.
  3. The application of neural network algorithm and embedded system in computer distance teach system. Qin Qiu - 2022 - Journal of Intelligent Systems 31 (1):148-158.
    The computer distance teaching system teaches through the network, and there is no entrance threshold. Any student who is willing to study can log in to the network computer distance teaching system for study at any free time. Neural network has a strong self-learning ability and is an important part of artificial intelligence research. Based on this study, a neural network-embedded architecture based on shared memory and bus structure is proposed. By looking for an alternative method (...)
    1 citation
  4. Neural Network-Based Intelligent Computing Algorithms for Discrete-Time Optimal Control with the Application to a Cyberphysical Power System. Feng Jiang, Kai Zhang, Jinjing Hu & Shunjiang Wang - 2021 - Complexity 2021:1-10.
    Adaptive dynamic programming, which belongs to the field of computational intelligence, is a powerful tool to address optimal control problems. To overcome the bottleneck of solving Hamilton–Jacobi–Bellman equations, several state-of-the-art ADP approaches are reviewed in this paper. First, two model-based offline iterative ADP methods including policy iteration and value iteration are given, and their respective advantages and shortcomings are discussed in detail. Second, the multistep heuristic dynamic programming method is introduced, which avoids the requirement of initial admissible control and achieves (...)
  5. Neural networks and computational theory: Solving the right problem. David C. Plaut - 1989 - Behavioral and Brain Sciences 12 (3):411-413.
  6. Universal computation in fluid neural networks. Ricard V. Solé & Jordi Delgado - 1996 - Complexity 2 (2):49-56.
    Fluid neural networks can be used as a theoretical framework for a wide range of complex systems such as social insects. In this article we show that collective logical gates can be built in such a way that complex computation can be possible by means of the interplay between local interactions and the collective creation of a global field. This is exemplified by a NOR gate. Some general implications for ant societies are outlined.
  7. Moving beyond content‐specific computation in artificial neural networks. Nicholas Shea - 2021 - Mind and Language 38 (1):156-177.
    A basic deep neural network (DNN) is trained to exhibit a large set of input–output dispositions. While being a good model of the way humans perform some tasks automatically, without deliberative reasoning, more is needed to approach human‐like artificial intelligence. Analysing recent additions brings to light a distinction between two fundamentally different styles of computation: content‐specific and non‐content‐specific computation (as first defined here). For example, deep episodic RL networks draw on both. So does human conceptual reasoning. Combining the (...)
    4 citations
  8. Locality, modularity, and computational neural networks. Horst Bischof - 1997 - Behavioral and Brain Sciences 20 (3):516-517.
    There is a distinction between locality and modularity. These two terms have often been used interchangeably in the target article and commentary. Using this distinction we argue in favor of modularity. In addition, we argue that both PDP-type networks and box-and-arrow models have their own strengths and pitfalls.
  9. How to train a neural network: An introduction to the new computational paradigm. Jeffrey Johnson & Philip Picton - 1996 - Complexity 1 (6):13-28.
  10. Deep problems with neural network models of human vision. Jeffrey S. Bowers, Gaurav Malhotra, Marin Dujmović, Milton Llera Montero, Christian Tsvetkov, Valerio Biscione, Guillermo Puebla, Federico Adolfi, John E. Hummel, Rachel F. Heaton, Benjamin D. Evans, Jeffrey Mitchell & Ryan Blything - 2023 - Behavioral and Brain Sciences 46:e385.
    Deep neural networks (DNNs) have had extraordinary successes in classifying photographic images of objects and are often described as the best models of biological vision. This conclusion is largely based on three sets of findings: (1) DNNs are more accurate than any other model in classifying images taken from various datasets, (2) DNNs do the best job in predicting the pattern of human errors in classifying objects taken from various behavioral datasets, and (3) DNNs do the best job (...)
    4 citations
  11. Convolutional Neural Network Based Vehicle Classification in Adverse Illuminous Conditions for Intelligent Transportation Systems. Muhammad Atif Butt, Asad Masood Khattak, Sarmad Shafique, Bashir Hayat, Saima Abid, Ki-Il Kim, Muhammad Waqas Ayub, Ahthasham Sajid & Awais Adnan - 2021 - Complexity 2021:1-11.
    In step with rapid advancements in computer vision, vehicle classification demonstrates a considerable potential to reshape intelligent transportation systems. In the last couple of decades, image processing and pattern recognition-based vehicle classification systems have been used to improve the effectiveness of automated highway toll collection and traffic monitoring systems. However, these methods are trained on limited handcrafted features extracted from small datasets, which do not cater the real-time road traffic conditions. Deep learning-based classification systems have been proposed to incorporate (...)
  12. Entrepreneurship education-infiltrated computer-aided instruction system for college Music Majors using convolutional neural network. Hong Cao - 2022 - Frontiers in Psychology 13.
    The purpose is to improve the teaching and learning efficiency of college Innovation and Entrepreneurship Education. Firstly, from the perspective of aesthetic education, this work designs the teacher and student sides of the Computer-aided Instruction system. Secondly, the CAI model is implemented based on the weight sharing and local perception of the Convolutional Neural Network. Finally, the performance of the CNN-based CAI model is tested. Meanwhile, it analyses students’ IEE experience under the proposed CAI model through a case (...)
    2 citations
  13. Computability of Logical Neural Networks. T. B. Ludermir - 1992 - Journal of Intelligent Systems 2 (1-4):261-290.
  14. Neural networks discover a near-identity relation to distinguish simple syntactic forms. Thomas R. Shultz & Alan C. Bale - 2006 - Minds and Machines 16 (2):107-139.
    Computer simulations show that an unstructured neural-network model [Shultz, T. R., & Bale, A. C. (2001). Infancy, 2, 501–536] covers the essential features of infant learning of simple grammars in an artificial language [Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Science, 283, 77–80], and generalizes to examples both outside and inside of the range of training sentences. Knowledge-representation analyses confirm that these networks discover that duplicate words in the sentences are nearly identical (...)
    1 citation
  15. Neural network modeling. B. K. Chakrabarti & A. Basu - 2008 - In Rahul Banerjee & Bikas K. Chakrabarti (eds.), Models of brain and mind: physical, computational, and psychological approaches. Boston: Elsevier.
  16. Combining distributed and localist computations in real-time neural networks. Gail A. Carpenter - 2000 - Behavioral and Brain Sciences 23 (4):473-474.
    In order to benefit from the advantages of localist coding, neural models that feature winner-take-all representations at the top level of a network hierarchy must still solve the computational problems inherent in distributed representations at the lower levels.
  17. Convolutional neural networks reveal differences in action units of facial expressions between face image databases developed in different countries. Mikio Inagaki, Tatsuro Ito, Takashi Shinozaki & Ichiro Fujita - 2022 - Frontiers in Psychology 13.
    Cultural similarities and differences in facial expressions have been a controversial issue in the field of facial communications. A key step in addressing the debate regarding the cultural dependency of emotional expression is to characterize the visual features of specific facial expressions in individual cultures. Here we developed an image analysis framework for this purpose using convolutional neural networks that through training learned visual features critical for classification. We analyzed photographs of facial expressions derived from two databases, each (...)
  18. Neural Networks and Intellect: Using Model Based Concepts. Leonid I. Perlovsky - 2000 - Oxford, England and New York, NY, USA: Oxford University Press USA.
    This work describes a mathematical concept of modelling field theory and its applications to a variety of problems, while offering a view of the relationships among mathematics, computational concepts in neural networks, semiotics, and concepts of mind in psychology and philosophy.
  19. Interacting neural networks and the emergence of social structure. Christina Stoica-Klüver & Jürgen Klüver - 2007 - Complexity 12 (3):41-52.
  20. Intelligent Computing in Bioinformatics-Genetic Algorithm and Neural Network Based Classification in Microarray Data Analysis with Biological Validity Assessment. Vitoantonio Bevilacqua, Giuseppe Mastronardi & Filippo Menolascina - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 4115--475.
  21. Two-Level Domain Adaptation Neural Network for EEG-Based Emotion Recognition. Guangcheng Bao, Ning Zhuang, Li Tong, Bin Yan, Jun Shu, Linyuan Wang, Ying Zeng & Zhichong Shen - 2021 - Frontiers in Human Neuroscience 14.
    Emotion recognition plays an important part in human-computer interaction. Currently, the main challenge in electroencephalogram (EEG)-based emotion recognition is the non-stationarity of EEG signals, which causes the performance of the trained model to decrease over time. In this paper, we propose a two-level domain adaptation neural network to construct a transfer model for EEG-based emotion recognition. Specifically, deep features from the topological graph, which preserve topological information from EEG signals, are extracted using a deep neural network. These features are (...)
    1 citation
  22. Neural Network Models as Evidence for Different Types of Visual Representations. Stephen M. Kosslyn, Christopher F. Chabris & David P. Baker - 1995 - Cognitive Science 19 (4):575-579.
    Cook (1995) criticizes the work of Jacobs and Kosslyn (1994) on spatial relations, shape representations, and receptive fields in neural network models on the grounds that first‐order correlations between input and output unit activities can explain the results. We reply briefly to Cook's arguments here (and in Kosslyn, Chabris, Marsolek, Jacobs & Koenig, 1995) and discuss how new simulations can confirm the importance of receptive field size as a crucial variable in the encoding of categorical and coordinate spatial relations (...)
    1 citation
  23. A neural network to identify requests, decisions, and arguments in court rulings on custody. José Félix Muñoz-Soro, Rafael del Hoyo Alonso, Rosa Montañes & Francisco Lacueva - forthcoming - Artificial Intelligence and Law:1-35.
    Court rulings are among the most important documents in all legal systems. This article describes a study in which natural language processing is used for the automatic characterization of Spanish judgments that deal with the physical custody (joint or individual) of minors. The model was trained to identify a set of elements: the type of custody requested by the plaintiff, the type of custody decided on by the court, and eight of the most commonly used arguments in this type of (...)
  24. Hybridizing Evolutionary Computation and Deep Neural Networks: An Approach to Handwriting Recognition Using Committees and Transfer Learning. Alejandro Baldominos, Yago Saez & Pedro Isasi - 2019 - Complexity 2019:1-16.
  25. A Radial Basis Function Neural Network Approach to Predict Preschool Teachers’ Technology Acceptance Behavior. Dana Rad, Gilbert C. Magulod, Evelina Balas, Alina Roman, Anca Egerau, Roxana Maier, Sonia Ignat, Tiberiu Dughi, Valentina Balas, Edgar Demeter, Gavril Rad & Roxana Chis - 2022 - Frontiers in Psychology 13.
    With the continual development of artificial intelligence and smart computing in recent years, quantitative approaches have become increasingly popular as an efficient modeling tool as they do not necessitate complicated mathematical models. Many nations have taken steps, such as transitioning to online schooling, to decrease the harm caused by coronaviruses. Inspired by the demand for technology in early education, the present research uses a radial basis function neural network modeling technique to predict preschool instructors’ technology usage in classes based (...)
    2 citations
  26. Computational Intelligence Part II Lecture 1: Identification Using Neural Networks. Farzaneh Abdollahi - 2009 - In L. Magnani (ed.), Computational Intelligence.
  27. Subject-Independent Functional Near-Infrared Spectroscopy-Based Brain–Computer Interfaces Based on Convolutional Neural Networks. Jinuk Kwon & Chang-Hwan Im - 2021 - Frontiers in Human Neuroscience 15.
    Functional near-infrared spectroscopy has attracted increasing attention in the field of brain–computer interfaces owing to their advantages such as non-invasiveness, user safety, affordability, and portability. However, fNIRS signals are highly subject-specific and have low test-retest reliability. Therefore, individual calibration sessions need to be employed before each use of fNIRS-based BCI to achieve a sufficiently high performance for practical BCI applications. In this study, we propose a novel deep convolutional neural network -based approach for implementing a subject-independent fNIRS-based BCI. (...)
  28. A Novel Resource Productivity Based on Granular Neural Network in Cloud Computing. Farnaz Mahan, Seyyed Meysam Rozehkhani & Witold Pedrycz - 2021 - Complexity 2021:1-15.
    In recent years, due to the growing demand for computational resources, particularly in cloud computing systems, the data centers’ energy consumption is continually increasing, which directly causes price rise and reductions of resources’ productivity. Although many energy-aware approaches attempt to minimize the consumption of energy, they cannot minimize the violation of service-level agreements at the same time. In this paper, we propose a method using a granular neural network, which is used to model data processing. This method identifies the (...)
  29. How Do Artificial Neural Networks Classify Musical Triads? A Case Study in Eluding Bonini's Paradox. Arturo Perez, Helen L. Ma, Stephanie Zawaduk & Michael R. W. Dawson - 2023 - Cognitive Science 47 (1):e13233.
    How might artificial neural networks (ANNs) inform cognitive science? Often cognitive scientists use ANNs but do not examine their internal structures. In this paper, we use ANNs to explore how cognition might represent musical properties. We train ANNs to classify musical chords, and we interpret network structure to determine what representations ANNs discover and use. We find connection weights between input units and hidden units can be described using Fourier phase spaces, a representation studied in musical set theory. (...)
  30. Evolving Self-taught Neural Networks: The Baldwin Effect and the Emergence of Intelligence. Nam Le - 2019 - In AISB Annual Convention 2019 -- 10th Symposium on AI & Games.
    The so-called Baldwin Effect generally says how learning, as a form of ontogenetic adaptation, can influence the process of phylogenetic adaptation, or evolution. This idea has also been taken into computation in which evolution and learning are used as computational metaphors, including evolving neural networks. This paper presents a technique called evolving self-taught neural networks: neural networks that can teach themselves without external supervision or reward. The self-taught neural network is intrinsically motivated. (...)
    1 citation
  31. Neural Networks and Statistical Learning Methods (III)-The Application of Modified Hierarchy Genetic Algorithm Based on Adaptive Niches. Wei-Min Qi, Qiao-Ling Ji & Wei-You Cai - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 3930--842.
  32. Theorem proving in artificial neural networks: new frontiers in mathematical AI. Markus Pantsar - 2024 - European Journal for Philosophy of Science 14 (1):1-22.
    Computer assisted theorem proving is an increasingly important part of mathematical methodology, as well as a long-standing topic in artificial intelligence (AI) research. However, the current generation of theorem proving software have limited functioning in terms of providing new proofs. Importantly, they are not able to discriminate interesting theorems and proofs from trivial ones. In order for computers to develop further in theorem proving, there would need to be a radical change in how the software functions. Recently, machine learning (...)
  33. A separable convolutional neural network-based fast recognition method for AR-P300. Chunzhao He, Yulin Du & Xincan Zhao - 2022 - Frontiers in Human Neuroscience 16:986928.
    Augmented reality-based brain–computer interface (AR–BCI) has a low signal-to-noise ratio (SNR) and high real-time requirements. Classical machine learning algorithms that improve the recognition accuracy through multiple averaging significantly affect the information transfer rate (ITR) of the AR–SSVEP system. In this study, a fast recognition method based on a separable convolutional neural network (SepCNN) was developed for an AR-based P300 component (AR–P300). SepCNN achieved single extraction of AR–P300 features and improved the recognition speed. A nine-target AR–P300 single-stimulus paradigm was (...)
  34. Computationalism, Neural Networks and Minds, Analog or Otherwise. Michael G. Dyer & Boelter Hall - unknown
    A working hypothesis of computationalism is that Mind arises, not from the intrinsic nature of the causal properties of particular forms of matter, but from the organization of matter. If this hypothesis is correct, then a wide range of physical systems (e.g. optical, chemical, various hybrids, etc.) should support Mind, especially computers, since they have the capability to create/manipulate organizations of bits of arbitrary complexity and dynamics. In any particular computer, these bit patterns are quite physical, but their particular (...)
  35. Nonmonotonic Inferences and Neural Networks. Reinhard Blutner - 2004 - Synthese 142 (2):143-174.
    There is a gap between two different modes of computation: the symbolic mode and the subsymbolic (neuron-like) mode. The aim of this paper is to overcome this gap by viewing symbolism as a high-level description of the properties of (a class of) neural networks. Combining methods of algebraic semantics and non-monotonic logic, the possibility of integrating both modes of viewing cognition is demonstrated. The main results are (a) that certain activities of connectionist networks can be interpreted as (...)
    3 citations
  36. Neural Networks-Fast Kernel Classifier Construction Using Orthogonal Forward Selection to Minimise Leave-One-Out Misclassification Rate. X. Hong, S. Chen & C. J. Harris - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 4113--106.
  37. Large neural networks for the resolution of lexical ambiguity. Jean Véronis & Nancy Ide - 1995 - In Patrick Saint-Dizier & Evelyne Viegas (eds.), Computational Lexical Semantics. Cambridge University Press. pp. 251--269.
  38. Neural Network Applications-Face Recognition Using Probabilistic Two-Dimensional Principal Component Analysis and Its Mixture Model. Haixian Wang & Zilan Hu - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 4221--337.
  39. How do connectionist networks compute? Gerard O'Brien & Jonathan Opie - 2006 - Cognitive Processing 7 (1):30-41.
    Although connectionism is advocated by its proponents as an alternative to the classical computational theory of mind, doubts persist about its _computational_ credentials. Our aim is to dispel these doubts by explaining how connectionist networks compute. We first develop a generic account of computation—no easy task, because computation, like almost every other foundational concept in cognitive science, has resisted canonical definition. We opt for a characterisation that does justice to the explanatory role of computation in cognitive science. Next we (...)
    21 citations
  40. Connectionist representations of tonal music: discovering musical patterns by interpreting artificial neural networks. Michael Robert William Dawson - 2018 - Edmonton, Alberta: AU Press.
    Intended to introduce readers to the use of artificial neural networks in the study of music, this volume contains numerous case studies and research findings that address problems related to identifying scales, keys, classifying musical chords, and learning jazz chord progressions. A detailed analysis of networks is provided for each case study which together demonstrate that focusing on the internal structure of trained networks could yield important contributions to the field of music cognition.
  41. Handbook of Brain Theory and Neural Networks. Michael A. Arbib (ed.) - 1995 - MIT Press.
    Choice Outstanding Academic Title, 1996. In hundreds of articles by experts from around the world, and in overviews and "road maps" prepared by the editor, The Handbook of Brain Theory and Neural Networks charts the immense progress made in recent years in many specific areas related to two great questions: How does the brain work? and How can we build intelligent machines? While many books have appeared on limited aspects of one subfield or another of brain theory and neural (...)
    15 citations
  42. Computational Finance and Business Intelligence-Comparisons of the Different Frequencies of Input Data for Neural Networks in Foreign Exchange Rates Forecasting. Wei Huang, Lean Yu, Shouyang Wang, Yukun Bao & Lin Wang - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 517-524.
  43. EARSHOT: A Minimal Neural Network Model of Incremental Human Speech Recognition. James S. Magnuson, Heejo You, Sahil Luthra, Monica Li, Hosung Nam, Monty Escabí, Kevin Brown, Paul D. Allopenna, Rachel M. Theodore, Nicholas Monto & Jay G. Rueckl - 2020 - Cognitive Science 44 (4):e12823.
    Despite the lack of invariance problem (the many‐to‐many mapping between acoustics and percepts), human listeners experience phonetic constancy and typically perceive what a speaker intends. Most models of human speech recognition (HSR) have side‐stepped this problem, working with abstract, idealized inputs and deferring the challenge of working with real speech. In contrast, carefully engineered deep learning networks allow robust, real‐world automatic speech recognition (ASR). However, the complexities of deep learning architectures and training regimens make it difficult to use them (...)
    4 citations
  44. Face Recognition Depends on Specialized Mechanisms Tuned to View‐Invariant Facial Features: Insights from Deep Neural Networks Optimized for Face or Object Recognition. Naphtali Abudarham, Idan Grosbard & Galit Yovel - 2021 - Cognitive Science 45 (9):e13031.
    Face recognition is a computationally challenging classification task. Deep convolutional neural networks (DCNNs) are brain‐inspired algorithms that have recently reached human‐level performance in face and object recognition. However, it is not clear to what extent DCNNs generate a human‐like representation of face identity. We have recently revealed a subset of facial features that are used by humans for face recognition. This enables us now to ask whether DCNNs rely on the same facial information and whether this human‐like representation (...)
    1 citation
  45. Attentive deep neural networks for legal document retrieval. Ha-Thanh Nguyen, Manh-Kien Phi, Xuan-Bach Ngo, Vu Tran, Le-Minh Nguyen & Minh-Phuong Tu - 2022 - Artificial Intelligence and Law 32 (1):57-86.
    Legal text retrieval serves as a key component in a wide range of legal text processing tasks such as legal question answering, legal case entailment, and statute law retrieval. The performance of legal text retrieval depends, to a large extent, on the representation of text, both query and legal documents. Based on good representations, a legal text retrieval model can effectively match the query to its relevant documents. Because legal documents often contain long articles and only some parts are relevant (...)
  46. Empiricism without Magic: Transformational Abstraction in Deep Convolutional Neural Networks. Cameron Buckner - 2018 - Synthese (12):1-34.
    In artificial intelligence, recent research has demonstrated the remarkable potential of Deep Convolutional Neural Networks (DCNNs), which seem to exceed state-of-the-art performance in new domains weekly, especially on the sorts of very difficult perceptual discrimination tasks that skeptics thought would remain beyond the reach of artificial intelligence. However, it has proven difficult to explain why DCNNs perform so well. In philosophy of mind, empiricists have long suggested that complex cognition is based on information derived from sensory experience, often (...)
    42 citations
  47. What Can Deep Neural Networks Teach Us About Embodied Bounded Rationality. Edward A. Lee - 2022 - Frontiers in Psychology 13.
    “Rationality” in Simon's “bounded rationality” is the principle that humans make decisions on the basis of step-by-step reasoning using systematic rules of logic to maximize utility. “Bounded rationality” is the observation that the ability of a human brain to handle algorithmic complexity and large quantities of data is limited. Bounded rationality, in other words, treats a decision maker as a machine carrying out computations with limited resources. Under the principle of embodied cognition, a cognitive mind is an interactive machine. Turing-Church (...)
  48. Michael A. Arbib, The metaphorical brain 2: Neural networks and beyond. John A. Barnden - 1998 - Artificial Intelligence 101 (1-2):301-309.
    The book is thought-provoking and informative, wide in scope while also being technically detailed, and still relevant to modern AI [at least as of 1998, the time of writing this review, but probably also at the time of posting this entry here, 2023] even though it was published in 1989. This relevance lies mainly in the book’s advocacy of distributed computation at multiple levels of description, its combining of neural networks and other techniques, its emphasis on the interplay (...)
  49. Evaluating (and Improving) the Correspondence Between Deep Neural Networks and Human Representations. Joshua C. Peterson, Joshua T. Abbott & Thomas L. Griffiths - 2018 - Cognitive Science 42 (8):2648-2669.
    Decades of psychological research have been aimed at modeling how people learn features and categories. The empirical validation of these theories is often based on artificial stimuli with simple representations. Recently, deep neural networks have reached or surpassed human accuracy on tasks such as identifying objects in natural images. These networks learn representations of real‐world stimuli that can potentially be leveraged to capture psychological representations. We find that state‐of‐the‐art object classification networks provide surprisingly accurate predictions of (...)
    9 citations
  50. On stability and solvability (or, when does a neural network solve a problem?). Stan Franklin & Max Garzon - 1992 - Minds and Machines 2 (1):71-83.
    The importance of the Stability Problem in neurocomputing is discussed, as well as the need for the study of infinite networks. Stability must be the key ingredient in the solution of a problem by a neural network without external intervention. Infinite discrete networks seem to be the proper objects of study for a theory of neural computability which aims at characterizing problems solvable, in principle, by a neural network. Precise definitions of such problems and their (...)
1 — 50 / 489