Results for 'Neural networks (Computer science)'

240 found
  1. Some Neural Networks Compute, Others Don't. Gualtiero Piccinini - 2008 - Neural Networks 21 (2-3):311-321.
    I address whether neural networks perform computations in the sense of computability theory and computer science. I explicate and defend
    the following theses. (1) Many neural networks compute—they perform computations. (2) Some neural networks compute in a classical way.
    Ordinary digital computers, which are very large networks of logic gates, belong in this class of neural networks. (3) Other neural networks
    compute in a non-classical way. (4) Yet other neural (...)
     
    17 citations
  2. Deep neural networks are not a single hypothesis but a language for expressing computational hypotheses. Tal Golan, JohnMark Taylor, Heiko Schütt, Benjamin Peters, Rowan P. Sommers, Katja Seeliger, Adrien Doerig, Paul Linton, Talia Konkle, Marcel van Gerven, Konrad Kording, Blake Richards, Tim C. Kietzmann, Grace W. Lindsay & Nikolaus Kriegeskorte - 2023 - Behavioral and Brain Sciences 46:e392.
    An ideal vision model accounts for behavior and neurophysiology in both naturalistic conditions and designed lab experiments. Unlike psychological theories, artificial neural networks (ANNs) actually perform visual tasks and generate testable predictions for arbitrary inputs. These advantages enable ANNs to engage the entire spectrum of the evidence. Failures of particular models drive progress in a vibrant ANN research program of human vision.
  3. Neural networks and computational theory: Solving the right problem. David C. Plaut - 1989 - Behavioral and Brain Sciences 12 (3):411-413.
  4. Locality, modularity, and computational neural networks. Horst Bischof - 1997 - Behavioral and Brain Sciences 20 (3):516-517.
    There is a distinction between locality and modularity. These two terms have often been used interchangeably in the target article and commentary. Using this distinction, we argue in favor of modularity. In addition, we also argue that both PDP-type networks and box-and-arrow models have their own strengths and pitfalls.
  5. Deep problems with neural network models of human vision. Jeffrey S. Bowers, Gaurav Malhotra, Marin Dujmović, Milton Llera Montero, Christian Tsvetkov, Valerio Biscione, Guillermo Puebla, Federico Adolfi, John E. Hummel, Rachel F. Heaton, Benjamin D. Evans, Jeffrey Mitchell & Ryan Blything - 2023 - Behavioral and Brain Sciences 46:e385.
    Deep neural networks (DNNs) have had extraordinary successes in classifying photographic images of objects and are often described as the best models of biological vision. This conclusion is largely based on three sets of findings: (1) DNNs are more accurate than any other model in classifying images taken from various datasets, (2) DNNs do the best job in predicting the pattern of human errors in classifying objects taken from various behavioral datasets, and (3) DNNs do the best job (...)
    4 citations
  6. Neural networks discover a near-identity relation to distinguish simple syntactic forms. Thomas R. Shultz & Alan C. Bale - 2006 - Minds and Machines 16 (2):107-139.
    Computer simulations show that an unstructured neural-network model [Shultz, T. R., & Bale, A. C. (2001). Infancy, 2, 501–536] covers the essential features of infant learning of simple grammars in an artificial language [Marcus, G. F., Vijayan, S., Bandi Rao, S., & Vishton, P. M. (1999). Science, 283, 77–80], and generalizes to examples both outside and inside of the range of training sentences. Knowledge-representation analyses confirm that these networks discover that duplicate words in the sentences are nearly (...)
    1 citation
  7. Combining distributed and localist computations in real-time neural networks. Gail A. Carpenter - 2000 - Behavioral and Brain Sciences 23 (4):473-474.
    In order to benefit from the advantages of localist coding, neural models that feature winner-take-all representations at the top level of a network hierarchy must still solve the computational problems inherent in distributed representations at the lower levels.
  8. Intelligent Computing in Bioinformatics-Genetic Algorithm and Neural Network Based Classification in Microarray Data Analysis with Biological Validity Assessment. Vitoantonio Bevilacqua, Giuseppe Mastronardi & Filippo Menolascina - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 4115--475.
     
  9. Neural Network Models as Evidence for Different Types of Visual Representations. Stephen M. Kosslyn, Christopher F. Chabris & David P. Baker - 1995 - Cognitive Science 19 (4):575-579.
    Cook (1995) criticizes the work of Jacobs and Kosslyn (1994) on spatial relations, shape representations, and receptive fields in neural network models on the grounds that first‐order correlations between input and output unit activities can explain the results. We reply briefly to Cook's arguments here (and in Kosslyn, Chabris, Marsolek, Jacobs & Koenig, 1995) and discuss how new simulations can confirm the importance of receptive field size as a crucial variable in the encoding of categorical and coordinate spatial relations (...)
    1 citation
  10. How Do Artificial Neural Networks Classify Musical Triads? A Case Study in Eluding Bonini's Paradox. Arturo Perez, Helen L. Ma, Stephanie Zawaduk & Michael R. W. Dawson - 2023 - Cognitive Science 47 (1):e13233.
    How might artificial neural networks (ANNs) inform cognitive science? Often cognitive scientists use ANNs but do not examine their internal structures. In this paper, we use ANNs to explore how cognition might represent musical properties. We train ANNs to classify musical chords, and we interpret network structure to determine what representations ANNs discover and use. We find connection weights between input units and hidden units can be described using Fourier phase spaces, a representation studied in musical set (...)
  11. Connectionist representations of tonal music: discovering musical patterns by interpreting artificial neural networks. Michael Robert William Dawson - 2018 - Edmonton, Alberta: AU Press.
    Intended to introduce readers to the use of artificial neural networks in the study of music, this volume contains numerous case studies and research findings that address problems related to identifying scales and keys, classifying musical chords, and learning jazz chord progressions. A detailed analysis of networks is provided for each case study; together, these demonstrate that focusing on the internal structure of trained networks could yield important contributions to the field of music cognition.
  12. Theorem proving in artificial neural networks: new frontiers in mathematical AI. Markus Pantsar - 2024 - European Journal for Philosophy of Science 14 (1):1-22.
    Computer assisted theorem proving is an increasingly important part of mathematical methodology, as well as a long-standing topic in artificial intelligence (AI) research. However, the current generation of theorem proving software has limited capacity to provide new proofs. Importantly, such software is not able to discriminate interesting theorems and proofs from trivial ones. In order for computers to develop further in theorem proving, there would need to be a radical change in how the software functions. Recently, machine learning (...)
  13. Neural Networks and Statistical Learning Methods (III)-The Application of Modified Hierarchy Genetic Algorithm Based on Adaptive Niches. Wei-Min Qi, Qiao-Ling Ji & Wei-You Cai - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 3930--842.
  14. Neural Networks-Fast Kernel Classifier Construction Using Orthogonal Forward Selection to Minimise Leave-One-Out Misclassification Rate. X. Hong, S. Chen & C. J. Harris - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 4113--106.
  15. How do connectionist networks compute? Gerard O'Brien & Jonathan Opie - 2006 - Cognitive Processing 7 (1):30-41.
    Although connectionism is advocated by its proponents as an alternative to the classical computational theory of mind, doubts persist about its _computational_ credentials. Our aim is to dispel these doubts by explaining how connectionist networks compute. We first develop a generic account of computation—no easy task, because computation, like almost every other foundational concept in cognitive science, has resisted canonical definition. We opt for a characterisation that does justice to the explanatory role of computation in cognitive science. (...)
    21 citations
  16. Handbook of Brain Theory and Neural Networks. Michael A. Arbib (ed.) - 1995 - MIT Press.
    Choice Outstanding Academic Title, 1996. In hundreds of articles by experts from around the world, and in overviews and "road maps" prepared by the editor, The Handbook of Brain Theory and Neural Networks charts the immense progress made in recent years in many specific areas related to two great questions: How does the brain work? and How can we build intelligent machines? While many books have appeared on limited aspects of one subfield or another of brain theory and neural (...)
    15 citations
  17. Methodology of Computer Science. Timothy Colburn - 2004 - In Luciano Floridi (ed.), The Blackwell Guide to the Philosophy of Computing and Information. Oxford, UK: Blackwell. pp. 318–326.
    The prelims comprise: Introduction; Computer Science and Mathematics; The Formal Verification Debate; Abstraction in Computer Science; Conclusion.
    6 citations
  18. Neural Network Applications-Face Recognition Using Probabilistic Two-Dimensional Principal Component Analysis and Its Mixture Model. Haixian Wang & Zilan Hu - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 4221--337.
     
  19. Computational Finance and Business Intelligence-Comparisons of the Different Frequencies of Input Data for Neural Networks in Foreign Exchange Rates Forecasting. Wei Huang, Lean Yu, Shouyang Wang, Yukun Bao & Lin Wang - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 517-524.
  20. EARSHOT: A Minimal Neural Network Model of Incremental Human Speech Recognition. James S. Magnuson, Heejo You, Sahil Luthra, Monica Li, Hosung Nam, Monty Escabí, Kevin Brown, Paul D. Allopenna, Rachel M. Theodore, Nicholas Monto & Jay G. Rueckl - 2020 - Cognitive Science 44 (4):e12823.
    Despite the lack of invariance problem (the many‐to‐many mapping between acoustics and percepts), human listeners experience phonetic constancy and typically perceive what a speaker intends. Most models of human speech recognition (HSR) have side‐stepped this problem, working with abstract, idealized inputs and deferring the challenge of working with real speech. In contrast, carefully engineered deep learning networks allow robust, real‐world automatic speech recognition (ASR). However, the complexities of deep learning architectures and training regimens make it difficult to use them (...)
    4 citations
  21. Face Recognition Depends on Specialized Mechanisms Tuned to View‐Invariant Facial Features: Insights from Deep Neural Networks Optimized for Face or Object Recognition. Naphtali Abudarham, Idan Grosbard & Galit Yovel - 2021 - Cognitive Science 45 (9):e13031.
    Face recognition is a computationally challenging classification task. Deep convolutional neural networks (DCNNs) are brain‐inspired algorithms that have recently reached human‐level performance in face and object recognition. However, it is not clear to what extent DCNNs generate a human‐like representation of face identity. We have recently revealed a subset of facial features that are used by humans for face recognition. This enables us now to ask whether DCNNs rely on the same facial information and whether this human‐like representation (...)
    1 citation
  22. Evaluating (and Improving) the Correspondence Between Deep Neural Networks and Human Representations. Joshua C. Peterson, Joshua T. Abbott & Thomas L. Griffiths - 2018 - Cognitive Science 42 (8):2648-2669.
    Decades of psychological research have been aimed at modeling how people learn features and categories. The empirical validation of these theories is often based on artificial stimuli with simple representations. Recently, deep neural networks have reached or surpassed human accuracy on tasks such as identifying objects in natural images. These networks learn representations of real‐world stimuli that can potentially be leveraged to capture psychological representations. We find that state‐of‐the‐art object classification networks provide surprisingly accurate predictions of (...)
    9 citations
  23. Attention-based convolutional neural network for Bangla sentiment analysis. Sadia Sharmin & Danial Chakma - 2021 - AI and Society 36 (1):381-396.
    With the accelerated evolution of the internet in the form of websites, social networks, microblogs, and online portals, a large number of reviews, opinions, recommendations, ratings, and feedback are generated by writers or users. This user-generated sentiment content can be about books, people, hotels, products, research, events, etc. These sentiments become very beneficial for businesses, governments, and individuals. While this content is meant to be useful, the bulk of this writer-generated content requires the use of text mining techniques and sentiment analysis. (...)
  24. A Falsificationist Account of Artificial Neural Networks. Oliver Buchholz & Eric Raidl - forthcoming - The British Journal for the Philosophy of Science.
    Machine learning operates at the intersection of statistics and computer science. This raises the question as to its underlying methodology. While much emphasis has been put on the close link between the process of learning from data and induction, the falsificationist component of machine learning has received minor attention. In this paper, we argue that the idea of falsification is central to the methodology of machine learning. It is commonly thought that machine learning algorithms infer general prediction rules (...)
  25. The AHA! Experience: Creativity Through Emergent Binding in Neural Networks. Paul Thagard & Terrence C. Stewart - 2011 - Cognitive Science 35 (1):1-33.
    Many kinds of creativity result from combination of mental representations. This paper provides a computational account of how creative thinking can arise from combining neural patterns into ones that are potentially novel and useful. We defend the hypothesis that such combinations arise from mechanisms that bind together neural activity by a process of convolution, a mathematical operation that interweaves structures. We describe computer simulations that show the feasibility of using convolution to produce emergent patterns of neural (...)
    38 citations
  26. Multiscale Modeling of Gene–Behavior Associations in an Artificial Neural Network Model of Cognitive Development. Michael S. C. Thomas, Neil A. Forrester & Angelica Ronald - 2016 - Cognitive Science 40 (1):51-99.
    In the multidisciplinary field of developmental cognitive neuroscience, statistical associations between levels of description play an increasingly important role. One example of such associations is the observation of correlations between relatively common gene variants and individual differences in behavior. It is perhaps surprising that such associations can be detected despite the remoteness of these levels of description, and the fact that behavior is the outcome of an extended developmental process involving interaction of the whole organism with a variable environment. Given (...)
    1 citation
  27. To transform the phenomena: Feyerabend, proliferation, and recurrent neural networks. Paul M. Churchland - 1997 - Philosophy of Science 64 (4):420.
    Paul Feyerabend recommended the methodological policy of proliferating competing theories as a means to uncovering new empirical data, and thus as a means to increase the empirical constraints that all theories must confront. Feyerabend's policy is here defended as a clear consequence of connectionist models of explanatory understanding and learning. An earlier connectionist "vindication" is criticized, and a more realistic and penetrating account is offered in terms of the computationally plastic cognitive profile displayed by neural networks with a (...)
    6 citations
  28. Generic Intelligent Systems-Artificial Neural Networks and Connectionist Systems-An Improved OIF Elman Neural Network and Its Applications to Stock Market. Limin Wang, Yanchun Liang, Xiaohu Shi, Ming Li & Xuming Han - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 21-28.
  29. Population lateralization arises in simulated evolution of non-interacting neural networks. James A. Reggia & Alexander Grushin - 2005 - Behavioral and Brain Sciences 28 (4):609-611.
    Recent computer simulations of evolving neural networks have shown that population-level behavioral asymmetries can arise without social interactions. Although these models are quite limited at present, they support the hypothesis that social pressures can be sufficient but are not necessary for population lateralization to occur, and they provide a framework for further theoretical investigation of this issue.
  30. Logic Diagrams, Sacred Geometry and Neural Networks. Jens Lemanski - 2019 - Logica Universalis 13 (4):495-513.
    In early modernity, one can find many spatial logic diagrams whose geometric forms share a family resemblance with religious art and symbols. The family resemblance these diagrams bear in form is often based on a vesica piscis or on a cross: Both logic diagrams and spiritual symbols focus on the intersection or conjunction of two or more entities, e.g. subject and predicate, on the one hand, or god and man, on the other. This paper deals with the development and function (...)
    3 citations
  31. Analogy making in legal reasoning with neural networks and fuzzy logic. Jürgen Hollatz - 1999 - Artificial Intelligence and Law 7 (2-3):289-301.
    Analogy making from examples is a central task in intelligent system behavior. A lot of real world problems involve analogy making and generalization. Research investigates these questions by building computer models of human thinking concepts. These concepts can be divided into high level approaches as used in cognitive science and low level models as used in neural networks. Applications range over the spectrum of recognition, categorization and analogy reasoning. A major part of legal reasoning could be (...)
    2 citations
  32. Responsibility and Decision Making in the Era of Neural Networks. William Bechtel - 1996 - Social Philosophy and Policy 13 (2):267.
    Many of the mathematicians and scientists who guided the development of digital computers in the late 1940s, such as Alan Turing and John von Neumann, saw these new devices not just as tools for calculation but as devices that might employ the same principles as are exhibited in rational human thought. Thus, a subfield of what came to be called computer science assumed the label artificial intelligence. The idea of building artificial systems which could exhibit intelligent behavior comparable (...)
    1 citation
  33. Pattern Classification-A Morphological Neural Network Approach for Vehicle Detection from High Resolution Satellite Imagery. Hong Zheng, Li Pan & Li Li - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 4233--99.
     
  34. Can Darwinian Mechanisms Make Novel Discoveries?: Learning from discoveries made by evolving neural networks. Robert T. Pennock - 2000 - Foundations of Science 5 (2):225-238.
    Some philosophers suggest that the development of scientific knowledge is a kind of Darwinian process. The process of discovery, however, is one problematic element of this analogy. I compare Herbert Simon's attempt to simulate scientific discovery in a computer program to recent connectionist models that were not designed for that purpose, but which provide useful cases to help evaluate this aspect of the analogy. In contrast to the classic A.I. approach Simon used, ``neural networks'' contain no explicit protocols, but are generic learning systems built on the model (...)
    5 citations
  35. Interperforming in AI: question of ‘natural’ in machine learning and recurrent neural networks. Tolga Yalur - 2020 - AI and Society 35 (3):737-745.
    This article offers a critical inquiry of contemporary neural network models as an instance of machine learning, from an interdisciplinary perspective of AI studies and performativity. It shows the limits on the architecture of these network systems due to the misemployment of ‘natural’ performance, and it offers ‘context’ as a variable from a performative approach, instead of a constant. The article begins with a brief review of machine learning-based natural language processing systems and continues with a concentration on the (...)
  36. Jeffrey L. Elman, Elizabeth A. Bates, Mark H. Johnson, Annette Karmiloff-Smith, Domenico Parisi, and Kim Plunkett (eds.), Rethinking Innateness: A Connectionist Perspective on Development (Neural Network Modeling and Connectionism series), and Kim Plunkett and Jeffrey L. Elman, Exercises in Rethinking Innateness: A Handbook for Connectionist Simulations. [REVIEW] Kenneth Aizawa - 1999 - Minds and Machines 9 (3):447-456.
  37. Creativity or mental illness: Possible errors of relational priming in neural networks of the brain. James E. Swain & John D. Swain - 2008 - Behavioral and Brain Sciences 31 (4):398-399.
    If connectionist computational models explain the acquisition of complex cognitive skills, errors in such models would also help explain unusual brain activity such as in creativity – as well as in mental illness, including childhood onset problems with social behaviors in autism, the inability to maintain focus in attention deficit and hyperactivity disorder (ADHD), and the lack of motivation of depression disorders.
  38. Neurobiological Modeling and Analysis-An Electromechanical Neural Network Robotic Model of the Human Body and Brain: Sensory-Motor Control by Reverse Engineering Biological Somatic Sensors. Alan Rosen & David B. Rosen - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 4232--105.
  39. Special Session on Bioinformatics-Protein Stability Engineering in Staphylococcal Nuclease Using an AI-Neural Network Hybrid System and a Genetic Algorithm. Christopher M. Frenz - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 4031--935.
  40. Neural and super-Turing computing. Hava T. Siegelmann - 2003 - Minds and Machines 13 (1):103-114.
    ``Neural computing'' is a research field based on perceiving the human brain as an information system. This system reads its input continuously via the different senses, encodes data into various biophysical variables such as membrane potentials or neural firing rates, stores information using different kinds of memories (e.g., short-term memory, long-term memory, associative memory), performs some operations called ``computation'', and outputs onto various channels, including motor control commands, decisions, thoughts, and feelings. We show a natural model of (...) computing that gives rise to hyper-computation. Rigorous mathematical analysis is applied, explicating our model's exact computational power and how it changes with the change of parameters. Our analog neural network allows for supra-Turing power while keeping track of computational constraints, and thus embeds a possible answer to the superiority of the biological intelligence within the framework of classical computer science. We further propose it as a standard in the field of analog computation, functioning in a role similar to that of the universal Turing machine in digital computation. In particular, an analog of the Church-Turing thesis of digital computation is stated, where the neural network takes the place of the Turing machine.
    8 citations
  41. Hardware Implementation-Effect of Steady and Relaxation Oscillations in Brillouin-Active Fiber Structural Sensor Based Neural Network in Smart Structures. Yong-Kab Kim, Soonja Lim & ChangKug Kim - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 3973--1374.
  42. Data Preprocessing-A Novel Input Stochastic Sensitivity Definition of Radial Basis Function Neural Networks and Its Application to Feature Selection. Xi-Zhao Wang & Hui Zhang - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 3971--1352.
     
  43. Signal Processing-Fractional Order Digital Differentiators Design Using Exponential Basis Function Neural Network. Ke Liao, Xiao Yuan, Yi-Fei Pu & Ji-Liu Zhou - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 735-740.
     
  44. Theoretical Analysis-Existence and Global Attractability of Almost Periodic Solution for Competitive Neural Networks with Time-Varying Delays and Different Time Scales. Wentong Liao & Linshan Wang - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 3971--297.
  45. Neurodynamic and Particle Swarm Optimization-A Recurrent Neural Network for Non-smooth Convex Programming Subject to Linear Equality and Bound Constraints. Qingshan Liu & Jun Wang - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 4233--1004.
  46. Neural Optimization and Dynamic Programming-Algorithm Analysis and Application Based on Chaotic Neural Network for Cellular Channel Assignment. Xiaojin Zhu, Yanchun Chen, Hesheng Zhang & Jialin Cao - 2006 - In O. Stock & M. Schaerf (eds.), Lecture Notes in Computer Science. Springer Verlag. pp. 991-996.
  47. Physical Computation: A Mechanistic Account. Gualtiero Piccinini - 2015 - Oxford, GB: Oxford University Press UK.
    Gualtiero Piccinini articulates and defends a mechanistic account of concrete, or physical, computation. A physical system is a computing system just in case it is a mechanism one of whose functions is to manipulate vehicles based solely on differences between different portions of the vehicles according to a rule defined over the vehicles. Physical Computation discusses previous accounts of computation and argues that the mechanistic account is better. Many kinds of computation are explicated, such as digital vs. analog, serial vs. (...)
  48. Computation in cognitive science: it is not all about Turing-equivalent computation. Kenneth Aizawa - 2010 - Studies in History and Philosophy of Science Part A 41 (3):227-236.
    It is sometimes suggested that the history of computation in cognitive science is one in which the formal apparatus of Turing-equivalent computation, or effective computability, was exported from mathematical logic to ever wider areas of cognitive science and its environs. This paper, however, indicates some respects in which this suggestion is inaccurate. Computability theory has not been focused exclusively on Turing-equivalent computation. Many essential features of Turing-equivalent computation are not captured in definitions of computation as symbol manipulation. Turing-equivalent (...)
    3 citations
  49. Dynamic mechanistic explanation: computational modeling of circadian rhythms as an exemplar for cognitive science. William Bechtel & Adele Abrahamsen - 2010 - Studies in History and Philosophy of Science Part A 41 (3):321-333.
    Two widely accepted assumptions within cognitive science are that (1) the goal is to understand the mechanisms responsible for cognitive performances and (2) computational modeling is a major tool for understanding these mechanisms. The particular approaches to computational modeling adopted in cognitive science, moreover, have significantly affected the way in which cognitive mechanisms are understood. Unable to employ some of the more common methods for conducting research on mechanisms, cognitive scientists’ guiding ideas about mechanism have developed in conjunction (...)
    118 citations
  50. Neural computation, architecture, and evolution. Paul Skokowski - 1997 - Behavioral and Brain Sciences 20 (1):80-80.
    Biological neural computation relies a great deal on architecture, which constrains the types of content that can be processed by distinct modules in the brain. Though artificial neural networks are useful tools and give insight, they cannot be relied upon yet to give definitive answers to problems in cognition. Knowledge re-use may be driven more by architectural inheritance than by epistemological drives.
1 — 50 / 240