Synthese (12):1-34 (2018)
Abstract
In artificial intelligence, recent research has demonstrated the remarkable potential of Deep Convolutional Neural Networks (DCNNs), which seem to exceed state-of-the-art performance in new domains weekly, especially on the sorts of very difficult perceptual discrimination tasks that skeptics thought would remain beyond the reach of artificial intelligence. However, it has proven difficult to explain why DCNNs perform so well. In philosophy of mind, empiricists have long suggested that complex cognition is based on information derived from sensory experience, often appealing to a faculty of abstraction. Rationalists have frequently complained, however, that empiricists never adequately explained how this faculty of abstraction actually works. In this paper, I tie these two questions together, to the mutual benefit of both disciplines. I argue that the architectural features that distinguish DCNNs from earlier neural networks allow them to implement a form of hierarchical processing that I call "transformational abstraction". Transformational abstraction iteratively converts sensory-based representations of category exemplars into new formats that are increasingly tolerant to "nuisance variation" in input. Reflecting on the way that DCNNs leverage a combination of linear and non-linear processing to efficiently accomplish this feat allows us to understand how the brain is capable of bi-directional travel between exemplars and abstractions, addressing longstanding problems in empiricist philosophy of mind. I end by considering the prospects for future research on DCNNs, arguing that, rather than simply re-implementing 1980s connectionism with more brute-force computation, transformational abstraction counts as a qualitatively distinct form of processing rich in philosophical and psychological significance, because it is far better suited to characterize the generic mechanism responsible for this important kind of psychological processing in the brain.
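The abstract's core mechanistic claim, that a linear filtering stage (convolution) composed with a non-linearity and pooling yields representations increasingly tolerant to nuisance variation such as translation, can be illustrated with a minimal NumPy sketch. This is an illustration of the general technique only, not code from the paper; the function names and the 1-D setting are my own simplifications.

```python
import numpy as np

def conv1d_valid(signal, kernel):
    """Linear stage: slide the kernel across the signal (valid positions only)."""
    n = len(signal) - len(kernel) + 1
    return np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])

def detect(signal, kernel):
    """One conv 'layer': linear filtering, ReLU non-linearity, then max pooling."""
    response = conv1d_valid(signal, kernel)
    rectified = np.maximum(response, 0.0)  # non-linear stage discards anti-matches
    return rectified.max()                 # pooling keeps the strongest match anywhere

# An "edge" template and two inputs containing the same feature at different positions
kernel = np.array([-1.0, 1.0])      # responds to an upward step
x1 = np.zeros(12); x1[3:] = 1.0     # step at position 3
x2 = np.zeros(12); x2[7:] = 1.0     # same step, shifted to position 7

print(detect(x1, kernel), detect(x2, kernel))  # → 1.0 1.0: same value despite the shift
```

The pooled output is identical for both inputs: the layer has abstracted away *where* the step occurs while preserving *that* it occurs, which is the translation-tolerance at the heart of what the paper calls transformational abstraction. Stacking such layers makes representations tolerant to progressively more complex nuisance variation.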
Keywords: abstraction, connectionism, convolution, deep learning, empiricism, mechanism, nuisance variation
Reprint years: 2018
DOI: 10.1007/s11229-018-01949-1
References found in this work
Thinking About Mechanisms. Peter Machamer, Lindley Darden & Carl F. Craver - 2000 - Philosophy of Science 67 (1):1-25.
A Neurocomputational Perspective: The Nature of Mind and the Structure of Science. Paul M. Churchland - 1989 - MIT Press.
A Treatise of Human Nature. David Hume & A. D. Lindsay - 1958 - Philosophical Quarterly 8 (33):379-380.
Citations of this work
Understanding From Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
Solving the Black Box Problem: A Normative Framework for Explainable Artificial Intelligence. Carlos Zednik - 2019 - Philosophy and Technology 34 (2):265-288.
The Rhetoric and Reality of Anthropomorphism in Artificial Intelligence. David S. Watson - 2019 - Minds and Machines 29 (3):417-440.
Similar books and articles
Abstraction and the Origin of General Ideas. Stephen Laurence & Eric Margolis - 2012 - Philosophers' Imprint 12:1-22.
Reconstructing Aquinas's Process of Abstraction. Liran Shia Gordon - 2018 - Heythrop Journal 59 (4):639-652.
Some Neural Networks Compute, Others Don't. Gualtiero Piccinini - 2008 - Neural Networks 21 (2-3):311-321.
Birth of an Abstraction: A Dynamical Systems Account of the Discovery of an Elsewhere Principle in a Category Learning Task. Whitney Tabor, Pyeong W. Cho & Harry Dankowicz - 2013 - Cognitive Science 37 (7):1193-1227.
Abstraction Relations Need Not Be Reflexive. Jonathan Payne - 2013 - Thought: A Journal of Philosophy 2 (2):137-147.
Numbers and Numerosities: Absence of Abstract Neural Realization Doesn't Mean Non-Abstraction. Rafael E. Núñez - 2009 - Behavioral and Brain Sciences 32 (3-4):344-344.
Out of Their Minds: Legal Theory in Neural Networks. [REVIEW] Dan Hunter - 1999 - Artificial Intelligence and Law 7 (2-3):129-151.
The Nuisance Principle in Infinite Settings. Sean C. Ebels-Duggan - 2015 - Thought: A Journal of Philosophy 4 (4):263-268.
Cortical Connections and Parallel Processing: Structure and Function. Dana H. Ballard - 1986 - Behavioral and Brain Sciences 9 (1):67-90.
Logic and Abstraction as Capabilities of the Mind: Reconceptualizations of Computational Approaches to the Mind. D. J. Saab & U. V. Riss (eds.) - 2010 - IGI.
Consciousness, Connectionism, and Cognitive Neuroscience: A Meeting of the Minds. Dan Lloyd - 1996 - Philosophical Psychology 9 (1):61-78.
The Troubled History of Abstraction. Ignacio Angelelli - 2005 - History of Philosophy & Logical Analysis 8.
Analytics
Added to PP index: 2018-09-17
Total views: 687 (#11,228 of 2,507,807)
Recent downloads (6 months): 52 (#16,348 of 2,507,807)