Throwing light on black boxes: emergence of visual categories from deep learning

Synthese 198 (10):10021-10041 (2020)

Abstract

One of the best-known arguments against the connectionist approach to artificial intelligence and cognitive science is that neural networks are black boxes, i.e., there is no understandable account of their operation. This difficulty has impeded efforts to explain how categories arise from raw sensory data. Moreover, it has complicated investigation of the role of symbols and language in cognition. This state of affairs has been radically changed by recent experimental findings in deep learning research. Two kinds of artificial deep learning networks, namely the convolutional neural network and the generative adversarial network, have been found to possess the capability to build internal states that humans interpret as complex visual categories, without any specific hints or any grammatical processing. This emergent ability suggests that such categories do not depend on human knowledge or on the syntactic structure of language, although they do rely on visual context. This supports a mild form of empiricism without assuming that computational functionalism is true. Some consequences are drawn for the debate about amodal and grounded representations in the human brain. Furthermore, new avenues for research in cognitive science are opened.
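To make the claim about "internal states" concrete, the following minimal sketch shows one common way such states are inspected: recording the activations of a late convolutional layer in a pretrained image classifier and ranking its channels by response strength. This is an illustration under assumed tools (Python with PyTorch and torchvision), not the experimental procedure of the article; a random tensor stands in for real images, and deciding whether a strongly responding channel corresponds to a human-interpretable visual category would still require human inspection of many natural images.

import torch
import torchvision.models as models

# Load a pretrained CNN; the specific architecture (VGG-16) is an
# illustrative assumption, any standard image classifier would do.
cnn = models.vgg16(weights=models.VGG16_Weights.DEFAULT).eval()

# Record the output of one late convolutional layer with a forward hook.
activations = {}
def save_activation(module, inputs, output):
    activations["late_conv"] = output.detach()

layer = cnn.features[28]  # last convolutional layer of VGG-16
handle = layer.register_forward_hook(save_activation)

# A random tensor stands in for an input image (batch of 1, RGB, 224x224);
# a real study would feed large collections of natural images instead.
with torch.no_grad():
    cnn(torch.randn(1, 3, 224, 224))
handle.remove()

# Average each channel's activation map. Channels that respond strongly
# and selectively across many images are the candidate internal states
# that humans may later interpret as visual categories.
per_channel = activations["late_conv"].mean(dim=(0, 2, 3))
top = torch.topk(per_channel, k=5)
print("most active channels:", top.indices.tolist())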
