Abstract
In recent years, the family of algorithms collected under the term ``deep
learning'' has revolutionized artificial intelligence, enabling machines to
reach human-like performance in many complex cognitive tasks.
Although deep learning models are grounded in the connectionist paradigm, their
recent advances have been driven chiefly by engineering goals.
Despite their applied focus, deep learning models have nevertheless proven
fruitful for cognitive purposes. This can be regarded as a kind of biological
exaptation, in which a physiological structure becomes useful for a function
different from the one for which it was selected.
In this paper, it will be argued that the time has come for cognitive science
to seriously come to terms with deep learning, and the reasons why this is the
case will be spelled out.
First, the evolution of deep learning out of the connectionist project is
traced, highlighting both the remarkable continuity and the differences.
Then, it will be considered how deep learning models can inform many cognitive
topics, especially those, from perception to language, where they have achieved
performance comparable to that of humans.
It will be maintained that deep learning poses questions that cognitive science
should try to answer. One such question is why deep convolutional models, which
are disembodied, inactive, unaware of context, and static, are nevertheless by
far the closest match to the patterns of activation in the brain's visual
system.