Explanation and connectionist models

In Mark Sprevak & Matteo Colombo (eds.), The Routledge Handbook of the Computational Mind. Routledge. pp. 120-133 (2018)

Abstract

This chapter explores the epistemic roles played by connectionist models of cognition and offers a formal analysis of how connectionist models explain. It first considers how other types of computational models explain. Classical artificial intelligence (AI) programs explain using abductive reasoning, or inference to the best explanation: they begin with the phenomena to be explained and devise rules that can produce the right outcome. The chapter also looks at several examples of connectionist models of cognition, observing what sorts of constraints are used in their design and how their results are evaluated. It argues that the point of implementing networks roughly analogous to neural structures is to discover and explore the generic mechanisms at work in the brain, not to deduce the precise activities of specific structures. The chapter then develops a formal analysis of the explanations offered, which interprets connectionist models and the cognitive theories they represent as sharing membership in a type of mechanism.



Author's Profile

Catherine Stinson
Queen's University
