Beyond Human: Deep Learning, Explainability and Representation

Theory, Culture and Society (forthcoming)

Abstract

This article addresses computational procedures that are no longer constrained by human modes of representation and considers how these procedures could be philosophically understood in terms of ‘algorithmic thought’. Research in deep learning serves as its case study. This artificial intelligence technique operates in computational ways that are often opaque. Such a black-box character demands rethinking the abstractive operations of deep learning. The article does so by entering debates about explainability in AI and assessing how technoscience and technoculture tackle the possibility of ‘re-presenting’ the algorithmic procedures of feature extraction and feature learning to the human mind. The article thus mobilises the notion of incommensurability to address explainability as a communicational and representational issue, which challenges phenomenological and existential modes of comparison between human and algorithmic ‘thinking’ operations.

Links

PhilArchive



