Human Induction in Machine Learning: A Survey of the Nexus

ACM Computing Surveys (forthcoming)

Abstract

As our epistemic ambitions grow, both everyday and scientific endeavours are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm, which consists of splitting the available data into training and testing sets and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target environments. Yet this latter part of the contract depends on human inductive predictions or generalisations, which infer a uniformity between the trained ML model and the targets. The paper asks how we justify the contract between human and machine learning. It is argued that the justification becomes a pressing issue when we use ML to reach ‘elsewheres’ in space and time or deploy ML models in non-benign environments. The paper argues that the only viable version of the contract can be based on optimality (instead of on reliability, which cannot be justified without circularity) and aligns this position with Schurz’s optimality justification. It is shown that when dealing with inaccessible or unstable ground-truths (‘elsewheres’ and non-benign targets), the optimality justification undergoes a slight change, which should prompt critical reflection on our epistemic ambitions. The study of ML robustness should therefore involve not only heuristics that yield acceptable accuracies on testing sets but also the justification of human inductive predictions or generalisations about the uniformity between ML models and targets. Without the latter, the assumptions about inductive risk minimisation in ML are not addressed in full.
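A minimal sketch of the train/test paradigm the abstract describes, assuming scikit-learn and its bundled iris dataset purely as stand-ins; the paper does not tie the paradigm to any particular library, dataset, or model:

```python
# Illustrative sketch of the single experimental paradigm: split the
# available data, train on one part, measure generalisation on the other.
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Split the available data into training and testing sets.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Train the model, then measure accuracy on the held-out samples.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
test_accuracy = accuracy_score(y_test, model.predict(X_test))

# The 'a posteriori contract': acceptable test accuracy supposedly
# licenses deployment to a target environment.
print(f"Test accuracy: {test_accuracy:.3f}")
```

Note that the final accuracy figure is where the contract is invoked: it licenses deployment only under the further, human inductive assumption that the target environment is uniform with the test distribution, which is precisely the step the paper asks us to justify.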

Links

PhilArchive

Similar books and articles

Concept Representation Analysis in the Context of Human-Machine Interactions.Farshad Badie - 2016 - In 14th International Conference on e-Society. pp. 55-61.
Inductive logic, verisimilitude, and machine learning.Ilkka Niiniluoto - 2005 - In Petr Hájek, Luis Valdés-Villanueva & Dag Westerståhl (eds.), Logic, methodology and philosophy of science. London: College Publications. pp. 295-314.
Varieties of Justification in Machine Learning.David Corfield - 2010 - Minds and Machines 20 (2):291-301.
Inductive learning by machines.Stuart Russell - 1991 - Philosophical Studies 64 (October):37-64.
The complexity of learning SUBSEQ(A).Stephen Fenner, William Gasarch & Brian Postow - 2009 - Journal of Symbolic Logic 74 (3):939-975.
Understanding from Machine Learning Models.Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
Human Semi-Supervised Learning.Bryan R. Gibson, Timothy T. Rogers & Xiaojin Zhu - 2013 - Topics in Cognitive Science 5 (1):132-172.

Analytics

Added to PP: 2021-02-12
Downloads: 259 (#75,277)
Last 6 months: 84 (#50,382)


Author's Profile

Petr Spelda
Charles University, Prague

Citations of this work

No citations found.

References found in this work

No references found.