ACM Computing Surveys (forthcoming)

Petr Spelda
Charles University, Prague
As our epistemic ambitions grow, common and scientific endeavours alike are becoming increasingly dependent on Machine Learning (ML). The field rests on a single experimental paradigm, which consists of splitting the available data into training and testing sets and using the latter to measure how well the trained ML model generalises to unseen samples. If the model reaches acceptable accuracy, an a posteriori contract comes into effect between humans and the model, supposedly allowing its deployment to target environments. Yet the latter part of the contract depends on human inductive predictions or generalisations, which infer a uniformity between the trained ML model and the targets. The paper asks how we justify this contract between human and machine learning. It is argued that the justification becomes a pressing issue when we use ML to reach ‘elsewheres’ in space and time or deploy ML models in non-benign environments. The paper argues that the only viable version of the contract can be based on optimality (instead of on reliability, which cannot be justified without circularity) and aligns this position with Schurz’s optimality justification. It is shown that when dealing with inaccessible or unstable ground-truths (‘elsewheres’ and non-benign targets), the optimality justification undergoes a slight change, which should make us reflect critically on our epistemic ambitions. Therefore, the study of ML robustness should involve not only heuristics that lead to acceptable accuracies on testing sets; the justification of human inductive predictions or generalisations about the uniformity between ML models and targets should be included as well. Without it, the assumptions about inductive risk minimisation in ML are not addressed in full.
Keywords: Machine Learning · Inductive Inference · Robustness

Similar books and articles

Concept Representation Analysis in the Context of Human-Machine Interactions. Farshad Badie - 2016 - In 14th International Conference on e-Society. pp. 55-61.
Inductive Logic, Verisimilitude, and Machine Learning. Ilkka Niiniluoto - 2005 - In Petr Hájek, Luis Valdés-Villanueva & Dag Westerståhl (eds.), Logic, Methodology and Philosophy of Science. London: College Publications. pp. 295-314.
Varieties of Justification in Machine Learning. David Corfield - 2010 - Minds and Machines 20 (2):291-301.
Inductive Learning by Machines. Stuart Russell - 1991 - Philosophical Studies 64 (October):37-64.
The Complexity of Learning SUBSEQ(A). Stephen Fenner, William Gasarch & Brian Postow - 2009 - Journal of Symbolic Logic 74 (3):939-975.
Understanding From Machine Learning Models. Emily Sullivan - 2022 - British Journal for the Philosophy of Science 73 (1):109-133.
Human Semi-Supervised Learning. Bryan R. Gibson, Timothy T. Rogers & Xiaojin Zhu - 2013 - Topics in Cognitive Science 5 (1):132-172.