Model-Selection Theory: The Need for a More Nuanced Picture of Use-Novelty and Double-Counting

British Journal for the Philosophy of Science 69 (2):351-375 (2018)

Abstract

This article argues that two common intuitions are too crude: (a) that ‘use-novel’ data are special for confirmation, and (b) that this specialness implies the ‘no-double-counting rule’, according to which data used in ‘constructing’ (calibrating) a model cannot also play a role in confirming the model’s predictions. The intuitions in question are pertinent in all the sciences, but we appeal to a climate science case study to illustrate what is at stake. Our strategy is to analyse the intuitive claims in light of prominent accounts of confirmation of model predictions. We show that on the Bayesian account of confirmation, and also on the standard classical hypothesis-testing account, claims (a) and (b) are not generally true; but in some select cases it is possible to distinguish data used for calibration from use-novel data, where only the latter confirm. The more specialized classical model-selection methods, on the other hand, uphold a nuanced version of claim (a), but this comes apart from (b), which must be rejected in favour of a more refined account of the relationship between calibration and confirmation. Thus, depending on the framework of confirmation, either the scope or the simplicity of the intuitive position must be revised.

Contents

1 Introduction
2 A Climate Case Study
3 The Bayesian Method vis-à-vis Intuitions
4 Classical Tests vis-à-vis Intuitions
5 Classical Model-Selection Methods vis-à-vis Intuitions
  5.1 Introducing classical model-selection methods
  5.2 Two cases
6 Re-examining Our Case Study
7 Conclusion
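To make the double-counting question concrete, the following is a minimal Python sketch (our illustration, not an example from the article): the same data are used both to calibrate polynomial models of increasing complexity and to score them with the Akaike Information Criterion, a classical model-selection method whose penalty for fitted parameters disciplines this reuse. The synthetic data, the polynomial models, and the helper name aic_for_poly_fit are assumptions made for illustration only.

import numpy as np

# Synthetic 'observations': a linear signal plus noise (illustrative assumption).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 30)
y = 2.0 * x + rng.normal(scale=0.3, size=x.size)

def aic_for_poly_fit(degree):
    """Calibrate a polynomial of the given degree on (x, y), then score it
    with AIC on the very same data -- the 'double-counted' use at issue."""
    coeffs = np.polyfit(x, y, degree)      # calibration step (parameter fitting)
    resid = y - np.polyval(coeffs, x)
    n = x.size
    k = degree + 2                         # fitted coefficients plus noise variance
    sigma2 = np.mean(resid ** 2)           # maximum-likelihood noise variance
    log_lik = -0.5 * n * (np.log(2 * np.pi * sigma2) + 1.0)
    return 2 * k - 2 * log_lik             # AIC = 2k - 2 ln L

for d in (1, 2, 5):
    print(f"degree {d}: AIC = {aic_for_poly_fit(d):.1f}")

In this sketch, a degree-5 polynomial fits the calibration data more closely than the linear model but is penalized for its extra parameters, so a low AIC score is not guaranteed merely by reusing the data. This gestures at how model-selection methods can uphold a nuanced version of claim (a) while rejecting the blanket no-double-counting rule (b).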

Similar books and articles

Climate models, calibration, and confirmation. Katie Steele & Charlotte Werndl - 2013 - British Journal for the Philosophy of Science 64 (3):609-635.
Predictivism and old evidence: a critical look at climate model tuning. Mathias Frisch - 2015 - European Journal for Philosophy of Science 5 (2):171-190.
Model tuning in engineering: uncovering the logic. Katie Steele & Charlotte Werndl - 2015 - Journal of Strain Analysis for Engineering Design 51 (1):63-71.
Bayesian Confirmation: A Means with No End. Peter Brössel & Franz Huber - 2015 - British Journal for the Philosophy of Science 66 (4):737-749.

Author Profiles

Charlotte Sophie Werndl (London School of Economics)
Katie Steele (Australian National University)
