The problem of evaluating automated large-scale evidence aggregators

Synthese (8):3083-3102 (2019)

Abstract

In the biomedical context, policy makers face a large amount of potentially discordant evidence from different sources. This prompts the question of how this evidence should be aggregated in the interests of best-informed policy recommendations. The starting point of our discussion is Hunter and Williams’ recent work on an automated aggregation method for medical evidence. Our negative claim is that it is far from clear what the relevant criteria for evaluating an evidence aggregator of this sort are. What is the appropriate balance between explicitly coded algorithms and implicit reasoning involved, for instance, in the packaging of input evidence? In short: What is the optimal degree of ‘automation’? On the positive side: We propose the ability to perform an adequate robustness analysis as the focal criterion, primarily because it directs efforts to what is most important, namely, the structure of the algorithm and the appropriate extent of automation. Moreover, where there are resource constraints on the aggregation process, one must also consider what balance between volume of evidence and accuracy in the treatment of individual evidence best facilitates inference. There is no prerogative to aggregate the total evidence available if this would in fact reduce overall accuracy.

Similar books and articles

Is meta-analysis the platinum standard of evidence? Jacob Stegenga - 2011 - Studies in History and Philosophy of Science Part C: Studies in History and Philosophy of Biological and Biomedical Sciences 42 (4):497-507.
Robust evidence and secure evidence claims. Kent W. Staley - 2004 - Philosophy of Science 71 (4):467-488.
Corroborating evidence-based medicine. Alexander Mebius - 2014 - Journal of Evaluation in Clinical Practice 20 (6):915-920.
Down with the Hierarchies. Jacob Stegenga - 2014 - Topoi 33 (2):313-322.
Computer models and the evidence of anthropogenic climate change: An epistemology of variety-of-evidence inferences and robustness analysis. Martin Vezér - 2016 - Studies in History and Philosophy of Science 56:95-102.

Analytics

Added to PP
2018-01-05

Downloads
537 (#35,717)

6 months
102 (#46,871)


Author Profiles

Katie Steele
Australian National University
Nicolas Wuethrich
London School of Economics
