Abstract
In my paper entitled ‘Testimonial injustice in medical machine learning’,1 I argued that machine learning (ML)-based Prescription Drug Monitoring Programmes (PDMPs) could infringe on patients’ epistemic and moral standing, inflicting on them a testimonial injustice.2 I am very grateful for all the comments the paper received, some of which expand on it while others take a more critical view. This response addresses two objections raised against my account of ML-induced testimonial injustice in order to clarify the position taken in the paper. The first maintains that my critical stance towards ML-based PDMPs idealises standard medical practice. Moreover, it claims that the ML-induced testimonial injustice I discuss is not substantially different from situations in which it emerges in human–human interactions. The second claims that my analysis does not establish a link to issues of automation bias, even though these are, on that view, to be considered the core of testimonial injustice in ML. In the following, I address each objection in turn.

Gillett3 argues that my critical stance towards using risk prediction tools such as PDMPs implies an idealisation of standard (ie, non-ML-mediated) modes of clinical practice. Considering certain uses of ML in a different setting, namely psychiatry, the author goes so far as to claim that ‘traditional models of clinical practice in psychiatry are far from a utopia, free from epistemic injustice, which Pozzi’s argument risks proposing’. Since this statement does not represent what I intend to suggest, I am glad to have the opportunity to clarify …