Abstract
The question of how to treat an incapacitated patient is vexed, both normatively and practically—normatively, because it is not obvious what the relevant objectives are; practically, because even once the relevant objectives are set, it is often difficult to determine which treatment option is best given those objectives. But despite these complications, here is one consideration that is clearly relevant: what a patient prefers. And so any device that could reliably identify a patient’s preferences would be a promising tool for guiding the treatment of incapacitated patients. The patient preference predictor (PPP) is just such a tool—an algorithm that takes as inputs a patient’s sociodemographic characteristics, and outputs a reliable prediction about that patient’s treatment preferences.1 But some have worried that the use of such a tool would violate or fail to appropriately respect patients’ autonomy. There are, I think, two ways to understand this kind of criticism. First, globally—as a worry that any systematic implementation of the PPP would be problematic on the grounds that it would result in significant or pervasive autonomy violations. Second, locally—as a worry that in some important range of cases, certain uses of the PPP would be problematic on autonomy grounds. Jardas et al, as I read them, address the global autonomy-based criticisms, arguing—convincingly, in my view—that there is no reason to suspect the autonomy concerns raised by the PPP would be so significant and pervasive as to render any implementation of it generally problematic.1 But even with the global criticisms rebutted, there remains work to be done. Any ethically acceptable implementation of the PPP must be sensitive to the more local autonomy-based criticisms, with restrictions and safeguards in place to ensure respect for autonomy in the kinds of cases in which the use of the PPP might otherwise threaten …