Abstract
Computer scientists have made great strides in characterizing different measures of algorithmic fairness, and in showing that certain measures of fairness cannot be jointly satisfied. In this paper, I argue that the three most popular families of measures (unconditional independence, target-conditional independence, and classification-conditional independence) rest on assumptions that are unsustainable in the context of an unjust world. I begin by introducing the measures and the implicit idealizations they make about the underlying causal structure of the contexts in which they are deployed. I then discuss how these idealizations fall apart in the context of historical injustice, ongoing unmodeled oppression, and the permissibility of using sensitive attributes to rectify injustice. In the final section, I suggest an alternative framework for measuring fairness in the context of existing injustice: distributive fairness.
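The three families named above are commonly identified with demographic parity (the prediction is independent of the sensitive attribute), equalized odds (independence conditional on the true target), and predictive parity (independence conditional on the classification). The following is a minimal illustrative sketch, not from the paper: the data, variable names, and the binary group/target/prediction setup are all assumptions made for the example.

```python
# Illustrative sketch (not from the paper) of the three families of
# fairness criteria, estimated from a tiny made-up sample.
# A: sensitive attribute, Y: true target, Yhat: classifier's prediction.
A    = [0, 0, 0, 0, 1, 1, 1, 1]   # group membership (hypothetical data)
Y    = [1, 1, 0, 0, 1, 1, 0, 0]   # true outcome
Yhat = [1, 0, 0, 0, 1, 1, 1, 0]   # predicted outcome

def cond_rate(event, given):
    """Sample estimate of P(event | given); nan if the condition is empty."""
    idx = [i for i in range(len(A)) if given(i)]
    if not idx:
        return float("nan")
    return sum(1 for i in idx if event(i)) / len(idx)

# 1. Unconditional independence (demographic parity): Yhat independent of A.
#    Compare P(Yhat = 1 | A = a) across groups.
dp = {a: cond_rate(lambda i: Yhat[i] == 1,
                   lambda i, a=a: A[i] == a)
      for a in (0, 1)}

# 2. Target-conditional independence (equalized odds): Yhat independent of A
#    given Y. Compare true-positive rates P(Yhat = 1 | Y = 1, A = a)
#    (false-positive rates would be checked analogously).
tpr = {a: cond_rate(lambda i: Yhat[i] == 1,
                    lambda i, a=a: A[i] == a and Y[i] == 1)
       for a in (0, 1)}

# 3. Classification-conditional independence (predictive parity): Y independent
#    of A given Yhat. Compare positive predictive values P(Y = 1 | Yhat = 1, A = a).
ppv = {a: cond_rate(lambda i: Y[i] == 1,
                    lambda i, a=a: A[i] == a and Yhat[i] == 1)
       for a in (0, 1)}

print("selection rates per group:", dp)
print("true-positive rates per group:", tpr)
print("positive predictive values per group:", ppv)
```

On this toy sample all three criteria are violated at once (the group-wise rates differ within each dictionary), which is the kind of tension the impossibility results mentioned in the abstract make precise.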