Abstract
This article offers a legal-conceptual analysis of the use of counterfactuals as transparency tools for automated decision-making (ADM). The first part of the analysis discusses three notions: transparency, ADM, and generative artificial intelligence. The second part takes a closer look at the pros and cons of counterfactuals in making ADM explainable. Transparency is only useful if it is actionable, that is, if it enables people to challenge systemic bias or unjustified decisions. Existing ways of providing transparency about ADM systems often fall short in this respect. In contrast to many existing transparency tools, counterfactual explanations hold the promise of actionable and individually tailored transparency while not revealing too much of the underlying model. A further strength of counterfactuals is that they show that transparency should not be understood as the immediate visibility of some underlying truth. While promising, counterfactuals have their limitations. First, there is always a multiplicity of counterfactuals for any given decision. Second, counterfactual explanations are not natural givens: they are constructed, and the many design decisions underlying them can turn out for better or worse.