Abstract
Completeness is an important but misunderstood norm of explanation. It has recently been argued that mechanistic accounts of scientific explanation are committed to the thesis that models are complete only if they describe everything about a mechanism and, as a corollary, that incomplete models are always improved by adding more details. If so, mechanistic accounts are at odds with the obvious and important role of abstraction in scientific modelling. We respond to this characterization of the mechanist’s views about abstraction and articulate norms of completeness for mechanistic explanations that have no such unwanted implications.

1 Introduction
2 A Balancing Act: When Do Details Matter?
3 The Norms of Causal Explanation
4 The Norms of Constitutive Explanation
5 Salmon-Completeness
6 From More Details to More Relevant Details
7 Non-explanatory Virtues of Abstraction
8 From Explanatory Models to Explanatory Knowledge
9 Mechanistic Completeness Reconsidered
10 Conclusion