Abstract
Although it has been argued that mechanistic explanation is compatible with abstraction, there are still doubts about whether mechanism can account for the explanatory power of significant abstract models in computational neuroscience. Chirimuuta has recently claimed that models describing canonical neural computations (CNCs) must be evaluated using a non-mechanistic framework. I defend two claims regarding these models. First, I argue that their prevailing neurocognitive interpretation is mechanistic. Moreover, a criterion recently proposed by Levy and Bechtel to legitimize mechanistic abstract models, together with a criterion proposed by Chirimuuta herself aimed at distinguishing between causal and non-causal explanation, can be employed to show why these models are explanatory only under this interpretation. Second, I argue that mechanism is able to account for the special epistemic achievement that CNC models involve. Canonical neural components contribute to an integrated understanding of different cognitive functions: they make it possible to explain these functions by describing different mechanisms constituted by common basic components arranged in different ways.