Abstract
How can a model that stops short of representing the whole truth about the causal production of a phenomenon help us to understand the phenomenon? I answer this question from the perspective of what I call the simple view of understanding, on which to understand a phenomenon is to grasp a correct explanation of the phenomenon. Idealizations, I have argued in previous work, flag factors that are causally relevant but explanatorily irrelevant to the phenomena to be explained. Though useful to the would-be understander, such flagging is only a first step. Are there any further and more advanced ways that idealized models aid understanding? Yes, I propose: the manipulation of idealized models can provide considerable insight into the reasons that some causal factors are difference-makers and others are not, which helps the understander to grasp the nature of explanatory connections and so to better grasp the explanation itself.