Abstract
I use some ideas of Keith DeRose’s to develop an (invariantist!) account of why sceptical reasoning doesn’t show that I don’t know that I’m not a brain in a vat. I argue that knowledge is subject to the risk-of-error constraint: a true belief won’t have the status of knowledge if there is a substantial risk of the belief being in error that hasn’t been brought under control. When a substantial risk of error is present (i.e. for beliefs in propositions that are false in nearby worlds), satisfying the constraint requires bringing the risk under control. This is achieved either by sensitivity (i.e. you wouldn’t have the belief if it were false) or by identifying evidence for the proposition. When the risk of error is not substantial (i.e. for beliefs in propositions that are not false in nearby worlds), however, the constraint is satisfied by default. My belief that I am not a brain in a vat is insensitive, and I have no evidence for it; but since it is not false in nearby worlds, it satisfies the constraint by default.