Abstract
Two long-standing arguments in cognitive science invoke the assumption that holistic inference is computationally infeasible. The first is Fodor’s skeptical argument against computational modeling of ordinary inductive reasoning. The second advocates modular computational mechanisms of the kind posited by Cosmides, Tooby, and Sperber. Drawing on advances in machine learning related to Bayes nets, as well as investigations into the structure of scientific and ordinary information, I maintain that neither argument establishes its architectural conclusion. Similar considerations also undermine Fodor’s decades-long diagnosis of artificial intelligence research as confounded by an inability to circumscribe the amount of information relevant to inferential processes. This diagnosis is particularly inapposite with respect to Bayes nets, since one of their strengths as machine learning systems has been their capacity to reason probabilistically about large data sets whose size overwhelms the capacities of individual human reasoners. A general moral follows from these criticisms: insights into artificial and human cognitive systems are likely to be cultivated by focusing greater attention on the structure and density of connections among the items of information available to them.