Abstract
This work is concerned with hierarchical modular descriptions, their algorithmic production, and their importance for certain types of scientific explanation of the structure and dynamical behavior of complex systems. Networks are considered as paradigmatic representations of complex systems. It turns out that the algorithmic detection of hierarchical modularity in networks is a task plagued in certain cases by theoretical intractability and, in most cases, by the still high computational complexity of approximate methods. A new notion, antimodularity, is then proposed: the impossibility of algorithmically obtaining a modular description that fits the explanatory purposes of the observer, for reasons tied to the computational cost of typical algorithmic methods of modularity detection, to the excessive size of the system under assessment, and to the required precision. The occurrence of antimodularity hinders both mechanistic and functional explanation by damaging their intelligibility. A further, more general notion, explanatory emergence, subsumes antimodularity: it covers any case in which a system resists intelligible explanation because of the excessive computational cost of the algorithmic methods required to obtain the relevant explanatory descriptions from the raw data. The possible consequences, and the likelihood, of encountering antimodularity or explanatory emergence in actual scientific practice are finally assessed; the conclusion is that this eventuality is possible, at least in disciplines based on the algorithmic analysis of big data. The present work aims to be an example of how certain notions from theoretical computer science can be fruitfully imported into the philosophy of science.