We provide three innovations to recent debates about whether topological or “network” explanations are a species of mechanistic explanation. First, we more precisely characterize the requirement that all topological explanations are mechanistic explanations and show scientific practice to belie such a requirement. Second, we provide an account that unifies mechanistic and non-mechanistic topological explanations, thereby enriching both the mechanist and autonomist programs by highlighting when and where topological explanations are mechanistic. Third, we defend this view against some powerful mechanist objections. We conclude from this that topological explanations are autonomous from their mechanistic counterparts.
This chapter provides a systematic overview of topological explanations in the philosophy of science literature. It does so by presenting an account of topological explanation that I (Kostić and Khalifa 2021; Kostić 2020a; 2020b; 2018) have developed in other publications and then comparing this account to other accounts of topological explanation. Finally, this appraisal is opinionated because it highlights some problems in alternative accounts of topological explanations, and it also outlines responses to some of the main criticisms raised by the so-called new mechanists.
In this paper, I present a general theory of topological explanations, and illustrate its fruitfulness by showing how it accounts for explanatory asymmetry. My argument is developed in three steps. In the first step, I show what it is for some topological property A to explain some physical or dynamical property B. Based on that, I derive three key criteria of successful topological explanations: a criterion concerning the facticity of topological explanations, i.e. what makes such an explanation true of a particular system; a criterion for describing counterfactual dependencies in two explanatory modes, i.e. the vertical and the horizontal; and, finally, a third, perspectival one that tells us when to use the vertical and when to use the horizontal mode. In the second step, I show how this general theory of topological explanations accounts for explanatory asymmetry in both the vertical and horizontal explanatory modes. Finally, in the third step, I argue that this theory is universally applicable across the biological sciences, which helps to unify essential concepts of biological networks.
In this paper, I argue that the newly developed network approach in neuroscience and biology provides a basis for formulating a unique type of realization, which I call topological realization. Some of its features and its relation to one of the dominant paradigms of realization and explanation in the sciences, i.e. the mechanistic one, are already being discussed in the literature. But the detailed features of topological realization, its explanatory power, and its relation to another prominent view of realization, namely the semantic one, have not yet been discussed. I argue that topological realization is distinct from mechanistic and semantic realization because the realization base in this framework is not based on local realizers, regardless of the scale, but on global realizers. In the mechanistic approach, the realization base is always at the local level, in both ontic and epistemic accounts. The explanatory power of the realization relation in the mechanistic approach comes directly from the realization relation itself: either by showing how a model is mapped onto a mechanism, or by describing some ontic relations that are explanatory in themselves. Similarly, the semantic approach requires that concepts at different scales logically satisfy microphysical descriptions, which are at the local level. In the topological framework, the realization base can be found at different scales, but whatever the scale, the realization base is global within that scale, and not local. Furthermore, topological realization enables us to answer “why” questions, which according to Polger (2010) makes it explanatory. The explanatoriness of topological realization stems from understanding the mathematical consequences of different topologies, not from the mere fact that a system realizes them.
Proponents of ontic conceptions of explanation require all explanations to be backed by causal, constitutive, or similar relations. Among their justifications is that only ontic conceptions can do justice to the ‘directionality’ of explanation, i.e., the requirement that if X explains Y, then not-Y does not explain not-X. Using topological explanations as an illustration, we argue that non-ontic conceptions of explanation have ample resources for securing the directionality of explanations. The different ways in which neuroscientists rely on multiplexes involving both functional and anatomical connectivity in their topological explanations vividly illustrate why ontic considerations are frequently (if not always) irrelevant to explanatory directionality. Therefore, directionality poses no problem to non-ontic conceptions of explanation.
In the last 20 years or so, since the publication of a seminal paper by Watts and Strogatz (Nature 393:440–442, 1998), an interest in topological explanations has spread like wildfire over many areas of science, e.g. ecology, evolutionary biology, medicine, and cognitive neuroscience. The topological approach is still very young by all standards, and even within the special sciences it still doesn’t have a single methodological programme that is applicable across all areas of science. That is why this special issue is important as a first systematic philosophical study of topological explanations and their relation to a well-understood and widespread explanatory strategy, such as the mechanistic one.
We provide two programmatic frameworks for integrating philosophical research on understanding with complementary work in computer science, psychology, and neuroscience. First, philosophical theories of understanding have consequences for how agents should reason if they are to understand, and these consequences can then be evaluated empirically by their concordance with findings in scientific studies of reasoning. Second, these studies use a multitude of explanations, and a philosophical theory of understanding is well suited to integrating these explanations in illuminating ways.
In this paper, I outline a heuristic for thinking about the relation between explanation and understanding that can be used to capture various levels of “intimacy” between them. I argue that the level of complexity in the structure of explanation is inversely proportional to the level of intimacy between explanation and understanding, i.e. the more complexity, the less intimacy. I further argue that the level of complexity in the structure of explanation also affects explanatory depth in a similar way to intimacy between explanation and understanding, i.e. the less complexity, the greater the explanatory depth, and vice versa.
I argue that the hard problem of consciousness occurs only in very limited contexts. My argument is based on the idea of explanatory perspectivalism, according to which what we want to know about a phenomenon determines the type of explanation we use to understand it. To that effect, the hard problem arises only in regard to questions such as how it is that concepts of subjective experience can refer to physical properties, but not concerning questions such as what gives rise to qualia or why certain brain states have certain qualities and not others. In this sense we could, for example, fully explain why certain brain processes have certain subjective qualities, while we still don’t have a viable theory of concepts that explains the co-referentiality of phenomenal and physical concepts. Given this limitation, the hard problem doesn’t pose a problem for the empirical study of consciousness.
Over the last two decades, network-focused approaches have become highly popular in diverse fields of biology, including neuroscience, ecology, molecular biology and genetics. While the network approach continues to grow very rapidly, some of its conceptual and methodological aspects still require a programmatic foundation. This challenge particularly concerns the question of whether a generalized account of explanatory, organisational and descriptive levels of networks can be applied universally across the biological sciences. Consequently, the central focus of this theme issue will be on the definition, motivation and application of key concepts in biological network science, such as levels, hierarchies, and explanatory directionality. A unification will be achieved by formulating norms for delimiting the distinctively network-topological class of explanations that connect general as well as very specific biological research questions. The impact of this theme issue is broad and encompassing, as it is highly interdisciplinary and opens a uniquely normative perspective on the foundational aspects of network-based explanations and modelling. This theme issue is intended to become a landmark publication for practical research as well as funding policy decisions, by unifying network approaches in the biological sciences in terms of fundamental concepts and informing the public understanding of network science.
This paper is concerned with a quality space model as an account of the intelligibility of explanation. I argue that descriptions of causal or functional roles (Chalmers, Levine, 2001) are not the only basis for intelligible explanations. If we accept that phenomenal concepts refer directly, not via descriptions of causal or functional roles, then it is difficult to find role fillers for the described causal roles. This constitutes a vagueness constraint on the intelligibility of explanation. Thus, I propose to use quality space models to develop a systematic way of studying different modalities of perception and feelings, e.g., visual and auditory perception, pain, and emotion, that can reveal some structural relations among these modalities. It might turn out that topological explanation can be more intelligible than causal explanation in this case. I discuss two accounts of a quality space for color vision (Clark, 2000; Rosenthal, 2010) and propose how to construct a quality space for pain. Daniel Kostic is Associated Researcher at the Berlin School of Mind and Brain.
Over the last decades, network-based approaches have become highly popular in diverse fields of biology, including neuroscience, ecology, molecular biology and genetics. While these approaches continue to grow very rapidly, some of their conceptual and methodological aspects still require a programmatic foundation. This challenge particularly concerns the question of whether a generalized account of explanatory, organisational and descriptive levels of networks can be applied universally across the biological sciences. To this end, this highly interdisciplinary theme issue focuses on the definition, motivation and application of key concepts in biological network science, such as the explanatory power of distinctively network-based explanations, network levels, and network hierarchies.
This paper examines the explanatory gap account. The key notions for its proper understanding are analysed. In particular, the analysis is concerned with the role of “thick” and “thin” modes of presentation and “thick” and “thin” concepts, which are relevant for the notions of “thick” and “thin” conceivability, and to that effect relevant for the gappy and non-gappy identities. The last section of the paper discusses the issue of the intelligibility of explanations. One of the conclusions is that the explanatory gap account only succeeds in establishing the epistemic gap. The claim that psychophysical identity is not intelligibly explicable, and thus opens the explanatory gap, would require an independent argument which would prove that intelligible explanations stem only from conceptual analysis. This, I argue, is not the case.
This paper is divided into three sections. In the first section I briefly outline the background of the problem, i.e. Kripke’s modal argument (Kripke 1980). In the second section I present Chalmers’ account of two-dimensional semantics and the two-dimensional argument against physicalism. In the third section I criticize Chalmers’ approach based on two crucial points: one about the necessity of identities, and the other about microphysical descriptions and a priori derivation.
In the last couple of years a few seemingly independent debates on scientific explanation have emerged, with several key questions that take different forms in different areas. For example, the questions of what makes an explanation distinctly mathematical and whether there are any non-causal explanations in the sciences sometimes take the form of the question of what makes mathematical models explanatory, especially whether highly idealized models in science can be explanatory and in virtue of what they are explanatory. These questions raise further issues about counterfactuals, modality, and explanatory asymmetries: i.e., do mathematical and non-causal explanations support...
Kareem Khalifa’s Understanding, Explanation, and Scientific Knowledge is a splendid book, written in a beautiful and accessible style. It provides the ultimate articulation of his account of explanatory understanding that I am sure will be regarded as one of the landmark publications on the topic of scientific understanding. Many of the central questions regarding scientific understanding are treated from different perspectives in the book. Such questions are: Does understanding require explanations? Must it consist of mostly true information? Is it a species of knowledge? I cannot do justice to all the intricate details of Khalifa’s arguments in a short piece like this, so I will focus on discussing his main line of argument and some of the most salient questions that are addressed in the book, instead of summarizing and commenting on each chapter.
This thesis evaluates several powerful arguments that not only deny that brain states and conscious states are one and the same thing, but also claim that such an identity is unintelligible. I argue that these accounts do not undermine physicalism because they don’t provide any direct or independent justification for their tacit assumptions about a link between modes of presentation and explanation. In my view, the intelligibility of psychophysical identity should not be based exclusively on the analysis of meaning. The main concern then should be why we should expect that a fully intelligible explanation must be based on descriptions of causal roles as modes of presentation. To this effect I propose that we examine "psychological concepts". Psychological concepts are concepts that use descriptions of functional roles but are about the qualities of our experiences. I propose to analyze them in quality space models in order to unveil why phenomenal concepts are expected to refer via descriptions of causal or functional roles. The quality space should be understood here as a multidimensional space consisting of several axes of relative similarity and difference among the structures of ordering in different modalities of conscious experience. On my proposal, it is possible that some axes in the quality space consist of their own quality spaces, so we could “zoom in” and “zoom out” of the descriptions of the functional roles and see more clearly what the explanation of certain aspects of consciousness looks like when thought of in terms of psychological concepts.