Social research, from economics to demography and epidemiology, makes extensive use of statistical models in order to establish causal relations. The question arises as to what guarantees the causal interpretation of such models. In this paper we focus on econometrics and advance the view that causal models are ‘augmented’ statistical models that incorporate important causal information which contributes to their causal interpretation. The primary objective of this paper is to argue that causal claims are established on the basis of a plurality of evidence. We discuss the consequences of ‘evidential pluralism’ in the context of econometric modelling.
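As a schematic illustration of the ‘augmentation’ at issue (a sketch of ours, not the paper's own formalism): a statistical model records an association, while its causal counterpart adds structural assumptions that are supposed to survive interventions.

$E[Y \mid X = x] = \alpha + \beta x$  (statistical model: a conditional expectation only)
$Y := \alpha + \beta X + U$, with $U \perp\!\!\!\perp X$  (structural equation: an assumption about the data-generating mechanism)
$E[Y \mid do(X = x)] = \alpha + \beta x$  (the causal claim licensed by the added assumptions)

The two models share the same regression equation; only the second underwrites the do-claim.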
The logical hexagon (or hexagon of opposition) is a strange, yet beautiful, highly symmetrical mathematical figure, mysteriously intertwining fundamental logical and geometrical features. It was discovered more or less at the same time (i.e. around 1950), independently, by a few scholars. It is the successor of an equally strange (but mathematically less impressive) structure, the “logical square” (or “square of opposition”), of which it is a much more general and powerful “relative”. The discovery of the former (the hexagon) raised no interest, either among logicians or among philosophers of logic, whereas the latter (the square) played a very important theoretical role (both for logic and philosophy) for nearly two thousand years, before falling into disgrace in the first half of the twentieth century: it was, so to say, “sentenced to death” by the so-called analytical philosophers and logicians. In contrast, since 2004 a new, unexpectedly promising branch of mathematics (dealing with “oppositions”) has appeared, “oppositional geometry” (also called “n-opposition theory”, “NOT”), inside which the logical hexagon (as well as its predecessor, the logical square) is only one term of an infinite series of “logical bi-simplexes of dimension m”, itself just one term of the more general infinite series (of series) of the “logical poly-simplexes of dimension m”. In this paper we recall the main historical and theoretical elements of these neglected recent discoveries. After proposing some new results, among which the notion of “hybrid logical hexagon”, we show which strong reasons, inside oppositional geometry, make it clear that the logical hexagon is in fact a very important and profound mathematical structure, destined for many fruitful future developments and probably the bearer of a major epistemological paradigm shift.
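For reference, a minimal sketch of the hexagon in its standard modal instantiation (textbook material, not specific to this paper, assuming a normal modal logic in which $\Box p$ entails $\Diamond p$, e.g. KD or stronger): the four corners of the square are $A = \Box p$, $E = \Box\neg p$, $I = \Diamond p$, $O = \Diamond\neg p$, and the hexagon adds two vertices,

$U := A \lor E$  (non-contingency),  $Y := I \land O$  (contingency)
Contradictories: $A$–$O$, $E$–$I$, $U$–$Y$
Contraries (never true together): $A$, $E$, $Y$, pairwise
Subcontraries (never false together): $I$, $O$, $U$, pairwise
Subalternations (implications): $A \to U$, $E \to U$, $A \to I$, $E \to O$, $Y \to I$, $Y \to O$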
Whereas geometrical oppositions (logical squares and hexagons) have so far been investigated in many fields of modal logic (both abstract and applied), the oppositional geometrical side of “deontic logic” (the logic of “obligatory”, “forbidden”, “permitted”, . . .) has rather been neglected. Besides the classical “deontic square” (the deontic counterpart of Aristotle’s “logical square”), some interesting attempts have nevertheless been made to deepen the geometrical investigation of the deontic oppositions: Kalinowski (La logique des normes, PUF, Paris, 1972) has proposed a “deontic hexagon” as the geometrical representation of standard deontic logic, whereas Joerden (jointly with Hruschka, in Archiv für Rechts- und Sozialphilosophie 73:1, 1987), McNamara (Mind 105:419, 1996) and Wessels (Die gute Samariterin. Zur Struktur der Supererogation, Walter de Gruyter, Berlin, 2002) have proposed new “deontic polygons” for dealing with conservative extensions of standard deontic logic internalising the concept of “supererogation”. Since 2004 a new formal science of the geometrical oppositions inside logic has appeared, “n-opposition theory”, or “NOT”, which relies on the notion of the “logical bi-simplex of dimension m” (m = n − 1). This theory received a complete mathematical foundation in 2008, and since then several extensions. In this paper, by using it, we show that in standard deontic logic there are in fact many more oppositional deontic figures than Kalinowski’s unique “hexagon of norms” (more numerous, and geometrically more complex: “deontic squares”, “deontic hexagons”, “deontic cubes”, . . ., “deontic tetraicosahedra”, . . .): the real geometry of the oppositions between deontic modalities is composed of the aforementioned structures (squares, hexagons, cubes, . . ., tetraicosahedra and hyper-tetraicosahedra), whose complete mathematical closure turns out to be a “deontic 5-dimensional hyper-tetraicosahedron” (a very regular oppositional solid).
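For orientation, the standard deontic instantiation of the hexagon sketched above (a textbook sketch; the figures studied in the paper are far richer): with $O$ for “obligatory” and $P$ for “permitted”,

$A = Op$  (obligatory),  $E = O\neg p$  (forbidden),  $I = Pp$  (permitted),  $O^{*} = P\neg p$  (omissible)
$U = Op \lor O\neg p$  (non-optional),  $Y = Pp \land P\neg p$  (optional, indifferent)

The hexagon relations of the previous sketch then hold verbatim, with the D axiom ($Op \to Pp$) doing the work of the modal entailment.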
We investigate an extension of the formalism of interpreted systems by Halpern and colleagues to model the correct behaviour of agents. The semantic model allows for the representation of, and reasoning about, states of correct and incorrect functioning behaviour of the agents, and of the system as a whole. We axiomatise this semantic class by mapping it into a suitable class of Kripke models. The resulting logic, $\text{KD}45_{n}^{i-j}$, is a stronger version of KD, the system often referred to as Standard Deontic Logic. We extend this formal framework to include the standard epistemic notions defined on interpreted systems, and introduce a new doubly-indexed operator representing the knowledge that an agent would have if it operated under the assumption that a group of agents is functioning correctly. We discuss these issues both theoretically and in terms of applications, and present further directions of work.
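For reference, the standard axiomatisation underlying these systems (textbook material; the paper's logic adds the i-j indexing to this base): KD is the smallest normal modal logic containing K and D, and KD45 adds the 4 and 5 schemata.

K: $\Box(\varphi \to \psi) \to (\Box\varphi \to \Box\psi)$
D: $\Box\varphi \to \Diamond\varphi$
4: $\Box\varphi \to \Box\Box\varphi$
5: $\Diamond\varphi \to \Box\Diamond\varphi$
plus modus ponens and necessitation (from $\vdash \varphi$ infer $\vdash \Box\varphi$).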
Some Carrollian posthumous manuscripts reveal, in addition to his famous ‘logical diagrams’, two mysterious ‘logical charts’. The first chart, a strange network making out of fourteen logical sentences a large 2D ‘triangle’ containing three smaller ones, has been shown to be equivalent (modulo the rediscovery of a fourth smaller triangle implicit in Carroll's global picture) to a 3D tetrahedron, the four triangular faces of which are the 3+1 Carrollian complex triangles. As it happens, such a hitherto very mysterious 3D logical shape, slightly deformed, has been rediscovered, independently of Carroll and much later, by a logician, a mathematician and a linguist studying the geometry of the ‘opposition relations’, that is, the mathematical generalisations of the ‘logical square’. We show that inside what is called equivalently ‘n-opposition theory’, ‘oppositional geometry’ or ‘logical geometry’, Carroll's first chart corresponds exactly, duly reshaped…
According to an influential line of thought, from the assumption that indeterminism makes future contingents neither true nor false, one can conclude that assertions of future contingents are never permissible. This conclusion, however, fails to recognize that we ordinarily assert future contingents even when we take the future to be unsettled. Several attempts have been made to solve this puzzle, either by arguing that, albeit truth-valueless, future contingents can nevertheless be correctly asserted, or by rejecting the claim that future contingents are truth-valueless. The paper examines three of the most representative accounts in line with the first attempt, and concludes that none of them succeeds in providing a persuasive answer as to why we felicitously assert future contingents.
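For the formal backdrop (standard branching-time semantics, not the paper's own apparatus): on the Ockhamist account truth is relative to a moment/history pair, and the truth-value gap arises, on a supervaluationist reading, when histories disagree.

$\langle m, h \rangle \models F\varphi$ iff there is $m' \in h$ with $m < m'$ and $\langle m', h \rangle \models \varphi$
$m \models \varphi$ (supervaluationally) iff $\langle m, h \rangle \models \varphi$ for every history $h$ passing through $m$

A future contingent $F\varphi$ is then neither supertrue nor superfalse at $m$ when it holds on some but not all histories through $m$.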
Excerpt from Über die Erkennbarkeit der Gegenstände: “It was above all Windelband's doctrines of chance that drew attention to the epistemological significance of the concept of chance.”
Sleep and dreaming are important daily phenomena that are receiving growing attention from both the scientific and the philosophical communities. The increasingly popular predictive brain framework within cognitive science aims to give a full account of all aspects of cognition. The aim of this paper is to critically assess the theoretical advantages of Predictive Processing (PP, as proposed by Clark 2013, 2016; and Hohwy 2013) in defining sleep and dreaming. After a brief introduction, we overview the state of the art at the intersection between dream research and PP (with particular reference to Hobson and Friston 2012; Hobson et al. 2014). In the following sections we focus on two theoretically promising aspects of the research program. First, we consider the explanations of phenomenal consciousness during sleep (i.e. dreaming) and how it arises from the neural work of the brain. PP provides a good picture of the peculiarity of dreaming, but it cannot fully address the problem of how consciousness comes to be in the first place. We propose that Integrated Information Theory (IIT) (Oizumi et al. 2014; Tononi et al. 2016) is a good candidate for this role, and we show its advantages and points of contact with PP. After introducing IIT, we deal with the evolutionary function of sleeping and dreaming. We illustrate that PP fits with contemporary research on the important adaptive function of sleep, and we discuss why IIT can account for sleep mentation (i.e. dreaming) in evolutionary terms (Albantakis et al. 2014). In the final section, we discuss two future avenues for dream research that can fruitfully adopt the perspective offered by PP: (a) the role of bodily predictions in the constitution of sleeping brain activity and the dreaming experience, and (b) the precise role of the different stages of sleep (REM, rapid eye movement; NREM, non-rapid eye movement) in the constitution and refinement of the predictive machinery.
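Since PP's formal core is prediction-error minimisation, a toy single-level sketch in Python may help fix ideas (our illustration, with hypothetical variable names; nothing here is specific to sleep research or to the cited papers):

# Toy single-level predictive-coding loop: a latent estimate mu generates a
# prediction g(mu); the precision-weighted prediction error drives the update
# of mu (gradient ascent on a Gaussian log-likelihood).

def g(mu):            # generative model: maps the latent cause to a prediction
    return 2.0 * mu

def g_prime(mu):      # derivative of the generative model
    return 2.0

x = 1.6               # sensory observation
mu = 0.0              # initial estimate of the latent cause
precision = 4.0       # inverse variance: how strongly the error is weighted
step = 0.05           # integration step size

for _ in range(200):
    error = x - g(mu)                            # prediction error
    mu += step * precision * error * g_prime(mu)

print(mu, g(mu))      # mu converges to 0.8, so the prediction g(mu) matches x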
Based on a gambling formulation of quantum mechanics, we derive a Gleason-type theorem that holds for any dimension n of a quantum system, and in particular for $n = 2$. The theorem states that the only logically consistent probability assignments are exactly the ones that are definable as the trace of the product of a projector and a density matrix operator. In addition, we detail the reason why dispersion-free probabilities are actually not valid, or rational, probabilities for quantum mechanics, and hence should be excluded from consideration.
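The probability assignments the theorem singles out are the familiar trace-rule ones, $p = \mathrm{Tr}(P\rho)$; a minimal numerical illustration in numpy (a generic textbook example, not the paper's derivation):

import numpy as np

# Born rule as the trace of projector times density matrix: p = Tr(P rho).
# Example for n = 2: the qubit state |+> and the projector onto |0>.
psi = np.array([[1.0], [1.0]]) / np.sqrt(2)   # |+> as a column vector
rho = psi @ psi.conj().T                      # pure-state density matrix
P0 = np.array([[1.0, 0.0], [0.0, 0.0]])       # projector onto |0>

p = np.trace(P0 @ rho).real
print(p)  # 0.5: probability of outcome 0 when measuring |+> in the computational basis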
This paper addresses a fundamental line of research in neuroscience: the identification of a putative neural processing core of the cerebral cortex, often claimed to be “canonical”. This “canonical” core would be shared by the entire cortex, and would explain why it is so powerful and diversified in tasks and functions, yet so uniform in architecture. The purpose of this paper is to analyze the search for canonical explanations over the past 40 years, discussing the theoretical frameworks informing this research. It will highlight a bias that, in my opinion, has limited the success of this research project: that of overlooking the dimension of cortical development. The earliest explanation of the cerebral cortex as canonical was attempted by David Marr, who derived putative cortical circuits from general mathematical laws, loosely following a deductive-nomological account. Although Marr’s theory turned out to be incorrect, one of its merits was to have put the issue of cortical circuit development at the top of his agenda. This aspect has been largely neglected in much of the research on canonical models that has followed. Models proposed in the 1980s were conceived as mechanistic. They identified a small number of components that interacted as a basic circuit, with each component defined as a function. More recent models have been presented as idealized canonical computations, distinct from mechanistic explanations, due to the lack of identifiable cortical components. Currently, the entire enterprise of coming up with a single canonical explanation has been criticized as misguided, and the premise of the uniformity of the cortex has been strongly challenged. This debate is analyzed here. The legacy of the canonical circuit concept is reflected in both positive and negative ways in recent large-scale brain projects, such as the Human Brain Project. One positive aspect is that these projects might achieve the aim of producing detailed simulations of cortical electrical activity; a negative one concerns whether they will be able to find ways of simulating how circuits actually develop.
This paper analyzes the rapid and unexpected rise of deep learning within Artificial Intelligence and its applications. It tackles the possible reasons for this remarkable success, providing candidate paths towards a satisfactory explanation of why it works so well, at least in some domains. A historical account is given of the ups and downs that have characterized neural network research and its evolution from “shallow” to “deep” learning architectures. A precise account of “success” is given, in order to sift out aspects pertaining to the marketing or sociology of research; the remaining aspects seem to certify the genuine value of deep learning, which calls for explanation. The two alleged main propelling factors for deep learning, namely computing hardware performance and neuroscience findings, are scrutinized, and evaluated as relevant but insufficient for a comprehensive explanation. We review various attempts that have been made to provide mathematical foundations able to justify the efficiency of deep learning, and we deem this the most promising road to follow, even if the current achievements are still scattered and relevant only to very limited classes of deep neural models. The authors’ take is that most of what explains why deep learning works at all, and indeed very well, across so many domains of application is still to be understood, and further research addressing the theoretical foundations of artificial learning is still very much needed.
In this paper we defend structural representations, more specifically neural structural representations. We are not alone in this; many are currently engaged in this endeavor. The direction we take, however, diverges from the main road, a road paved by the mathematical theory of measurement, which, in the 1970s, established homomorphism as the way to map empirical domains of things in the world to the codomain of numbers. By adopting the mind as codomain, this mapping became a boon for all those convinced that a representation system should bear similarities with what is being represented, but who struggled to find a precise account of what such similarity means. The euphoria was brief, however, and soon homomorphism revealed serious weaknesses, the primary one being that it admitted systems embarrassingly alien to representations. We find that the defense attempts that have followed adopt strategies that share a common format: valid structural representations come as “homomorphism plus X”, with various “X”, provided in descriptive format only. Our alternative direction stems from an overlooked difference between homomorphism as used in measurement theory and its later use for mental representations. In the former case the codomain, the realm of numbers, is ideally suited for developing theorems detailing the existence and uniqueness of homomorphisms for a wide range of empirical domains. In the latter case, the codomain is the realm of the mind, possibly more vague and more ill-defined than the empirical domain itself. The time is ripe for articulating the mapping between represented domains and the mind in formal terms, by exploiting what is currently known about coding mechanisms in the brain. We provide a sketch of a possible development in this direction, one that adopts the theory of neural population coding as codomain. We show that our framework is not only not in disagreement with the “plus X” proposals, but can lead to a natural derivation of several of the “X”.
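For reference, the measurement-theoretic notion the paper takes as its starting point (standard textbook form, e.g. for extensive measurement): a homomorphism is a structure-preserving map from an empirical relational structure into the numbers,

$f : D \to \mathbb{R}$ is a homomorphism from $\langle D, \succsim, \circ \rangle$ to $\langle \mathbb{R}, \geq, + \rangle$ iff for all $a, b \in D$:
$a \succsim b \iff f(a) \geq f(b)$  and  $f(a \circ b) = f(a) + f(b)$

The representation and uniqueness theorems alluded to above assert that such an $f$ exists for suitable empirical structures and is unique up to the relevant class of transformations.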
Introduction: Anorexia nervosa (AN) promotes psychological distress in caregivers, who adopt different coping strategies. Dysfunctional caregiving styles exacerbate further distress in the patient, promoting the maintenance of the illness. We aimed to assess the possible contribution of caregivers’ personality traits to the adoption of different coping strategies for dealing with the affected relative. Methods: About 87 adolescents with AN were recruited. Their parents completed the Family Coping Questionnaire for Eating Disorders and the Temperament and Character Inventory-Revised. Differences between mothers and fathers were assessed through independent-samples t-tests. Multivariate regression analyses were run to assess whether personality traits, the occurrence of psychiatric conditions in the parents, marital status, and illness duration predicted parental coping strategies. Results: Mothers showed higher levels of avoidance and information-seeking coping strategies than fathers. Shorter illness duration predicted greater collusion with the illness in both parents. Harm avoidance, cooperativeness, and self-directedness positively predicted parental coercion, collusion, and information-seeking strategies, with some differences between mothers and fathers. Discussion: Illness duration and parents’ personality traits affect the type of coping strategies developed to face AN in adolescents. These variables should be considered in the assessment of families of adolescents with AN and may be addressed to promote more finely tuned clinical interventions for caregivers.
We argue that there is a simple, unique reason for all quantum paradoxes, and that such a reason is not uniquely related to quantum theory. It is rather a mathematical question that arises at the intersection of logic, probability, and computation. We give our ‘weirdness theorem’, which characterises the conditions under which weirdness shows up. It shows that whenever logic has bounds due to the algorithmic nature of its tasks, weirdness arises in the special form of negative probabilities or non-classical evaluation functionals. Weirdness is not logical inconsistency, however. It is only the expression of the clash between an unbounded and a bounded view of computation in logic. We discuss the implications of these results for quantum mechanics, arguing in particular that its interpretation should ultimately be computational rather than exclusively physical. In addition, we develop a probabilistic theory over the real numbers that exhibits the phenomenon of entanglement, thus concretely showing that the latter is not specific to quantum mechanics.
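A generic illustration of how negative probabilities can hide behind perfectly classical statistics (a toy example of ours, not taken from the paper): consider a joint quasi-distribution over two bits,

$q(0,0) = 0.6, \quad q(0,1) = -0.1, \quad q(1,0) = 0.2, \quad q(1,1) = 0.3$

The entries sum to 1 and every marginal is a proper probability ($P(a{=}0) = 0.5$, $P(b{=}0) = 0.8$), so the negativity is invisible at the level of anything observable; this is the sense in which weirdness is not logical inconsistency.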
Vera Zasulich’s shooting, in January 1878, of Trepov, the governor of St Petersburg who had ordered the flogging of a political prisoner, catapulted her to international fame as a revolutionary heroine, a reputation that she put to good use by becoming one of the five ‘founding parents’ of Russian Marxism who created the ‘Group for the Emancipation of Labour’ in 1883. But her act of self-sacrifice also triggered, to her dismay, the institutionalisation of individual terrorist tactics in the Russian Populist movement with the creation of the ‘People’s Will’ (Narodnaya Volya) Party in 1879. The organisation went into decline after the killing of Tsar Alexander II in 1881, and Populism itself was increasingly superseded by Marxism as the hegemonic force on the left with the rise of the Russian Social Democratic Labour Party (RSDLP). But individual terrorist tactics reappeared with the creation of the Socialist Revolutionary Party in 1902, prompting Zasulich to write an article for Die neue Zeit, the theoretical organ of German Social Democracy, in which she both condemned the Neo-Populist tendency as deleterious to the rising labour movement and supported the organisational plans for the RSDLP sponsored by the Iskra group, developed at length by Lenin in his book What Is to Be Done?, published in March 1902. This article provides the background to Vera Zasulich’s article ‘The Terrorist Tendency in Russia’ (December 1902), setting it against the history of the Russian revolutionary movement from 1878 to 1902.