Adam Smith said that ‘the propensity to truck, barter and exchange one thing for another is common to all men, and to be found in no other race of animals.’ Smith addressed the mark of the man economical, and there is no denying that this is the peculiar way he acts: clearly, to truck, barter and exchange is to act in a certain way. Austrian economics adopts this way of looking at the realm of economics. It prides itself on being a theory of human action. This claim seems ill-founded as long as so important a contribution as Ludwig von Mises’s praxeology remains insufficiently understood. In this paper, I address Barry Smith’s charge that in praxeology ‘other core notions, in addition to the concept of action, have been smuggled in’ and that the theory is therefore not purely analytic. I offer logical proofs of two cornerstone theorems of praxeology, the uneasiness theorem and the scarcity theorem, and thus provide vindication. The findings also support Mises’s controversial claim that economics is a priori founded in action theory. Thus, Carl Menger’s dream of laying foundations for economics and the other social sciences may have come true in the guise of praxeology.
The theory of the just price is commonly assumed to have three sources: the political philosophy of Greek antiquity, the scholastic ethics of the High Middle Ages, and the Roman law of obligations of late antiquity. While closer inspection confirms that this holds for the first two worlds of thought, the latter assumption seems ultimately unfounded. The paper claims that the evidence commonly presented on behalf of that assumption – two rescripts attributed to the Roman emperor Diocletian, namely Codex Iustinianus 4.44.2 and 4.44.8 – ultimately points in another direction. Offering both an analysis and an alternative reading of the rescripts, an integrated interpretation is given that reconciles them with the “liberalistic spirit of Roman law”. It is also explained why, from the point of view of legal and social philosophy, the fact that Roman law refrains from introducing a moral aspect into the institution of price fixing speaks in favour of Roman law rather than against it. [In German.]
Venkataraman’s essay in this volume argues that the “fourth” force of control of the modern corporation, the entrepreneurial discovery process, can limit the ability of firms to exploit stakeholders. In this essay I explicitly examine the role of time in the entrepreneurial discovery process. First, the role of time in the individual stakeholder’s decisions is examined. Second, at an organizational level, I examine some historical evidence in order to empirically consider how swiftly the discovery process may work. Implications for both theory and empirical testing are discussed.
The arguments for redistribution of wealth, and for prohibiting certain transactions such as price-gouging, are both based on mistaken conceptions of exchange. This paper proposes a neologism, “euvoluntary” exchange, meaning both that the exchange is truly voluntary and that it benefits both parties to the transaction. The argument has two parts: First, all euvoluntary exchanges should be permitted, and there is no justification for redistribution of wealth if disparities result only from euvoluntary exchanges. Second, even exchanges that are not euvoluntary should generally be permitted, because access to market exchange may be the only means by which people in desperate circumstances can improve their position.
Arising from a graduate course taught to math and engineering students, this text provides a systematic grounding in the theory of Hamiltonian systems, as well as introducing the theory of integrals and reduction. A number of other topics are covered too.
What are the processes, from conception to adulthood, that enable a single cell to grow into a sentient adult? Neuroconstructivism is a pioneering two-volume work that sets out a whole new framework for considering the complex topic of development, integrating data from cognitive studies, computational work, and neuroimaging.
Management involves change. The aim of this paper is to introduce a threefold classification of change with the purpose of making clear how the third type, creational change, is distinctive compared to the other two types. Four types of management situation are introduced, based on the type of change involved in the managed domain and in the management system. The role of creational change in management is discussed and a number of guidelines or suggestions relevant to this sort of management are outlined. One feature of the notion of creational change is the conjecture that such change is not amenable to scientific investigation and understanding. Once creational change has produced whatever it does produce, the product may be amenable to scientific investigation and understanding, but the actual unique and open process of its production will not be. One of the aims of this paper is to heighten our general awareness of creational change as different from other sorts of change.
What makes us conscious? Many theories that attempt to answer this question have appeared recently in the context of widespread interest in consciousness in the cognitive neurosciences. Most of these proposals are formulated in terms of the information processing conducted by the brain. In this overview, we survey and contrast these models. We first delineate several notions of consciousness, addressing what it is that the various models are attempting to explain. Next, we describe a conceptual landscape that addresses how the theories attempt to explain consciousness. We then situate each of several representative models in this landscape and indicate which aspect of consciousness they try to explain. We conclude that the search for the neural correlates of consciousness should be usefully complemented by a search for the computational correlates of consciousness.
We address two points in this commentary. First, we question the extent to which O'Brien & Opie have established that the classical approach is unable to support a viable vehicle theory of consciousness. Second, assuming that connectionism does have the resources to support a vehicle theory, we explore how the activity of the units of a PDP network might sum together to form phenomenal experience (PE).
In the multidisciplinary field of developmental cognitive neuroscience, statistical associations between levels of description play an increasingly important role. One example of such associations is the observation of correlations between relatively common gene variants and individual differences in behavior. It is perhaps surprising that such associations can be detected despite the remoteness of these levels of description, and the fact that behavior is the outcome of an extended developmental process involving interaction of the whole organism with a variable environment. Given that they have been detected, how do such associations inform cognitive-level theories? To investigate this question, we employed a multiscale computational model of development, using a sample domain drawn from the field of language acquisition. The model comprised an artificial neural network model of past-tense acquisition trained using the backpropagation learning algorithm, extended to incorporate population modeling and genetic algorithms. It included five levels of description—four internal: genetic, network, neurocomputation, behavior; and one external: environment. Since the mechanistic assumptions of the model were known and its operation was relatively transparent, we could evaluate whether cross-level associations gave an accurate picture of causal processes. We established that associations could be detected between artificial genes and behavioral variation, even under polygenic assumptions of a many-to-one relationship between genes and neurocomputational parameters, and when an experience-dependent developmental process interceded between the action of genes and the emergence of behavior. We evaluated these associations with respect to their specificity, to their developmental stability, and to their replicability, as well as considering issues of missing heritability and gene–environment interactions.
We argue that gene–behavior associations can inform cognitive theory with respect to effect size, specificity, and timing. The model demonstrates a means by which researchers can undertake multiscale modeling with respect to cognition and develop highly specific and complex hypotheses across multiple levels of description.
In this response, we consider four main issues arising from the commentaries to the target article. These include further details of the theory of interactive specialization, the relationship between neuroconstructivism and selectionism, the implications of neuroconstructivism for the notion of representation, and the role of genetics in theories of development. We conclude by stressing the importance of multidisciplinary approaches in the future study of cognitive development and by identifying the directions in which neuroconstructivism can expand in the twenty-first century.
We argue that there are no such things as literal categories in human cognition. Instead, we argue that there are merely temporary coalescences of dimensions of similarity, which are brought together by context in order to create the similarity structure in mental representations appropriate for the task at hand. Fodor contends that context‐sensitive cognition cannot be realised by current computational theories of mind. We address this challenge by describing a simple computational implementation that exhibits internal knowledge representations whose similarity structure alters fluidly depending on context. We explicate the processing properties that support this function and illustrate with two more complex models, one applied to the development of semantic knowledge, the second to the processing of simple metaphorical comparisons. The models firstly demonstrate how phenomena that seem problematic for literal categorisation resolve to particular cases of the contextual modulation of mental representations; and secondly prompt a new perspective on the relation between language and thought: language affords the strategic control of context on semantic knowledge, allowing information to be brought to bear in a given situation that might otherwise not be available to influence processing. This may explain one way in which human thought is creative, and distinctive from animal cognition.
This is the first book to address philosophically the moral and political underpinnings of terrorism and anti-terrorism. It brings together authors with different attitudes toward, and original perspectives on, the ethical and practical justifications for terrorism.
Neuroconstructivism: How the Brain Constructs Cognition proposes a unifying framework for the study of cognitive development that brings together (1) constructivism (which views development as the progressive elaboration of increasingly complex structures), (2) cognitive neuroscience (which aims to understand the neural mechanisms underlying behavior), and (3) computational modeling (which proposes formal and explicit specifications of information processing). The guiding principle of our approach is context dependence, within and (in contrast to Marr) between levels of organization. We propose that three mechanisms guide the emergence of representations: competition, cooperation, and chronotopy, which themselves allow for two central processes: proactivity and progressive specialization. We suggest that the main outcome of development is partial representations, distributed across distinct functional circuits. This framework is derived by examining development at the level of single neurons, brain systems, and whole organisms. We use the terms encellment, embrainment, and embodiment to describe the higher-level contextual influences that act at each of these levels of organization. To illustrate these mechanisms in operation we provide case studies in early visual perception, infant habituation, phonological development, and object representations in infancy. Three further case studies are concerned with interactions between levels of explanation: social development, atypical development and, within that, developmental dyslexia. We conclude that cognitive development arises from a dynamic, contextual change in embodied neural structures leading to partial representations across multiple brain regions and timescales, in response to a proactively specified physical and social environment.
In this article, I offer a proposal to clarify what I believe is the proper relation between value maximization and stakeholder theory, which I call enlightened value maximization. Enlightened value maximization utilizes much of the structure of stakeholder theory but accepts maximization of the long-run value of the firm as the criterion for making the requisite tradeoffs among its stakeholders, and specifies long-term value maximization or value seeking as the firm’s objective. This proposal therefore solves the problems that arise from the multiple objectives that accompany traditional stakeholder theory. I also discuss the Balanced Scorecard, which is the managerial equivalent of stakeholder theory, explaining how this theory is flawed because it presents managers with a scorecard that gives no score—that is, no single-valued measure of how they have performed. Thus managers evaluated with such a system (which can easily have two dozen measures and provides no information on the tradeoffs between them) have no way to make principled or purposeful decisions. The solution is to define a true (single-dimensional) score for measuring performance for the organization or division, one that must be consistent with the organization’s strategy; as long as the score is defined properly (and for lower levels in the organization it will generally not be value), this will enhance managers’ contribution to the firm.
Full-stack seismic interpretation continues to be the primary means of subsurface interpretation. However, the underlying impact of amplitude variation with offset (AVO) is effectively ignored or overlooked during the full-stack interpretation process. Recent advances in well-logging and rock physics techniques highlight the fact that AVO is a useful tool not only for the detection of fluid anomalies, but also for the detection and characterization of lithology. We provide an overview of some of the key steps in the rock physics assessment of well logs and seismic data, and highlight the potential to move toward a new convention of interpretation on so-called lithology stacks. Lithology stacks may come in a variety of forms but should form the focus of interpretation efforts in the early part of the exploration and appraisal cycle. Several case studies were used to highlight that subtle fluid effects can only be extracted from the seismic data after careful assessment of the lithology response. These case studies cover a wide geography and variable geology and demonstrate that the techniques we tested are transferable and applicable across many different oil and gas provinces. The use of lithology stacks has many benefits. They allow interpretation on a single stack rather than many different offset or angle stacks, and they provide a robust, objective framework for lithostratigraphic interpretation that can be calibrated to offset wells when available. They are conceptually simple, repeatable, and transferable, allowing close cooperation across the different subsurface disciplines.
Here, we argue that any neurobiological theory based on an experience/function division cannot be empirically confirmed or falsified and is thus outside the scope of science. A ‘perfect experiment’ illustrates this point, highlighting the unbreachable boundaries of the scientific study of consciousness. We describe a more nuanced notion of cognitive access that captures personal experience without positing the existence of inaccessible conscious states. Finally, we discuss the criteria necessary for forming and testing a falsifiable theory of consciousness.
What are the processes, from conception to adulthood, that enable a single cell to grow into a sentient adult? The processes that occur along the way are so complex that any attempt to understand development necessitates a multi-disciplinary approach, integrating data from cognitive studies, computational work, and neuroimaging - an approach till now seldom taken in the study of child development. Neuroconstructivism is a major new two-volume publication that seeks to redress this balance, presenting an integrative new framework for considering development. In the first volume, the authors review up-to-date findings from neurobiology, brain imaging, child development, computer and robotic modelling to consider why children's thinking develops the way it does. They propose a new synthesis of development that is based on five key principles found to operate at many levels of description. They use these principles to explain what causes a number of key developmental phenomena, including infants' interacting with objects, early social cognitive interactions, and the causes of dyslexia. The "neuroconstructivist" framework also shows how developmental disorders do not arise from selective damage to the normal cognitive system, but instead arise from developmental processes that operate under atypical constraints. How these principles work is illustrated in several case studies ranging from perceptual to social and reading development. Finally, the authors use neuroimaging, behavioural analyses, computational simulations and robotic models to provide a way of understanding the mechanisms and processes that cause development to occur.
Philosophical naturalism, according to which philosophy is continuous with the natural sciences, has dominated the Western academy for well over a century, but Michael Rea claims that it is without rational foundation. Rea argues compellingly to the surprising conclusion that naturalists are committed to rejecting realism about material objects, materialism, and perhaps realism about other minds.
Researchers misunderstand their role in creating ethical problems when they allow dogmas to purportedly divorce scientists and scientific practices from the values that they embody. Cortina, Edwards, and Powell help us clarify and further develop our position by responding to our critique of, and alternatives to, this misleading separation. In this rebuttal, we explore how the desire to achieve the separation of facts and values is unscientific on the very terms endorsed by its advocates—this separation is refuted by empirical observation. We show that positivists like Cortina and Edwards offer no rigorous theoretical or empirical justifications to substantiate their claims, let alone critique ours. Following Powell, we point to how classical pragmatism understands ‘purpose’ in scientific pursuits while also providing an alternative to the dogmas of positivism and related philosophical positions. In place of dogmatic, unscientific cries about an abstract and therefore always-unobservable ‘reality,’ we invite all organizational scholars to join us in shifting the discussion about quantitative research towards empirically grounded scientific inquiry. This makes the ethics of actual people and their practices central to quantitative research, including the thoughts, discourses, and behaviors of researchers who are always in particular places doing particular things. We propose that quantitative researchers can thus start to think about their research practices as a kind of work, rather than having the status of a kind of dogma. We conclude with some implications that this has for future research and education, including the relevance of research and research methods.
My goal in this paper is to provide characterizations of matter, form and constituency in a way that avoids what I take to be the three main drawbacks of other hylomorphic theories: (i) commitment to the universal-particular distinction; (ii) commitment to a primitive or problematic notion of inherence or constituency; (iii) inability to identify viable candidates for matter and form in nature, or to characterize them in terms of primitives widely regarded to be intelligible.
Although our subjective impression is of a richly detailed visual world, numerous empirical results suggest that the amount of visual information observers can perceive and remember at any given moment is limited. How can our subjective impressions be reconciled with these objective observations? Here, we answer this question by arguing that, although we see more than the handful of objects claimed by prominent models of visual attention and working memory, we still see far less than we think we do. Taken together, we argue that these considerations resolve the apparent conflict between our subjective impressions and empirical data on visual capacity, while also illuminating the nature of the representations underlying perceptual experience.
This paper defends Mereological Universalism (the thesis that, for any set S of disjoint objects, there is an object that the members of S compose). Universalism is unpalatable to many philosophers because it entails that if there are such things as my left tennis shoe, W. V. Quine, and the Taj Mahal, then there is another object that those three things compose. This paper presents and criticizes Peter van Inwagen's argument against Universalism and then presents a new argument in favor of Universalism. It turns out that the most reasonable way to resist the argument for Universalism is to deny the existence of artifacts; thus, if we believe in artifacts, we have no real choice other than to embrace Universalism.