Recent work by Frigg et al. and Mayo-Wilson has called attention to a particular sort of error associated with attempts to model certain complex systems: structural modeling error (SME). The assessment of the degree of SME in a model presupposes agreement between modelers about the best way to individuate natural systems, an agreement which can be more problematic than it appears. This problem, which we dub "the system individuation problem," arises in many of the same contexts as SME, and the two often compound one another. This paper explores the common roots of the two problems in concerns about the precision of predictions generated by scientific models, and discusses how both concerns bear on the study of complex natural systems, particularly the global climate.
The article presents empirical confirmation of the black hole and white hole juxtapose theory. The author based the experiment on multi-mission, multi-spectral space telescope data, collected remotely through the NASA Data Challenge and the Harvard-Smithsonian Micro-Observatory. After the loss of the original manuscript, the author reformulated the mathematics during the research. The work developed a resonance observation technique that observed the white hole in the direction of the moon relative to the sun. The data reduction of the white hole and other observations provides an explanation for the undetected gravitons and for some empirical physical features of the white hole.
The article summarizes a software tool for astrophysical analysis of multi-wavelength space telescope data. It recaps the evidence analysis conducted on the Kerr-Newman black hole (KNBH). It was written before the article Research on the Kerr-Newman Black Hole in M82 Confirms Black Hole and White Hole Juxtapose, not long after the experiment. The analysis suggested that Hawking radiation is caused by the movement of the ergosurfaces of the BH and serves as the primary evidence for black hole and white hole juxtapose. A later data exploration was conducted with the radiation trails in the multi-wavelength data. The evidence produced corroborates Yale professor Priyamvada Natarajan's black hole seeds theory. It implies that the electromagnetic dynamics of fusion and fission temperature determine the pressure of surface tension on the macro particles, and that the mass density of the BH is determined by the electromagnetic tension between the outer and inner ergosurfaces. The ring singularity of the BH and the Penrose-Hawking singularity of the BH determine the spin and gravitational singularity of the BH. Inside the ergosurfaces, the backward inflow of the BH singularities sets the rate at which the vicinity feeds the BH. This makes the active galactic nucleus (AGN) a semi-closed system. The density pressure is released at a threshold. During the mass exchange observed via energy momentum when the AGN opens, the macroparticle composition of the BH changes with the galactic event. This drives the expansion rate of the galaxy and the contraction rate of the BH. AGN types and their relative motions determine the expansion rate of the cosmic universe in generalized quantitative terms. The article concludes that BH spin is caused by the asymmetric motions of the AGN in the BH system. A set of nuclear resonances caused by BH and white hole (WH) oscillation is processed with the same set of data.
In philosophical studies regarding mathematical models of dynamical systems, instability due to sensitive dependence on initial conditions, on the one hand, and instability due to sensitive dependence on model structure, on the other, have by now been extensively discussed. Yet there is a third kind of instability, thus far rather overlooked, that also poses a challenge for model predictions about dynamical systems: numerical instability, which arises from the numerical methods involving a discretization process that are required to solve the differential equations of dynamical systems on a computer. We argue that the criteria for numerical stability, as usually provided by numerical analysis textbooks, are insufficient, and, after mentioning the promising development of backward analysis, we discuss to what extent, in practice, numerical instability can be controlled or avoided.
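The kind of discretization-driven instability at issue can be seen in a few lines (a generic textbook illustration, not an example from the paper): the explicit Euler method applied to a stiff linear test equation decays or blows up depending solely on the step size chosen in the discretization.

```python
def forward_euler(f, y0, h, steps):
    """Integrate y' = f(y) with the explicit (forward) Euler scheme."""
    y = y0
    for _ in range(steps):
        y = y + h * f(y)
    return y

# Test equation y' = -50*y; the exact solution exp(-50*t) decays to zero.
f = lambda y: -50.0 * y

# h = 0.01: the amplification factor |1 - 50*h| = 0.5 < 1, so the
# numerical solution decays, as it should.
stable = forward_euler(f, 1.0, 0.01, 200)

# h = 0.05: |1 - 50*h| = 1.5 > 1, so the numerical solution blows up
# even though the true solution still decays monotonically.
unstable = forward_euler(f, 1.0, 0.05, 200)

print(abs(stable) < 1e-10)   # True
print(abs(unstable) > 1e10)  # True: numerical blow-up
```

Here the instability is a property of the discretization, not of the underlying differential equation, which is as stable as a dynamical system can be.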
I investigate the role of stability in cosmology through two episodes from the recent history of cosmology: Einstein's static universe and Eddington's demonstration of its instability, and the flatness problem of the hot big bang model and its claimed solution by inflationary theory. These episodes illustrate differing reactions to instability in cosmological models, both positive and negative. To provide some context for these reactions, I also situate them in relation to perspectives on stability from dynamical systems theory and its epistemology. This reveals, for example, that an insistence on stability is an extreme position in relation to the spectrum of physical systems exhibiting degrees of stability and fragility, one which has a pragmatic rationale but no deeper one.
The thesis of this paper is that panpsychist theory is very close to Jungian theory, especially in view of the quantum psychoid aspects of C. G. Jung and W. Pauli's theory: a psyche that touches matter, and matter with a "latent psyche". The two theories seem to describe the same reality, an animation of matter in a spiritual sense, as the Jungian Self seems to do at a higher level. Complexity theory, by contrast, appears to offer a description of reality that remains nomothetic.
Can stable regularities be explained without appealing to governing laws or any other modal notion? In this paper, I consider what I will call a 'Humean system', a generic dynamical system without guiding laws, and assess whether it could display stable regularities. First, I present what can be interpreted as an account of the rise of stable regularities, following Strevens, which has been applied to explain the patterns of complex systems (such as those from meteorology and statistical mechanics). Second, since this account presupposes that the underlying dynamics displays deterministic chaos, I assess whether it can be adapted to cases where the underlying dynamics is not chaotic but truly random, that is, cases where there is no dynamics guiding the time evolution of the system. If this is so, the resulting stable, apparently non-accidental regularities are the fruit of what can be called statistical necessity rather than of a primitive physical necessity.
Emergence is much discussed by both philosophers and scientists. But, as noted by Mitchell (2012), there is a significant gulf: philosophers and scientists talk past each other. We contend that this is because philosophers and scientists typically mean different things by emergence, leading us to distinguish being emergence and pattern emergence. While related to distinctions offered by others between, for example, strong/weak emergence or epistemic/ontological emergence (Clayton, 2004, pp. 9–11), we argue that the being vs. pattern distinction better captures what the two groups are addressing. In identifying pattern emergence as the central concern of scientists, however, we do not mean that pattern emergence is of no interest to philosophers. Rather, we argue that philosophers should attend to, and even contribute to, discussions of pattern emergence. But it is important that this discussion be distinguished from, not conflated with, discussions of being emergence. In the following section we explicate the notion of being emergence and show how it has been the focus of many philosophical discussions, historical and contemporary. In section 3 we turn to pattern emergence, briefly presenting a few of the ways it figures in the discussions of scientists (and of philosophers of science who contribute to these discussions). Finally, in sections 4 and 5, we consider the relevance of pattern emergence to several central topics in philosophy of biology: the emergence of complexity, of control, and of goal-directedness in biological systems.
The target paper of Schoeller, Perlovsky, and Arseniev is an essential and timely contribution to a current shift of focus in neuroscience aiming to merge neurophysiological, psychological, and physical principles in order to build the foundation for a physics of mind. Extending previous work of Perlovsky et al. and Badre, the authors of the target paper present interesting mathematical models of several basic principles of the physics of mind, such as perception and cognition, concepts and emotions, and instincts and learning. Their conceptualization helps to clarify the often-neglected distinction between conscious and unconscious aspects of mind, and further provides a clear description of the mental hierarchy, which extends from physical objects in the physical world to abstract ideas in the mental/subjective realm. While we agree that identification of a few fundamental principles is a first step toward developing the physics of the mind, and we concur with the selection of those principles in the target review paper, we think that the theory of the physics of mind would benefit greatly from also considering the most basic principles that are common to physics/matter/brain and mind/subjectivity/cognition. In this respect, such basic principles as time and space, as well as criticality, self-organization, and emergence, seem to be the most interesting.
The file on this site provides the slides for a lecture given in Hangzhou in May 2018; the lecture itself is available at the URL beginning 'sms' in the set of links provided in connection with this item.

It is commonly assumed that regular physics underpins biology. Here it is proposed, in a synthesis of ideas by various authors, that in reality structures and mechanisms of a biological character underpin the world studied by physicists, in principle supplying detail in the domain that according to regular physics is of an indeterminate character. In regular physics mathematical equations are primary, but this constraint leads to problems with reconciling theory and reality. Biology, on the other hand, typically does not characterise nature in quantitative terms, instead investigating in detail important complex interrelationships between parts, leading to an understanding of the systems concerned that is in some respects beyond that which prevails in regular physics. It makes contact with quantum physics in various ways: for example, both involve interactions between observer and observed, an insight that explains what is special about processes involving observation, justifying in the quantum physics context the replacement of the unphysical many-worlds picture by one involving collapse. The link with biology furthermore clarifies Wheeler's suggestion that a multiplicity of observations can lead to the 'fabrication of form', including the insight that this process depends on very specific 'structures with power' related to the 'semiotic scaffolding' of biosemiotics, the application of sign theory to biology.
The observer-observed 'circle' of Wheeler and Yardley is a special case of a more general phenomenon, oppositional dynamics, related to the 'intra-action' of Barad's Agential Realism. It involves cooperating systems such as mind and matter, abstract and concrete, or observer and observed, which preserve their identities while interacting with one another in such a way as to act as a unit. A third system may also be involved, the mediating system of Peirce linking the two together. Such a situation of changing connections and separations may plausibly lead in the future to an understanding of how complex systems are able to evolve to produce 'life, the universe and everything'.

(Added 1 July 2018) The general structure proposed here as an alternative to a mathematics-based physics can be usefully characterised by relating it to different disciplines and the specialised concepts utilised therein. In theoretical physics, the test for the correctness of a theory typically involves numerical predictions, corresponding to which theories are expressed in terms of equations, that is to say assertions that two quantities have identical values. Equations have a lesser significance in biology, which typically talks in terms of functional mechanisms, dependent for example on details of chemistry and on concepts such as genes, natural selection, signals, and geometrically or topologically motivated concepts such as the interconnections between systems and the unfolding of DNA. Biosemiotics adds to this the concept of signs and their interpretation, implying novel concepts such as semiotic scaffolding and the semiosphere, code duality, and an appreciation of the different types of signs, including symbols and their capacity for abstraction and use in language systems. Circular Theory, along with the ideas of Barad, adds to this picture considerations such as the idea of oppositional dynamics.
The proposals in this lecture can be regarded as the idea that concepts such as those deriving from biosemiotics have more general applicability than just conventional biology and may apply, in some circumstances, to nonlinear systems generally, including the domain new to science hypothesised here to underlie the phenomena of present-day physics.

The task then has to be to restore the mathematical aspect, presumed in this picture not to be fundamental as it is in conventional theory. Deacon has invoked a complex sequence of evolutionary steps to account for the emergence over time of human language systems; correspondingly, mathematical behaviour can be subsumed under the general evolutionary mechanisms of biosemiotics (cf. also the proposals of Davis and Hersh regarding the nature of mathematics), so that the mathematical behaviour of physical systems is consistent with the proposed scheme. In conclusion, it is suggested that theoretical physicists should cease expecting to find some universal mathematical 'theory of everything', and focus instead on understanding in more detail complex systems exhibiting behaviour of a biological character, extending existing understanding. This may in time provide a more fruitful understanding of the natural world than the regular approach does. The essential concepts have an observational basis in both biology and the little-known discipline of cymatics (concerned with the remarkable patterns that specific waveforms can give rise to), while computer simulations also offer promise in providing insight into the complex behaviours involved in the above proposals.

References:
Jesper Hoffmeyer, Semiotic Scaffolding of Living Systems. Commens, a Digital Companion to C. S. Peirce (on the Commens web site).
Terrence Deacon, The Symbolic Species, W. W. Norton & Co.
Karen Barad, Meeting the Universe Halfway: Quantum Physics and the Entanglement of Matter and Meaning, Duke University Press.
Philip Davis and Reuben Hersh, The Mathematical Experience, Penguin.
Ilexa Yardley, Circular Theory.
This paper deals with an extension of the Einstein field equations using the apparatus of a contemporary generalization of classical Lorentzian geometry, known in the literature as Colombeau distributional geometry. The regularization of singularities present in some solutions of the Einstein equations is an important part of this approach. Any singularities present in some solutions of the Einstein equations are recognized only in the sense of Colombeau generalized functions and not classically. In this paper an essentially new class of Colombeau solutions to the Einstein field equations is obtained. The vacuum energy density of a free scalar quantum field with a distributional background spacetime is also considered. It has been widely believed that, except in very extreme situations, the influence of gravity on quantum fields should amount to just small, sub-dominant contributions. Here we argue that this belief is false by showing that there exist well-behaved spacetime evolutions where the vacuum energy density of free quantum fields is forced, by the very same background distributional spacetime, such as distributional BHs, to become dominant over any classical energy density component. This semiclassical gravity effect finds its roots in the singular behavior of quantum fields on curved spacetimes. In particular, we obtain that the vacuum fluctuations have a singular behavior on the BH horizon.
The fault ride-through (FRT) capability and fault current issues are the main challenges in doubly fed induction generator (DFIG)-based wind turbines (WTs). Application of the bridge-type fault current limiter (BFCL) has been recognized as a promising solution to these challenges. This paper proposes a nonlinear sliding mode controller (SMC) for the BFCL to enhance the FRT performance of the DFIG-based WT. The controller performs robustly under unpredicted voltage sag levels and the system's nonlinear features. Theoretical discussion, the power circuit, and nonlinear control considerations of the SMC-based BFCL are presented, and its performance is then verified through time-domain simulations in the PSCAD/EMTDC environment. To reduce the chattering phenomenon and decrease the reaching time, the exponential reaching law is used in the SMC design. The SMC-based BFCL's performance is also compared with the conventional and PI controller-based BFCLs for both symmetrical and asymmetrical short-circuit faults. Simulation results reveal that the SMC-based BFCL provides better FRT enhancement than the conventional and PI controller-based BFCLs.
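As a rough, self-contained illustration of why the exponential reaching law shortens the reaching time (a toy sliding-surface simulation only; the paper's actual controller, power circuit, and PSCAD/EMTDC model are not reproduced here):

```python
def reaching_time(s0, eps, k, dt=1e-4, t_max=10.0, band=1e-3):
    """Time for the sliding variable s to reach a small band around zero
    under the reaching law  ds/dt = -eps*sgn(s) - k*s.
    k = 0 recovers the constant-rate law; k > 0 gives the exponential law."""
    s, t = s0, 0.0
    while abs(s) > band and t < t_max:
        sgn = 1.0 if s > 0 else -1.0
        s += dt * (-eps * sgn - k * s)
        t += dt
    return t

t_const = reaching_time(s0=2.0, eps=0.5, k=0.0)  # constant-rate law
t_exp = reaching_time(s0=2.0, eps=0.5, k=2.0)    # exponential reaching law

print(t_exp < t_const)  # True: the -k*s term speeds up the approach
```

The proportional term dominates while s is large and the switching term finishes the job near the surface, which is the intuition behind using this law to cut both reaching time and chattering.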
Models of the electric charge and magnetic moment are presented, based on the nonlinear response of the vacuum to applied electric and magnetic fields. The model of the electric charge contains one parameter, the radius of the charge, and predicts a single value of the electric charge for all elementary particles independently of the value of this radius. Different values of this parameter for the electron are discussed.
The big news about chaos is supposed to be that the smallest of changes in a system can result in very large differences in that system's behavior. The so-called butterfly effect has become one of the most popular images of chaos. The idea is that the flapping of a butterfly's wings in Argentina could cause a tornado in Texas three weeks later. By contrast, in an identical copy of the world sans the Argentinian butterfly, no such storm would have arisen in Texas. The mathematical version of this property is known as sensitive dependence. However, it turns out that sensitive dependence is somewhat old news, so some of the implications flowing from it are perhaps not such "big news" after all. Still, chaos studies have highlighted these implications in fresh ways and led to thinking about other implications as well.

In addition to exhibiting sensitive dependence, chaotic systems possess two other properties: they are deterministic and nonlinear (Smith 2007). This entry discusses systems exhibiting these three properties and what their philosophical implications might be for theories and theoretical understanding, confirmation, explanation, realism, determinism, free will and consciousness, and human and divine action.
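Sensitive dependence is easy to exhibit numerically. The sketch below uses the logistic map at parameter r = 4, a standard chaotic toy system (chosen here purely for illustration; it is not discussed in the entry's text):

```python
def trajectory(x0, n, r=4.0):
    """Iterate the logistic map x -> r*x*(1-x) for n steps."""
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = trajectory(0.2, 200)
b = trajectory(0.2 + 1e-10, 200)  # perturb the initial condition by 1e-10

# The orbits are indistinguishable at first, but the tiny difference is
# roughly doubled at each step and eventually becomes macroscopic.
print(abs(a[1] - b[1]) < 1e-9)                      # True
print(max(abs(x - y) for x, y in zip(a, b)) > 0.5)  # True
```

A perturbation of one part in ten billion, the numerical analogue of the butterfly's wings, ends up producing order-one differences in the state.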
This work builds on the Volterra series formalism presented in Dreisigmeyer and Young to model nonconservative systems. Here we treat Lagrangians and actions as ‘time dependent’ Volterra series. We present a new family of kernels to be used in these Volterra series that allow us to derive a single retarded equation of motion using a variational principle.
A dynamical system is called chaotic if small changes to its initial conditions can create large changes in its behavior. By analogy, we call a dynamical system structurally chaotic if small changes to the equations describing the evolution of the system produce large changes in its behavior. Although there are many definitions of “chaos,” there are few mathematically precise candidate definitions of “structural chaos.” I propose a definition, and I explain two new theorems that show that a set of models is structurally chaotic if it contains a chaotic function. I conclude by discussing the relationship between structural chaos and structural stability.
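A toy version of the contrast at issue (a standard logistic-map illustration, not the paper's definition or its theorems): here a small change to the equation itself, the parameter r, rather than to the initial condition, switches the long-run behavior from a stable 4-cycle to a chaotic orbit.

```python
def orbit_values(r, x0=0.5, transient=10000, sample=200):
    """Iterate the logistic map x -> r*x*(1-x), discard a transient,
    then return the set of (rounded) values the orbit visits."""
    x = x0
    for _ in range(transient):
        x = r * x * (1.0 - x)
    seen = set()
    for _ in range(sample):
        x = r * x * (1.0 - x)
        seen.add(round(x, 4))
    return seen

# r = 3.5: the orbit settles onto a stable period-4 cycle.
print(len(orbit_values(3.5)))        # 4

# r = 3.6: a small change to the map itself, and the orbit never settles.
print(len(orbit_values(3.6)) > 50)   # True
```

The initial condition is held fixed in both runs; only the governing equation is perturbed, which is the structural analogue of sensitive dependence.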
Climatology is a paradigmatic complex systems science. Understanding the global climate involves tackling problems in physics, chemistry, economics, and many other disciplines. I argue that complex systems like the global climate are characterized by certain dynamical features that explain how those systems change over time. A complex system's dynamics are shaped by the interaction of many different components operating at many different temporal and spatial scales. Examining the multidisciplinary and holistic methods of climatology can help us better understand the nature of complex systems in general.

Questions surrounding climate science can be divided into three rough categories: foundational, methodological, and evaluative questions. "How do we know that we can trust science?" is a paradigmatic foundational question (and a surprisingly difficult one to answer). Because the global climate is so complex, questions like "what makes a system complex?" also fall into this category. There are a number of existing definitions of 'complexity,' and while all of them capture some aspects of what makes intuitively complex systems distinctive, none is entirely satisfactory. Most existing accounts of complexity have been developed to work with information-theoretic objects (signals, for instance) rather than the physical and social systems studied by scientists.

Dynamical complexity, a concept articulated in detail in the first third of the dissertation, is designed to bridge the gap between the mathematics of contemporary complexity theory (in particular the formalism of "effective complexity" developed by Gell-Mann and Lloyd) and a more general account of the structure of science. Dynamical complexity provides a physical interpretation of the formal tools of mathematical complexity theory, and thus can be used as a framework for thinking about general problems in the philosophy of science, including theories, explanation, and lawhood.
Methodological questions include questions about how climate science constructs its models, on what basis we trust those models, and how we might improve those models. In order to answer questions about climate modeling, it's important to understand what climate models look like and how they are constructed. Climate model families are significantly more diverse than are the model families of most other sciences (even sciences that study other complex systems). Existing climate models range from basic models that can be solved on paper to staggeringly complicated models that can only be analyzed using the most advanced supercomputers in the world. I introduce some of the central concepts in climatology by demonstrating how one of the most basic climate models might be constructed. I begin with the assumption that the Earth is a simple featureless blackbody which receives energy from the sun and releases it into space, and show how to model that assumption formally. I then gradually add other factors (e.g. albedo and the greenhouse effect) to the model, and show how each addition brings the model's prediction closer to agreement with observation. After constructing this basic model, I describe the so-called "complexity hierarchy" of the rest of climate models, and argue that the sense of "complexity" used in the climate modeling community is related to dynamical complexity.

With a clear understanding of the basics of climate modeling in hand, I then argue that foundational issues discussed early in the dissertation suggest that computation plays an irrevocably central role in climate modeling. "Science by simulation" is essential given the complexity of the global climate, but features of the climate system--the presence of non-linearities, feedback loops, and chaotic dynamics--put principled limits on the effectiveness of computational models.
This tension is at the root of the staggering pluralism of the climate model hierarchy, and suggests that such pluralism is here to stay rather than an artifact of our ignorance. Rather than attempting to converge on a single "best fit" climate model, we ought to embrace the diversity of climate models, and view each as a specialized tool designed to predict and explain a rather narrow range of phenomena. Understanding the climate system as a whole requires examining a number of different models and correlating their outputs. This is the most significant methodological challenge of climatology.

Climatology's role in contemporary political discourse raises an unusually high number of evaluative questions for a physical science. The two leading approaches to crafting policy surrounding climate change center on mitigation (i.e. stopping the changes from occurring) and adaptation (making post hoc changes to ameliorate the harm caused by those changes). Crafting an effective socio-political response to the threat of anthropogenic climate change, however, requires us to integrate multiple perspectives and values: the proper response will be just as diverse and pluralistic as the climate models themselves, and will incorporate aspects of both approaches. I conclude by offering some concrete recommendations about how to integrate this value pluralism into our socio-political decision making framework.
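The basic model-building sequence described above (featureless blackbody, then albedo, then a greenhouse correction) can be sketched as a zero-dimensional energy-balance calculation. This is a generic textbook sketch rather than the dissertation's own code, and the effective emissivity below is tuned by hand to land near the observed mean surface temperature.

```python
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant at Earth's orbit, W m^-2

def equilibrium_temp(solar=S0, albedo=0.0, emissivity=1.0):
    """Temperature (K) balancing absorbed sunlight against emitted
    thermal radiation:  solar*(1-albedo)/4 = emissivity*SIGMA*T**4."""
    return ((solar * (1.0 - albedo)) / (4.0 * emissivity * SIGMA)) ** 0.25

print(round(equilibrium_temp()))            # 278: bare blackbody Earth
print(round(equilibrium_temp(albedo=0.3)))  # 255: reflection cools the model
# Lowering the effective emissivity crudely stands in for the greenhouse
# effect; 0.612 is tuned to reproduce the observed mean of about 288 K.
print(round(equilibrium_temp(albedo=0.3, emissivity=0.612)))  # 288
```

Each added factor moves the prediction in the direction the text describes: albedo cools the model below the bare-blackbody value, and the greenhouse correction warms it back toward observation.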
In this article I examine two mathematical definitions of observational equivalence, one proposed by Charlotte Werndl and based on manifest isomorphism, and the other based on Ornstein and Weiss’s ε-congruence. I argue, for two related reasons, that neither can function as a purely mathematical definition of observational equivalence. First, each definition permits of counterexamples; second, overcoming these counterexamples will introduce non-mathematical premises about the systems in question. Accordingly, the prospects for a broadly applicable and purely mathematical definition of observational equivalence are unpromising. Despite this critique, I suggest that Werndl’s proposals are valuable because they clarify the distinction between provable and unprovable elements in arguments for observational equivalence.
This paper briefly reviews a current trend in neuroscience that aims to combine neurophysiological and physical concepts in order to understand the emergence of the spatio-temporal patterns within brain activity by which the brain constructs knowledge from multiple streams of information. The authors further suggest that the meanings which are subjectively experienced as thoughts or perceptions can best be described objectively as created and carried by large fields of neural activity within the operational architectonics of brain functioning.
From his earliest work forward, phenomenologist Maurice Merleau-Ponty attempted to develop a new ontology of nature that would avoid the antinomies of realism and idealism by showing that nature has its own intrinsic sense which is prior to reflection. The key to this new ontology was the concept of form, which he appropriated from Gestalt psychology. However, Merleau-Ponty struggled to give a positive characterization of the phenomenon of form which would clarify its ontological status. Evan Thompson has recently taken up Merleau-Ponty’s ontology as the basis for a new, “enactive” approach to cognitive science, synthesizing it with concepts from dynamic systems theory and Francisco Varela’s theory of autopoiesis. However, Thompson does not quite succeed in resolving the ambiguities in Merleau-Ponty’s account of form. This article builds on an indication from Thompson in order to propose a new account of form as asymmetry, and of the genesis of form in nature as symmetry-breaking. These concepts help us to escape the antinomies of Modern thought by showing how nature is the autoproduction of a sense which can only be known by an embodied perceiver.
Individuals make decisions under uncertainty every day based on incomplete information concerning the potential outcome of the choice or chance levels. The choices individuals make often deviate from the rational or mathematically objective solution. Accordingly, the dynamics of human decision-making are difficult to capture using conventional, linear mathematical models. Here, we present data from a two-choice task with variable risk between sure loss and risky loss to illustrate how a simple nonlinear dynamical system can be employed to capture the dynamics of human decision-making under uncertainty (i.e., multi-stability, bifurcations). We test the feasibility of this model quantitatively and demonstrate how the model can account for up to 86% of the observed choice behavior. The implications of using dynamical models for explaining the nonlinear complexities of human decision-making are discussed, as well as the degree to which nonlinear dynamical systems theory might offer an alternative framework for understanding human decision-making processes.
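A minimal way to see what multi-stability and bifurcation mean dynamically is the pitchfork normal form dx/dt = a*x - x**3, where x can be read as a choice bias between two options. This is a hypothetical stand-in chosen for illustration; it is not the authors' fitted model.

```python
def settle(a, x0, dt=0.01, steps=20000):
    """Integrate dx/dt = a*x - x**3 to its attractor (explicit Euler)."""
    x = x0
    for _ in range(steps):
        x += dt * (a * x - x ** 3)
    return x

# a < 0: a single stable state at x = 0; the initial bias is forgotten.
print(abs(settle(-1.0, 0.6)) < 1e-6)    # True
print(abs(settle(-1.0, -0.6)) < 1e-6)   # True

# a > 0: the bifurcation at a = 0 has produced two coexisting stable
# states at +/- sqrt(a); the initial bias now selects the outcome.
print(round(settle(1.0, 0.6), 3))       # 1.0
print(round(settle(1.0, -0.6), 3))      # -1.0
```

Sweeping a across zero is the bifurcation; the coexistence of two attractors for a > 0 is the multi-stability the abstract refers to.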
Individuals make decisions under uncertainty every day. Decisions are based on incomplete information concerning the potential outcome or the predicted likelihood with which events occur. In addition, individuals’ choices often deviate from the rational or mathematically objective solution. Accordingly, the dynamics of human decision making are difficult to capture using conventional, linear mathematical models. Here, we present data from a 2-choice task with variable risk between sure loss and risky loss to illustrate how a simple nonlinear dynamical system can be employed to capture the dynamics of human decision making under uncertainty (i.e., multistability, bifurcations). We test the feasibility of this model quantitatively and demonstrate how the model can account for up to 86% of the observed choice behavior. The implications of using dynamical models for explaining the nonlinear complexities of human decision making are discussed as well as the degree to which the theory of nonlinear dynamical systems might offer an alternative framework for understanding human decision making processes.
The nonlinearity of a composite system, whereby certain of its features (including powers and behaviors) cannot be seen as linear or other broadly additive combinations of features of the system's composing entities, has been frequently seen as a mark of metaphysical emergence, coupling the dependence of a composite system on an underlying system of composing entities with the composite system's ontological autonomy from its underlying system. But why think that nonlinearity is a mark of emergence, and moreover, of metaphysical rather than merely epistemological emergence? Are there diverse ways in which nonlinearity might enter into an account of properly metaphysical emergence? And what are the prospects for there actually being phenomena that are metaphysically emergent in any available sense? Here I explore the mutual bearing of nonlinearity and metaphysical emergence, with an eye towards answering these and related questions.
This impressive volume of essays, with contributions from Herbert Dreyfus, Sean Kelly, Mike Wheeler, Dan Zahavi, and Shaun Gallagher, reflects an emerging trend in cognitive science, exploring a new approach informed by Heidegger's thinking on human existence.
A theoretical simulation investigates the cooperation of two insects who share a large number of maximally entangled EPR pairs to correlate their probabilistic actions. Specifically, two distant butterflies must find each other. Each butterfly moves in a chaotic pattern of short flights, guided only by the weak scent emanating from the other butterfly. The flight directions result from classical random choices. Each such decision of an individual is followed by the read-out of an internal quantum measurement on a spin, the result of which decides whether the individual shall make a short flight or wait. These assumptions reflect the scarce environmental information and the small brains' limited computational capacity. The quantum model is contrasted with two other cases. In the classical case, the coherence between the spin pairs is lost and the two butterflies act independently. In the super-classical case, the two butterflies read off their decisions of whether to fly or to wait from the same internal list, so that they always take the same decision, as if they were super-correlated. The numerical simulation reveals that the quantum-entangled butterflies find each other with a much shorter total flight path than in either classical model.
A (to our knowledge) novel Generalized Nonlinear Schrödinger equation, based on modifications of Nottale-Cresson's fractal-scale calculus and resulting from the noncommutativity of the phase-space coordinates, is explicitly derived. The modifications to the ground-state energy of a harmonic oscillator yield the observed value of the vacuum energy density. In the concluding remarks we discuss how nonlinear and nonlocal QM wave equations arise naturally from this fractal-scale calculus formalism, which may have a key role in the final formulation of Quantum Gravity.
We examine a case in which non-computable behavior in a model is revealed by computer simulation. This is possible due to differing notions of computability for sets in a continuous space. The argument originally given for the validity of the simulation involves a simpler simulation of the simulation, still further simulations thereof, and a universality conjecture. There are difficulties with that argument, but there are other, heuristic arguments supporting the qualitative results. It is urged, using this example, that absolute validation, while highly desirable, is overvalued. Simulations also provide valuable insights that we cannot yet (if ever) prove.
In 1900, the physicist Henri Bénard exhibited the spontaneous formation of cells in a layer of liquid heated from below. Six or seven decades later, drastic reinterpretations of this experiment formed an important component of ‘chaos theory’. This paper therefore is an attempt at writing the history of this experiment, its long neglect and its rediscovery. It examines Bénard’s experiments from three different perspectives. First, his results are viewed in the light of the relation between experimental and mathematical approaches in fluid mechanics, leading to a re-examination of the long-term reception of Bénard’s results among fluid dynamicists up to the chaos craze, whereby the traditional emphasis placed on mathematical physics is counterbalanced by greater attention to experimental approaches. Second, we focus on Bénard’s own way of using his results as analogies that could help grasp something about the reason why inorganic matter may structure itself in ways reminiscent of living forms. This is shown to resonate strongly with Prigogine’s work in the 1960s and 1970s. Third, Bénard’s adoption of the cinematograph as his preferred experimental instrument is interpreted as having reinforced his long-misunderstood belief that he had exhibited a form of self-organization essential to the understanding of life. Keywords: Bénard cells; Henri Poincaré; Chaos; Fluid mechanics; Cinematograph; Self-organization.
Recent developments in nonlinear dynamics have found wide application in many areas of science from physics to neuroscience. Nonlinear phenomena such as feedback loops, inter-level relations, wholes constraining and modifying the behavior of their parts, and memory effects are interesting candidates for emergence and downward causation. Rayleigh–Bénard convection is an example of a nonlinear system that, I suggest, yields important insights for metaphysics and philosophy of science. In this paper I propose convection as a model for downward causation in classical mechanics, far more robust and less speculative than the examples typically provided in the philosophy of mind literature. Although the physics of Rayleigh–Bénard convection is quite complicated, this model provides a much more realistic and concrete example for examining various assumptions and arguments found in emergence and philosophy of mind debates. After reviewing some key concepts of nonlinear dynamics, complex systems and the basic physics of Rayleigh–Bénard convection, I begin that examination here by (1) assessing a recently proposed definition for emergence and downward causation, (2) discussing some typical objections to downward causation and (3) comparing this model with Sperry’s examples.
The discovery of sensitive dependence on initial conditions (SDIC) in nonlinear models runs counter to the textbook vision of classical mechanics, a vision guided by an almost exclusive focus on linear systems. It is therefore important to clearly distinguish between linear and nonlinear systems and to establish some basic terminology (§I). The notions of SDIC and chaos also need clarification, since they play crucial roles in sensitive dependence (SD) arguments. This will require some discussion of Lyapunov exponents as well as the relationship between nonlinear dynamics and chaos (§II and Appendix). For the sake of concreteness, it will also be useful to focus on the Laplacean vision for classical particle mechanics (e.g., Bishop 2002b, 2003 and 2005a), particularly the crucial notion of unique evolution (§III). The SD argument can then be stated in a clear form and its defenses, criticisms and limitations assessed in the more general context of nonlinear dynamics (§IV). Concluding remarks follow (§V).
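As a concrete illustration of the Lyapunov exponents this abstract invokes (a standard textbook computation, not taken from the paper; the function name and parameter choices are my own), the largest Lyapunov exponent of the logistic map x(n+1) = r·x(n)·(1 − x(n)) at r = 4 can be estimated by averaging log|f′(x)| = log|r(1 − 2x)| along a trajectory. The exact value at r = 4 is ln 2; a positive exponent is the quantitative signature of SDIC.

```python
import math

def logistic_lyapunov(r=4.0, x0=0.2, n=100_000, burn_in=1_000):
    """Estimate the Lyapunov exponent of the logistic map by
    averaging log|f'(x)| = log|r(1 - 2x)| over a long trajectory."""
    x = x0
    for _ in range(burn_in):              # discard the transient
        x = r * x * (1.0 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1.0 - x)
        d = abs(r * (1.0 - 2.0 * x))
        if d > 0.0:                       # guard: x = 0.5 exactly gives f'(x) = 0
            total += math.log(d)
    return total / n

print(logistic_lyapunov())  # close to ln 2 ~ 0.693, so nearby orbits diverge exponentially
```

Nothing in this sketch depends on the logistic map in particular; the same time-average recipe applies to any one-dimensional map with a computable derivative, which is why Lyapunov exponents serve as a general diagnostic in the SD arguments the paper assesses.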
For Merleau-Ponty, consciousness in skillful coping is a matter of a prereflective ‘I can’ and not an explicit ‘I think that.’ The body unifies many domain-specific capacities. There exists a direct link between the perceived possibilities for action in the situation (‘affordances’) and the organism’s capacities. From Merleau-Ponty’s descriptions it is clear that in a flow of skillful actions, the leading ‘I can’ may change from moment to moment without explicit deliberation. How these transitions occur, however, is less clear. Given that Merleau-Ponty suggested that a better understanding of the self-organization of brain and behavior is important, I will re-read his descriptions of skillful coping in the light of recent ideas on neurodynamics. Affective processes play a crucial role in evaluating the motivational significance of objects and contribute to the individual’s prereflective responsiveness to relevant affordances.
Husserl is well known for his critique of the “mathematizing tendencies” of modern science, and is particularly emphatic that mathematics and phenomenology are distinct and in some sense incompatible. But Husserl himself uses mathematical methods in phenomenology. In the first half of the paper I give a detailed analysis of this tension, showing how those Husserlian doctrines which seem to speak against application of mathematics to phenomenology do not in fact do so. In the second half of the paper I focus on a particular example of Husserl’s “mathematized phenomenology”: his use of concepts from what is today called dynamical systems theory.