The Simulation Hypothesis proposes that all of reality is in fact an artificial simulation, analogous to a computer simulation. Outlined here is a method for programming relativistic mass, space and time at the Planck level, as applicable to a Planck-level Universe-as-a-Simulation Hypothesis. For the virtual universe the model uses a 4-axis hyper-sphere that expands in incremental steps (the simulation clock-rate). Virtual particles that oscillate between an electric wave-state and a mass point-state are mapped within this hyper-sphere, the oscillation driven by this expansion. Particles are assigned an N-S axis which determines the direction in which they are pulled along by the expansion; independent particle motion may thus be dispensed with. Only in the mass point-state do particles have fixed hyper-sphere co-ordinates. The rate of expansion translates to the speed of light, and so in terms of the hyper-sphere co-ordinates all particles (and objects) travel at the speed of light; time (as the clock-rate) and velocity (as the rate of expansion) are therefore constant. Photons, however, as the means of information exchange, are restricted to lateral movement across the hyper-sphere, thus giving the appearance of a 3-D space. Lorentz formulas are used to translate between this 3-D space and the hyper-sphere co-ordinates, relativity resembling the mathematics of perspective.
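As a hedged illustration of this "mathematics of perspective" claim (a sketch of mine, not code from the paper; the function names are invented): if every particle moves at exactly $c$ in hyper-sphere co-ordinates, then an observed lateral 3-D velocity $v$ leaves a radial (expansion) component $v_r$ with $v_r^2 + v^2 = c^2$, so the relative clock rate $v_r/c$ reproduces the Lorentz factor $\sqrt{1 - v^2/c^2}$.

```python
import math

C = 1.0  # speed of light in natural units: the fixed hyper-sphere expansion rate

def radial_component(v_lateral):
    """Radial velocity along the expansion axis for a particle whose observed
    lateral 3-D velocity is v_lateral, given v_r^2 + v_lat^2 = c^2."""
    return math.sqrt(C**2 - v_lateral**2)

def clock_rate(v_lateral):
    """Relative tick rate of a moving particle's clock; equals the standard
    Lorentz time-dilation factor sqrt(1 - v^2/c^2)."""
    return radial_component(v_lateral) / C

# A particle moving laterally at 0.6c ticks at 0.8 of the simulation clock-rate.
print(clock_rate(0.6))  # 0.8
```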
Defined are gravitational formulas in terms of Planck units and units of $\hbar c$. Mass is not assigned as a constant property but is instead treated as a discrete event defined by units of Planck mass, with gravity as an interaction between these units, the gravitational orbit as the sum of these mass-mass interactions, and the gravitational coupling constant as a measure of the frequency of these interactions rather than the magnitude of the gravitational force itself. Each particle that is in the mass-state (defined by a unit of Planck mass) per unit of Planck time is directly linked to every other particle also in the mass-state by a discrete unit of $m_P v^2 r = \hbar c$; the velocity of a gravitational orbit is summed from these individual $v^2$. As this approach presumes a digital time, it is suitable for use in programming Simulation Hypothesis models. As this link is responsible for the particle-particle interaction, it is analogous to the graviton. Orbital angular momentum of the planetary orbits derives from the sum of the planet-sun particle-particle orbital angular momentum irrespective of the angular momentum of the sun itself, and the rotational angular momentum of a planet includes particle-particle rotational angular momentum.
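One consistency check worth making explicit (my sketch, not the paper's code): since $m_P^2 = \hbar c / G$, each discrete link $m_P v^2 r = \hbar c$ contributes $v_i^2 = \hbar c/(m_P r) = G m_P / r$, so summing over the $N = M/m_P$ Planck-mass units of a central body recovers the Newtonian orbital speed $v = \sqrt{GM/r}$.

```python
import math

G = 6.674e-11       # m^3 kg^-1 s^-2
HBAR = 1.0546e-34   # J s
C = 2.9979e8        # m/s
M_P = math.sqrt(HBAR * C / G)  # Planck mass, ~2.18e-8 kg

def link_v2(r):
    """v^2 from one discrete unit of m_P * v^2 * r = hbar*c at radius r;
    equal to G * m_P / r because hbar*c = G * m_P^2."""
    return HBAR * C / (M_P * r)

def orbital_speed(m_central, r):
    """Sum the per-link v^2 over the N = m_central / m_P units in the
    mass-state, then take the square root."""
    n_links = m_central / M_P
    return math.sqrt(n_links * link_v2(r))

# Earth's orbit around the sun: ~2.98e4 m/s
print(orbital_speed(1.989e30, 1.496e11))
```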
Outlined here is a simulation-hypothesis approach that uses an expanding 4-axis hyper-sphere (the simulation clock-rate measured in units of Planck time) and mathematical particles that oscillate between an electric wave-state and a mass point-state (a unit of Planck mass per unit of Planck time). Particles are assigned a spin axis which determines the direction in which they are pulled by this (hyper-sphere pilot wave) expansion; thus all particles travel at, and only at, the velocity of expansion (the origin of $c$), though only the particle point-state has definable co-ordinates within the hyper-sphere. Photons are the mechanism of information exchange; as they lack a mass-state they can only travel laterally (in hyper-sphere co-ordinate terms) between particles, and so this hyper-sphere expansion cannot be directly observed. Relativity then becomes the mathematics of perspective, translating between the absolute (hyper-sphere) and relative-motion (3-D space) co-ordinate systems. A discrete 'pixel' lattice geometry is assigned as the gravitational space. Units of $\hbar c$ 'physically' link particles into orbital pairs. As these are direct particle-to-particle links, a gravitational force between macro objects is not required, the gravitational orbit being the sum of these individual orbiting pairs. A 14.6 billion year old hyper-sphere (the sum of Planck black-hole units) has parameters similar to the cosmic microwave background. The Casimir force is a measure of the background radiation density.
Both patched versions of the Bostrom/Kulczycki simulation argument contain serious objective errors, discovered while attempting to formalize them in predicate logic. The English glosses of both versions involve badly misleading meanings of vague magnitude terms, from which their apparent impressiveness benefits. We fix the errors, prove optimal versions of the arguments, and argue that both are much less impressive than they originally appeared. Finally, we provide a guide for readers to evaluate the simulation argument for themselves, using well-justified settings of the argument parameters that have simple, accurate statements in English, which are easier to understand and critique than the statements in the original paper.
Forty-two years ago, Capra published “The Tao of Physics” (Capra, 1975). In this book (page 17) he writes: “The exploration of the atomic and subatomic world in the twentieth century has … necessitated a radical revision of many of our basic concepts” and that, unlike ‘classical’ physics, sub-atomic and quantum “modern physics” shows resonances with Eastern thought and “leads us to a view of the world which is very similar to the views held by mystics of all ages and traditions.” This article stresses an analogous situation in biology with respect to a new theoretical approach for studying living systems, Integral Biomathics (IB), which also exhibits some resonances with Eastern thought. Building on earlier research in cybernetics and theoretical biology, IB has been developed since 2011 by over 100 scientists from a number of disciplines who have been exploring a substantial set of theoretical frameworks. From that effort, the need was identified for a robust core model utilizing advanced mathematics and computation adequate for understanding the behavior of organisms as dynamic wholes. To this end, the authors of this article have proposed WLIMES (Ehresmann and Simeonov, 2012), a formal theory for modeling living systems integrating both the Memory Evolutive Systems (Ehresmann and Vanbremeersch, 2007) and the Wandering Logic Intelligence (Simeonov, 2002b). Its principles will be recalled here with respect to their resonances with Eastern thought.
Starting in the 1950s, computer programs for simulating cognitive processes and intelligent behaviour were the hallmark of Good Old-Fashioned Artificial Intelligence and ‘cognitivist’ cognitive science. This article examines a somewhat neglected case of simulation pursued by one of the founding fathers of simulation methodology, Herbert A. Simon. In the 1970s and 1980s, Simon had repeated contacts with Marxist countries and scientists, in the context of which he advanced the idea that cognitivism could be used as a framework for simulating dialectical materialism. Simon's idea was, in particular, to represent dialectical processes through a ‘symbolic’ version of dialectical logic. This article explores the context of Simon's interaction with Marxist countries—China and the USSR—and also assesses the outcome of the simulation. The difficulty with simulating distinctive features of dialectical materialism is read in light of the underlying assumptions of cognitivism and, ultimately, in light of the attempt to tame a rival world view.
We used agent-based modelling to highlight the advantages and disadvantages of several management styles in biology, ranging from centralized to egalitarian ones. In egalitarian groups, all team members are connected with each other, while in centralized ones, they are only connected with the principal investigator. Our model incorporated time constraints, which negatively influenced weakly connected groups such as centralized ones. Moreover, our results show that egalitarian groups outperform others if the questions addressed are relatively simple or when communication among agents is limited. Complex epistemic spaces are explored best by centralized groups. They outperform other team structures because the individual members can develop their own ideas with less interference from the opinions of others. The optimal ratio between time spent on experimentation and dissemination varies between different organizational structures. Furthermore, if evidence is shared only after a relevant degree of certainty is reached, all investigated groups profit epistemically. We discovered that the introduction of seminars to the model changes the epistemic performance in favour of weakly connected teams. Finally, the abilities of the principal investigator do not seem to outweigh cognitive diversity, as group performances were not strongly influenced by the increase of her abilities.
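A minimal sketch of the two extreme team structures (illustrative only; the published model's update rules, time constraints and seminar mechanism are not reproduced here, and all names and parameters are invented):

```python
import random
import networkx as nx

def make_team(n, style):
    """Egalitarian: every member connected to every other.
    Centralized: members connected only to the principal investigator (node 0)."""
    return nx.complete_graph(n) if style == "egalitarian" else nx.star_graph(n - 1)

def step(team, beliefs, share_prob=0.5):
    """One round: each agent gathers noisy private evidence ('experimentation'),
    then with some probability averages beliefs with a random neighbour
    ('dissemination')."""
    for agent in team.nodes:
        beliefs[agent] += random.gauss(0.01, 0.05)  # hypothetical evidence signal
        if random.random() < share_prob:
            nb = random.choice(list(team.neighbors(agent)))
            beliefs[agent] = beliefs[nb] = (beliefs[agent] + beliefs[nb]) / 2

team = make_team(10, "centralized")
beliefs = {a: 0.0 for a in team.nodes}
for _ in range(100):
    step(team, beliefs)
```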
We argue that the appraisal of models in social epistemology requires conceiving of them as argumentative devices, taking into account the argumentative context and adopting a family-of-models perspective. We draw up such an account and show how it makes it easier to see the value and limits of the use of models in social epistemology. To illustrate our points, we document and explicate the argumentative role of epistemic landscape models in social epistemology and highlight their limitations. We also claim that our account could be fruitfully used in appraising other models in philosophy and science.
Are acts of violence performed in virtual environments ever morally wrong, even when no other persons are affected? While some such acts surely reflect deficient moral character, I focus on the moral rightness or wrongness of acts. Typically it’s thought that, on Kant’s moral theory, an act of virtual violence is morally wrong (i.e., violates the Categorical Imperative) only if the act mistreats another person. But I argue that, on Kant’s moral theory, some acts of virtual violence can be morally wrong, even when no other persons or their avatars are affected. First, I explain why many have thought that, in general on Kant’s moral theory, virtual acts affecting no other persons or their avatars can’t violate the Categorical Imperative: there are real-world acts that clearly do, but it seems that when we consider the same sorts of acts done alone in a virtual environment, they don’t violate the Categorical Imperative, because no other persons are involved. But then, how could any virtual acts like these, which affect no other persons or their avatars, violate the Categorical Imperative? I then argue that there indeed can be such cases of morally wrong virtual acts—some due to an actor’s having erroneous beliefs about morally relevant facts, and others due not to error, but to the actor’s intention leaving out morally relevant facts while immersed in a virtual environment. I conclude by considering some implications of my arguments for both our present technological context and the future.
The commercial VR/AR marketplace is gaining ground and is becoming an ever larger and more significant component of the global economy. While much attention has been paid to the commercial promise of VR/AR, comparatively little attention has been given to the ethical issues that VR/AR technologies introduce. We here examine existing codes of ethics proposed by the ACM and IEEE and apply them to the unique ethical facets that VR/AR introduces. We propose a VR/AR code of ethics for developers and apply this code to several commercial applications.
This article explores whether and under which circumstances it is ethically viable to include artificial beings worthy of moral consideration in virtual environments. In particular, the article focuses on virtual environments such as those in digital games and training simulations – interactive and persistent digital artifacts designed to fulfill specific purposes, such as entertainment, education, training, or persuasion. The article introduces the criteria for moral consideration that serve as a framework for this analysis. Adopting this framework, the article tackles the question of whether including artificial intelligences that are entitled to moral consideration in virtual environments constitutes an immoral action on the part of human creators. To address this problem, the article draws on three conceptual lenses from the philosophical branch of ethics: the problem of parenthood and procreation, the question concerning the moral status of animals, and the classical problem of evil. Using a thought experiment, the concluding section proposes a contractualist answer to the question posed in this article. The same section also emphasizes the potential need to reframe our understanding of the design of virtual environments and their future stakeholders.
The Gamer's Dilemma challenges us to find a distinction between virtual murder and virtual pedophilia. Without such a distinction, we are forced to conclude that either both are morally acceptable or that both should be morally illicit. This paper argues that the best way to solve the dilemma is, in one sense, to dissolve it. The Gamer's Dilemma rests on a misunderstanding in the sense that it does not distinguish between the form of a simulation and its surface content. A greater appreciation of the way structural features of a simulation affect subject experience will help us see why only simulations of murder and pedophilia generating virtually real experiences are likely to be seen as wrong. I argue that a simulation’s structural elements powerfully affect how subjects experience simulated content and hence constitute an important, and previously neglected, variable necessary to dissolve the Gamer's Dilemma. Properly understood, virtually real simulations of murder and pedophilia are both likely to be treated by players as morally wrong. Similarly, virtually unreal murder and pedophilia will be less likely to be judged as wrong. Subject judgments are thus consistent once a simulation’s structural variables are accounted for. The Gamer's Dilemma dissolves as a dilemma once we acknowledge these structural features of simulations and how they affect experience and moral judgment.
Emerging technologies such as cloud computing, augmented and virtual reality, artificial intelligence and robotics, among others, are transforming the field of manufacturing and industry as a whole in unprecedented ways. This fourth industrial revolution is consequentially changing how operators that have been crucial to industry success go about their practices in industrial environments. This short paper briefly introduces the notion of the Operator 4.0 as well as how this novel way of conceptualizing the human operator necessarily implicates human values in the technologies that constitute it. Similarly, the design methodology known as value sensitive design (VSD) is drawn upon to discuss how these Operator 4.0 technologies can be designed for human values and, conversely, how a potential value-sensitive Operator 4.0 can be used to strengthen the VSD methodology in developing novel technologies.
What is the status of a cat in a virtual reality environment? Is it a real object? Or part of a fiction? Virtual realism, as defended by D. J. Chalmers, takes it to be a virtual object that really exists, that has properties and is involved in real events. His preferred specification of virtual realism identifies the cat with a digital object. The project of this paper is to use a comparison between virtual reality environments and scientific computer simulations to critically engage with Chalmers’s position. I first argue that, if it is sound, his virtual realism should also be applied to objects that figure in scientific computer simulations, e.g. to simulated galaxies. This leads to a slippery slope because it implies an unreasonable proliferation of digital objects. A philosophical analysis of scientific computer simulations suggests an alternative picture: the cat and the galaxies are parts of fictional models for which the computer provides model descriptions. This result motivates a deeper analysis of the way in which Chalmers builds up his realism. I argue that he buys realism too cheap. For instance, he does not really specify what virtual objects are supposed to be. As a result, rhetoric aside, his virtual realism isn’t far from a sort of fictionalism.
La, V. P., & Vuong, Q. H. (2019). bayesvl: Visually learning the graphical structure of Bayesian networks and performing MCMC with ‘Stan’. The Comprehensive R Archive Network (CRAN).
This paper draws on the notion of the ‘project,’ as developed in the existential philosophy of Heidegger and Sartre, to articulate an understanding of the existential structure of engagement with virtual worlds. By this philosophical understanding, the individual’s orientation towards a project structures a mechanism of self-determination, meaning that the project is understood essentially as the project to make oneself into a certain kind of being. Drawing on existing research from an existential-philosophical perspective on subjectivity in digital game environments, the notion of a ‘virtual subjectivity’ is proposed to refer to the subjective sense of being-in-the-virtual-world. The paper proposes an understanding of virtual subjectivity as standing in a nested relation to the individual’s subjectivity in the actual world, and argues that it is this relation that allows virtual world experience to gain significance in the light of the individual’s projectual existence. The arguments advanced in this paper pave the way for a comprehensive understanding of the transformative, self-transformative, and therapeutic possibilities and advantages afforded by virtual worlds.
Spatially situated opinions that can be held with different degrees of conviction lead to spatiotemporal patterns such as clustering (homophily), polarization, and deadlock. Our goal is to understand how sensitive these patterns are to changes in the local nature of interactions. We introduce two different mixing mechanisms, spatial relocation and nonlocal interaction (“telephoning”), to an earlier fully spatial model (no mixing). Interestingly, the mechanisms that create deadlock in the fully spatial model have the opposite effect when there is a sufficient amount of mixing. With telephoning, not only are polarization and deadlock broken up, but consensus is hastened. The effects of mixing by relocation are even more pronounced. Further insight into these dynamics is obtained for selected parameter regimes via comparison to the mean-field differential equations.
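A toy version of the lattice dynamics with both mixing mechanisms (a sketch under my own assumptions; the paper's actual update rules and parameters differ):

```python
import random

N = 40  # lattice side; all parameters here are illustrative

def init_grid():
    # signed opinions with conviction strengths 1..3
    return {(i, j): random.choice([-1, 1]) * random.randint(1, 3)
            for i in range(N) for j in range(N)}

def neighbours(i, j):
    return [((i + di) % N, (j + dj) % N)
            for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))]

def interact(a, b):
    """Toy conviction rule: disagreement weakens the weaker side by one step;
    a conviction hitting zero converts to the other camp."""
    if a * b > 0:
        return a, b
    a_weaker = abs(a) <= abs(b)
    weak, strong = (a, b) if a_weaker else (b, a)
    weak += 1 if strong > 0 else -1
    weak = weak or (1 if strong > 0 else -1)
    return (weak, strong) if a_weaker else (strong, weak)

def step(grid, p_move=0.0, p_phone=0.0):
    i, j = random.randrange(N), random.randrange(N)
    if random.random() < p_move:                   # mixing 1: spatial relocation
        k, l = random.randrange(N), random.randrange(N)
        grid[i, j], grid[k, l] = grid[k, l], grid[i, j]
        return
    if random.random() < p_phone:                  # mixing 2: "telephoning"
        partner = (random.randrange(N), random.randrange(N))
    else:                                          # no mixing: fully spatial
        partner = random.choice(neighbours(i, j))
    grid[i, j], grid[partner] = interact(grid[i, j], grid[partner])
```

Sweeping p_move and p_phone upward from zero is then a direct way to probe how much mixing is needed before polarization and deadlock give way to consensus.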
This book addresses key conceptual issues relating to the modern scientific and engineering use of computer simulations. It analyses a broad set of questions, from the nature of computer simulations to their epistemological power, including the many scientific, social and ethical implications of using computer simulations. The book is written in an easily accessible narrative, one that weaves together philosophical questions and scientific technicalities. It will thus appeal equally to all academic scientists, engineers, and researchers in industry interested in questions related to the general practice of computer simulations.
The Simulation Hypothesis proposes that all of reality, including the earth and the universe, is in fact an artificial simulation, analogous to a computer simulation, and as such our reality is an illusion. In this essay I describe a method for programming mass, length, time and charge (MLTA) as geometrical objects derived from the formula for a virtual electron, $f_e = 4\pi^2r^3$ ($r = 2^6 3 \pi^2 \alpha \Omega^5$), where the fine structure constant $\alpha$ = 137.03599... and $\Omega$ = 2.00713494... are mathematical constants, and the MLTA geometries are: M = $1$, T = $2\pi$, L = $2\pi^2\Omega^2$, A = $(4\pi \Omega)^3/\alpha$. As objects they are independent of any set of units and also of any numbering system, terrestrial or alien. As the geometries are interrelated according to $f_e$, we can replace designations such as ($kg, m, s, A$) with a rule set: mass = $u^{15}$, length = $u^{-13}$, time = $u^{-30}$, ampere = $u^{3}$. The formula $f_e$ is unit-less ($u^0$) and combines these geometries in the ratios M$^9$T$^{11}$/L$^{15}$ and (AL)$^3$/T; as such these ratios are unit-less. Translating MLTA to their respective SI Planck units requires an additional 2 unit-dependent scalars. We may thereby derive the CODATA 2014 physical constants via the 2 (fixed) mathematical constants ($\alpha, \Omega$), 2 dimensioned scalars and the rule set $u$. As all constants can be defined geometrically, the least precise constants ($G, h, e, m_e, k_B$...) can also be solved via the most precise ($c, \mu_0, R_\infty, \alpha$), numerical precision then being limited by the precision of the fine structure constant $\alpha$.
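As a quick numerical cross-check of the unit claims (my sketch, not the paper's code; the value 137.03599 is the abstract's, and is conventionally the inverse fine-structure constant):

```python
import math

ALPHA = 137.03599   # value as stated in the abstract
OMEGA = 2.00713494

r = 2**6 * 3 * math.pi**2 * ALPHA * OMEGA**5
f_e = 4 * math.pi**2 * r**3
print(f"r = {r:.6e}, f_e = {f_e:.6e}")

# Rule-set exponents of the single unit u
MASS, LENGTH, TIME, AMPERE = 15, -13, -30, 3

# Both combinations the abstract calls unit-less do cancel to u^0:
assert 9 * MASS + 11 * TIME - 15 * LENGTH == 0   # M^9 T^11 / L^15
assert 3 * (AMPERE + LENGTH) - TIME == 0         # (A L)^3 / T
```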
The simulation hypothesis proposes that all of reality is an artificial simulation. In this article I describe a simulation model that derives Planck level units as geometrical forms from a virtual (dimensionless) electron formula $f_e$ that is constructed from 2 unit-less mathematical constants: the fine structure constant $\alpha$ and $\Omega$ = 2.00713494... ($f_e = 4\pi^2r^3, r = 2^6 3 \pi^2 \alpha \Omega^5$). The mass, space, time and charge units are embedded in $f_e$ according to the ratio $M^9T^{11}/L^{15} = (AL)^3/T$ (units = 1), giving mass M = 1, time T = $2\pi$, length L = $2\pi^2\Omega^2$, ampere A = $(4\pi \Omega)^3/\alpha$. We can thus, for example, create as much mass M as we wish, but with the proviso that we create an equivalent space L and time T to balance the above. The 5 SI units $kg, m, s, A, K$ are derived from a single unit $u = \sqrt{\text{velocity}/\text{mass}}$ that also defines the relationships between the SI units: kg = $u^{15}$, m = $u^{-13}$, s = $u^{-30}$, A = $u^{3}$, $k_B = u^{29}$. To convert MLTA from the above $\alpha, \Omega$ geometries to their respective SI Planck unit numerical values (and thus solve the dimensioned physical constants $G, h, e, c, m_e, k_B$) requires an additional 2 unit-dependent scalars. Results are consistent with CODATA 2014. The rationale for the virtual electron was derived using the square root of momentum $P$ and a black-hole electron model as a function of magnetic monopoles AL (ampere-meters) and time T.
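The single-unit definition can be checked the same way (my sketch): velocity = length/time carries exponent $17$ and mass carries $15$, so $u = \sqrt{\text{velocity}/\text{mass}}$ indeed has exponent $(17 - 15)/2 = 1$.

```python
# Exponents (powers of u) from the abstract's rule set
MASS, LENGTH, TIME = 15, -13, -30
VELOCITY = LENGTH - TIME   # L/T -> u^17

# u = sqrt(velocity / mass) is self-consistent: exponent (17 - 15) / 2 == 1
assert (VELOCITY - MASS) / 2 == 1
```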
According to some researchers, noise is typically conceived as a detrimental factor in cognitive performance, affecting perception, decision-making and motor function. However, recent studies associate white noise with concentration and calm; this investigation therefore seeks to establish the impact of binaural white noise on working-memory and short-term visual-memory performance, alpha-beta brain activity, and attention-meditation, using two auditory stimuli with frequency ranges of 100 to 450 Hz and 100 to 750 Hz. The study was conducted in the city of Montes Claros, Brazil, where seven participants (n = 7) were evaluated, with a mean age of 36.71±, in two age groups (GP1: 21 to 30 and GP2: 41 to 50) with secondary to university education. The experimental process included short-term visual memory tests using the CogniFit™ general cognitive assessment battery (CAB), as well as recording of brain activity using a monopolar electroencephalogram and the eSense™ algorithms. From the results obtained and through statistical tests, we can infer that binaural white noise with oscillations of 100 to 750 Hz contributed to short-term visual working-memory performance.
This paper presents a survey of the state of the art of the so-called 'epistemology of computer simulations'. In particular, I focus on the works of Eric Winsberg, one of the most prolific and systematic philosophers on this topic. In addition to analyzing Winsberg's work, and drawing both on his writings and on those of other philosophers, I show that there are good reasons to think that the traditional epistemology of science is not sufficient for the analysis of computer simulations.
This article aims to develop a new account of scientific explanation for computer simulations. To this end, two questions are answered: what is the explanatory relation for computer simulations? And what kind of epistemic gain should be expected? For several reasons tailored to the benefits and needs of computer simulations, these questions are better answered within the unificationist model of scientific explanation. Unlike previous efforts in the literature, I submit that the explanatory relation is between the simulation model and the results of the simulation. I also argue that our epistemic gain goes beyond the unificationist account, encompassing a practical dimension as well.
Within dramatherapy and psychodrama, the term ‘de-roling’ indicates a set of activities that assist the subjects of therapy in ‘disrobing’ themselves from their fictional characters. Starting from the psychological needs and the therapeutic goals that ‘de-roling’ techniques address in dramatherapy and psychodrama, this text provides a broader understanding of procedures and exercises that define and ease transitional experiences across cultural practices such as religious rituals and spatial design. After this introductory section, we propose a tentative answer as to why game studies and virtual world research have largely ignored processes of ‘roling’ and ‘de-roling’ that separate the lived experience of role-play from our everyday sense of the self. The concluding sections argue that de-roling techniques are likely to become more relevant, both academically and in terms of their practical applications, with the growing diffusion of virtual technologies in social practices. The relationships we can establish with ourselves and with our surroundings in digital virtual worlds are, we argue, only partially comparable with similar occurrences in pre-digital practices of subjectification. We propose a perspective according to which the accessibility and immersive phenomenological richness of virtual reality technologies are likely to exacerbate the potentially dissociative effects of virtual reality applications. This text constitutes an initial step towards framing specific socio-technical concerns and starting a timely conversation that binds together dramatherapy, psychodrama, game studies, and the design of digital virtual worlds.
Opinions are rarely binary; they can be held with different degrees of conviction, and this expanded attitude spectrum can affect the influence one opinion has on others. Our goal is to understand how different aspects of influence lead to recognizable spatio-temporal patterns of opinions and their strengths. To do this, we introduce a stochastic spatial agent-based model of opinion dynamics that includes a spectrum of opinion strengths and various possible rules for how the opinion strength of one individual affects the influence that this individual has on others. Through simulations, we find that even a small amount of amplification of opinion strength through interaction with like-minded neighbors can tip the scales in favor of polarization and deadlock.
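A compact sketch of the amplification rule the abstract highlights (illustrative only; the paper's stochastic spatial model is richer, and every name and parameter here is invented):

```python
MAX_STRENGTH = 3  # illustrative cap on conviction

def interact(a, b, amplify=True):
    """Like-minded neighbours reinforce each other (amplification of opinion
    strength); opposed ones weaken the weaker side, converting it at zero."""
    if a * b > 0:
        if amplify:
            sign = 1 if a > 0 else -1
            a = sign * min(abs(a) + 1, MAX_STRENGTH)
        return a, b
    a_weaker = abs(a) <= abs(b)
    weak, strong = (a, b) if a_weaker else (b, a)
    weak += 1 if strong > 0 else -1
    weak = weak or (1 if strong > 0 else -1)
    return (weak, strong) if a_weaker else (strong, weak)

def polarization(opinions):
    """Crude deadlock indicator: fraction of agents at maximal conviction."""
    return sum(abs(v) == MAX_STRENGTH for v in opinions) / len(opinions)
```

Running the same lattice dynamics with amplify=True versus amplify=False, and tracking polarization over time, is the miniature comparison relevant to the abstract's claim.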
Despite the impressive amount of financial resources recently invested in carrying out large-scale brain simulations, it is controversial what the pay-offs are of pursuing this project. One idea is that from designing, building, and running a large-scale neural simulation, scientists acquire knowledge about the computational performance of the simulating system, rather than about the neurobiological system represented in the simulation. It has been claimed that this knowledge may usher in a new era of neuromorphic, cognitive computing systems. This study elucidates this claim and argues that the main challenge this era is facing is not the lack of biological realism. The challenge lies in identifying general neurocomputational principles for the design of artificial systems, which could display the robust flexibility characteristic of biological intelligence.
Problems and questions originally raised by Robert Nozick in his famous thought experiment ‘The Experience Machine’ are frequently invoked in the current discourse concerning virtual worlds. Having conceptualized his Gedankenexperiment in the early seventies, Nozick could not fully anticipate the numerous and profound ways in which the diffusion of computer simulations and video games came to affect the Western world. This article does not articulate whether or not the virtual worlds of video games, digital simulations, and virtual technologies currently actualize (or will actualize) Nozick’s thought experiment. Instead, it proposes a philosophical reflection that focuses on human experiences in the upcoming age of their ‘technical reproducibility’. In pursuing that objective, this article integrates and supplements some of the interrogatives proposed in Robert Nozick’s thought experiment. More specifically, through the lenses of existentialism and philosophy of technology, this article tackles the technical and cultural heritage of virtual reality, and unpacks its potential to function as a tool for self-discovery and self-construction. Ultimately, it provides an interpretation of virtual technologies as novel existential domains. Virtual worlds will not be understood as the contexts where human beings can find completion and satisfaction, but rather as instruments that enable us to embrace ourselves and negotiate with various aspects of our (individual as well as collective) existence in previously-unexperienced guises.
Making good decisions depends on having accurate information – quickly, and in a form in which it can be readily communicated and acted upon. Two features of medical practice can help: deliberation in groups and the use of scores and grades in evaluation. We study the contributions of these features using a multi-agent computer simulation of groups of physicians. One might expect individual differences in members’ grading standards to reduce the capacity of the group to discover the facts on which well-informed decisions depend. Observations of the simulated groups suggest, on the contrary, that this kind of diversity can in fact be conducive to epistemic performance. Sometimes, it is adopting common standards that may be expected to result in poor decisions.
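One toy mechanism consistent with the reported effect (my construction, not the authors' model): if each simulated physician grades a noisy observation of a latent case quality against an individual threshold, diverse thresholds can cancel a shared bias in the way dithering does, so the majority verdict tracks the facts better than under a common, mis-set standard.

```python
import random

def group_accuracy(n_docs=9, diverse=True, trials=2000):
    """Majority pass/fail verdicts on latent case qualities in [0, 1],
    with truth defined as quality > 0.5."""
    correct = 0
    for _ in range(trials):
        quality = random.uniform(0, 1)
        truth = quality > 0.5
        if diverse:
            thresholds = [random.uniform(0.3, 0.7) for _ in range(n_docs)]
        else:
            thresholds = [0.6] * n_docs   # a shared, slightly biased standard
        votes = [quality + random.gauss(0, 0.1) > t for t in thresholds]
        correct += (sum(votes) > n_docs / 2) == truth
    return correct / trials

print(group_accuracy(diverse=True), group_accuracy(diverse=False))
```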
This article investigates the phenomenon of virtual corporeality, which not only occupies an important place among the interests of the humanities but has also entered the everyday life of a significant part of contemporary humanity via the Internet. Concepts combining virtuality and corporeality are considered, along with the key approaches to analyzing this combination. The subject of analysis is anonymous forums, as a vivid example of a configuration of the virtual body that is radically different from others because of its anonymous mode of representation. The information an individual places in public access is perceived as his or her embodiment in the literal sense of the word, and the digital format of this information affects the characteristics of the being of the individual's virtual body.
An overview of my work arguing that peer-to-peer computer networking (the Peer-to-Peer Simulation Hypothesis) may be the best explanation of quantum phenomena and a number of perennial philosophical problems.
Critical Gaming: Interactive History and Virtual Heritage can be seen as a collection of chapters designed to provoke thought and discussion, or it can be seen and used as separate chapters that may help class debate in courses dealing with the digital humanities, game studies (especially in the areas of serious games and game-based learning) or aspects of virtual heritage. While there are very few books in this intersecting area, the range of topics that could be investigated and debated is huge. So instead I have concentrated on questions in areas that appear central to the intersection of these three areas, that have not yet been debated to any great extent, or that would suit either individual reflection or classroom discussion and debate.
Various human activities cannot be separated from technology, as a tool for carrying out tasks more effectively and efficiently, and technological breakthroughs in informatics have produced communication-based technologies that are not only effective but also enjoyable. Social media and online games are two of the many products of technological development in this field. In both media, humans experience a shift of reality from the real world into a virtual world. The philosophy of technology formulated by Don Ihde is used as an analytical tool to examine and understand this phenomenon. The results of the study show that changes in human perception through these two technologies can significantly influence cultural development.
Could a person ever transcend what it is like to be in the world as a human being? Could we ever know what it is like to be other creatures? Questions about the overcoming of a human perspective are not uncommon in the history of philosophy. In the last century, those very interrogatives were notably raised by American philosopher Thomas Nagel in the context of philosophy of mind. In his 1974 essay What is it Like to Be a Bat?, Nagel offered reflections on human subjectivity and its constraints. Nagel’s insights were elaborated before the social diffusion of computers and could not anticipate the cultural impact of technological artefacts capable of materializing interactive simulated worlds as well as disclosing virtual alternatives to the “self.” In this sense, this article proposes an understanding of computers as epistemological and ontological instruments. The embracing of a phenomenological standpoint entails that philosophical issues are engaged and understood from a fundamentally practical perspective. In terms of philosophical praxis, or “applied philosophy,” I explored the relationship between human phenomenologies and digital mediation through the design and the development of experimental video games. For instance, I have conceptualized the first-person action-adventure video game Haerfest (Technically Finished 2009) as a digital re-formulation of the questions posed in Nagel’s famous essay. Experiencing a bat’s perceptual equipment in Haerfest practically corroborates Nagel’s conclusions: there is no way for humans to map, reproduce, or even experience the consciousness of an actual bat. Although unverifiable in its correspondence to that of bats, Haerfest still grants access to experiences and perceptions that, albeit still inescapably within the boundaries of human kinds of phenomenologies, were inaccessible to humans prior to the advent of computers. Phenomenological alterations and virtual experiences disclosed by interactive digital media cannot take place without a shift in human kinds of ontologies, a shift which this study recognizes as the fundamental ground for the development of a new humanism. (I deem it necessary to specify that I am not utilizing the term “humanism” in its common connotation, that is to say the one that emerged from the encounter between the Roman civilization and the late Hellenistic culture. According to this conventional acceptation, humanism indicates the realization of the human essence through “scholarship and training in good conduct” (Heidegger 1998, p. 244). However, Heidegger observed that this understanding of humanism does not truly cater to the original essence of human beings, but rather “is determined with regard to an already established interpretation of nature, history, world, and […] beings as a whole” (Heidegger 1998, p. 245). The German thinker found this way of embracing humanism reductive: a by-product of Western metaphysics. As Heidegger himself specified in his 1949 essay Letter on Humanism, his opposition to the traditional acceptation of the term humanism does not advocate for the “inhuman” or a return to the “barbaric” but stems instead from the belief that humanism can only be properly understood and restored in culture as a more original way of meditating and caring for humanity and understanding its relationship with Being.)
Additionally, this study explicitly proposes and exemplifies the use of interactive digital technology as a medium for testing, developing and disseminating philosophical notions, problems and hypotheses in ways which are alternative to the traditional textual one. Presented as virtual experiences, philosophical concepts can be accessed without the filter of subjective imagination. In a persistent, interactive, simulated environment, I claim that the crafting and the mediation of thought take on a novel, projective dimension which I propose to call “augmented ontology.” (In Martin Heidegger’s 1927 Being and Time, the term “projectivity” indicates the way a Being opens to the world in terms of its possibilities of being (Heidegger 1962, pp. 184–185, BT 145). Inspired by Heidegger’s and Vilem Flusser’s work in the field of philosophy of technology, as well as Helmuth Plessner’s anthropological position presented in his 1928 book Die Stufen des Organischen und der Mensch. Einleitung in die philosophische Anthropologie, this study understands the concept of projectivity as the innate openness of human beings to construct themselves and their world by means of technical artefacts. In this sense, this study proposes a fundamental understanding of technology as the materialization of mankind’s tendency to overcome its physical, perceptual and communicative limitations.)
In philosophy, ontology studies what might exist: the kind and structure of objects, properties, events, processes and relations. In knowledge engineering, it is the specification of the conceptualization of a domain of knowledge. The domain here is agent-based modelling (ABM) for the humanities and social sciences (SHS), with a view to simulation by multi-agent systems (SMA). Multi-agent modelling in the social sciences proposes the formalization of a plurality of points of view within a general framework that makes it possible to compare and combine different angles of reflection. Illustrated with examples from three fields (geography, economics and sociology), this book uses formalisms mainly based on UML. The exercise of describing an ontology is presented as an interdisciplinary conceptual dialogue, with the ontology as the mediator of this dialogue.
When can macroscopic data about a system be used to set parameters in a microfoundational simulation? We examine the epistemic viability of tweaking parameter values to generate a better fit between the outcome of a simulation and the available observational data. We restrict our focus to microfoundational simulations—those simulations that attempt to replicate the macrobehavior of a target system by modeling interactions between microentities. We argue that tweaking can be effective but that there are two central risks. First, tweaking risks overfitting the simulation to the data and thus compromising predictive accuracy; and second, it risks compromising the microfoundationality of the simulation. We evaluate standard responses to tweaking and propose strategies to guard against these risks.
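One standard guard against the first risk can be stated as a schematic train/holdout split (a sketch; the simulate function and the data values below are placeholders of mine, not the authors' protocol):

```python
import random

def simulate(param, n=100):
    """Placeholder microfoundational simulation: returns a macro summary
    statistic produced by n interacting micro-entities."""
    return sum(random.gauss(param, 1.0) for _ in range(n)) / n

def fit_error(param, observed):
    return abs(simulate(param) - observed)

# Tune the parameter on one slice of the macro data...
train_obs, holdout_obs = 3.1, 2.9   # hypothetical macro measurements
candidates = [i / 10 for i in range(60)]
best = min(candidates, key=lambda p: fit_error(p, train_obs))

# ...then judge the tweak by held-out observations it was not tuned against.
print("tuned param:", best, "holdout error:", fit_error(best, holdout_obs))
```

A large gap between the training fit and the holdout error is the overfitting signature; the second risk, losing microfoundationality, has to be checked against the micro-level assumptions themselves.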
Over the past decade, teaching and learning in virtual worlds has been at the forefront of many higher education institutions around the world. The DEHub Virtual Worlds Working Group (VWWG), consisting of Australian and New Zealand higher education academics, was formed in 2009. These educators are investigating the role that virtual worlds play in the future of education and actively changing the direction of their own teaching practice and curricula. Forty-seven academics reporting on 28 Australian higher education institutions present an overview of how they have changed directions through the effective use of virtual worlds for diverse teaching and learning activities such as business scenarios and virtual excursions, role-play simulations, experimentation and language development. The case studies offer insights into the ways in which institutions are continuing to change directions in their teaching to meet changing demands for innovative teaching, learning and research in virtual worlds. This paper highlights the ways in which the authors are using virtual worlds to create opportunities for rich, immersive and authentic activities that would be difficult or not possible to achieve through more traditional approaches.
So you're leaving the cinema—you've just been blown away by Inception—and your mind is buzzing. There is a buzz around you too. Everyone's asking each other: 'Does Cobb's spinning top fall?' Throughout Inception, Cobb has been struggling to achieve two things: to get back home so he can see his kids again and to keep a grip on reality in the process. What ends up happening to Cobb's totem bears on both of these struggles. So, most people who watch Inception think that the whole point of the movie hinges on whether or not Cobb's top keeps spinning. Unfortunately for most people, they missed the point! The correct answer to 'Does Cobb's spinning top fall?' is: 'Who cares!' The truth, and in my opinion the main point of Inception, is that reality doesn't really matter.
With the rapidly growing amounts of information, visualization is becoming increasingly important, as it allows users to easily explore and understand large amounts of information. However, the field of information visualization currently lacks sufficient theoretical foundations. This article addresses foundational questions connecting information visualization with computing and philosophy studies. The idea of multiscale information granulation is described based on two fundamental concepts: information (structure) and computation (process). A new information processing paradigm of Granular Computing enables a stepwise increase of granulation/aggregation of information on different levels of resolution, which makes dynamic viewing of data possible. Information produced by Google Earth is an illustration of visualization based on clustering (granulation) of information on a succession of layers. Depending on the level, specific emergent properties become visible as a result of different ways of aggregating data/information. As information visualization ultimately aims at amplifying cognition, we discuss the process of simulation and emulation in relation to cognition, and in particular visual cognition.
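A minimal sketch of stepwise granulation (mine, not from the article): points are aggregated into grid cells whose size grows per level, so coarser levels show fewer, more aggregated granules, as in the Google Earth layering the article cites.

```python
from collections import defaultdict

def granulate(points, cell):
    """Aggregate (x, y, value) points into square cells of the given size,
    averaging values per cell: one resolution level of granulation."""
    grid = defaultdict(list)
    for x, y, v in points:
        grid[(int(x // cell), int(y // cell))].append(v)
    return {k: sum(vs) / len(vs) for k, vs in grid.items()}

points = [(0.2, 0.3, 10), (0.8, 0.1, 14), (3.5, 3.1, 2), (3.9, 3.3, 4)]
for cell in (1, 2, 4):   # stepwise increase of granulation/aggregation
    print(cell, granulate(points, cell))
```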
Agent-Based Models are useful to describe and understand the dynamics of social, economic and spatial systems. But, besides the facilities this methodology offers, the evaluation and comparison of simulation models are sometimes problematic. A rigorous conceptual frame needs to be developed in order to ensure coherence in the chain linking, at one extreme, the scientist's hypotheses about the modeled phenomenon and, at the other, the structure of rules in the computer program. This also systematizes model design from the thematician's conceptual framework. The aim is to reflect upon the role that a well-defined ontology, based on the crossing of philosophical and computer-science insights, can play in solving such questions and helping model building. We analyze different conceptions of ontology, introduce the 'ontological test' and show its usefulness for comparing models. Then we focus on model building and show the place of a systematic ABM ontology. The latter process is situated within a larger framework called the 'knowledge framework', in which not only ontologies but also the notions of theory, model and empirical data take place. Finally, the relation between emergence and ontology is discussed.
The terms ‘verification’ and ‘validation’ are widely used in science, both in the natural and the social sciences. They are extensively used in simulation, often associated with the need to evaluate models at different stages of the simulation development process. Frequently, terminological ambiguities arise when researchers conflate, along the simulation development process, the technical meanings of both terms with other meanings found in the philosophy of science and the social sciences. This article considers the problem of verification and validation in social science simulation along five perspectives: the reasons to address terminological issues in simulation; the meaning of the terms in the philosophical sense of the problem of “truth”; the observation that some debates about these terms in simulation are inadvertently more terminological than epistemological; the meaning of the terms in the technical context of the simulation development process; and finally, a comprehensive outline of the relation between terminology used in simulation, different types of models used in the development process, and different epistemological perspectives.