We present a new approach to the old problem of how to incorporate the role of the observer in statistics. We show classical probability theory to be inadequate for this task and take refuge in the epsilon-model, which is the only model known to us capable of handling situations between quantum and classical statistics. An example is worked out, and some problems concerning the new viewpoint that emerges from our approach are discussed.
We show that Bell inequalities can be violated in the macroscopic world. The macroworld violation is illustrated using an example involving connected vessels of water. We show that whether the violation of inequalities occurs in the microworld or the macroworld, it is the identification of nonidentical events that plays a crucial role. Specifically, we prove that if nonidentical events are consistently differentiated, Bell-type Pitowsky inequalities are no longer violated, even for Bohm's example of two entangled spin-1/2 quantum particles. We show how Bell inequalities can be violated in cognition, specifically in the relationship between abstract concepts and specific instances of these concepts. This supports the hypothesis that genuine quantum structure exists in the mind. We introduce a model in which the amount of nonlocality and the degree of quantum uncertainty are parameterized, and demonstrate that increasing nonlocality increases the degree of violation, while increasing quantum uncertainty decreases the degree of violation.
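For reference (a standard textbook form, not the paper's own notation): one correlation version of a Bell inequality that any classical, local assignment of joint outcomes must satisfy is

\[ \lvert E(a,b) - E(a,c) \rvert \;\le\; 1 + E(b,c), \]

where E(x,y) is the expectation value of the product of the two ±1-valued outcomes obtained under measurement settings x and y, assuming perfect anti-correlation for equal settings as in Bell's original derivation. It is bounds of this kind, and their Pitowsky-style generalizations, that the examples above are claimed to violate.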
Sven Bernecker presents an analysis of the concept of propositional (or factual) memory, and examines a number of metaphysical and epistemological issues crucial to the understanding of memory. Bernecker argues that memory, unlike knowledge, implies neither belief nor justification. There are instances where memory, though hitting the mark of truth, succeeds in an epistemically defective way. This book shows that, contrary to received wisdom in epistemology, memory not only preserves epistemic features generated by other epistemic sources but also functions as a source of justification and knowledge. According to the causal theory of memory argued for in this book, the dependence of memory states on past representations supports counterfactuals of the form: if the subject hadn't represented a given proposition in the past, he wouldn't represent it in the present. The book argues for a version of content externalism according to which the individuation of memory contents depends on relations the subject bears to his past physical or social environment. Moreover, Bernecker shows that memory doesn't require identity, but only similarity, of past and present attitudes and contents. The notion of content similarity is explicated in terms of the entailment relation.
An alarming number of philosophers and cognitive scientists have argued that mind extends beyond the brain and body. This book evaluates these arguments and suggests that, typically, it does not. A timely and relevant study that exposes the need to develop a more sophisticated theory of cognition while pointing to a bold new direction in exploring the nature of cognition, it articulates and defends the "mark of the cognitive", a common-sense theory used to distinguish between cognitive and non-cognitive processes; challenges the current popularity of extended cognition theory through critical analysis and by pointing out fallacies and shortcomings in the literature; and stimulates discussions that will advance debate about the nature of cognition in the cognitive sciences.
This article investigates the properties of multistate top revision, a dichotomous model of belief revision that is based on an underlying model of probability revision. A proposition is included in the belief set if and only if its probability is either 1 or infinitesimally close to 1. Infinitesimal probabilities are used to keep track of propositions that are currently considered to have negligible probability, so that they remain available if future information makes them more plausible. Multistate top revision satisfies a slightly modified version of the set of basic and supplementary AGM postulates, except the inclusion and success postulates. This result shows that hyperreal probabilities can provide us with efficient tools for overcoming the well-known difficulties in combining dichotomous and probabilistic models of belief change.
This book argues that we need to explore how human beings can best coordinate and collaborate with robots in responsible ways. It investigates ethically important differences between human agency and robot agency to work towards an ethics of responsible human-robot interaction.
According to T. M. Scanlon's 'buck-passing' analysis of value, 'x is good' means that x has properties that provide reasons to take up positive attitudes vis-à-vis x. Some authors have claimed that this idea can be traced back to Franz Brentano, who said in 1889 that the judgement that x is good is the judgement that a positive attitude to x is correct ('richtig'). The most discussed problem in the recent literature on buck-passing is known as the 'wrong kind of reason' problem (the WKR problem): it seems quite possible that there is sometimes reason to favour an object although that object is not good, and may even be very evil. The problem is to delineate exactly what distinguishes reasons of the right kind from reasons of the wrong kind. In this paper we offer a Brentano-style solution. We also note that one version of the WKR problem was put forward by G. E. Moore in his review of the English translation of Brentano's Vom Ursprung sittlicher Erkenntnis. Before getting to how our Brentano-style approach might offer a way out for Brentano and the buck-passers, we briefly consider and reject an interesting attempt to solve the WKR problem recently proposed by John Skorupski.
What is the nature of consciousness? How is consciousness related to brain processes? This volume collects thirteen new papers on these topics: twelve by leading and respected philosophers and one by a leading color-vision scientist. All focus on consciousness in the "phenomenal" sense: on what it's like to have an experience. Consciousness has long been regarded as the biggest stumbling block for physicalism, the view that the mind is physical. The controversy has gained focus over the last few decades, and phenomenal knowledge and phenomenal concepts--knowledge of consciousness and the associated concepts--have come to play increasingly prominent roles in this debate. Consider Frank Jackson's famous case of Mary, the super-scientist who learns all the physical information while confined in a black-and-white room. According to Jackson, if physicalism is true, then Mary's physical knowledge should allow her to deduce what it's like to see in color. Yet it seems intuitively clear that she learns something when she leaves the room. But then how can consciousness be physical? Arguably, whether this sort of reasoning is sound depends on how phenomenal concepts and phenomenal knowledge are construed. For example, some argue that the Mary case reveals something about phenomenal concepts but has no implications for the nature of consciousness itself. Are responses along these lines adequate? Or does the problem arise again at the level of phenomenal concepts? The papers in this volume engage with the latest developments in this debate. The authors' perspectives range widely. For example, Daniel Dennett argues that anti-physicalist arguments such as the knowledge argument are simply confused; David Papineau grants that such arguments at least reveal important features of phenomenal concepts; and David Chalmers defends the anti-physicalist arguments, arguing that the "phenomenal concept strategy" cannot succeed.
The paper explores a structural account of propositional justification in terms of the notion of being in a position to know and negation. Combined with a non-normal logic for being in a position to know, the account allows for the derivation of plausible principles of justification. The account is neutral on whether justification is grounded in internally individuated mental states, and likewise on whether it is grounded in facts that are already accessible by introspection or reflection alone. To this extent, it is compatible both with internalism and with externalism about justification. Even so, the account allows for the proof of principles that are commonly conceived to depend on an internalist conception of justification. The account likewise coheres both with epistemic contextualism and with its rejection, and is compatible both with the knowledge-first approach and with its rejection. Despite its neutrality on these issues, the account makes propositional justification luminous and so is controversial. However, it proves quite resilient in the light of recent anti-luminosity arguments.
The public defence of science has never been more important than now. However, it is a difficult task with many pitfalls, and there are mechanisms that can make it counterproductive. This article offers advice for science defenders, summarized in ten commandments that warn against potentially ineffective or even backfiring practices in the defence of science: Do not portray science as a unique type of knowledge. Do not underestimate scientific uncertainty. Do not describe science as infallible. Do not deny the value-ladenness of science. Do not associate with power. Do not blame the victims of disinformation. Do not aim at convincing the anti-scientific propagandists. Do not contribute to the legitimization of pseudoscience. Do not attack religion when it does not conflict with science. Do not call yourself a "sceptic".
Self-driving cars hold out the promise of being safer than manually driven cars. Yet they cannot be 100% safe. Collisions are sometimes unavoidable. So self-driving cars need to be programmed for how they should respond to scenarios where collisions are highly likely or unavoidable. The accident-scenarios self-driving cars might face have recently been likened to the key examples and dilemmas associated with the trolley problem. In this article, we critically examine this tempting analogy. We identify three important ways in which the ethics of accident-algorithms for self-driving cars and the philosophy of the trolley problem differ from each other. These concern: the basic decision-making situation faced by those who decide how self-driving cars should be programmed to deal with accidents; moral and legal responsibility; and decision-making in the face of risks and uncertainty. In discussing these three areas of disanalogy, we isolate and identify a number of basic issues and complexities that arise within the ethics of the programming of self-driving cars.
The contributors to this volume address global, regional, and local landscapes, cosmopolitan and indigenous cultures, and human and more-than-human ecology as they work to reveal place-specific tensional dynamics. This unusual book, which covers a wide-ranging array of topics, coheres into a work that will be a valuable reference for scholars of geography and the philosophy of place.
This book investigates central issues in the philosophy of memory. Does remembering require a causal process connecting the past representation to its subsequent recall and, if so, what is the nature of the causal process? Of what kind are the primary intentional objects of memory states? How do we know that our memory experiences portray things the way they happened in the past? Given that our memory is not only a passive device for reproducing thoughts but also an active device for processing stored thoughts, when are thoughts sufficiently similar to be memory-related? The Metaphysics of Memory defends a version of the causal theory of memory, argues for direct realism about memory, proposes an externalist response to skepticism about memory knowledge, and develops a contextualist account of the factivity constraint on memory.
Many ethicists writing about automated systems attribute agency to these systems. Not only that; they seemingly attribute an autonomous or independent form of agency to these machines. This leads some ethicists to worry about responsibility-gaps and retribution-gaps in cases where automated systems harm or kill human beings. In this paper, I consider what sorts of agency it makes sense to attribute to most current forms of automated systems, in particular automated cars and military robots. I argue that whereas it indeed makes sense to attribute different forms of fairly sophisticated agency to these machines, we ought not to regard them as acting on their own, independently of any human beings. Rather, the right way to understand the agency exercised by these machines is in terms of human–robot collaborations, where the humans involved initiate, supervise, and manage the agency of their robotic collaborators. This means, I argue, that there is much less room for justified worries about responsibility-gaps and retribution-gaps than many ethicists think.
Purpose: In this article, we aim to present and defend a contextual approach to mathematical explanation. Method: To do this, we introduce an epistemic reading of mathematical explanation. Results: The epistemic reading not only clarifies the link between mathematical explanation and mathematical understanding, but also allows us to explicate some contextual factors governing explanation. We then show how several accounts of mathematical explanation can be read in this approach. Conclusion: The contextual approach defended here clears up the notion of explanation and pushes us towards a pluralist view of mathematical explanation.
We argue that non-epistemic values, including moral ones, play an important role in the construction and choice of models in science and engineering. Our main claim is that non-epistemic values are not merely "secondary values" that become important only when epistemic values leave some issues open. Our point is, on the contrary, that non-epistemic values are as important as epistemic ones when engineers seek to develop the best model of a process or problem. The upshot is that models are neither value-free nor dependent exclusively on epistemic values, nor do they use non-epistemic values merely as tie-breakers.
The global method safety account of knowledge states that an agent’s true belief that p is safe and qualifies as knowledge if and only if it is formed by method M, such that her beliefs in p and her beliefs in relevantly similar propositions formed by M in all nearby worlds are true. This paper argues that global method safety is too restrictive. First, the agent may not know relevantly similar propositions via M because the belief that p is the only possible outcome of M. Second, there are cases where there is a fine-grained belief that is unsafe and a relevantly similar coarse-grained belief that is safe and where both beliefs are based on the same method M. Third, the reliability of conditional reasoning, a basic belief-forming method, seems to be sensitive to fine-grained contents, as suggested by the wide variation in success rates for thematic versions of the Wason selection task.
We model a piece of text of human language telling a story by means of the quantum structure describing a Bose gas in a state close to a Bose–Einstein condensate near absolute zero temperature. For this we introduce energy levels for the words used in the story, and we also introduce the new notion of 'cogniton' as the quantum of human thought. Words are then cognitons in different energy states, as is the case for photons in different energy states, or states of different radiative frequency, when the considered boson gas is that of the quanta of the electromagnetic field. We show that Bose–Einstein statistics delivers a very good model for these pieces of text telling stories, both for short stories and for long stories of the size of novels. We analyze an unexpected connection with Zipf's law in human language, the Zipf ranking relating to the energy levels of the words, and the Bose–Einstein graph coinciding with the Zipf graph. We investigate the issue of 'identity and indistinguishability' from this new perspective and conjecture that the way one can easily understand how two of 'the same concepts' are 'absolutely identical and indistinguishable' in human language is also the way in which quantum particles are absolutely identical and indistinguishable in physical reality, providing in this way new evidence for our conceptuality interpretation of quantum theory.
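As a gloss on the statistics invoked here (standard textbook notation for the Bose–Einstein distribution; the paper's own parameterization may differ), the expected number of quanta, here cognitons, occupying an energy level E_i is

\[ N(E_i) \;=\; \frac{1}{e^{(E_i - \mu)/k_B T} - 1}, \]

with chemical potential \mu and temperature parameter k_B T fitted to the text. Zipf's law, by contrast, states that a word's frequency is roughly inversely proportional to its frequency rank, f(r) \propto 1/r; the claimed coincidence of the Bose–Einstein and Zipf graphs arises when words are ranked by the energy levels assigned to them.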
Within certain philosophical debates, most notably those concerning the limits of our knowledge, agnosticism seems a plausible, and potentially the right, stance to take. Yet, in order to qualify as a proper stance, and not just the refusal to adopt any, agnosticism must be shown to be in opposition to both endorsement and denial and to be answerable to future evidence. This paper explicates and defends the thesis that agnosticism may indeed define such a third stance that is weaker than scepticism and hence offers a genuine alternative to realism and anti-realism about our cognitive limits.
Memory occupies a fundamental place in philosophy, playing a central role not only in the history of philosophy but also in philosophy of mind, epistemology, and ethics. Yet the philosophy of memory has only recently emerged as an area of study and research in its own right. The Routledge Handbook of Philosophy of Memory is an outstanding reference source on the key topics, problems and debates in this exciting area, and is the first philosophical collection of its kind. The forty-eight chapters are written by an international team of contributors, and divided into nine parts: the nature of memory; the metaphysics of memory; memory, mind and meaning; memory and the self; memory and time; the social dimension of memory; the epistemology of memory; memory and morality; and the history of philosophy of memory. Within these sections, central topics and problems are examined, including: truth, consciousness, imagination, emotion, self-knowledge, narrative, personal identity, time, collective and social memory, internalism and externalism, and the ethics of memory. The final part examines figures in the history of philosophy, including Aristotle, Augustine, Freud, Bergson, Wittgenstein and Heidegger, as well as perspectives on memory in Indian and Chinese philosophy. Essential reading for students and researchers in philosophy, particularly philosophy of mind and psychology, the Handbook will also be of interest to those in related fields, such as psychology and anthropology.
Following the recent call for advancement in knowledge about business ethics in East Asia, this study proposes a complementary perspective on business ethics in South Korea. We challenge the conventional view that South Korea is a strictly collectivist country, where group norms and low trust determine the values and behavior of its members. Using the concept of civil religion, we suggest that the center of the South Korean civil religion can be seen in the affective ties and networks pervading the economic, political, and social institutions, embedded in and guided by Confucian ideals. We argue that South Korea should be seen not as a collectivist low-trust society, but rather as an affective-relational society, in which the relational context determines whether collectivism or individualism prevails. Further, we assert that trust, the cohesive factor of affective ties and networks, has until now been inadequately captured by conventional surveys. Our proposed perspective contributes to a more holistic picture and a more firmly grounded understanding of business ethics in South Korea.
In this paper, we discuss the ethics of automated driving. More specifically, we discuss responsible human-robot coordination within mixed traffic: i.e. traffic involving both automated cars and conventional human-driven cars. We do three main things. First, we explain key differences in robotic and human agency and expectation-forming mechanisms that are likely to give rise to compatibility-problems in mixed traffic, which may lead to crashes and accidents. Second, we identify three possible solution-strategies for achieving better human-robot coordination within mixed traffic. Third, we identify important ethical challenges raised by each of these three possible strategies for achieving optimized human-robot coordination in this domain. Among other things, we argue that we should not just explore ways of making robotic driving more like human driving. Rather, we ought also to take seriously potential ways of making human driving more like robotic driving. Nor should we assume that complete automation is always the ideal to aim for; in some traffic-situations, the best results may be achieved through human-robot collaboration. Ultimately, our main aim in this paper is to argue that the new field of the ethics of automated driving needs to take seriously the ethics of mixed traffic and responsible human-robot coordination.
Ockhamism implies that future contingents may be true, their historical contingency notwithstanding. It is thus opposed to both the Peircean view, according to which all future contingents are false, and Supervaluationist Indeterminism, according to which all future contingents are neither true nor false. The paper seeks to defend Ockhamism against two charges: the charge that it cannot meet the requirement that truths be grounded in reality, and the charge that it proves incompatible with objective indeterminism about the future. In each case, the defence draws on the idea that certain truths are truths only courtesy of others and of what makes the latter true. After introduction of the Ockhamist view, its competitors and implications, a suitable definition of grounded truth is devised that both is faithful to the spirit of the grounding-requirement and allows the Ockhamist to heed that requirement quite comfortably. Then two senses in which the future might be open are introduced: indeterminacy as failure of predetermination by past and present facts, and indeterminacy as failure of entailment by past and present truths. It is argued that while openness in the former sense, but not in the latter sense, coheres with the Ockhamist view, it is only openness in the former sense that matters for objective indeterminism.
We analyse the way in which the principle that 'the whole is greater than the sum of its parts' manifests itself in phenomena of visual perception. For this investigation we use insights and techniques coming from quantum cognition, and more specifically we are inspired by the correspondence of this principle with the phenomenon of the conjunction effect in human cognition. We identify entities of meaning within artefacts of visual perception and rely on how such entities are modelled for corpora of texts, such as the webpages of the World-Wide Web, for our study of how they appear in phenomena of visual perception. We identify the conjunction effect concretely in visual artefacts and analyse its structure in the example of a photograph. We also analyse quantum entanglement between different aspects of meaning in artefacts of visual perception. We confirm its presence by showing that well-chosen experiments on images retrieved accordingly by Google Images give rise to probabilities and expectation values violating the Clauser–Horne–Shimony–Holt version of Bell's inequalities. We point out how this approach can lead to a mathematical description of the meaning content of a visual artefact such as a photograph.
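For reference, the Clauser–Horne–Shimony–Holt (CHSH) inequality referred to above can be stated as follows (generic settings a, a', b, b', in a standard presentation rather than the specific image-retrieval experiments of the paper):

\[ S \;=\; E(a,b) - E(a,b') + E(a',b) + E(a',b'), \qquad \lvert S \rvert \le 2, \]

where E(x,y) is the expectation value of the product of two ±1-valued outcomes under settings x and y. Any local classical model obeys \lvert S \rvert \le 2, whereas quantum mechanics permits violations up to 2\sqrt{2} (the Tsirelson bound); the claim above is that suitably designed Google Images retrieval experiments yield values of S exceeding this classical bound.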
The formalism of abstracted quantum mechanics is applied in a model of the generalized Liar Paradox. Here, the Liar Paradox, a consistently testable configuration of logical truth properties, is considered a dynamic conceptual entity in the cognitive sphere (Aerts, Broekaert, & Smets, Foundations of Science, 1999, 4, 115–132; International Journal of Theoretical Physics, 2000, 38, 3231–3239; Aerts and colleagues, Dialogue in Psychology, 1999, 10; Proceedings of Fundamental Approaches to Consciousness, Tokyo '99; Mind in Interaction). Basically, the intrinsic contextuality of the truth-value of the Liar Paradox is appropriately covered by the abstracted quantum mechanical approach. The formal details of the model are made explicit here for the generalized case. We prove the possibility of constructing a quantum model of the m-sentence generalizations of the Liar Paradox. This includes: (i) the truth–falsehood state of the m-Liar Paradox can be represented by an embedded 2m-dimensional quantum vector in a (2m)^m-dimensional complex Hilbert space, with cognitive interactions corresponding to projections; (ii) the construction of a continuous 'time' dynamics is possible: typical truth and falsehood value oscillations are described by Schrödinger evolution; (iii) the Birkhoff and von Neumann axioms are satisfied by the introduction of 'truth-value by inference' projectors; (iv) time invariance of the unmeasured state.
This paper attempts to answer the question of what defines mnemonic confabulation vis-à-vis genuine memory. The two extant accounts of mnemonic confabulation, as "false memory" and as ill-grounded memory, are shown to be problematic, for they cannot account for the possibility of veridical confabulation, ill-grounded memory, and well-grounded confabulation. This paper argues that the defining characteristic of mnemonic confabulation is that it lacks the appropriate causal history. In the confabulation case, there is no proper counterfactual dependence of the state of seeming to remember on the corresponding past representation.
We analyze different aspects of our quantum modeling approach to human concepts and, more specifically, focus on the quantum effects of contextuality, interference, entanglement, and emergence, illustrating how each of them makes its appearance in specific situations of the dynamics of human concepts and their combinations. We point out the relation of our approach, which is based on an ontology of a concept as an entity in a state changing under the influence of a context, with the main traditional concept theories, that is, prototype theory, exemplar theory, and theory theory. We ponder the question of why quantum theory performs so well in its modeling of human concepts, and we shed light on this question by analyzing the role of complex amplitudes, showing how they allow us to describe interference in the statistics of measurement outcomes, whereas in the traditional theories the statistics of outcomes originates in classical probability weights, without the possibility of interference. The relevance of complex numbers, the appearance of entanglement, and the role of Fock space in explaining contextual emergence, all as unique features of the quantum modeling, are explicitly revealed in this article by analyzing human concepts and their dynamics.
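A minimal illustration of the point about complex amplitudes (generic notation, not the paper's own model): if two concepts contribute complex amplitudes a and b to the same measurement outcome, the resulting probability is

\[ P \;=\; \lvert a + b \rvert^{2} \;=\; \lvert a \rvert^{2} + \lvert b \rvert^{2} + 2\,\lvert a \rvert\,\lvert b \rvert \cos\theta, \]

where \theta is the relative phase of a and b. The cosine term is the interference contribution; classical probability weights, being real and simply additive, have no counterpart to it.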
We introduce, characterize, and normatively analyze the use of affective ties and networks in South Korea, a topic so far overlooked by the international business ethics literature, from an ethical point of view. Whereas the ethics of using Guanxi in China has been comprehensively discussed, Korean informal networks remain difficult for firms in South Korea to manage due to the absence of existing academic debate and research in this field. In this study, we concentrate mainly on the question of whether foreign firms can and will use affective ties in Korea. The informal social network forms are classified and contrasted with the conventional ethical approaches used in international business ethics to assess which categories can be regarded as ethical or unethical. Finally, foreign firms are advised how to cope with and use different affective network types. Although the nature of affective ties and networks in Korea differs from that found, for instance, in China, consistent with the conclusion of prior research we recommend particularistic analysis and decision making regarding the circumstances in which to conclude affective ties and networks and when to opt out. We conclude that foreign firms in Korea should invest in establishing Inmaek, refrain from engaging in Yonjul, and support host country nationals' Yongo ties. Moreover, it is suggested that foreign firms should find ways to monitor and manage informal ties effectively.
That Hegel's theory of civil society is not merely of antiquarian interest is shown by its role as a model for Axel Honneth's normative reconstruction of the market. Sven Ellmers shows that Hegel's ambitious attempt to integrate the atomistic market society into his theory of ethical life (Sittlichkeit) proves, on the one hand, not very convincing, yet is instructive nonetheless. From an analytical point of view, Hegel's theory is inferior to Marx's critique of political economy; from a normative point of view, it enriches the discussion of two fundamental questions of critical theory: What reasons speak against capitalism, and what forms of sociality characterize the economy of a free commonwealth?
Many of our sources of knowledge only afford us knowledge that is inexact. When trying to see how tall something is, or to hear how far away something is, or to remember how long something lasted, we may come to know some facts about the approximate size, distance or duration of the thing in question but we don't come to know exactly what its size, distance or duration is. In some such situations we also have some pointed knowledge of how inexact our knowledge is. That is, we can knowledgeably pinpoint some exact claims that we do not know. We show that standard models of inexact knowledge leave little or no room for such pointed knowledge. We devise alternative models that are not afflicted by this shortcoming.
Reuben Hersh confided to us that, about forty years ago, the late Paul Cohen predicted to him that at some unspecified point in the future, mathematicians would be replaced by computers. Rather than focus on computers replacing mathematicians, however, our aim is to consider the (im)possibility of human mathematicians being joined by “artificial mathematicians” in the proving practice—not just as a method of inquiry but as a fellow inquirer.