Conformity is an often-criticized feature of human belief formation. Although generally regarded as a negative influence on reliability, it has not been widely studied. This paper attempts to determine the epistemic effects of conformity by analyzing a mathematical model of this behavior. In addition to investigating the effect of conformity on the reliability of individuals and groups, this paper attempts to determine the optimal structure for conformity. That is, supposing that conformity is inevitable, what is the best way for conformity effects to occur? The paper finds that in some contexts conformity effects are reliability-inducing and, more surprisingly, that even when conformity is counterproductive, not all methods for reducing its effect are helpful. These conclusions contribute to a larger discussion in social epistemology regarding the effect of social behavior on individual reliability.
Ken Wilber’s AQAL model offers a way to synthesize the partial truths of many theories across various fields of knowledge such as evolutionary biology and sociology, developmental psychology, and perennial and contemporary philosophy to name only a few. Despite its reconciling power and influence, the model has been validly criticized for its static nature and its overemphasis on the ascendant, versus descendant, path of development. This paper points out areas of Wilber’s writing that suggest a way to overcome these criticisms. Doing so allows for the refinement of AQAL’s Twenty Tenets for an extension of its formal, dynamic features. This is accomplished first by relating Wilber’s original dynamic drives to the quadrants and levels enabling the quadrants and levels to then predict additional drives not specified by Wilber. The full set of drives then suggests clarifications of assumptions and applications of the model regarding transcendence and inclusion in order for the refined model to be internally consistent. The result helps correct for AQAL’s ascending bias, a bias which overemphasizes a linear path from lower to higher stages of development. Instead, more possibilities emerge such as those in which ascending development is overly dependent on a higher capacity with inclusion of only basic, lower core capacities. This is in contrast to more fully realizing the potential for development of individuals or societies in the more fundamental, lower levels, through deeper inclusion within higher capacities. Also, given the other horizontal drives that are predicted by the model, further possibilities are explored for differing directions of, and emphasis in, development.
There is growing interest in understanding and eliciting division of labor within groups of scientists. This paper illustrates the need for this division of labor through a historical example, and a formal model is presented to better analyze situations of this type. Analysis of this model reveals that a division of labor can be maintained in two different ways: by limiting information or by endowing the scientists with extreme beliefs. If both features are present, however, cognitive diversity is maintained indefinitely, and as a result agents fail to converge to the truth. Beyond the mechanisms for creating diversity suggested here, this shows that the real epistemic goal is not diversity but transient diversity.
Increasingly, epistemologists are becoming interested in social structures and their effect on epistemic enterprises, but little attention has been paid to the proper distribution of experimental results among scientists. This paper will analyze a model first suggested by two economists, which nicely captures one type of learning situation faced by scientists. The results of a computer simulation study of this model provide two interesting conclusions. First, in some contexts, a community of scientists is, as a whole, more reliable when its members are less aware of their colleagues' experimental results. Second, there is a robust tradeoff between the reliability of a community and the speed with which it reaches a correct conclusion.
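The learning situation described above can be sketched in a few lines of code. The following is a minimal illustration only, not the paper's exact model: agents repeatedly choose between a familiar method of known value and an uncertain new one, observe binary outcomes, and update on their own results plus those of their network neighbours. All parameter values, the prior, and the network shapes are assumptions made for the sketch.

```python
import random

def run_community(neighbours, p_old=0.5, p_new=0.55, rounds=1000, seed=0):
    """neighbours: dict mapping each agent to the agents whose results it sees."""
    rng = random.Random(seed)
    # (successes, trials) pseudo-counts for the uncertain 'new' method;
    # the prior starts optimistic so every agent experiments at first.
    counts = {a: [1.5, 2.0] for a in neighbours}
    for _ in range(rounds):
        results = {}
        for a in neighbours:
            if counts[a][0] / counts[a][1] > p_old:
                # Myopically try whichever method currently looks better.
                results[a] = 1 if rng.random() < p_new else 0
        for a in neighbours:
            for b in [a] + list(neighbours[a]):
                if b in results:
                    counts[a][0] += results[b]
                    counts[a][1] += 1
    # Count agents that still favour the (truly better) new method.
    return sum(counts[a][0] / counts[a][1] > p_old for a in neighbours)

agents = range(6)
complete = {a: [b for b in agents if b != a] for a in agents}
cycle = {a: [(a - 1) % 6, (a + 1) % 6] for a in agents}
```

Running many seeds on the complete graph versus the sparse cycle is one way to probe the reliability/speed tradeoff the abstract describes: heavily connected communities lock in faster, for better or worse.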
Theories of scientific rationality typically pertain to belief. In this paper, the author argues that we should expand our focus to include motivations as well as belief. An economic model is used to evaluate whether science is best served by scientists motivated only by truth, only by credit, or by both truth and credit. In many, but not all, situations, scientists motivated by both truth and credit should be judged as the most rational scientists.
Much of contemporary knowledge is generated by groups, not single individuals. A natural question to ask is: what features make groups better or worse at generating knowledge? This paper surveys research spanning several disciplines that focuses on one aspect of epistemic communities: the way they communicate internally. This research has revealed that a wide variety of different communication structures can be best, but which is best in a given situation depends on particular details of the problem being confronted by the group.
Written in a jargon-free way, Body Matters provides a clear and accessible phenomenological critique of core assumptions in mainstream biomedicine and explores ways in which health and illness are experienced and interpreted differently in various socio-historical situations. By drawing on the disciplines of literature, cultural anthropology, sociology, medical history, and philosophy, the authors attempt to dismantle common presuppositions we have about human afflictions and examine how the methods of phenomenology open up new ways to interpret the body and to re-envision therapy.
This paper approaches the problem of testimony from a new direction. Rather than focusing on the epistemic grounds for testimony, it considers the problem from the perspective of an individual who must choose whom to trust from a population of many would-be testifiers. A computer simulation is presented which illustrates that in many plausible situations, those who trust without attempting to judge the reliability of testifiers outperform those who attempt to seek out the more reliable members of the community. In so doing, it presents a novel defense of the credulist position, which holds that one should trust testimony without considering the underlying reliability of the testifier.
Those who comment on modern scientific institutions are often quick to praise institutional structures that leave scientists to their own devices. These comments reveal an underlying presumption that scientists do best when left alone—when they operate in what we call the ‘scientific state of nature’. Through computer simulation, we challenge this presumption by illustrating an inefficiency that arises in the scientific state of nature. This inefficiency suggests that one cannot simply presume that science is most efficient when institutional control is absent. In some situations, actively encouraging unpopular, risky science would improve scientific outcomes.
Costly signalling theory has become a common explanation for honest communication when interests conflict. In this paper, we provide an alternative explanation for partially honest communication that does not require significant signal costs. We show that this alternative is at least as plausible as traditional costly signalling, and we suggest a number of experiments that might be used to distinguish the two theories.
In seeking to explain the evolution of social cooperation, many scholars are using increasingly complex game-theoretic models. These complexities often model readily observable features of human and animal populations. In the case of previous games analyzed in the literature, these modifications have had radical effects on the stability and efficiency properties of the models. We will analyze the effect of adding spatial structure to two communication games: the Lewis Sender-Receiver game and a modified Stag Hunt game. For the Stag Hunt, we find that the results depart strikingly from previous models. In all cases, the departures increase the explanatory value of the models for social phenomena.
It is widely believed that bringing parties with differing opinions together to discuss their differences will help both in securing consensus and also in ensuring that this consensus closely approximates the truth. This paper investigates this presumption using two mathematical and computer simulation models. Ultimately, these models show that increased contact can be useful in securing both consensus and truth, but it is not always beneficial in this way. This suggests one should not, without qualification, support policies which increase interpersonal contact if one seeks to improve the epistemic performance of groups.
This article presents the evolutionary dynamics of three games: the Nash bargaining game, the ultimatum game, and a hybrid of the two. One might expect that the probability that some behavior evolves in an environment with two games would be near the probability that the same behavior evolves in either game alone. This is not the case for the ultimatum and Nash bargaining games. Fair behavior is more likely to evolve in a combined game than in either game taken individually. This result confirms a conjecture that the complexity of our actual environment provides an explanation for the evolution of fair behavior. Keywords: evolutionary game theory, Nash bargaining game, ultimatum game, fairness.
In the latter half of the twentieth century, philosophers of science have argued (implicitly and explicitly) that epistemically rational individuals might compose epistemically irrational groups and that, conversely, epistemically rational groups might be composed of epistemically irrational individuals. We call the conjunction of these two claims the Independence Thesis, as they together imply that methodological prescriptions for scientific communities and those for individual scientists might be logically independent of one another. We develop a formal model of scientific inquiry, define four criteria for individual and group epistemic rationality, and then prove that the four definitions diverge, in the sense that individuals will be judged rational when groups are not and vice versa. We conclude by explaining implications of the Independence Thesis for (i) descriptive history and sociology of science and (ii) normative prescriptions for scientific communities.
In recent years, many scholars have suggested that the Baldwin effect may play an important role in the evolution of language. However, the Baldwin effect is a multifaceted and controversial process and the assessment of its connection with language is difficult without a formal model. This paper provides a first step in this direction. We examine a game-theoretic model of the interaction between plasticity and evolution in the context of a simple language game. Additionally, we describe three distinct aspects of the Baldwin effect: the Simpson–Baldwin effect, the Baldwin expediting effect and the Baldwin optimizing effect. We find that a simple model of the evolution of language lends theoretical plausibility to the existence of the Simpson–Baldwin and the Baldwin optimizing effects in this arena, but not the Baldwin expediting effect.
The Handicap Principle represents a central theory in the biological understanding of signaling. This paper presents a number of alternative theories to the Handicap Principle and argues that some of these theories may provide a better explanation for the evolution and stability of honest communication.
Fraud and misleading research represent serious impediments to scientific progress. We must uncover the causes of fraud in order to understand how science functions and in order to develop strategies for combating epistemically detrimental behavior. This paper investigates how the incentive to commit fraud is enhanced by the structure of the scientific reward system. Science is an "accumulation process": success begets resources, which beget more success. Through a simplified mathematical model, I argue that this cyclic relationship enhances the appeal of fraud and makes combating it extremely difficult.
Journals regulate a significant portion of the communication between scientists. This paper devises an agent-based model of scientific practice and uses it to compare various strategies for selecting publications by journals. Surprisingly, it appears that the best selection method for a journal is to publish relatively few papers and to select the papers it publishes at random from the available “above threshold” submissions it receives. This strategy is most effective at maintaining the type of diversity needed to solve a particular type of scientific problem. This problem and the limitations of the model are discussed in detail.
Traditionally, epistemologists have distinguished between epistemic and pragmatic goals. In so doing, they presume that much of game theory is irrelevant to epistemic enterprises. I will show that this is a mistake. Even if we restrict attention to purely epistemic motivations, members of epistemic groups will face a multitude of strategic choices. I illustrate several contexts where individuals who are concerned solely with the discovery of truth will nonetheless face difficult game theoretic problems. Examples of purely epistemic coordination problems and social dilemmas will be presented. These show that there is a far deeper connection between economics and epistemology than previously appreciated.
Lewis signaling games illustrate how language might evolve from random behavior. The probability of evolving an optimal signaling language is, in part, a function of what learning strategy the agents use. Here we investigate three learning strategies, each of which allows agents to forget old experience. In each case, we find that forgetting increases the probability of evolving an optimal language. It does this by making it less likely that past partial success will continue to reinforce suboptimal practice. The learning strategies considered here show how forgetting past experience can promote learning in the context of games with suboptimal equilibria.
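The kind of dynamic at issue can be illustrated with a toy version of urn-style reinforcement learning in a two-state Lewis signaling game, where "forgetting" is implemented as geometric discounting of past reinforcement. This is a hedged sketch under assumed parameter values, not a reconstruction of the paper's three learning strategies.

```python
import random

def play(rounds=20000, discount=0.999, seed=1):
    rng = random.Random(seed)
    # sender_w[state][signal] and receiver_w[signal][act] are urn weights.
    sender_w = [[1.0, 1.0], [1.0, 1.0]]
    receiver_w = [[1.0, 1.0], [1.0, 1.0]]

    def draw(weights):
        # Choose option 0 or 1 in proportion to its weight.
        return 0 if rng.random() < weights[0] / (weights[0] + weights[1]) else 1

    for _ in range(rounds):
        state = rng.randrange(2)
        signal = draw(sender_w[state])
        act = draw(receiver_w[signal])
        # Forgetting: geometrically discount all past reinforcement,
        # so early partial successes gradually lose their grip.
        for w in sender_w + receiver_w:
            w[0] *= discount
            w[1] *= discount
        if act == state:  # success reinforces the moves just used
            sender_w[state][signal] += 1.0
            receiver_w[signal][act] += 1.0

    # Expected payoff of the learned profile (1.0 = perfect signaling).
    payoff = 0.0
    for state in range(2):
        for signal in range(2):
            p_sig = sender_w[state][signal] / sum(sender_w[state])
            p_act = receiver_w[signal][state] / sum(receiver_w[signal])
            payoff += 0.5 * p_sig * p_act
    return payoff
```

Setting `discount=1.0` recovers ordinary reinforcement without forgetting, which makes the two regimes easy to compare across many seeds.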
In this paper, we develop the notion of a natural convention, and illustrate its usefulness in a detailed examination of indirect requests in English. Our treatment of convention is grounded in Lewis’s seminal account; we do not here redefine convention, but rather explore the space of possibilities within Lewis’s definition, highlighting certain types of variation that Lewis de-emphasized. Applied to the case of indirect requests, which we view through a Searlean lens, the notion of natural convention allows us to give a nuanced answer to the question: Are indirect requests conventional? In conclusion, we reflect on the consequences of our view for the understanding of the semantics/pragmatics divide.
Modeling and computer simulations, we claim, should be considered core philosophical methods. More precisely, we will defend two theses. First, philosophers should use simulations for many of the same reasons we currently use thought experiments. In fact, simulations are superior to thought experiments in achieving some philosophical goals. Second, devising and coding computational models instill good philosophical habits of mind. Throughout the paper, we respond to the often implicit objection that computer modeling is “not philosophical.”
We evaluate the asymptotic performance of boundedly-rational strategies in multi-armed bandit problems, where performance is measured in terms of the tendency (in the limit) to play optimal actions in either (i) isolation or (ii) networks of other learners. We show that, for many strategies commonly employed in economics, psychology, and machine learning, performance in isolation and performance in networks are essentially unrelated. Our results suggest that the appropriateness of various, common boundedly-rational strategies depends crucially upon the social context (if any) in which such strategies are to be employed.
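For concreteness, one boundedly rational strategy commonly studied in this setting, epsilon-greedy choice, can be sketched as follows. The payoff probabilities, epsilon value, and horizon are illustrative assumptions; the sketch shows why asymptotic performance is a substantive criterion: an isolated epsilon-greedy learner explores forever, so its limiting frequency of optimal play stays below one (roughly 1 − ε/2 for two arms).

```python
import random

def eps_greedy(p=(0.4, 0.6), eps=0.1, rounds=50000, seed=2):
    """Frequency of optimal play on a two-armed Bernoulli bandit."""
    rng = random.Random(seed)
    succ = [0.0, 0.0]
    pulls = [1e-9, 1e-9]  # tiny pseudo-counts avoid division by zero
    best = max(range(2), key=lambda i: p[i])
    optimal_plays = 0
    for _ in range(rounds):
        if rng.random() < eps:
            arm = rng.randrange(2)  # explore uniformly at random
        else:
            arm = max(range(2), key=lambda i: succ[i] / pulls[i])  # exploit
        if arm == best:
            optimal_plays += 1
        pulls[arm] += 1
        succ[arm] += 1 if rng.random() < p[arm] else 0
    return optimal_plays / rounds
```

Embedding many such learners in a network, so that each also sees neighbours' outcomes, is the natural extension for comparing isolated and social performance.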
In this article, we aim to illustrate evolutionary explanations for the emergence of framing effects, discussed in detail in Cristina Bicchieri’s The Grammar of Society. We show how framing effects might evolve that coalesce two economically distinct interactions into a single one, leading to apparently irrational behavior in each individual interaction. Here we consider the now well-known example of the ultimatum game, and show how this ‘irrational’ behavior might result from a single norm which governs behavior in multiple games. We also show how framing effects might result in radically different play in strategically identical situations. We consider the Hawk-Dove game (the game of chicken) and also the Nash bargaining game. Here arbitrary tags or signals might result in one party doing better than another.
Game theory has a prominent role in evolutionary biology, in particular in the ecological study of various phenomena ranging from conflict behaviour to altruism to signalling and beyond. The two central methodological tools in biological game theory are the concepts of Nash equilibrium and evolutionarily stable strategy. While both were inspired by a dynamic conception of evolution, these concepts are essentially static—they only show that a population is uninvadable, but not that a population is likely to evolve. In this article, we argue that a static methodology can lead to misleading views about dynamic evolutionary processes. We advocate, instead, a more pluralistic methodology, which includes both static and dynamic game theoretic tools. Such an approach provides a more complete picture of the evolution of strategic behaviour.
We study the handicap principle in terms of the Sir Philip Sidney game. The handicap principle asserts that cost is required to allow for honest signalling in the face of conflicts of interest. We show that the significance of the handicap principle can be challenged from two new directions. Firstly, both the costly signalling equilibrium and certain states of no communication are stable under the replicator dynamics; however, the latter states are more likely in cases where honest signalling should apply. Secondly, we prove the existence and stability of polymorphisms where players mix between being honest and being deceptive and where signalling costs can be very low. Neither the polymorphisms nor the states of no communication are evolutionarily stable, but they turn out to be more important for standard evolutionary dynamics than the costly signalling equilibrium.
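For reference, the replicator dynamics invoked here take the standard form (written for a single population; in a two-role game such as the Sir Philip Sidney game the same equation is applied to sender and receiver populations separately):

```latex
\dot{x}_i = x_i \bigl( f_i(x) - \bar{f}(x) \bigr),
\qquad \bar{f}(x) = \sum_j x_j f_j(x),
```

where \(x_i\) is the frequency of strategy \(i\) and \(f_i(x)\) its expected payoff against the population; a strategy grows exactly when it does better than the population average.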
Recent research into the evolution of higher cognition has piqued an interest in the effect of natural selection on the ability of creatures to respond to their environment. It is believed that environmental variation is required for plasticity to evolve in cases where the ability to be plastic is costly. We investigate one form of environmental variation: frequency dependent selection. Using tools in game theory, we investigate a few models of plasticity and outline the cases where selection would be expected to maintain it. Ultimately we conclude that frequency dependent selection is likely insufficient to maintain plasticity given reasonable assumptions about its costs. This result is very similar to one aspect of the well-discussed Baldwin effect, where plasticity is first selected for and then later selected against. We show how in these models one would expect plasticity to grow in the population and then be later reduced. Ultimately we conclude that if one is to account for the evolution of behavioral plasticity in this way, one must appeal to a very particular sort of external environmental variation.
The Repugnant Conclusion served an important purpose in catalyzing and inspiring the pioneering stage of population ethics research. We believe, however, that the Repugnant Conclusion now receives too much focus. Avoiding the Repugnant Conclusion should no longer be the central goal driving population ethics research, despite its importance to the fundamental accomplishments of the existing literature.
The handicap principle has come under significant challenge both from empirical studies and from theoretical work. As a result, a number of alternative explanations for honest signaling have been proposed. This paper compares the evolutionary plausibility of one such alternative, the "hybrid equilibrium," to the handicap principle. We utilize computer simulations to compare these two theories as they are instantiated in Maynard Smith's Sir Philip Sidney game. We conclude that, when both types of communication are possible, evolution is unlikely to lead to handicap signaling and is far more likely to result in the partially honest signaling predicted by hybrid equilibrium theory.
Why do voters seek to change the political landscape or to retain it? System justification theory (SJT) proposes that a separate system motive to preserve the existing order drives support for the status quo, and that this motivation operates independently from personal and collective interests. But how does this explanation apply to recent populist shifts in the political order such as Brexit and the emergence of Donald Trump? While the system motive may seem useful in understanding why the usual progressives may want to stick with an established order, it seems insufficient to explain why the more conservative voters would want to upend the establishment. Thus, we compared SJT’s system motive explanation for the system attitudes of voters on both sides of the political divide to an alternative explanation drawn from the newer social identity model of system attitudes (SIMSA). According to SIMSA, the difficulty in explaining the system attitudes of Brexit/Trump and Remain/Clinton voters from SJT’s system motive standpoint can be resolved by focusing instead on the collective interests that both camps seek to satisfy with their votes. We examined these explanations in two studies conducted soon after Brexit and Trump’s election in 2016, with results providing more support for SIMSA than for SJT.
Transfer of information between senders and receivers, of one kind or another, is essential to all life. David Lewis introduced a game theoretic model of the simplest case, where one sender and one receiver have pure common interest. How hard or easy is it for evolution to achieve information transfer in Lewis signaling? The answers involve surprising subtleties. We discuss some of these in terms of evolutionary dynamics in both finite and infinite populations, with and without mutation.
This paper examines and contrasts two closely related evolutionary explanations in human behaviour: signalling theory, and the theory of Credibility Enhancing Displays (CREDs). Both have been proposed to explain costly, dangerous, or otherwise ‘extravagant’ social behaviours, especially in the context of religious belief and practice, and each has spawned significant lines of empirical research. However, the relationship between these two theoretical frameworks is unclear, and research which engages both of them is largely absent. In this paper we seek to address this gap at the theoretical level, examining the core differences between the two approaches and prospects and conditions for future empirical testing. We clarify the dynamical and mechanistic bases of signalling and CREDs as explanatory models and contrast the previous uses to which they have been put in the human sciences. Because of idiosyncrasies regarding those uses, several commonly supposed differences and comparative advantages are actually misleading and not in fact generalisable. We also show that signalling and CREDs theories as explanatory models are not interchangeable, because of deep structural differences. As we illustrate, the proposed causal networks of each theory are distinct, with important differences in the endogeneity of various phenomena within each model and their explanatory targets. As a result, they can be seen as complementary rather than in competition. We conclude by surveying the current state of the literature and identifying the differential predictions which could underpin more comprehensive empirical comparison in future research.
In this multi-faceted volume, Christian and other religiously committed theorists find themselves at an uneasy point in history—between premodernity, modernity, and postmodernity—where disciplines and methods, cultural and linguistic traditions, and religious commitments tangle and cross. Here, leading theorists explore the state of the art of the contemporary hermeneutical terrain. As they address the work of Gadamer, Ricoeur, and Derrida, the essays collected in this wide-ranging work engage key themes in philosophical hermeneutics, hermeneutics and religion, hermeneutics and the other arts, hermeneutics and literature, and hermeneutics and ethics. Readers will find lively exchanges and reflections that meet the intellectual and philosophical challenges posed by hermeneutics at the crossroads. Contributors are Bruce Ellis Benson, Christina Bieber Lake, John D. Caputo, Eduardo J. Echeverria, Benne Faber, Norman Lillegard, Roger Lundin, Brian McCrea, James K. A. Smith, Michael VanderWeele, Kevin Vanhoozer, and Nicholas Wolterstorff.
A series of imitation games involving 3-participant (simultaneous comparison of two hidden entities) and 2-participant (direct interrogation of a hidden entity) formats was conducted at Bletchley Park on the 100th anniversary of Alan Turing’s birth: 23 June 2012. From the ongoing analysis of over 150 games involving judges (expert and non-expert, male and female, adult and child), machines and hidden humans (foils for the machines), we present six particular conversations that took place between human judges and a hidden entity that produced unexpected results. From this sample we focus on a feature of Turing’s machine intelligence test that the mathematician/code breaker did not consider in his examination of machine thinking: the subjective nature of attributing intelligence to another mind.
We introduce a dynamic model for evolutionary games played on a network where strategy changes are correlated according to degree of influence between players. Unlike the notion of stochastic stability, which assumes mutations are stochastically independent and identically distributed, our framework allows for the possibility that agents correlate their strategies with the strategies of those they trust, or those who have influence over them. We show that the dynamical properties of evolutionary games, where such influence neighborhoods appear, differ dramatically from those where all mutations are stochastically independent, and establish some elementary convergence results relevant for the evolution of social institutions.
In this paper we show that there are certain limits as to what applications of Maynard Smith’s concept of evolutionarily stable strategy can tell us about evolutionary processes. We shall argue that ESS is very similar in spirit to a particular branch of rational choice game theory, namely, the literature on refinements of Nash equilibrium. In the first place, ESS can also be viewed as a Nash equilibrium refinement. At a deeper level, ESS shares a common structure with other rational choice equilibrium refinements. An equilibrium is evaluated according to whether it persists under specific kinds of perturbations. In the case of ESS, these perturbations are mutations. However, from a dynamical point of view, focusing exclusively on perturbations of equilibria provides only a partial account of the system under consideration. We will show that this has important consequences when it comes to analyzing game-theoretic models of evolutionary processes. In particular, there are non-ESS states which are significant for evolutionary dynamics. (shrink)