In the literature of collective intentions, the ‘we-intentions’ that lie behind cooperative actions are analysed in terms of individual mental states. The core forms of these analyses imply that all Nash equilibrium behaviour is the result of collective intentions, even though not all Nash equilibria are cooperative actions. Unsatisfactorily, the latter cases have to be excluded either by stipulation or by the addition of further, problematic conditions. We contend that the cooperative aspect of collective intentions is not a property of the intentions themselves, but of the mode of reasoning by which they are formed. We analyse collective intentions as the outcome of team reasoning, a mode of practical reasoning used by individuals as members of groups. We describe this mode of reasoning in terms of formal schemata, discuss a range of possible accounts of group agency, and show how existing theories of collective intentions fit into this framework.
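To see why not all Nash equilibria are cooperative actions, consider the Prisoner's Dilemma. The sketch below is an illustration of my own (the payoff numbers are assumed, not taken from the paper): a brute-force check shows that mutual defection is the game's only Nash equilibrium, yet it is plainly not a cooperative action of the kind a theory of collective intentions is meant to explain.

```python
from itertools import product

# Toy Prisoner's Dilemma with assumed payoffs (row player, column player).
ACTIONS = ["cooperate", "defect"]
PAYOFFS = {
    ("cooperate", "cooperate"): (3, 3),
    ("cooperate", "defect"):    (0, 4),
    ("defect", "cooperate"):    (4, 0),
    ("defect", "defect"):       (1, 1),
}

def is_nash(profile):
    """True if neither player can gain by unilaterally deviating."""
    for i in range(2):
        for alt in ACTIONS:
            deviation = list(profile)
            deviation[i] = alt
            if PAYOFFS[tuple(deviation)][i] > PAYOFFS[profile][i]:
                return False
    return True

print([p for p in product(ACTIONS, repeat=2) if is_nash(p)])
# [('defect', 'defect')] -- equilibrium behaviour, but not a cooperative action.
```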
Trolley problems have been used in the development of moral theory and the psychological study of moral judgments and behavior. Most of this research has focused on people from the West, with implicit assumptions that moral intuitions should generalize and that moral psychology is universal. However, cultural differences may be associated with differences in moral judgments and behavior. We operationalized a trolley problem in the laboratory, with economic incentives and real-life consequences, and compared British and Chinese samples on moral behavior and judgment. We found that Chinese participants were less willing to sacrifice one person to save five others, and less likely to consider such an action to be right. In a second study using three scenarios, including the standard scenario where lives are threatened by an on-coming train, fewer Chinese than British participants were willing to take action and sacrifice one to save five, and this cultural difference was more pronounced when the consequences were less severe than death.
We explore the idea that a group or ‘team’ of individuals can be an agent in its own right and that, when this is the case, individual team members use team reasoning, a mode of reasoning distinct from that of standard decision theory. Our approach is to represent team reasoning explicitly, by means of schemata of practical reasoning in which conclusions about what actions should be taken are inferred from premises about the decision environment and about what agents are seeking to achieve. We use this theoretical framework to compare team reasoning with the individual reasoning of standard decision theory, and to compare various theories of team agency and collective intentionality.
A framing effect occurs when an agent's choices are not invariant under changes in the way a decision problem is presented, e.g. changes in the way options are described (violation of description invariance) or preferences are elicited (violation of procedure invariance). Here we identify those rationality violations that underlie framing effects. We attribute to the agent a sequential decision process in which a “target” proposition and several “background” propositions are considered. We suggest that the agent exhibits a framing effect if and only if two conditions are met. First, different presentations of the decision problem lead the agent to consider the propositions in a different order (the empirical condition). Second, different such “decision paths” lead to different decisions on the target proposition (the logical condition). The second condition holds when the agent's initial dispositions on the propositions are “implicitly inconsistent,” which may be caused by violations of “deductive closure.” Our account is consistent with some observations made by psychologists and provides a unified framework for explaining violations of description and procedure invariance.
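The mechanism can be made concrete with a small worked example of my own (the propositions, the background constraint, and the initial dispositions are assumed for the sketch, not drawn from the paper). The agent settles propositions in whatever order the frame makes salient, following her initial disposition on each unless consistency with her earlier judgments forces the opposite verdict; because her dispositions are implicitly inconsistent, different orders yield different decisions on the target proposition.

```python
from itertools import product

# Three propositions A, B and target T, linked by the background constraint
# that T is true exactly when both A and B are true.
PROPS = ["A", "B", "T"]

def constraint(v):
    """Logical constraint linking the propositions: T <-> (A and B)."""
    return v["T"] == (v["A"] and v["B"])

def consistent(partial):
    """Is some full truth assignment satisfying the constraint compatible
    with the judgments fixed so far?"""
    for values in product([True, False], repeat=len(PROPS)):
        v = dict(zip(PROPS, values))
        if constraint(v) and all(v[p] == val for p, val in partial.items()):
            return True
    return False

def decide(order, dispositions):
    """Consider propositions in the given order; follow the initial
    disposition on each unless deductive closure with respect to earlier
    judgments forces the opposite verdict."""
    judgments = {}
    for p in order:
        preferred = dispositions[p]
        if consistent({**judgments, p: preferred}):
            judgments[p] = preferred
        else:
            judgments[p] = not preferred
    return judgments

# Implicitly inconsistent initial dispositions: accept A, accept B, reject T.
dispositions = {"A": True, "B": True, "T": False}

print(decide(["A", "B", "T"], dispositions))  # T is forced to True
print(decide(["T", "A", "B"], dispositions))  # T stays False, B is forced to False
```

The first path accepts the target proposition; the second rejects it, even though the agent starts from the same dispositions: the framing effect arises from the order of consideration (the empirical condition) combined with the implicit inconsistency of the dispositions (the logical condition).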
Game theory is central to modern understandings of how people deal with problems of coordination and cooperation. Yet, ironically, it cannot give a straightforward explanation of some of the simplest forms of human coordination and cooperation--most famously, that people can use the apparently arbitrary features of "focal points" to solve coordination problems, and that people sometimes cooperate in "prisoner's dilemmas." Addressing a wide readership of economists, sociologists, psychologists, and philosophers, Michael Bacharach here proposes a revision of game theory that resolves these long-standing problems. In the classical tradition of game theory, Bacharach models human beings as rational actors, but he revises the standard definition of rationality to incorporate two major new ideas. He enlarges the model of a game so that it includes the ways agents describe to themselves their decision problems. And he allows the possibility that people reason as members of groups, each taking herself to have reason to perform her component of the combination of actions that best achieves the group's common goal. Bacharach shows that certain tendencies for individuals to engage in team reasoning are consistent with recent findings in social psychology and evolutionary biology. As the culmination of Bacharach's long-standing program of pathbreaking work on the foundations of game theory, this book has been eagerly awaited. Following Bacharach's premature death, Natalie Gold and Robert Sugden edited the unfinished work and added two substantial chapters that allow the book to be read as a coherent whole.
Standard game theory cannot explain the selection of payoff-dominant outcomes that are best for all players in common-interest games. Theories of team reasoning can explain why such mutualistic cooperation is rational. They propose that teams can be agents and that individuals in teams can adopt a distinctive mode of reasoning that enables them to do their part in achieving Pareto-dominant outcomes. We show that it can be rational to play payoff-dominant outcomes, given that an agent group identifies. We compare team reasoning to other theories that have been proposed to explain how people can achieve payoff-dominant outcomes, especially with respect to rationality. Some authors have hoped that it would be possible to develop an argument that it is rational to group identify. We identify some large—probably insuperable—problems with this project and sketch some more promising approaches, whereby the normativity of group identification rests on morality.
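The selection problem can be illustrated with the Hi-Lo game (a standard example; the payoff numbers below are my own assumption for this sketch). Both (hi, hi) and (lo, lo) are Nash equilibria, so individual best-response reasoning cannot single out the payoff-dominant outcome; a player who group identifies instead asks which profile is best for the team and plays her component of it.

```python
from itertools import product

# Hi-Lo: coordinating on "hi" pays each player 2, coordinating on "lo" pays 1,
# miscoordination pays 0. Both coordination profiles are Nash equilibria, but
# only (hi, hi) is payoff-dominant.
ACTIONS = ["hi", "lo"]
PAYOFFS = {
    ("hi", "hi"): (2, 2),
    ("lo", "lo"): (1, 1),
    ("hi", "lo"): (0, 0),
    ("lo", "hi"): (0, 0),
}

def team_reasoned_profile():
    """Once the players group identify, the team selects the profile that is
    best for the group (here, maximal total payoff) and each member plays her
    component of it."""
    return max(product(ACTIONS, repeat=2), key=lambda p: sum(PAYOFFS[p]))

profile = team_reasoned_profile()
print(profile)                 # ('hi', 'hi') -- the payoff-dominant outcome
print(profile[0], profile[1])  # each player's component of the team plan
```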
There is a long-standing debate in philosophy about whether it is morally permissible to harm one person in order to prevent a greater harm to others and, if not, what is the moral principle underlying the prohibition. Hypothetical moral dilemmas are used in order to probe moral intuitions. Philosophers use them to achieve a reflective equilibrium between intuitions and principles, psychologists to investigate moral decision-making processes. In the dilemmas, the harms that are traded off are almost always deaths. However, the moral principles and psychological processes are supposed to be broader than this, encompassing harms other than death. Further, if the standard pattern of intuitions is preserved in the domain of economic harm, then that would open up the possibility of studying behavior in trolley problems using the tools of experimental economics. We report the results of two studies designed to test whether the standard patterns of intuitions are preserved when the domain and severity of harm are varied. Our findings show that the difference in moral intuitions between bystander and footbridge scenarios is replicated across different domains and levels of physical and non-physical harm, including economic harms.
Decision theory explains weakness of will as the result of a conflict of incentives between different transient agents. In this framework, self-control can only be achieved by the I-now altering the incentives or choice-sets of future selves. There is no role for an extended agency over time. However, it is possible to extend game theory to allow multiple levels of agency. At the inter-personal level, theories of team reasoning allow teams to be agents, as well as individuals. I apply team reasoning at the intra-personal level, taking the self as a team of transient agents over time. This allows agents to ask not just ‘What should I-now do?’, but also ‘What should I, the person over time, do?’, which may enable agents to achieve self-control. The resulting account is Aristotelian in flavour, as it involves reasoning schemata and perception, and it is compatible with some of the psychological findings about self-control.
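A toy numerical sketch shows how the two questions can come apart (the reward, cost, and present-bias numbers are my own assumptions, not the paper's model): the transient I-now, which discounts delayed consequences, prefers to indulge, while the person over time, who counts every timeslice's payoff equally, prefers to resist. Which verdict governs depends on the unit of agency the reasoner identifies with.

```python
# Indulging yields an immediate reward but a larger delayed cost;
# resisting yields nothing either way. All numbers are illustrative.
IMMEDIATE_REWARD = 3.0   # enjoyed now by the transient self
DELAYED_COST = 5.0       # borne by a later self
BETA = 0.5               # assumed present bias of the transient self

def value_to_i_now(action):
    """The transient agent's question: 'What should I-now do?'
    Delayed consequences are discounted by the present-bias factor."""
    if action == "indulge":
        return IMMEDIATE_REWARD - BETA * DELAYED_COST
    return 0.0

def value_to_person_over_time(action):
    """The extended agent's question: 'What should I, the person over time, do?'
    Every timeslice's payoff counts equally."""
    if action == "indulge":
        return IMMEDIATE_REWARD - DELAYED_COST
    return 0.0

for action in ("indulge", "resist"):
    print(action, value_to_i_now(action), value_to_person_over_time(action))
# The I-now prefers to indulge (3 - 0.5*5 = 0.5 > 0), while the person over
# time prefers to resist (3 - 5 = -2 < 0).
```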
I connect commodification arguments to an empirical literature, present a mechanism by which commodification may occur, and show how this may restrict the range of goods and services that are subject to commodification, therefore having implications for the use of commodification arguments in political theory. Commodification arguments assert that some people’s trading a good or service can debase it for third parties. They consist of a normative premise, a theory of value, and an empirical premise, a mechanism whereby some people’s market exchange affects how goods can be valued by others. Hence, their soundness depends on the existence of a suitable candidate mechanism for the empirical premise. The ‘motivation crowding effect’ has been cited as the empirical base of commodification. I show why the main explanations of motivation crowding – signaling and over-justification – do not provide mechanisms that could underpin the empirical premise. In doing this, I reveal some requirements on any candidate mechanism.
Hypothetical trolley problems are widely used to elicit moral intuitions, which are employed in the development of moral theory and the psychological study of moral judgments. The scenarios used are outlandish, and some philosophers and psychologists have questioned whether the judgments made in such unrealistic and unfamiliar scenarios are a reliable basis for theory-building. We present two experiments that investigate whether differences in moral judgment due to the role of the agent, previously found in a standard trolley scenario, persist when the structure of the problem is transplanted to a more familiar context. Our first experiment compares judgments in hypothetical scenarios; our second experiment operationalizes some of those scenarios in the laboratory, allowing us to observe judgments about decisions that are really being made. In the hypothetical experiment, we found that the role effect reversed in our more familiar context, both in judgments about what the actor ought to do and in judgments about the moral rightness of the action. However, in our laboratory experiment, the effects reversed back or disappeared. Among judgments of what the actor ought to do, we found the same role effect as in the standard hypothetical trolley scenario, but the effect of role on moral judgments disappeared.
Sometimes we make a decision about an action we will undertake later and form an intention, but our judgment of what it is best to do undergoes a temporary shift when the time for action comes round. What makes it rational not to give in to temptation? Many contemporary solutions privilege diachronic rationality; in some “rational non-reconsideration” (RNR) accounts, once the agent forms an intention, it is rational not to reconsider. This leads to other puzzles: how can someone be motivated to follow a plan that is contrary to their current judgment? How can it be rational to form a plan to resist if we can predict that our judgment will shift? I show how these puzzles can be solved in a framework where there are multiple units of agency, distinguishing between the judgments of the timeslice and those of the person over time, and allowing that the timeslice can “self-identify”, taking the person over time as the relevant unit of agency and doing intrapersonal team reasoning (with a different causal role for intentions than in RNR accounts). On my account, resisting temptation is compatible with synchronic rationality, so synchronic and diachronic rationality are aligned. However, either resisting or succumbing to temptation can be instrumentally rational, depending on the unit of agency that is identified with. In order to show why we ought to resist temptation, we need to draw on a non-instrumental rationale. I sketch possible routes for doing this.
Trust can be thought of as a three-place relation: A trusts B to do X. Trustworthiness has two components: competence (does the trustee have the relevant skills, knowledge and abilities to do X?) and willingness (is the trustee intending or aiming to do X?). This chapter is about the willingness component, and the different motivations that a trustee may have for fulfilling trust. The standard assumption in economics is that agents are self-regarding, maximizing their own consumption of goods and services. This is too restrictive. In particular, people may be concerned with the outcomes of others, and they may be concerned to follow ethical principles. I distinguish weak trustworthiness, which places no restrictions on B’s motivation for doing X, from strong trustworthiness, where the behaviour must have a particular non-selfish motivation: in finance, the fiduciary commitment to promote the interests of the truster. I discuss why strong trustworthiness may be more efficient and also normatively preferable to weak. In finance, there is asymmetric information between buyer and seller, which creates a need for trustworthy assessment of products. It also creates an ambiguity about whether the relationship is one of buyer and seller, governed by caveat emptor, or a fiduciary relationship of advisor and client. This means that there are two possible reasons why trust may be breached: because the trustee didn’t realise that the truster framed the relationship as a fiduciary one, or because the trustee did realise but actively sought to take advantage of the trust. Correspondingly, there are two possible types of agent: normal people, who are not always self-regarding and who are trust responsive (if they believe that they are being trusted then they are likely to fulfil that trust), and knaves, after Hume’s character who is always motivated by his own private interest. We can increase the trustworthiness of normal people by getting them to re-frame the situation as one of trust, so they will be strongly trustworthy (i.e. a change of institutional culture), and by providing non-monetary incentives (the correct choice of incentive will depend on exactly what their non-selfish motivation is). Knaves need sanctions, which can make them weakly trustworthy. However, this is a delicate balance, because sanctions can crowd out normative frames. We can also increase the trustworthiness of financiers by making finance less attractive to knaves; changing the mix of types in finance could help support the necessary cultural change.
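The contrast between knaves and trust-responsive normal people can be sketched as a stylised trustee decision of my own devising (the payoff numbers, the trust-responsiveness premium, and the function name trustee_choice are illustrative assumptions, not the chapter's model): sanctions can make a knave weakly trustworthy, while re-framing the situation as one of trust can make a normal person strongly trustworthy without any sanction.

```python
# A binary trust interaction: the trustee chooses to "fulfil" or "betray" the
# trust. Knaves maximise their own material payoff; normal, trust-responsive
# agents also weigh the truster's interests when they frame the situation as
# one of trust. All numbers are assumed for illustration.
KEEP_GAIN = 10.0        # trustee's material gain from betraying the trust
FULFIL_GAIN = 4.0       # trustee's material gain from fulfilling the trust

def trustee_choice(agent_type, sanction=0.0, framed_as_trust=False):
    """Return the trustee's choice under a possible sanction for betrayal."""
    betray_value = KEEP_GAIN - sanction
    fulfil_value = FULFIL_GAIN
    if agent_type == "normal" and framed_as_trust:
        # Trust responsiveness: an assumed non-selfish premium on fulfilling.
        fulfil_value += 8.0
    return "fulfil" if fulfil_value >= betray_value else "betray"

# A knave fulfils only when sanctions make betrayal unprofitable
# (weak trustworthiness).
print(trustee_choice("knave"))                          # betray
print(trustee_choice("knave", sanction=7.0))            # fulfil
# A normal person can be strongly trustworthy: re-framing the relationship as
# one of trust suffices, without any sanction.
print(trustee_choice("normal", framed_as_trust=True))   # fulfil
```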
Normative theories can be useful in developing descriptive theories, as when normative subjective expected utility theory is used to develop descriptive rational choice theory and behavioral game theory. “Ought” questions are also the essence of theories of moral reasoning, a domain of higher mental processing that could not survive without normative considerations.
We examine how trustworthy behaviour can be achieved in the financial sector. The task is to ensure that firms are motivated to pursue the long-term interests of customers rather than pursuing short-term profits. Firms’ self-interested pursuit of reputation, combined with regulation, is often not sufficient to ensure that this happens. We argue that trustworthy behaviour requires that at least some actors show a concern for the wellbeing of clients, or a respect for imposed standards, and that the behaviour of these actors is copied in such a way that it becomes a behavioural norm. We briefly suggest what such behavioural norms might need to be if trustworthy behaviour is to be achieved, and consider how they might be supported; we describe the research that is necessary in order to understand these norms in more detail. We argue that the norms of traders are different from the norms of those engaged in other activities, since traders are inevitably self-interested, and we consider the risk that traders’ norms might undermine those of other actors. We analyse the task for governance in dealing with this problem, and the role which leadership by a corporate board and management might play in doing this. We describe the need for further research into how this might be done.
Theories of collective intentions must distinguish genuinely collective intentions from coincidentally harmonized ones. Two apparently equally apt ways of doing so are the ‘neo-reductionism’ of Bacharach (2006) and Gold and Sugden (2007a) and the ‘non-reductionism’ of Searle (1990, 1995). Here, we present findings from theoretical linguistics that show that ‘we’ is not a cognitive primitive, but is composed of notions of ‘I’ and grouphood. The ramifications of this finding for the structure of both grammatical and lexical systems suggest that an understanding of collective intentionality does not require a primitive we-intention, but only the notion of grouphood implicit in team reasoning, coupled with the individual concept ‘I’. This, we argue, supports neo-reductionism but poses difficulties for non-reductionism.
“Das Adam Smith Problem” is the name given by nineteenth-century German scholars to the question of how to reconcile the role of self-interest in the Wealth of Nations with Smith’s advocacy of sympathy in the Theory of Moral Sentiments. As the discipline of economics developed, it focused on the interaction of selfish agents pursuing their private interests. However, behavioral economists have rediscovered the existence and importance of multiple motivations, and a new Das Adam Smith Problem has arisen: how to accommodate self-regarding and pro-social motivations in a single system. This question is particularly important because of evidence of motivation crowding, where paying people can backfire, with payments achieving the opposite of the effects intended. Psychologists have proposed a mechanism for the crowding out of “intrinsic motivations” for doing a task, when payment is used to incentivize effort. However, they argue that pro-social motivations are different from these intrinsic motivations, implying that crowding out of pro-social motivations requires a different mechanism. In this essay I present an answer to the new Das Adam Smith Problem, proposing a mechanism that can underpin the crowding out of both pro-social and intrinsic motivations, whereby motivations are prompted by frames and motivation crowding is underpinned by the crowding out of frames. I explore some of the implications of this mechanism for research and policy.