Some of the most celebrated images of nineteenth-century American photography emerged from government-sponsored geological surveys whose purpose was to study and document western territories. Timothy H. O'Sullivan and William Bell, two survey photographers who joined expeditions in the 1860s and 1870s, opened the eyes of nineteenth-century Americans to the western frontier. Highlighting a recent Smart Museum of Art acquisition, One/Many brings together an exquisite group of photographs by Bell and O'Sullivan. Particularly noteworthy are their photographic panoramas, assemblages of individual images joined together to form a continuous, horizontal landscape view. These panoramas have not been exhibited in well over a century and have never before been published. For the first time, One/Many investigates their role and purpose both within and outside of the surveys, taking into account the larger context of nineteenth-century modes of viewing. The volume also allows the work of the little-known Bell to be better understood alongside that of his more famous colleague.
Judgment aggregation problems are language-dependent in that they may be framed in different yet equivalent ways. We formalize this dependence via the notion of translation invariance, adopted from the philosophy of science, and we argue for the normative desirability of translation invariance. We characterize the class of translation-invariant aggregation functions in the canonical judgment aggregation model, which requires collective judgments to be complete. Since there are reasonable translation-invariant aggregation functions, our result can be viewed as a possibility theorem. At the same time, we show that translation invariance does have certain normatively undesirable consequences (e.g. failure of anonymity). We present a way of circumventing them by moving to a more general model of judgment aggregation, one that allows for incomplete collective judgments.
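To fix ideas, here is a minimal sketch of the canonical setting: propositionwise majority voting on the classic discursive-dilemma agenda. The function, the agenda, and the profile are illustrative assumptions introduced for this example, not the paper's formal apparatus.

```python
# Minimal sketch (illustrative, not the paper's model): an aggregation
# function maps a profile of individual judgment sets to one collective
# judgment set. Propositionwise majority is the most familiar example.

def majority(profile: list[dict[str, bool]]) -> dict[str, bool]:
    """Accept a proposition iff a strict majority of individuals accept it."""
    n = len(profile)
    return {prop: 2 * sum(j[prop] for j in profile) > n for prop in profile[0]}

# Three individually consistent judges on the agenda {p, q, p&q}.
profile = [
    {"p": True,  "q": True,  "p&q": True},
    {"p": True,  "q": False, "p&q": False},
    {"p": False, "q": True,  "p&q": False},
]
print(majority(profile))  # {'p': True, 'q': True, 'p&q': False} -- inconsistent
```

The collective accepts p and q yet rejects their conjunction, even though every individual is consistent. Translation invariance raises a further, language-level question: whether reframing the same agenda in a logically equivalent way (say, trading p&q for ¬(¬p ∨ ¬q)) can change the collective outcome.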
In ‘Corroborating Testimony, Probability and Surprise’, Erik J. Olsson ascribes to L. Jonathan Cohen the claims that if two witnesses provide us with the same information, then the less probable the information is, the more confident we may be that the information is true (C), and the more strongly the information is corroborated (C*). We question whether Cohen intends anything like claims (C) and (C*). Furthermore, Cohen discusses the concurrence of witness reports within a context of independent witnesses, whereas the witnesses in Olsson's model are not independent in the standard sense. We argue that there is much more than, in Olsson's words, ‘a grain of truth’ to claim (C), both on his own characterization as well as on Cohen's characterization of the witnesses. We present an analysis for independent witnesses in the contexts of decision-making under risk and decision-making under uncertainty and generalize the model for n witnesses. As to claim (C*), Olsson's argument is contingent on the choice of a particular measure of corroboration and is not robust in the face of alternative measures. Finally, we delimit the set of cases to which Olsson's model is applicable. 1 Claim (C) examined for Olsson's characterization of the relationship between the witnesses 2 Claim (C) examined for two or more independent witnesses 3 Robustness and multiple measures of corroboration 4 Discussion.
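The flavor of the probabilistic issue can be conveyed with a standard two-witness Bayesian computation; the model below is a common textbook idealization, not Olsson's or Cohen's own. Let the prior be $P(A) = p$, and suppose each of two independent witnesses reports $A$ with probability $r$ if $A$ is true and with probability $1-r$ if $A$ is false. Then

$$
P(A \mid E_1, E_2) \;=\; \frac{p\,r^2}{p\,r^2 + (1-p)(1-r)^2},
\qquad
\frac{P(A \mid E_1, E_2)}{P(A)} \;=\; \frac{r^2}{p\,r^2 + (1-p)(1-r)^2}.
$$

For $r > 1/2$, the ratio on the right grows as $p$ shrinks: agreement confirms less probable information more strongly, even though the absolute posterior still falls with $p$. This is one precise sense in which claim (C) can contain a grain of truth while requiring careful statement.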
Initiatives relying on artificial intelligence (AI) to deliver socially beneficial outcomes—AI for social good (AI4SG)—are on the rise. However, existing attempts to understand and foster AI4SG initiatives have so far been limited by the lack of normative analyses and a shortage of empirical evidence. In this Perspective, we address these limitations by providing a definition of AI4SG and by advocating the use of the United Nations’ Sustainable Development Goals (SDGs) as a benchmark for tracing the scope and spread of AI4SG. We introduce a database of AI4SG projects gathered using this benchmark, and discuss several key insights, including the extent to which different SDGs are being addressed. This analysis makes possible the identification of pressing problems that, if left unaddressed, risk hampering the effectiveness of AI4SG initiatives.
That AI will have a major impact on society is no longer in question. Current debate turns instead on how far this impact will be positive or negative, for whom, in which ways, in which places, and on what timescale. In order to frame these questions in a more substantive way, in this prolegomena we introduce what we consider the four core opportunities for society offered by the use of AI, four associated risks which could emerge from its overuse or misuse, and the opportunity costs associated with its underuse. We then offer a high-level view of the emerging advantages for organisations of taking an ethical approach to developing and deploying AI. Finally, we introduce a set of five principles which should guide the development and deployment of AI technologies. The development of laws, policies and best practices for seizing the opportunities and minimizing the risks posed by AI technologies would benefit from building on ethical frameworks such as the one offered here.
In this article we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combating the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data- and computation-intensive AI systems. We assess the carbon footprint of AI research, and the factors that influence AI’s greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant and highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based and effective governance to become a winning strategy. We conclude by identifying the European Union as being especially well-placed to play a leading role in this policy response and provide 13 recommendations that are designed to identify and harness the opportunities of AI for combating climate change, while reducing its impact on the environment.
Esports competitions have become a worldwide phenomenon with millions of viewers and fans. Learn about how esports competitions deal with issues ranging from gambling to cheating software. Aligned with curriculum standards, these books also highlight key 21st Century content, including information, media, and technology skills. Engaging content and hands-on activities encourage creative and design thinking. The book includes a table of contents, glossary, index, author biography, and sidebars.
As part of Timothy Williamson’s inquiry into how we gain knowledge from thought experiments, he submits various ways of representing the argument underlying Gettier cases in modal and counterfactual terms. But all of these ways run afoul of the problem of deviance: that there are cases that might satisfy the descriptions given by a Gettier text but still fail to be counterexamples to the justified true belief model of knowledge. Problematically, this might mean either that it is too hard to know the truth of the premises of the arguments Williamson presents or that the relevant premises might be false. I argue that the Gettier-style arguments can make do with weaker premises (and a slightly weaker conclusion) that suffice to show that “necessarily, if one justifiably believes some true proposition p, then one knows p” is not true. The modified version of the argument is preferable because it is not troubled by the existence of deviant Gettier cases.
Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these principles converge upon a set of agreed-upon principles, or diverge, with significant disagreement over what constitutes ‘ethical AI.’ Our analysis finds a high degree of overlap among the sets of principles we analyze. We then identify an overarching framework consisting of five core principles for ethical AI. Four of them are core principles commonly used in bioethics: beneficence, non-maleficence, autonomy, and justice. On the basis of our comparative analysis, we argue that a new principle is needed in addition: explicability, understood as incorporating both the epistemological sense of intelligibility (as an answer to the question ‘how does it work?’) and the ethical sense of accountability (as an answer to the question ‘who is responsible for the way it works?’). In the ensuing discussion, we note the limitations and assess the implications of this ethical framework for future efforts to create laws, rules, technical standards, and best practices for ethical AI in a wide range of contexts.
Destined to be a classic: a tale from the Buddhist sutras told in memorable and engaging rhyming verse in the tradition of Dr. Seuss and Shel Silverstein. Children and their parents will both love it, and be encouraged. Illustrated in a style that brings both humor and tradition, by the renowned and award-winning illustrator of Wisdom's Illustrated Lotus Sutra and many other books. I See You, Buddha will help children (and their parents) who have difficulty with patience learn to see the good in everyone, including themselves! I See You, Buddha is based on a chapter in the Lotus Sutra, one of the most influential Buddhist texts worldwide, a classical scripture that has inspired a whole genre of works, especially in Japan, known as Lotus Literature. The Lotus Sutra teaches the Way of the Bodhisattva (beings engaged in compassionate, enlightened activity in the service of all) by offering examples of what this activity might look like in the world. One such model in the text is Bodhisattva Never Disrespectful (or Never Disparaging), who, despite troubling encounters with and even harsh treatment from others, bows down respectfully to everyone, recognizing their Buddha Nature and honoring their own journeys along the bodhisattva path to enlightenment, whether they know they're future Buddhas or not!
Science and mathematics continually change in their tools, methods, and concepts. Many of these changes are not just modifications but progress: steps to be admired. But what constitutes progress? This dissertation addresses one central source of intellectual advancement in both disciplines: reformulating a problem-solving plan into a new, logically compatible one. For short, I call these cases of compatible problem-solving plans "reformulations." Two aspects of reformulations are puzzling. First, reformulating is often unnecessary. Given that we could already solve a problem using an older formulation, what do we gain by reformulating? Second, some reformulations are genuinely trivial or insignificant. Merely replacing one symbol with another does not lead to intellectual progress. What distinguishes significant reformulations from trivial ones? According to what I call "conceptualism", reformulations are intellectually significant when they provide a different plan for solving problems. Significant reformulations provide inferentially different routes to the same solution. In contrast, trivial reformulations provide exactly the same problem-solving plans, and hence they do not change our understanding. This answers the second question about what distinguishes trivial from significant reformulations. However, the first question remains: what makes a new way of solving an old problem valuable? Here, a bevy of practical considerations come to mind: one formulation might be faster, less complicated, or use more familiar concepts. According to "instrumentalism," these practical benefits are all there is to reformulating. Some reformulations are simply more instrumentally valuable for meeting the aims of science than others. At another extreme, "fundamentalism" contends that a reformulation is valuable when it provides a more fundamental description of reality. According to this view, some reformulations directly contribute to the metaphysical aim of carving reality at its joints. Conceptualism develops a middle ground between instrumentalism and fundamentalism, preserving their benefits without their costs. I argue that the epistemic value of significant reformulations does not reduce to either practical or metaphysical value. Reformulations are valuable because they are a constitutive part of problem-solving. Both science and mathematics aim at solving all possible problems within their respective domains. Meeting this aim requires being able to plan for any possible problem-solving context, and this requires reformulating. By reformulating, we clarify what we need to know to solve problems. Still, one might wonder whether the value of reformulations requires underlying differences in explanatory power. According to "explanationism," a reformulation is valuable only when it provides a better explanation. Explanationism stands as a rival middle ground position to my own. However, it faces numerous counterexamples. In many cases, two reformulations provide the same explanation while nonetheless providing different ways of understanding a phenomenon. Hence, reformulating can be valuable even when neither formulation is more explanatory. Methodologically, I draw on a variety of case studies to support my account of reformulation. These range from classical mechanics to quantum chemistry, along with examples from mathematics. Symmetry arguments provide a paradigmatic example: the mathematics of symmetry groups radically recasts quantum mechanics and quantum chemistry.
Nevertheless, elementary approaches exist that eschew this additional mathematical apparatus, solving problems in a more tedious but less mathematically demanding manner. Further examples include reformulations of quantum field theory, Arabic vs. Roman numerals, and Fermat's little theorem in number theory. In each case, my account identifies how reformulations change and improve our understanding of science and mathematics.
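The Fermat example admits a toy concretization (ours, not the dissertation's): two logically compatible plans for the same modular-exponentiation problem, where the second routes through a different inference, namely Fermat's little theorem.

```python
# Illustrative sketch: two problem-solving plans for 7**222 mod 13.
# Both are logically compatible (same answer); they differ in the
# inferential route -- the kind of difference conceptualism treats
# as a significant reformulation.

def direct(base: int, exp: int, mod: int) -> int:
    """Plan 1: multiply out step by step, reducing modulo `mod` as we go."""
    result = 1
    for _ in range(exp):
        result = (result * base) % mod
    return result

def via_fermat(base: int, exp: int, p: int) -> int:
    """Plan 2: for a prime p not dividing base, Fermat's little theorem
    gives base**(p-1) == 1 (mod p), so only exp mod (p-1) matters."""
    return pow(base, exp % (p - 1), p)

assert direct(7, 222, 13) == via_fermat(7, 222, 13) == 12
```

The first plan performs 222 multiplications; the second reduces the exponent to 6 before computing anything, an inferentially different route to the same solution.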
This paper originally expands the orthodox conception of moral blameworthiness to account for blameworthiness for conduct and outcomes across normative domains, showcases the account’s power to explain epistemic blameworthiness for behavior and belief in particular, and highlights the account’s significance for theorizing about normativity and responsibility. Notably, the account challenges the prevailing polarization between deontic, axiological, and aretaic approaches to moral and epistemic normativity by suggesting that these so-called “competitors” serve as cooperators in explaining responsibility. The account also highlights the way forgotten Socratic conceptions of epistemic normativity, which put forth epistemic duties to behave instead of more fashionable duties to believe, play a central role in explaining epistemic responsibility. By proposing this paradigm shift from belief-centered to behavior-centered theorizing about epistemic normativity and responsibility, the account reveals the doxastic freedom problem to be a pseudo-problem. The paper answers an objection to this approach to the problem raised by Neil Levy in this journal, an objection which has important implications for cases of culpable ignorance. The paper challenges the standard view of such cases that moral blameworthiness for ignorant conduct requires doxastic blameworthiness for ignorant belief.
Many feminists (e.g. T. Bettcher and B. R. George) argue for a principle of first person authority (FPA) about gender, i.e. that we should (at least) not disavow people's gender self-categorisations. However, there is a feminist tradition resistant to FPA about gender, which I call “radical feminism”. Feminists in this tradition define gender-categories via biological sex, thus denying non-binary and trans self-identifications. Using a taxonomy by B. R. George, I begin to demystify the concept of gender. We are also able to use the taxonomy to model various feminist approaches. It becomes easier to see how conceptualisations of gender which allow for FPA often do not allow for understanding female subjugation as being rooted in reproductive biology. I put forward a conceptual scheme: radical FPA feminism. If we accept FPA, but also radical feminist concerns, radical FPA feminism is an attractive way of conceptualising gender.
In Studies in Ideology, poet and theorist J.M. Beach delivers a comprehensive analysis of the history and theory of "ideology." Beach offers his theory of ideology in conjunction with an extensive reading of history and contemporary affairs and ends the book with a brief biographical sketch of his own intellectual maturation, which is embedded within a daring and timely critique of Christianity.
In a series of papers, Donald Davidson (1984, 1986, 1991) developed a powerful argument against the claim that linguistic conventions provide any explanatory purchase on an account of linguistic meaning and communication. This argument, as I shall develop it, turns on cases of what I call lexical innovation: cases in which a speaker uses a sentence containing a novel expression-meaning pair, but nevertheless successfully communicates her intended meaning to her audience. I will argue that cases of lexical innovation motivate a dynamic conception of linguistic conventions according to which background linguistic conventions may be rapidly expanded to incorporate new word meanings or shifted to revise the meanings of words already in circulation. I argue both that this dynamic account of conventions resolves the problem raised by cases of lexical innovation and that it does so in a way that is preferable to those who—like Davidson—deny important explanatory roles for linguistic conventions.
This paper explores the significance of intelligent social behavior among non-human animals for philosophical theories of communication. Using the alarm call system of vervet monkeys as a case study, I argue that interpersonal communication (or what I call “minded communication”) can and does take place in the absence of the production and recognition of communicative intentions. More generally, I argue that evolutionary theory provides good reasons for maintaining that minded communication is both temporally and explanatorily prior to the use of communicative intentions. After developing these negative points about the place of communicative intentions in detail, I provide a novel alternative account according to which minded communication is characterized in terms of patterns of action and response that function to coordinate the representational mental states of agents. I show that an account which centers on patterns of representational coordination of this sort is well suited to capture the theoretical roles associated with minded communication and that it does so in a way that provides a good fit with comparative facts about the presence of minded communication among non-human animals.
Neil Tennant and Joseph Salerno have recently attempted to rigorously formalize Michael Dummett's argument for logical revision. Surprisingly, both conclude that Dummett commits elementary logical errors, and hence fails to offer an argument that is even prima facie valid. After explicating the arguments Salerno and Tennant attribute to Dummett, I show how broader attention to Dummett's writings on the theory of meaning allows one to discern, and formalize, a valid argument for logical revision. Then, with a rigorous statement of the argument in hand, I delineate four possible anti-Dummettian responses. Following recent work by Stewart Shapiro and Crispin Wright, I conclude that progress in the anti-realist's dialectic requires greater clarity about the key modal notions used in Dummett's proof.
Contributors to the literature on gamesmanship typically assume that gamesmanship can be clearly distinguished from other legal strategies used in sports. In this article, we argue that this is a m...
An important objection to the "higher-order" theory of consciousness turns on the possibility of higher-order misrepresentation. I argue that the objection fails because it illicitly assumes a characterization of consciousness explicitly rejected by HO theory. This in turn raises the question of what justifies an initial characterization of the data a theory of consciousness must explain. I distinguish between intrinsic and extrinsic characterizations of consciousness, and I propose several desiderata a successful characterization of consciousness must meet. I then defend the particular extrinsic characterization of the HO theory, the "transitivity principle," against its intrinsic rivals, thereby showing that the misrepresentation objection conclusively falls short.
In this paper, I explore two contrasting conceptions of the social character of language. The first takes language to be grounded in social convention. The second, famously developed by Donald Davidson, takes language to be grounded in a social relation called triangulation. I aim both to clarify and to evaluate these two conceptions of language. First, I propose that Davidson’s triangulation-based story can be understood as the result of relaxing core features of conventionalism pertaining to both common-interest and diachronic stability—specifically, Davidson does not require uses of language to be self-perpetuating, in the way required by conventionalism, in order to be bona fide components of linguistic systems. Second, I argue that Davidson’s objections to conventionalism from language innovation and language variation fail, and that certain kinds of negative data in language use require an appeal to diachronic social relations. However, I also argue that recent work on communication in the a..
The concept of minimal risk has been used to regulate and limit participation by adolescents in clinical trials. It can be understood as setting an absolute standard of what risks are considered minimal or it can be interpreted as relative to the actual risks faced by members of the host community for the trial. While commentators have almost universally opposed a relative interpretation of the environmental risks faced by potential adolescent trial participants, we argue that the ethical concerns against the relative standard may not be as convincing as these commentators believe. Our aim is to present the case for a relative standard of environmental risk in order to open a debate on this subject. We conclude by discussing how a relative standard of environmental risk could be defended in the specific case of an HIV vaccine trial among adolescents in South Africa.
Perhaps the most pressing challenge for singularism—the predominant view that definite plurals like ‘the students’ singularly refer to a collective entity, such as a mereological sum or set—is that it threatens paradox. Indeed, this serves as a primary motivation for pluralism—the opposing view that definite plurals refer to multiple individuals simultaneously through the primitive relation of plural reference. Groups represent one domain in which this threat is immediate. After all, groups resemble sets in having a kind of membership-relation and iterating: we can have groups of groups, groups of groups of groups, etc. Yet there cannot be a group of all non-self-membered groups. In response, we develop a potentialist theory of groups according to which we always can, but do not have to, form a group from any sum. Modalizing group-formation makes it a species of potential, as opposed to actual or completed, infinity. This allows for a consistent, plausible, and empirically adequate treatment of natural language plurals, one which is motivated by the iterative nature of syntactic and semantic processes more generally.
The following quotation, from Frank Jackson, is the beginning of a typical exposition of the debate between those metaphysicians who believe in temporal parts, and those who do not: The dispute between three-dimensionalism and four-dimensionalism, or more precisely, that part of the dispute we will be concerned with, concerns what persistence, and correlatively, what change, comes to. Three-dimensionalism holds that an object exists at a time by being wholly present at that time, and, accordingly, that it persists if it is wholly present at more than one time. For short, it persists by enduring. Four-dimensionalism holds that an object exists at a time by having a temporal part at that time, and it persists if it has distinct temporal parts at more than one time. For short, it persists by perduring. In the light of these comments, some readers will perhaps find the question that forms the title of this paper a little puzzling. They may have learned to use the terms ‘four-dimensionalism’, ‘perdurantism’ and ‘belief in temporal parts’ interchangeably; or perhaps even to define one in terms of the other. Such a usage, however, is inapposite. We might imagine a Flatland-like world of two spatial dimensions and one temporal, whose philosophers are divided between a theory of persistence on which they persist by having temporal parts, and a theory on which they persist by being wholly located in each of several times. This is just the same issue we face, but at least the label ‘four-dimensionalism’ seems inapposite: the four-dimensionalist Flatlanders believe in only three dimensions!
Modern payment cards encompass a bewildering array of consumer technologies, from credit and debit cards to stored-value and loyalty cards. But what unites all of these financial media is their connection to recordkeeping systems. Each swipe sends data hurtling through invisible infrastructures to verify accounts, record purchase details, exchange funds, and update balances. With payment cards, banks and merchants have been able to amass vast archives of transactional data. This information is a valuable asset in itself. It can be used for in-house data analytics programs or sold as marketing intelligence to third parties. This research examines the development of payment cards in the United States from the late 19th century to the present, drawing attention to their fundamental relationship to identification, recordkeeping, and data mining. The history of payment cards, I argue, is not just a history of financial innovation and computing; it is also a history of Big Data and consumer surveillance. This history, moreover, provides insight into the growth of transactional data and the datafication of money in the digital economy.
At the outset of the Republic, Polemarchus advances the bold thesis that “justice is the art which gives benefit to friends and injury to enemies”. He quickly rejects the hypothesis, and what follows is a long tradition of neglecting the ethics of enmity. The parallel issue of how friendship affects the moral sphere has, by contrast, been greatly illuminated by discussions both ancient and contemporary. This article connects this existing work to the less explored topic of the normative significance of our negative relationships. I explain how negative partiality should be conceptualized through reference to the positive analogue, and argue that at least some forms of negative partiality are justified. I further explore the connection between positive and negative relationships by showing how both are justified by ongoing histories of encounter. However, I also argue that these relationships are in some important ways asymmetrical.
The subject of consciousness, long shunned by mainstream psychology and the scientific community, has over the last two decades become a legitimate topic of scientific research. One of the most thorough attempts to formulate a theory of consciousness has come from Bernard Baars, a psychologist working at the Wright Institute. Baars proposes that consciousness is the result of a Global Workspace in the brain that distributes information to the huge number of parallel unconscious processors that form the rest of the brain. This paper critiques the central hypothesis of Baars' theory of consciousness.
Through real-life scenarios and practical illustrations, the authors address issues ranging from complex ethical dilemmas to everyday moral decisions. Learn to make moral choices based on God's love and His absolute truth.
In this chapter, we analyze intellectual patience as a character trait. We look at the contexts that call for patience and at what patience demands in those contexts. Together these constitute our account of patience, though the focus is on patience in the life of the mind. We also consider how patience and perseverance differ, which offers a better understanding of the former and sheds light on how character traits can cooperate. We then consider how to become virtuously patient. We conclude by reflecting on some remarks made by the poet Rilke.
Robert Nozick’s oft-quoted review of Tom Regan’s The Case for Animal Rights levels a range of challenges to Regan’s philosophy. Many commentators have focussed on Nozick’s putative defence of speciesism, but this has led them to overlook other aspects of the critique. In this paper, I draw attention to two. First is Nozick’s criticism of Regan’s political theory, which is best understood relative to Nozick’s libertarianism. Nozick’s challenge invites the possibility of a libertarian account of animal rights – which is not as oxymoronic as it may first sound. Second is Nozick’s criticism of Regan’s axiological theory, which is best understood relative to Nozick’s own axiological inegalitarianism. While Nozick’s axiology has distasteful consequences, it should not be dismissed out of hand. Nozick’s challenges to Regan – and Nozick’s wider animal ethics – are rich and original, warranting attention from contemporary theorists for reasons beyond mere historical interest.
In this book the analysis of the relationship between Whewell and Mill is extended from the theme of induction, the topic with which the author starts, to the comparison between the two projects of an overall reform of knowledge. These programmes announce themselves to the general public as proclamations of war for or against the academic, political and religious establishment; however, when viewed from the inside, they more or less consciously share very similar objectives. This applies both to the scientific method and to ethics and politics.
The same-order representation theory of consciousness holds that conscious mental states represent both the world and themselves. This complex representational structure is posited in part to avoid a powerful objection to the more traditional higher-order representation theory of consciousness. The objection contends that the higher-order theory fails to account for the intimate relationship that holds between conscious states and our awareness of them--the theory 'divides the phenomenal labor' in an illicit fashion. This 'failure of intimacy' is exposed by the possibility of misrepresentation by higher-order states. In this paper, I argue that despite appearances, the same-order theory fails to avoid the objection, and thus also has troubles with intimacy.
The buyer–supplier relationship is the nexus of the economic partnership of many commercial transactions and is founded upon the reciprocal trust of the two parties that participate in this economic exchange. In this article, we identify how six ethical elements play a key role in framing the buyer–supplier relationship, incorporating a model articulated by Hosmer (The ethics of management, McGraw-Hill, New York, 2008). We explain how trust is a behavior, the relinquishing of personal control in the expectant hope that the other party will honor the duties of a psychological contract. Presenting information about six factors of organizational trustworthiness, we offer insights about the relationship between ethics and trust in the buyer–supplier relationship.
• The Static Conception of Semantics (Preliminary Version): A semantic theory should assign a proposition, conceived of as some carrier of meaning that can play the role of truth-condition determination, to each (or at least each declarative) sentence.