Initiatives relying on artificial intelligence (AI) to deliver socially beneficial outcomes—AI for social good (AI4SG)—are on the rise. However, existing attempts to understand and foster AI4SG initiatives have so far been limited by the lack of normative analyses and a shortage of empirical evidence. In this Perspective, we address these limitations by providing a definition of AI4SG and by advocating the use of the United Nations’ Sustainable Development Goals (SDGs) as a benchmark for tracing the scope and spread of AI4SG. We introduce a database of AI4SG projects gathered using this benchmark, and discuss several key insights, including the extent to which different SDGs are being addressed. This analysis makes possible the identification of pressing problems that, if left unaddressed, risk hampering the effectiveness of AI4SG initiatives.
In this article, we analyse the role that artificial intelligence (AI) could play, and is playing, to combat global climate change. We identify two crucial opportunities that AI offers in this domain: it can help improve and expand current understanding of climate change, and it can contribute to combating the climate crisis effectively. However, the development of AI also raises two sets of problems when considering climate change: the possible exacerbation of social and ethical challenges already associated with AI, and the contribution to climate change of the greenhouse gases emitted by training data and computation-intensive AI systems. We assess the carbon footprint of AI research, and the factors that influence AI’s greenhouse gas (GHG) emissions in this domain. We find that the carbon footprint of AI research may be significant and highlight the need for more evidence concerning the trade-off between the GHG emissions generated by AI research and the energy and resource efficiency gains that AI can offer. In light of our analysis, we argue that leveraging the opportunities offered by AI for global climate change whilst limiting its risks is a gambit which requires responsive, evidence-based and effective governance to become a winning strategy. We conclude by identifying the European Union as being especially well-placed to play a leading role in this policy response and provide 13 recommendations that are designed to identify and harness the opportunities of AI for combating climate change, while reducing its impact on the environment.
That AI will have a major impact on society is no longer in question. Current debate turns instead on how far this impact will be positive or negative, for whom, in which ways, in which places, and on what timescale. In order to frame these questions in a more substantive way, in this prolegomena we introduce what we consider the four core opportunities for society offered by the use of AI, four associated risks which could emerge from its overuse or misuse, and the opportunity costs associated with its underuse. We then offer a high-level view of the emerging advantages for organisations of taking an ethical approach to developing and deploying AI. Finally, we introduce a set of five principles which should guide the development and deployment of AI technologies. The development of laws, policies and best practices for seizing the opportunities and minimising the risks posed by AI technologies would benefit from building on ethical frameworks such as the one offered here.
In a series of papers (1984; ‘The philosophical grounds of rationality’, 1986; Midwest Stud Philos 16:1–12, 1991), Donald Davidson developed a powerful argument against the claim that linguistic conventions provide any explanatory purchase on an account of linguistic meaning and communication. This argument, as I shall develop it, turns on cases of what I call lexical innovation: cases in which a speaker uses a sentence containing a novel expression-meaning pair, but nevertheless successfully communicates her intended meaning to her audience. I will argue that cases of lexical innovation motivate a dynamic conception of linguistic conventions according to which background linguistic conventions may be rapidly expanded to incorporate new word meanings or shifted to revise the meanings of words already in circulation. I argue that this dynamic account of conventions both resolves the problem raised by cases of lexical innovation and does so in a way that is preferable to the approach of those who—like Davidson—deny important explanatory roles for linguistic conventions.
Reformulating a scientific theory often leads to a significantly different way of understanding the world. Nevertheless, accounts of both theoretical equivalence and scientific understanding have neglected this important aspect of scientific theorizing. This essay provides a positive account of how reformulation changes our understanding. My account simultaneously addresses a serious challenge facing existing accounts of scientific understanding. These accounts have failed to characterize understanding in a way that goes beyond the epistemology of scientific explanation. By focusing on cases in which we have differences in understanding without differences in explanation, I show that understanding does not reduce to explanation.
Artificial Intelligence (AI) is already having a major impact on society. As a result, many organizations have launched a wide range of initiatives to establish ethical principles for the adoption of socially beneficial AI. Unfortunately, the sheer volume of proposed principles threatens to overwhelm and confuse. How might this problem of ‘principle proliferation’ be solved? In this paper, we report the results of a fine-grained analysis of several of the highest-profile sets of ethical principles for AI. We assess whether these principles converge upon a set of agreed-upon principles, or diverge, with significant disagreement over what constitutes ‘ethical AI.’ Our analysis finds a high degree of overlap among the sets of principles we analyze. We then identify an overarching framework consisting of five core principles for ethical AI. Four of them are core principles commonly used in bioethics: beneficence, non-maleficence, autonomy, and justice. On the basis of our comparative analysis, we argue that a new principle is needed in addition: explicability, understood as incorporating both the epistemological sense of intelligibility (as an answer to the question ‘how does it work?’) and the ethical sense of accountability (as an answer to the question ‘who is responsible for the way it works?’). In the ensuing discussion, we note the limitations and assess the implications of this ethical framework for future efforts to create laws, rules, technical standards, and best practices for ethical AI in a wide range of contexts.
Technologies to rapidly alert people when they have been in contact with someone carrying the coronavirus SARS-CoV-2 are part of a strategy to bring the pandemic under control. Currently, at least 47 contact-tracing apps are available globally. They are already in use in Australia, South Korea and Singapore, for instance. And many other governments are testing or considering them. Here we set out 16 questions to assess whether — and to what extent — a contact-tracing app is ethically justifiable.
In this paper, I explore two contrasting conceptions of the social character of language. The first takes language to be grounded in social convention. The second, famously developed by Donald Davidson, takes language to be grounded in a social relation called triangulation. I aim both to clarify and to evaluate these two conceptions of language. First, I propose that Davidson’s triangulation-based story can be understood as the result of relaxing core features of conventionalism pertaining to both common interest and diachronic stability—specifically, Davidson does not require uses of language to be self-perpetuating, in the way required by conventionalism, in order to be bona fide components of linguistic systems. Second, I argue that Davidson’s objections to conventionalism from language innovation and language variation fail, and that certain kinds of negative data in language use require an appeal to diachronic social relations. However, I also argue that recent work on communication in the a..
This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, and in others may be led by other stakeholders. If adopted, these recommendations would serve as a firm foundation for the establishment of a Good AI Society.
The idea of artificial intelligence for social good is gaining traction within information societies in general and the AI community in particular. It has the potential to tackle social problems through the development of AI-based solutions. Yet, to date, there is only limited understanding of what makes AI socially good in theory, what counts as AI4SG in practice, and how to reproduce its initial successes in terms of policies. This article addresses this gap by identifying seven ethical factors that are essential for future AI4SG initiatives. The analysis is supported by 27 case examples of AI4SG projects. Some of these factors are almost entirely novel to AI, while the significance of other factors is heightened by the use of AI. From each of these factors, corresponding best practices are formulated which, subject to context and balance, may serve as preliminary guidelines to ensure that well-designed AI is more likely to serve the social good.
The following quotation, from Frank Jackson, is the beginning of a typical exposition of the debate between those metaphysicians who believe in temporal parts, and those who do not: The dispute between three-dimensionalism and four-dimensionalism, or more precisely, that part of the dispute we will be concerned with, concerns what persistence, and correlatively, what change, comes to. Three-dimensionalism holds that an object exists at a time by being wholly present at that time, and, accordingly, that it persists if it is wholly present at more than one time. For short, it persists by enduring. Four-dimensionalism holds that an object exists at a time by having a temporal part at that time, and it persists if it has distinct temporal parts at more than one time. For short, it persists by perduring. In the light of these comments, some readers will perhaps find the question that forms the title of this paper a little puzzling. They may have learned to use the terms ‘four-dimensionalism’, ‘perdurantism’, and ‘belief in temporal parts’ interchangeably; or perhaps even to define one in terms of the other. Such a usage, however, is inapposite. We might imagine a Flatland-like world of two spatial dimensions and one temporal, whose philosophers are divided between a theory of persistence on which they persist by having temporal parts, and a theory on which they persist by being wholly located in each of several times. This is just the same issue we face, but at least the label ‘four-dimensionalism’ seems inapposite: the four-dimensionalist Flatlanders believe in only three dimensions!
An important objection to the "higher-order" theory of consciousness turns on the possibility of higher-order misrepresentation. I argue that the objection fails because it illicitly assumes a characterization of consciousness explicitly rejected by HO theory. This in turn raises the question of what justifies an initial characterization of the data a theory of consciousness must explain. I distinguish between intrinsic and extrinsic characterizations of consciousness, and I propose several desiderata a successful characterization of consciousness must meet. I then defend the particular extrinsic characterization of the HO theory, the "transitivity principle," against its intrinsic rivals, thereby showing that the misrepresentation objection conclusively falls short.
Cappelen and Dever present a forceful challenge to the standard view that perspective, and in particular the perspective of the first person, is a philosophically deep aspect of the world. Their goal is not to show that we need to explain indexical and other perspectival phenomena in different ways, but to show that the entire topic is an illusion.
At the outset of the Republic, Polemarchus advances the bold thesis that “justice is the art which gives benefit to friends and injury to enemies”. The thesis is quickly rejected, and what follows is a long tradition of neglecting the ethics of enmity. The parallel issue of how friendship affects the moral sphere has, by contrast, been greatly illuminated by discussions both ancient and contemporary. This article connects this existing work to the less explored topic of the normative significance of our negative relationships. I explain how negative partiality should be conceptualized through reference to the positive analogue, and argue that at least some forms of negative partiality are justified. I further explore the connection between positive and negative relationships by showing how both are justified by ongoing histories of encounter. However, I also argue that these relationships are in some important ways asymmetrical.
In his two recent books on ontology, Universals: an Opinionated Introduction, and A World of States of Affairs, David Armstrong gives a new argument against nominalism. That argument seems, on the face of it, to be similar to another argument that he used much earlier against Rylean behaviourism: the Truthmaker Argument, stemming from a certain plausible premise, the Truthmaker Principle. Other authors have traced the history of the truthmaker principle, its appearance in the work of Aristotle, Bradley, and even Husserl. But that is not my task — in this paper I argue that Armstrong’s new argument is not logically analogous to the old, and, in particular, that it is quite possible to be a thoroughgoing nominalist, and hold a truthmaker principle.
This paper explores the significance of intelligent social behavior among non-human animals for philosophical theories of communication. Using the alarm call system of vervet monkeys as a case study, I argue that interpersonal communication (or what I call “minded communication”) can and does take place in the absence of the production and recognition of communicative intentions. More generally, I argue that evolutionary theory provides good reasons for maintaining that minded communication is both temporally and explanatorily prior to the use of communicative intentions. After developing these negative points about the place of communicative intentions in detail, I provide a novel alternative account according to which minded communication is characterized in terms of patterns of action and response that function to coordinate the representational mental states of agents. I show that an account which centers on patterns of representational coordination of this sort is well suited to capture the theoretical roles associated with minded communication and that it does so in a way that provides a good fit with comparative facts about the presence of minded communication among non-human animals.
In July 2017, China’s State Council released the country’s strategy for developing artificial intelligence, entitled ‘New Generation Artificial Intelligence Development Plan’. This strategy outlined China’s aims to become the world leader in AI by 2030, to monetise AI into a trillion-yuan industry, and to emerge as the driving force in defining ethical norms and standards for AI. Several reports have analysed specific aspects of China’s AI policies or have assessed the country’s technical capabilities. Instead, in this article, we focus on the socio-political background and policy debates that are shaping China’s AI strategy. In particular, we analyse the main strategic areas in which China is investing in AI and the concurrent ethical debates that are delimiting its use. By focusing on the policy backdrop, we seek to provide a more comprehensive and critical understanding of China’s AI policy by bringing together debates and analyses of a wide array of policy documents.
This paper proposes a novel answer to the Special Composition Question. In some respects it agrees with brutalism about composition; in others with universalism. The main novel feature of this answer is the insight I think it gives into what the debate over the Special Composition Question is about.
Continental Philosophy Beyond "the" Continent / Brian Treanor -- Prometheus' Gift of Fire and Technics: Contemplating the Meaning of Fire, Affect, and Californian Pyrophytes in the Pyrocene / Marjolein Oele -- The West as Slaughterbench: Thinking without Revolutions in the American West / Christopher Lauer -- The End of the West: The Time of Apocalypse in the Westerns of Cormac McCarthy / Amanda Parris -- The Trees of the West: Our Elders, Our Teachers / Andrew Jussaume -- Thinking Wolves / Thomas Thorpe -- Robert Smithson, Entropic Art, and the West / Shannon M. Mussett -- "Westering" and "Breaking Through": Zen Buddhism on Cannery Row / Gerard Kuperus -- Life in Interregnum: Deleuze, Guattari, and Atleo / Russell Duvernoy -- Monstrous Topologies: Edward Abbey, Reiner Schürmann, and the Fate of the American West / Josh Hayes -- Turtle Island Anarchy / Jason Wirth.
What we could call ‘relational non-interventionism’ holds that we have no general obligation to alleviate animal suffering, and that we do not typically have special obligations to alleviate wild animals’ suffering. Therefore, we do not usually have a duty to intervene in nature to alleviate wild animal suffering. However, there are a range of relationships that we may have with wild animals that do generate special obligations to aid—and the consequences of these obligations can be surprising. In this paper, it is argued that we have special obligations to those animals we have historically welcomed or encouraged into our spaces. This includes many wild animals. One of the consequences of this is that we may sometimes possess obligations to actively prevent rewilding—or even to dewild—for the sake of welcomed animals who thrive in human-controlled spaces.
In this essay, I argue that certain injustices faced by mentally disabled persons are epistemic injustices by drawing upon epistemic injustice literature, especially as it is developed by Miranda Fricker. First, I explain the terminology and arguments developed by Fricker, Gaile Pohlhaus, Jr., and Kristie Dotson that are useful in theorizing epistemic injustices against mentally disabled people. Second, I consider some specific cases of epistemic injustice to which mentally disabled persons are subject. Third, I turn to a discussion of severely mentally disabled persons who, because they are unable to share information or develop interpretations of shared social experiences, may fall outside Fricker’s discussion of epistemic injustice. Fourth and finally, following arguments given by Kristie Dotson and Christopher Hookway, I define and explain a type of epistemic injustice, intimate hermeneutical injustice, which I believe supplements other discussions of epistemic injustice.
Nevertheless, any competent speaker will know what it means. What explains our ability to understand sentences we have never before encountered? One natural hypothesis is that those novel sentences are built up out of familiar parts, put together in familiar ways. This hypothesis requires the backing hypothesis that English has a compositional semantic theory.
This article gives a brief history of chance in the Christian tradition, from casting lots in the Hebrew Bible to the discovery of laws of chance in the modern period. I first discuss the deep-seated skepticism towards chance in Christian thought, as shown in the work of Augustine, Aquinas, and Calvin. The article then describes the revolution in our understanding of chance—when contemporary concepts such as probability and risk emerged—that occurred a century after Calvin. The modern ability to quantify chance has transformed ideas about the universe and human nature, separating Christians today from their predecessors, but has received little attention from Christian historians and theologians.
It has been widely believed since the nineteenth century that modern science provides a serious challenge to religion, but there has been less agreement as to the reason. One main complication is that whenever there has been broad consensus for a scientific theory that challenges traditional religious doctrines, one finds religious believers endorsing the theory or even formulating it. As a result, atheists who argue for the incompatibility of science and religion often go beyond the religious implications of individual scientific theories, arguing that the sciences taken together provide a comprehensive challenge to religious belief. Scientific theories, on this view, can be integrated to form a general vision of humans and our place in nature, one that excludes the existence of supernatural phenomena to which many religious traditions refer. The most common name given to this general vision is the scientific worldview. The purpose of my paper is to argue that the relation of a worldview to science is more complex and ambiguous than this position allows, drawing upon recent work in the history and philosophy of science. While there are other ways to complicate the picture, this paper will focus on differing views that scientists and philosophers have on the proper scope and limits of scientific inquiry. I will identify two different types of science—Baconian and Cartesian—that have different ambitions with respect to scientific theories, and thus different answers about the possibility of a scientific worldview. The paper will conclude by showing how their differing intuitions about scientific inquiry are evident in contemporary debates about reductionism, drawing upon the work of two physicists, Steven Weinberg and John Polkinghorne. History is more complex than this simple schema allows, of course, but these types provide a useful first approximation to the ambiguities of modern science.
In this paper, I criticize the view that non-conscious entities—such as plants and bacteria—have well-being. Plausible sources of well-being include pleasure, the satisfaction of consciously held desires, and achievement. Since non-conscious entities cannot obtain well-being from these sources, the most plausible source of well-being for them is the exercise of natural capacities. Plants and bacteria, for example, certainly do exercise natural capacities. But I argue that exercising natural capacities does not in fact contribute to well-being. I do so by presenting cases in which human beings exercise natural capacities that they do not enjoy exercising and that they do not desire to exercise. I also argue that plausible views about fortune—how one’s well-being ranks on an appropriate scale—do not support the claim that exercising natural capacities contributes to well-being.
Bad Language is the first textbook on an emerging area in the study of language: non-idealized language use, the linguistic behaviour of people who exploit language for malign purposes. This lively, accessible introduction offers theoretical frameworks for thinking about such topics as lies and bullshit, slurs and insults, coercion and silencing.
As tends to be the way with philosophical positions, there are at least as many two-dimensionalisms as there are two-dimensionalists. But painting with a broad brush, there are core epistemological and metaphysical commitments which underlie the two-dimensionalist project, commitments for which I have no sympathies. I offer a sketch of three significant points of disagreement.
How should common schools in a liberal pluralist society approach sex education in the face of deep disagreement about sexual morality? Should they eschew sex education altogether? Should they narrow its focus to facts about biology, reproduction, and disease prevention? Should they, in addition to providing a broad palette of information about sex, attempt to cover a range of alternative views about sexual morality in a “value-neutral” manner? Should they seek to impart a “thick” conception of sexual morality, which precisely articulates how individuals should lead their sexual lives? In this essay, Josh Corngold cautions against the adoption of each of these various approaches. He argues that schools should instead adopt an “autonomy-promoting” approach, which will aim to empower students, cognitively and emotionally, to exercise sovereignty over their own sexuality.
The same-order representation theory of consciousness holds that conscious mental states represent both the world and themselves. This complex representational structure is posited in part to avoid a powerful objection to the more traditional higher-order representation theory of consciousness. The objection contends that the higher-order theory fails to account for the intimate relationship that holds between conscious states and our awareness of them—the theory ‘divides the phenomenal labor’ in an illicit fashion. This ‘failure of intimacy’ is exposed by the possibility of misrepresentation by higher-order states. In this paper, I argue that despite appearances, the same-order theory fails to avoid the objection, and thus also has troubles with intimacy.
Despite its potential for radically reducing the harm inflicted on nonhuman animals in the pursuit of food, there are a number of objections grounded in animal ethics to the development of in vitro meat. In this paper, I defend the possibility against three such concerns. I suggest that worries about reinforcing ideas of flesh as food and worries about the use of nonhuman animals in the production of in vitro meat can be overcome through appropriate safeguards and a fuller understanding of the interests that nonhuman animals actually possess. Worries about the technology reifying speciesist hierarchies of value are more troublesome, however. In response to this final challenge, I suggest that we should be open not just to the production of in vitro nonhuman flesh, but also to the production of in vitro human flesh. This leads to a consideration of the ethics of cannibalism. The paper ultimately defends the position that cannibalism simpliciter is not morally problematic, though a great many practices typically associated with it are. The consumption of in vitro human flesh, however, is able to avoid these problematic practices, and so should be considered permissible. I conclude that animal ethicists and vegans should be willing to cautiously embrace the production of in vitro flesh.
Animal rights positions face the ‘predator problem’: the suggestion that if the rights of nonhuman animals are to be protected, then we are obliged to interfere in natural ecosystems to protect prey from predators. Generally, rather than embracing this conclusion, animal ethicists have rejected it, basing this objection on a number of different arguments. This paper considers but challenges three such arguments, before defending a fourth possibility. Rejected are Peter Singer’s suggestion that interference will lead to more harm than good, Sue Donaldson and Will Kymlicka’s suggestion that respect for nonhuman sovereignty necessitates non-interference in normal circumstances, and Alasdair Cochrane’s solution based on the claim that predators cannot survive without killing prey. The possibility defended builds upon Tom Regan’s suggestion that predators, as moral patients but not moral agents, cannot violate the rights of their prey, and so the rights of the prey, while they do exist, do not call for intervention. This idea is developed by a consideration of how moral agents can be more or less responsible for a given event, and defended against criticisms offered by thinkers including Alasdair Cochrane and Dale Jamieson.
The possibility of “clean milk”—dairy produced without the need for cows—has been championed by several charities, companies, and individuals. One can ask how those critical of the contemporary dairy industry, including especially vegans and others sympathetic to animal rights, should respond to this prospect. In this paper, I explore three kinds of challenges that such people may have to clean milk: first, that producing clean milk fails to respect animals; second, that humans should not consume dairy products; and third, that the creation of clean milk would affirm human superiority over cows. None of these challenges, I argue, gives us reason to reject clean milk. I thus conclude that the prospect is one that animal activists should both welcome and embrace.
I want to join Dummett in saying that the reality of the past (and, by analogy, the reality of the future) is an issue of realism versus anti-realism (Dummett 1969): If you affirm the reality of the past, you are a realist about the past. If you deny the reality of the past, you are an anti-realist about the past. (And likewise, in each case, for the future.) It makes sense to think of these issues by analogy with realism about the external world, unobservable objects, mathematical objects, universals, and so on. These are all properly described as ontological issues.
Interpreters of Robert Nozick’s political philosophy fall into two broad groups concerning his application of the ‘Lockean proviso’. Some read his argument in an undemanding way: individual instances of ownership which make people worse off than they would have been in a world without any ownership are unjust. Others read the argument in a demanding way: individual instances of ownership which make people worse off than they would have been in a world without that particular ownership are unjust. While I argue that the former reading is correct as an interpretive matter, I suggest that this reading is nonetheless highly demanding. In particular, I argue that it is demanding when it is expanded to include the protection of nonhuman animals; if such beings are rights-bearers, as more and more academics are beginning to suggest, then there is no nonarbitrary reason to exclude them from the protection of the proviso.
Cognitivism about imperatives is the thesis that sentences in the imperative mood are truth-apt: that they have truth values and truth conditions. This allows cognitivists to give a simple and powerful account of consequence relations between imperatives. I argue that this account of imperative consequence has counterexamples that cast doubt on cognitivism itself.