References
We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better and argue that (...)
R. Jay Wallace argues in this book that moral accountability hinges on questions of fairness: When is it fair to hold people morally responsible for what they do? Would it be fair to do so even in a deterministic world? To answer these questions, we need to understand what we are doing when we hold people morally responsible, a stance that Wallace connects with a central class of moral sentiments, those of resentment, indignation, and guilt. To hold someone responsible, he (...)
Responsibility and the Moral Sentiments offers an account of moral responsibility. It addresses the question: what are the forms of capacity or ability that render us morally accountable for the things we do? A traditional answer has it that the conditions of moral responsibility include freedom of the will, where this in turn involves the availability of robust alternative possibilities. I reject this answer, arguing that the conditions of moral responsibility do not include any condition of alternative possibilities. In the (...) |
The United States Army’s Future Combat Systems Project, which aims to manufacture a “robot army” to be ready for deployment by 2012, is only the latest and most dramatic example of military interest in the use of artificially intelligent systems in modern warfare. This paper considers the ethics of a decision to send artificially intelligent robots into war, by asking who we should hold responsible when an autonomous weapon system is involved in an atrocity of the sort that would normally (...)
Recently T. M. Scanlon and others have advanced an ostensibly comprehensive theory of moral responsibility—a theory of both being responsible and being held responsible—that best accounts for our moral practices. I argue that both aspects of the Scanlonian theory fail this test. A truly comprehensive theory must incorporate and explain three distinct conceptions of responsibility—attributability, answerability, and accountability—and the Scanlonian view conflates the first two and ignores the importance of the third. To illustrate what a truly comprehensive theory might look (...)
The illusory appeal of double effect -- The significance of intent -- Means and ends -- Blame.
The notion of “responsibility gap” with artificial intelligence was originally introduced in the philosophical debate to indicate the concern that “learning automata” may make it more difficult or impossible to attribute moral culpability to persons for untoward events. Building on literature in moral and legal philosophy, and ethics of technology, the paper proposes a broader and more comprehensive analysis of the responsibility gap. The responsibility gap, it is argued, is not one problem but a set of at least four interconnected problems (...)
When a person acts from ignorance, he is culpable for his action only if he is culpable for the ignorance from which he acts. The paper defends the view that this principle holds, not just for actions done from ordinary factual ignorance, but also for actions done from moral ignorance. The question is raised whether the principle extends to action done from ignorance about what one has most reason to do. It is tentatively proposed that the principle holds in full (...) |
The optimality approach to modeling natural selection has been criticized by many biologists and philosophers of biology. For instance, Lewontin (1979) argues that the optimality approach is a shortcut that will be replaced by models incorporating genetic information, if and when such models become available. In contrast, I think that optimality models have a permanent role in evolutionary study. I base my argument for this claim on what I think it takes to best explain an event. In certain contexts, optimality (...)
This essay warns of eroding accountability in computerized societies. It argues that assumptions about computing and features of situations in which computers are produced create barriers to accountability. Drawing on philosophical analyses of moral blame and responsibility, four barriers are identified: 1) the problem of many hands, 2) the problem of bugs, 3) blaming the computer, and 4) software ownership without liability. The paper concludes with ideas on how to reverse this trend.
Philosophy and Phenomenological Research, EarlyView.
In this paper, in line with the general framework of value-sensitive design, we aim to operationalize the general concept of “Meaningful Human Control” in order to pave the way for its translation into more specific design requirements. In particular, we focus on the operationalization of the first of the two conditions investigated: the so-called ‘tracking’ condition. Our investigation is led in relation to one specific subcase of automated system: dual-mode driving systems. First, we connect and compare meaningful human control with (...) |
Traditionally, the manufacturer/operator of a machine is held (morally and legally) responsible for the consequences of its operation. Autonomous, learning machines, based on neural networks, genetic algorithms and agent architectures, create a new situation, where the manufacturer/operator of the machine is in principle no longer capable of predicting future machine behaviour, and thus cannot be held morally responsible or liable for it. Society must decide between not using this kind of machine any more (which is not a (...)
In this article I advocate a worldly account of normative reasons according to which there is an ontological gap between these and the premises of practical thought, i.e. motivating considerations. While motivating considerations are individuated fine-grainedly, normative reasons should be classified as coarse-grained entities, e.g. as states of affairs, in order to explain certain necessary truths about them and to make sense of how we count and weigh them. As I briefly sketch, acting for normative reasons is nonetheless possible if (...) |
This essay is concerned with the relation between motivating and normative reasons. According to a common and influential thesis, a normative reason is identical with a motivating reason when an agent acts for that normative reason. I will call this thesis the ‘Identity Thesis’. Many philosophers treat the Identity Thesis as a commonplace or a truism. Accordingly, the Identity Thesis has been used to rule out certain ontological views about reasons. I distinguish a deliberative and an explanatory version of the (...) |
The combination of increased availability of large amounts of fine-grained human behavioral data and advances in machine learning is presiding over a growing reliance on algorithms to address complex societal problems. Algorithmic decision-making processes might lead to more objective and thus potentially fairer decisions than those made by humans who may be influenced by greed, prejudice, fatigue, or hunger. However, algorithmic decision-making has been criticized for its potential to enhance discrimination, information and power asymmetry, and opacity. In this paper, we (...)
Previous research in Explainable Artificial Intelligence (XAI) suggests that a main aim of explainability approaches is to satisfy specific interests, goals, expectations, needs, and demands regarding artificial systems (we call these “stakeholders' desiderata”) in a variety of contexts. However, the literature on XAI is vast, spreads out across multiple largely disconnected disciplines, and it often remains unclear how explainability approaches are supposed to achieve the goal of satisfying stakeholders' desiderata. This paper discusses the main classes of stakeholders calling for explainability (...) |
Decision-making assisted by algorithms developed by machine learning is increasingly determining our lives. Unfortunately, full opacity about the process is the norm. Would transparency contribute to restoring accountability for such systems as is often maintained? Several objections to full transparency are examined: the loss of privacy when datasets become public, the perverse effects of disclosure of the very algorithms themselves, the potential loss of companies’ competitive edge, and the limited gains in answerability to be expected since sophisticated algorithms usually are (...)
Donald Davidson opens ‘Actions, Reasons, and Causes’ by asking, ‘What is the relation between a reason and an action when the reason explains the action by giving the agent's reason for doing what he did?’ His answer has generated some confusion about reasons for action and made for some difficulty in understanding the place for the agent's own reasons for acting, in the explanation of an action. I offer here a different account of the explanation of action, one that, though (...) |
We propose a new definition of actual causes, using structural equations to model counterfactuals. We show that the definition yields a plausible and elegant account of causation that handles well examples which have caused problems for other definitions and resolves major difficulties in the traditional account. |
This article reports the findings of AI4People, an Atomium—EISMD initiative designed to lay the foundations for a “Good AI Society”. We introduce the core opportunities and risks of AI for society; present a synthesis of five ethical principles that should undergird its development and adoption; and offer 20 concrete recommendations—to assess, to develop, to incentivise, and to support good AI—which in some cases may be undertaken directly by national or supranational policy makers, while in others may be led by other (...) |
If understanding is factive, the propositions that express an understanding are true. I argue that a factive conception of understanding is unduly restrictive. It neither reflects our practices in ascribing understanding nor does justice to contemporary science. For science uses idealizations and models that do not mirror the facts. Strictly speaking, they are false. By appeal to exemplification, I devise a more generous, flexible conception of understanding that accommodates science, reflects our practices, and shows a sufficient but not slavish sensitivity (...) |
We can gain fresh insights into aspects of criminal liability by focusing first on the prior topic of criminal responsibility, and on the relational dimensions of responsibility: responsibility is responsibility for something, to someone. We are criminally responsible as citizens, to our fellow citizens, for committing 'public' wrongs: I discuss the difficulty of giving determinate content to this idea of public wrongs, and the way in which, whereas moral responsibility is typically strict, criminal responsibility is not. Finally, I explore the (...) |
What is the relation between a reason and an action when the reason explains the action by giving the agent's reason for doing what he did? We may call such explanations rationalizations, and say that the reason rationalizes the action. In this paper I want to defend the ancient - and common-sense - position that rationalization is a species of ordinary causal explanation. The defense no doubt requires some redeployment, but not more or less complete abandonment of the position, as (...) |
Practical Reality is a lucid original study of the relation between the reasons why we do things and the reasons why we should. Jonathan Dancy maintains that current philosophical orthodoxy bowdlerizes this relation, making it impossible to understand how anyone can act for a good reason. By giving a fresh account of values and reasons, he finds a place for normativity in philosophy of mind and action, and strengthens the connection between these areas and ethics. |
In this paper, I develop Mauricio Suárez’s distinction between denotation, epistemic representation, and faithful epistemic representation. I then outline an interpretational account of epistemic representation, according to which a vehicle represents a target for a certain user if and only if the user adopts an interpretation of the vehicle in terms of the target, which would allow them to perform valid (but not necessarily sound) surrogative inferences from the model to the system. The main difference between the interpretational conception I (...) |
In some previous work, I tried to give a concept-based account of the nature of our entitlement to certain very basic inferences (see the papers in Part III of Boghossian 2008b). In this previous work, I took it for granted, along with many other philosophers, that we understood well enough what it is for a person to infer. In this paper, I turn to thinking about the nature of inference itself. This topic is of great interest in its own right (...) |
There is no strong reason to believe that human-level intelligence represents an upper limit of the capacity of artificial intelligence, should it be realized. This poses serious safety issues, since a superintelligent system would have great power to direct the future according to its possibly flawed motivation system. Solving this issue in general has proven to be considerably harder than expected. This paper looks at one particular approach, Oracle AI. An Oracle AI is an AI that does not act in (...) |
This is a welcome reprint of a book that continues to grow in importance. |
The doyen of living English philosophers, by these reflections, took hold of and changed the outlook of a good many other philosophers, if not quite enough. He did so, essentially, by assuming that talk of freedom and responsibility is talk not of facts or truths, in a certain sense, but of our attitudes. His more explicit concern was to look again at the question of whether determinism and freedom are consistent with one another -- by shifting attention to certain personal (...) |
When many people are involved in an activity, it is often difficult, if not impossible, to pinpoint who is morally responsible for what, a phenomenon known as the ‘problem of many hands.’ This term is increasingly used to describe problems with attributing individual responsibility in collective settings in such diverse areas as public administration, corporate management, law and regulation, technological development and innovation, healthcare, and finance. This volume provides an in-depth philosophical analysis of this problem, examining the notion of moral (...) |
Derk Pereboom articulates and defends an original, forward-looking conception of moral responsibility. He argues that although we may not possess the kind of free will that is normally considered necessary for moral responsibility, this does not jeopardize our sense of ourselves as agents, or a robust sense of achievement and meaning in life. |
In this book Michael McKenna advances a new theory of moral responsibility, one that builds upon the work of P.F. Strawson. |
Understanding human beings and their distinctive rational and volitional capacities requires a clear account of such things as reasons, desires, emotions, and motives, and how they combine to produce and explain human behaviour. Maria Alvarez presents a fresh and incisive study of these concepts, centred on reasons and their role in human agency. |
Suitable for students and scholars, this title challenges the assumption that skepticism, rather than established belief, lies at the heart of scientific discovery. |
This book provides a comprehensive, systematic theory of moral responsibility. The authors explore the conditions under which individuals are morally responsible for actions, omissions, consequences, and emotions. The leading idea in the book is that moral responsibility is based on 'guidance control'. This control has two components: the mechanism that issues in the relevant behavior must be the agent's own mechanism, and it must be appropriately responsive to reasons. The book develops an account of both components. The authors go on (...) |