We are sceptical of concerns over the opacity of algorithmic decision tools. While transparency and explainability are certainly important desiderata in algorithmic governance, we worry that automated decision-making is being held to an unrealistically high standard, possibly owing to an unrealistically high estimate of the degree of transparency attainable from human decision-makers. In this paper, we review evidence demonstrating that much human decision-making is fraught with transparency problems, show in what respects AI fares little worse or better, and argue that at least some regulatory proposals for explainable AI could end up setting the bar higher than is necessary or indeed helpful. The demands of practical reason require the justification of action to be pitched at the level of practical reason. Decision tools that support or supplant practical reasoning should not be expected to aim higher than this. We cast this desideratum in terms of Daniel Dennett’s theory of the “intentional stance” and argue that since the justification of action for human purposes takes the form of intentional stance explanation, the justification of algorithmic decisions should take the same form. In practice, this means that the sorts of explanations for algorithmic decisions that are analogous to intentional stance explanations should be preferred over ones that aim at the architectural innards of a decision tool.
The operations of deep networks are widely acknowledged to be inscrutable. The growing field of Explainable AI has emerged in direct response to this problem. However, owing to the nature of the opacity in question, XAI has been forced to prioritise interpretability at the expense of completeness, and even realism, so that its explanations are frequently interpretable without being underpinned by more comprehensive explanations faithful to the way a network computes its predictions. While this has been taken to be a shortcoming of the field of XAI, I argue that it is broadly the right approach to the problem.
A concise but informative overview of AI ethics and policy.

Artificial intelligence, or AI for short, has generated a staggering amount of hype in the past several years. Is it the game-changer it's been cracked up to be? If so, how is it changing the game? How is it likely to affect us as customers, tenants, aspiring homeowners, students, educators, patients, clients, prison inmates, members of ethnic and sexual minorities, and voters in liberal democracies? Authored by experts in fields ranging from computer science and law to philosophy and cognitive science, this book offers a concise overview of the moral, political, legal and economic implications of AI. It covers the basics of AI's latest permutation, machine learning, and considers issues such as transparency, bias, liability, privacy, and regulation.

Both business and government have integrated algorithmic decision support systems into their daily operations, and the book explores the implications for our lives as citizens. For example, do we take it on faith that a machine knows best in approving a patient's health insurance claim or a defendant's request for bail? What is the potential for manipulation by targeted political ads? How can the processes behind these technically sophisticated tools ever be transparent? The book discusses such issues as statistical definitions of fairness, legal and moral responsibility, the role of humans in machine learning decision systems, “nudging” algorithms and anonymized data, the effect of automation on the workplace, and AI as both regulatory tool and target.
The danger of human operators devolving responsibility to machines and failing to detect cases where the machines fail has been recognised for many years by industrial psychologists and engineers studying the human operators of complex machines. We call it “the control problem”, understood as the tendency of the human within a human–machine control loop to become complacent, over-reliant or unduly diffident when faced with the outputs of a reliable autonomous system. While the control problem has been investigated for some time, up to this point its manifestation in machine learning contexts has not received serious attention. This paper aims to fill that gap. We argue that, except in certain special circumstances, algorithmic decision tools should not be used in high-stakes or safety-critical decisions unless the systems concerned are significantly “better than human” in the relevant domain or subdomain of decision-making. More concretely, we recommend three strategies to address the control problem, the most promising of which involves a complementary coupling between highly proficient algorithmic tools and human agents working alongside one another. We also identify six key principles which all such human–machine systems should reflect in their design. These can serve as a framework both for assessing the viability of any such human–machine system and for guiding the design and implementation of such systems generally.
The past two decades have witnessed a revival of interest in multiple realization and multiply realized kinds. Bechtel and Mundale’s (1999) illuminating discussion of the subject must no doubt be credited with having generated much of this renewed interest. Among other virtues, their paper expresses what seems to be an important insight about multiple realization: that unless we keep a consistent grain across realized and realizing kinds, claims alleging the multiple realization of psychological kinds are vulnerable to refutation. In this paper I argue that, intuitions notwithstanding, the terms of their recommendation make it impossible to follow, while also misleadingly insinuating that its application virtually guarantees mind-brain identity. Instead of a matching of grains, what multiple realization really requires is a principled method for adjudicating upon differences between tokens. Shapiro’s (2000) work on multiple realization can be understood as an attempt to adumbrate just such a method. While his “causal relevance” criterion can easily be mistaken for Bechtel and Mundale’s grain requirement, my analysis reveals exactly where and why these two tests diverge.
The leading hypothesis concerning the “reuse” or “recycling” of neural circuits builds on the assumption that evolution might prefer the redeployment of established circuits over the development of new ones. What conception of cognitive architecture can survive the evidence for this hypothesis? In particular, what sorts of “modules” are compatible with this evidence? I argue that the only likely candidates will, in effect, be the columns which Vernon Mountcastle originally hypothesized some 60 years ago, and which form part of the well-known columnar hypothesis in neuroscience—systems that cannot handle gross cognitive functions as distinct from strictly exiguous subfunctions. This is in stark contrast to the modules postulated by much of cognitive psychology, cognitive neuropsychology, and evolutionary psychology. And yet the fate of this revised notion is unclear. The main issue confronting it is the effect of the neural network context on local function. At some point the effects of context are so strong that the degree of specialization required for modularity cannot be met. Still, despite indications from neuroimaging that peripheral and central systems deploy shared circuitry, some skills clearly do seem to display modularization and autonomy. This article: provides an in-depth analytical and historical review of the fortunes of modular thinking in cognitive science; offers a systematic calibration of brain regions in terms of degrees of functional specificity and robustness; and suggests another way of accounting for the partially encapsulated character of expertise and other highly practiced skills without having to resort to domain-specific modules.
Modularity is a fundamental doctrine in the cognitive sciences. It holds a preeminent position in cognitive psychology and generative linguistics, and has a long history in neurophysiology, with roots going all the way back to the early nineteenth century. But a mature field of neuroscience is a comparatively recent phenomenon and has challenged orthodox conceptions of the modular mind. One way of accommodating modularity within the new framework suggested by these developments is to adopt increasingly soft versions of modularity. One such version, which I call the “system” view, is so soft that it promises to meet practically any challenge neuroscience can throw at it. In this paper, I reconsider afresh what we ought to regard as the sine qua non of modularity and offer a few arguments against the view that an insipid “system” module could be the legitimate successor of the traditional notion.
Evidence of the pervasiveness of neural reuse in the human brain has forced a revision of the standard conception of modularity in the cognitive sciences. One persistent line of argument against such revision, however, cites the evidence of cognitive dissociations. While this article takes the dissociations seriously, it contends that the traditional modular account is not the best explanation. The key to the puzzle is neural redundancy. The article offers both a philosophical analysis of the relation between reuse and redundancy and a plausible solution to the problem of dissociations.
The use of advanced AI and data-driven automation in the public sector poses several organisational, practical, and ethical challenges. One that is easy to underestimate is automation bias, which, in turn, has underappreciated legal consequences. Automation bias is an attitude in which the operator of an autonomous system will defer to its outputs to the point where the operator overlooks or ignores evidence that the system is failing. The legal problem arises when statutory office-holders (or their employees) either fetter their discretion to in-house algorithms or improperly delegate their discretion to third-party software developers—something automation bias may facilitate. A synthesis of previous research suggests that an easy way to mitigate the risks of automation bias and its potential legal ramifications is for those responsible for procurement decisions to adhere to a simple checklist that ensures the pitfalls of automation are avoided as much as possible.
We review the literature on how an AI's perceived mistakes violate trust and how such violations might be repaired. In doing so, we discuss the role played by various forms of algorithmic transparency in the process of trust repair, including explanations of algorithms, uncertainty estimates, and performance metrics.
The AI revolution provides a neat illustration of C.P. Snow's ideas regarding "the two cultures" and a timely opportunity to reflect on why mutual suspicion persists between those in the natural sciences, on the one hand, and the humanities (and to an extent the social sciences), on the other.
A familiar trope of cognitive science, linguistics, and the philosophy of psychology over the past forty or so years has been the idea of the mind as a modular system, that is, one consisting of functionally specialized subsystems responsible for processing different classes of input, or handling specific cognitive tasks like vision, language, logic, music, and so on. However, one of the major achievements of neuroscience has been the discovery that the brain has incredible powers of renewal and reorganization. This "neuroplasticity," in its various forms, has challenged many of the orthodox conceptions of the mind which originally led cognitive scientists to postulate hardwired mental modules.

This book examines how such discoveries have changed the way we think about the structure of the mind. It contends that the mind is more supple than prevailing theories in cognitive science and artificial intelligence acknowledge. The book uses language as a test case. The claim that language is cognitively special has often been understood as the claim that it is underpinned by dedicated (and innate) cognitive mechanisms. Zerilli offers a fresh take on how our linguistic abilities could be domain-general: enabled by a composite of very small and redundant cognitive subsystems, few if any of which are likely to be specialized for language. In arguing for this position, however, the book takes seriously various cases suggesting that language dissociates from other cognitive faculties.

Accessibly written, The Adaptable Mind is a fascinating account of neuroplasticity, neural reuse, the modularity of mind, the evolution of language, and faculty psychology.
I draw parallels and contrasts between dual-system and modular approaches to cognition, the latter standing to inherit the same problems De Neys identifies regarding the former. Despite these two literatures rarely coming into contact, I provide one example of how he might gain theoretical leverage on the details of his “non-exclusivity” claim by paying closer attention to the modularity debate.
Suddendorf explores “the gap” between humans and other animals, with a particular emphasis on our great ape relatives. Both for nonscientists and for those scientists or philosophers whose work is not centrally preoccupied with such questions, the book provides a tidy compendium of experimental results organized around a number of precisely defined areas of competence. He takes language, mental time travel, theory of mind, intelligence, culture and morality to be definitive of human cognitive prowess and judiciously evaluates the comparative evidence he has assembled in respect of each of them. What is especially refreshing about his interpretations is that they come off as measured and dispassionate just when one imagines the pull of optimism would be strongest. At no point does he rely on hand-waving explanations or recapitulate familiar pieties about man and beast.