The question of whether AI systems such as robots can or should be afforded moral agency or patiency is not one amenable either to discovery or to simple reasoning, because we as societies constantly reconstruct our artefacts, including our ethical systems. Consequently, the place of AI systems in society is a matter of normative, not descriptive, ethics. Here I start from a functionalist assumption: that ethics is the set of behaviour that maintains a society. This assumption allows me to exploit the theoretical biology of sociality and autonomy to explain our moral intuitions. From this grounding I extend to consider possible ethics for maintaining either human- or artefact-centred societies. I conclude that while constructing AI systems as either moral agents or patients is possible, neither is desirable. In particular, I argue that we are unlikely to construct a coherent ethics in which it is ethical to afford AI moral subjectivity. We are therefore obliged not to build AI to which we would be obliged.
Conferring legal personhood on purely synthetic entities is a very real legal possibility, one currently under consideration by the European Union. We show here that such legislative action would be morally unnecessary and legally troublesome. While AI legal personhood may have some emotional or economic appeal, so do many superficially desirable hazards against which the law protects us. We review the utility and history of legal fictions of personhood, discussing salient precedents where such fictions resulted in abuse or incoherence. We conclude that difficulties in holding “electronic persons” accountable when they violate the rights of others outweigh the highly precarious moral interests that AI legal personhood might protect.
How obliged can we be to AI, and how much danger does it pose to us? A surprising proportion of our society holds exaggerated fears or hopes for AI, such as the fear of robot world conquest, or the hope that AI will indefinitely perpetuate our culture. These misapprehensions are symptomatic of a larger problem: a confusion about the nature and origins of ethics and its role in society. While AI technologies do pose promises and threats, these are not qualitatively different from those posed by other artifacts of our culture which are largely ignored: from factories to advertising, weapons to political systems. Ethical systems are based on notions of identity, and the exaggerated hopes and fears of AI derive from our cultures having not yet accommodated the fact that language and reasoning are no longer uniquely human. The experience of AI may improve our ethical intuitions and self-understanding, potentially helping our societies make better-informed decisions on serious ethical dilemmas.
This article argues that conscious attention exists not so much for selecting an immediate action as for using the current task to focus specialized learning for the action-selection mechanism and predictive models on tasks and environmental contingencies likely to affect the conscious agent. It is perfectly possible to build this sort of system into machine intelligence, but it would not be strictly necessary unless the intelligence needs to learn and is resource-bounded with respect to the rate of learning versus the rate of relevant environmental change. Support for this theory is drawn from scientific research and AI simulations. Consequences are discussed with respect to self-consciousness and ethical obligations to and for AI.
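To make the resource-bounded claim concrete, here is a minimal sketch (not the article's own simulations; the class and signal names are illustrative assumptions) of an agent that maintains several predictive models but can afford to update only one per timestep, using current prediction error as its attention signal:

```python
# Minimal sketch, assuming a toy environment of two noisy signals.
# The point illustrated: when learning is rationed, attention decides
# which predictive model receives the single update available per step.
import random

class PredictiveModel:
    """Toy predictor: tracks a running estimate of one environmental signal."""
    def __init__(self, name, rate=0.1):
        self.name, self.rate, self.estimate = name, rate, 0.0

    def error(self, observation):
        return abs(observation - self.estimate)

    def update(self, observation):
        # Learning is the costly step that attention must ration.
        self.estimate += self.rate * (observation - self.estimate)

class AttentionGatedLearner:
    """Resource-bounded agent: it can afford one model update per timestep."""
    def __init__(self, models):
        self.models = models

    def step(self, observations):
        # Attend to whichever model currently predicts worst: that is
        # where a single unit of learning is most likely to matter.
        focus = max(self.models, key=lambda m: m.error(observations[m.name]))
        focus.update(observations[focus.name])
        return focus.name

# Usage: the single update per step chases whichever signal the agent's
# predictions currently fit worst.
models = [PredictiveModel("light"), PredictiveModel("sound")]
agent = AttentionGatedLearner(models)
for t in range(5):
    obs = {"light": random.gauss(1.0, 0.2), "sound": random.gauss(-1.0, 0.2)}
    print(t, "attended to:", agent.step(obs))
```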
The term embodiment identifies a theory that meaning and semantics cannot be captured by abstract, logical systems, but are dependent on an agent's experience derived from being situated in an environment. This theory has recently received a great deal of support in the cognitive science literature and is having significant impact in artificial intelligence. Memetics refers to the theory that knowledge and ideas can evolve more or less independently of their human-agent substrates. While humans provide the medium for this evolution, memetics holds that ideas can be developed without human comprehension or deliberate interference. Both theories have profound implications for the study of language: its potential use by machines, its acquisition by children and, of particular relevance to this special issue, its evolution. This article links the theory of memetics to the established literature on semantic space, then examines the extent to which these memetic mechanisms might account for language independently of embodiment. It then seeks to explain the evolution of language through uniquely human cognitive capacities which facilitate memetic evolution.
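For readers unfamiliar with the semantic-space literature the article draws on, the following is a minimal sketch, under assumed parameters (a toy corpus and a context window of two), of the standard co-occurrence construction: word meaning is approximated as a vector of context counts, and words used in similar contexts score high cosine similarity:

```python
# Sketch of a co-occurrence "semantic space"; corpus and window size
# are illustrative assumptions, not taken from the article.
from collections import defaultdict
from math import sqrt

corpus = "the cat sat on the mat the dog sat on the rug".split()
WINDOW = 2  # how many neighbours on each side count as context

# Build a co-occurrence vector for every word in the corpus.
vectors = defaultdict(lambda: defaultdict(int))
for i, word in enumerate(corpus):
    for j in range(max(0, i - WINDOW), min(len(corpus), i + WINDOW + 1)):
        if j != i:
            vectors[word][corpus[j]] += 1

def cosine(u, v):
    dot = sum(u[k] * v.get(k, 0) for k in u)
    norm = lambda w: sqrt(sum(x * x for x in w.values()))
    return dot / (norm(u) * norm(v))

# Words used in similar contexts end up near one another in the space:
print(cosine(vectors["cat"], vectors["dog"]))  # high: parallel contexts
print(cosine(vectors["cat"], vectors["on"]))   # lower: different role
```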
When individuals learn from what others tell them, the information is subject to transmission error that does not arise in learning from direct experience. Yet evidence shows that humans consistently prefer this apparently less reliable source of information. We examine the effect this preference has in cases where the information concerns a judgment on others’ behaviour and is used to establish cooperation in a society. We present a spatial model confirming that cooperation can be sustained by gossip containing a high degree of uncertainty. Accuracy alone does not predict the value of information in evolutionary terms; relevance, the impact of information on behavioural outcomes, must also be considered. We then show that once relevance is incorporated as a criterion, second-hand information can no longer be discounted on the basis of its poor fidelity alone. Finally we show that the relative importance of accuracy and relevance depends on factors of life history and demography.
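The paper's model is spatial and considerably richer; the sketch below is only a hedged, well-mixed toy (every parameter is an assumption for the demo) illustrating the central point that even highly noisy gossip remains behaviourally relevant: conditional cooperators still channel help toward cooperators and away from defectors despite a 30% transmission error rate:

```python
# Illustrative toy, not the paper's model: donors decide whether to help
# a recipient based on the recipient's gossiped reputation, and gossip
# flips the reported reputation with probability NOISE.
import random

N, ROUNDS, NOISE = 50, 200, 0.3
random.seed(1)

strategy = [random.random() < 0.5 for _ in range(N)]   # True = cooperator
reputation = [True] * N                                 # perceived standing

helped_coop = helped_defect = asked_coop = asked_defect = 0
for _ in range(ROUNDS):
    donor, recipient = random.sample(range(N), 2)
    # Donor hears about the recipient second-hand; gossip may be garbled.
    heard_good = reputation[recipient] != (random.random() < NOISE)
    gives = strategy[donor] and heard_good
    # Observers gossip about the donor: refusing a reputedly good
    # partner is bad standing; refusing a reputedly bad one is justified.
    reputation[donor] = gives or not heard_good
    if strategy[recipient]:
        asked_coop += 1; helped_coop += gives
    else:
        asked_defect += 1; helped_defect += gives

print("help rate toward cooperators:", helped_coop / asked_coop)
print("help rate toward defectors:  ", helped_defect / asked_defect)
```

Even with nearly a third of all gossip garbled, the help rates diverge: noisy second-hand information still changes behavioural outcomes, which is the sense of "relevance" the abstract invokes.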
Social learning is a source of behaviour for many species, but few use it as extensively as they seemingly could. In this article, I attempt to clarify our understanding of why this might be. I discuss the potential computational properties of social learning, then examine the phenomenon in nature through creating a taxonomy of the representations that might underlie it. This is achieved by first producing a simplified taxonomy of the established forms of social learning, then describing the primitive capacities necessary to support them, and finally considering which of these capacities we actually have evidence for. I then discuss theoretical limits on cultural evolution, which include having sufficient information transmitted to support robust representations capable of supporting variation for evolution, and the need to limit the extent of social conformity to avoid ecological fragility. Finally, I show how these arguments can inform several key scientific questions, including the uniqueness of human culture, the long lifespans of cultural species, and the propensity of animals to seemingly have knowledge about a phenomenon well before they will act upon it.
One of the interesting and occasionally controversial aspects of Dennett’s career is his direct involvement in the scientific process. This article describes some of Dennett’s participation on one particular project conducted at MIT, the building of the humanoid robot named Cog. One of the intentions of this project, not to date fully realized, was to test Dennett’s multiple drafts theory of consciousness. I describe Dennett’s involvement and impact on Cog from the perspective of a graduate student. I also describe the problem of coordinating distributed intelligent systems, drawing examples from robot intelligence, human intelligence, and the Cog project itself.
Language isn't the only way to cross modules, nor is it the only module with access to both input and output. Minds don't generally work across modules because this leads to combinatorial explosion in search and planning. Language is special in being a good vector for memetics, so it becomes associated with useful cross-module concepts we acquire culturally. Further, language is indexical, so it facilitates computationally expensive operations.
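The combinatorial-explosion point can be made with back-of-envelope arithmetic (the numbers below are assumptions for illustration): if each of M modules contributes B candidate operations per planning step, unrestricted cross-module planning searches (M x B)^depth states rather than B^depth:

```python
# Assumed figures purely for illustration of the scaling argument.
M, B, depth = 5, 10, 4
within_module = B ** depth
cross_module = (M * B) ** depth
print(f"within one module: {within_module:,} states")   # 10,000
print(f"across modules:    {cross_module:,} states")    # 6,250,000
```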