Extract from Hofstadter's review in the Bulletin of the American Mathematical Society: http://www.ams.org/journals/bull/1980-02-02/S0273-0979-1980-14752-7/S0273-0979-1980-14752-7.pdf

"Aaron Sloman is a man who is convinced that most philosophers and many other students of mind are in dire need of being convinced that there has been a revolution in that field happening right under their noses, and that they had better quickly inform themselves. The revolution is called "Artificial Intelligence" (AI), and Sloman attempts to impart to others the "enlightenment" which he clearly regrets not having experienced earlier himself. Being somewhat of a convert, Sloman is a zealous campaigner for his point of view. Now a Reader in Cognitive Science at Sussex, he began his academic career in more orthodox philosophy and, by exposure to linguistics and AI, came to feel that all approaches to mind which ignore AI are missing the boat. I agree with him, and I am glad that he has written this provocative book. The tone of Sloman's book can be gotten across by this quotation (p. 5): "I am prepared to go so far as to say that within a few years, if there remain any philosophers who are not familiar with some of the main developments in artificial intelligence, it will be fair to accuse them of professional incompetence, and that to teach courses in philosophy of mind, epistemology, aesthetics, philosophy of science, philosophy of language, ethics, metaphysics, and other main areas of philosophy, without discussing the relevant aspects of artificial intelligence will be as irresponsible as giving a degree course in physics which includes no quantum theory."

(The author now regrets the extreme polemical tone of the book.)
Replication or even modelling of consciousness in machines requires some clarifications and refinements of our concept of consciousness. Design of, construction of, and interaction with artificial systems can itself assist in this conceptual development. We start with the tentative hypothesis that although the word “consciousness” has no well-defined meaning, it is used to refer to aspects of human and animal information-processing. We then argue that we can enhance our understanding of what these aspects might be by designing and building virtual-machine architectures capturing various features of consciousness. This activity may in turn nurture the development of our concepts of consciousness, showing how an analysis based on information-processing virtual machines answers old philosophical puzzles as well as enriching empirical theories. This process of developing and testing ideas by developing and testing designs leads to gradual refinement of many of our pre-theoretical concepts of mind, showing how they can be construed as implicitly “architecture-based” concepts. Understanding how human-like robots with appropriate architectures are likely to feel puzzled about qualia may help us resolve those puzzles. The concept of “qualia” turns out to be an “architecture-based” concept, while individual qualia concepts are “architecture-driven”.
Emotions involve complex processes produced by interactions between motives, beliefs, percepts, etc. E.g. real or imagined fulfilment or violation of a motive, or triggering of a 'motive-generator', can disturb processes produced by other motives. To understand emotions, therefore, we need to understand motives and the types of processes they can produce. This leads to a study of the global architecture of a mind. Some constraints on the evolution of minds are discussed. Types of motives and the processes they generate are sketched.
This is not a scholarly research paper, but a ‘position paper’ outlining an approach to the study of mind which has been gradually evolving since about 1969, when I first became acquainted with work in Artificial Intelligence through Max Clowes. I shall try to show why it is more fruitful to construe the mind as a control system than as a computational system.
This paper is about how to give human-like powers to complete agents. For this the most important design choice concerns the overall architecture. Questions regarding detailed mechanisms, forms of representation, inference capabilities, knowledge etc. are best addressed in the context of a global architecture in which different design decisions need to be linked. Such a design would assemble various kinds of functionality into a complete, coherent working system, in which there are many concurrent, partly independent, partly mutually supportive, partly potentially incompatible processes, addressing a multitude of issues on different time scales, including asynchronous, concurrent motive generators. Designing human-like agents is part of the more general problem of understanding design space, niche space and their interrelations, for, in the abstract, there is no one optimal design, as biological diversity on Earth shows.
The design-based approach is a methodology for investigating mechanisms capable of generating mental phenomena, whether introspectively or externally observed, and whether they occur in humans, other animals or robots. The study of designs satisfying requirements for autonomous agency can provide new, deep theoretical insights at the information-processing level of description of mental mechanisms. Designs for working systems (whether on paper or implemented on computers) can systematically explicate old explanatory concepts and generate new concepts that allow new and richer interpretations of human phenomena. To illustrate this, some aspects of human grief are analysed in terms of a particular information-processing architecture being explored in our research group. We do not claim that this architecture is part of the causal structure of the human mind; rather, it represents an early stage in the iterative search for a deeper and more general architecture, capable of explaining more phenomena. However, even the current early design provides an interpretative ground for some familiar phenomena, including characteristic features of certain emotional episodes, particularly the phenomenon of perturbance (a partial or total loss of control of attention). The paper attempts to expound and illustrate the design-based approach to cognitive science and philosophy, to demonstrate the potential effectiveness of the approach in generating interpretative possibilities, and to provide first steps towards an information-processing account of `perturbant' emotional episodes.
What is the relation between intelligence and computation? Although the difficulty of defining `intelligence' is widely recognized, many are unaware that it is hard to give a satisfactory definition of `computational' if computation is supposed to provide a non-circular explanation for intelligent abilities. The only well-defined notion of `computation' is what can be generated by a Turing machine or a formally equivalent mechanism. This is not adequate for the key role in explaining the nature of mental processes, because it is too general: many computations involve nothing mental, nor even processes; they are simply abstract structures. We need to combine the notion of `computation' with that of `machine'. This may still be too restrictive, if some non-computational mechanisms prove to be useful for intelligence. We need a theory-based taxonomy of architectures and mechanisms and corresponding process types. Computational machines may turn out to be a sub-class of the machines available for implementing intelligent agents. The more general analysis starts with the notion of a system with independently variable, causally interacting sub-states that have different causal roles, including both `belief-like' and `desire-like' sub-states, and many others. There are many significantly different such architectures. For certain architectures (including simple computers), some sub-states have a semantic interpretation for the system. The relevant concept of semantics is defined partly in terms of a kind of Tarski-like structural correspondence (not to be confused with isomorphism). This always leaves some semantic indeterminacy, which can be reduced by causal loops involving the environment. But the causal links are complex, can share causal pathways, and always leave mental states to some extent semantically indeterminate.
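The abstract above characterises an agent in terms of interacting sub-states rather than algorithms. As a purely illustrative toy (a minimal sketch in Python; the class and attribute names are invented here, not taken from the paper), the following fragment shows the bare minimum of the idea: a `belief-like' sub-state updated by sensing and a `desire-like' sub-state that drives action, independently variable but causally interacting through the environment.

```python
class MinimalControlSystem:
    """Toy illustration of independently variable, causally interacting
    sub-states with different causal roles (vastly simpler than the
    architectures discussed in the paper)."""

    def __init__(self, target: float):
        self.belief_temp = 0.0     # 'belief-like': tracks how the world is
        self.desire_temp = target  # 'desire-like': specifies how it should be

    def sense(self, reading: float) -> None:
        # The environment causally updates the belief-like sub-state.
        self.belief_temp = reading

    def act(self) -> str:
        # Belief-like and desire-like sub-states jointly determine action,
        # closing a causal loop through the environment.
        return "heat on" if self.belief_temp < self.desire_temp else "heat off"

thermostat = MinimalControlSystem(target=20.0)
thermostat.sense(17.5)
print(thermostat.act())  # -> "heat on"
```

Even this trivial controller shows why the two sub-states differ in causal role: the belief-like state is corrected to fit the world, while the desire-like state makes the system change the world to fit it.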
This paper rehearses some relatively old arguments about how any coherent notion of free will is not only compatible with but depends on determinism. However, the mind-brain identity theory is attacked on the grounds that what makes a physical event an intended action A is that the agent interprets the physical phenomena as doing A. The paper should have referred to the monograph Intention by Elizabeth Anscombe, which discusses in detail the fact that the same physical event can have multiple descriptions, using different ontologies.
This position paper presents the beginnings of a general theory of representations, starting from the notion that an intelligent agent is essentially a control system with multiple control states, many of which contain information (both factual and non-factual), albeit not necessarily in a propositional form. The paper attempts to give a general characterisation of the notion of the syntax of an information store, in terms of the types of variation the relevant mechanisms can cope with. Similarly, concepts of semantics, pragmatics and inference are generalised to apply to information-bearing sub-states in control systems. A number of common but incorrect notions about representation are criticised (such as that pictures are in some way isomorphic with what they represent).
Most philosophers appear to have ignored the distinction between the broad concept of Virtual Machine Functionalism (VMF) described in Sloman & Chrisley (2003) and the better known version of functionalism referred to there as Atomic State Functionalism (ASF), which is often given as an explanation of what functionalism is, e.g. in Block (1995).

One of the main differences is that ASF encourages talk of supervenience of states and properties, whereas VMF requires supervenience of machines that are arbitrarily complex networks of causally interacting (virtual, but real) processes, possibly operating on different time-scales. Examples include the many different processes usually running concurrently on a modern computer, performing various tasks concerned with handling interfaces to physical devices, managing the file system, dealing with security, providing tools, entertainments, and games, and possibly processing research data. Another example of VMF would be the kind of functionalism involved in a large collection of possibly changing socio-economic structures and processes interacting in a complex community. Yet another is illustrated by the kind of virtual machinery involved in the many levels of visual processing of information about spatial structures, processes, and relationships (including percepts of moving shadows, reflections, highlights, optical-flow patterns and changing affordances) as you walk through a crowded car-park on a sunny day: generating a whole zoo of interacting qualia. (Forget solitary red patches, or experiences thereof.)

Perhaps VMF should be re-labelled "Virtual MachinERY Functionalism", because the word 'machinery' more readily suggests something complex with interacting parts. VMF is concerned with virtual machines that are made up of interacting, concurrently active (but not necessarily synchronised) chunks of virtual machinery which not only interact with one another and with their physical substrates (which may be partly shared, and also frequently modified by garbage collection, metabolism, or whatever) but can also concurrently interact with and refer to various things in the immediate and remote environment (via sensory/motor channels, and possible future technologies also). I.e. virtual machinery can include mechanisms that create and manipulate semantic content, not only syntactic structures or bit patterns as digital virtual machines do.

Please note: Click on the title above or the link below to read the paper. I prefer to keep all my papers freely accessible on my web site so that I can correct mistakes and add improvements.

http://www.cs.bham.ac.uk/research/projects/cogaff/misc/vm-functionalism.html

This is now part of the Meta-Morphogenesis project: http://www.cs.bham.ac.uk/research/projects/cogaff/misc/meta-morphogenesis.html
As a step towards comprehensive computer models of communication, and effective human-machine dialogue, some of the relationships between communication and affect are explored. An outline theory is presented of the architecture that makes various kinds of affective states possible, or even inevitable, in intelligent agents, along with some of the implications of this theory for various communicative processes. The model implies that human beings typically have many different, hierarchically organized dispositions capable of interacting with new information to produce affective states, distract attention, interrupt ongoing actions, and so on. High "insistence" of motives is defined in relation to a tendency to penetrate an attention filter mechanism, which seems to account for the partial loss of control involved in emotions. One conclusion is that emulating human communicative abilities will not be achieved easily. Another is that it will be even more difficult to design and build computing systems that reliably achieve interesting communicative goals.
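The filter-penetration idea lends itself to a simple computational reading. Here is a minimal sketch (in Python; the class names, the threshold value, and the displacement policy are all invented for illustration, not taken from the paper): each motive carries an "insistence" level, and only motives whose insistence exceeds the current filter threshold interrupt ongoing processing, one crude way to model the partial loss of control of attention.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Motive:
    description: str
    insistence: float  # tendency to penetrate the attention filter

class AttentionFilter:
    """Hypothetical sketch of an insistence-based interrupt filter."""

    def __init__(self, threshold: float = 0.5):
        self.threshold = threshold
        self.pending: List[Motive] = []          # motives held below the filter
        self.current_focus: Optional[Motive] = None

    def submit(self, motive: Motive) -> bool:
        """Return True if the motive penetrates the filter and seizes attention."""
        if motive.insistence > self.threshold:
            if self.current_focus is not None:
                self.pending.append(self.current_focus)  # displaced, not discarded
            self.current_focus = motive
            return True
        self.pending.append(motive)  # waits without disturbing attention
        return False

attention = AttentionFilter(threshold=0.5)
attention.submit(Motive("admire the scenery", insistence=0.3))         # stays pending
attention.submit(Motive("respond to urgent request", insistence=0.9))  # interrupts
```

On this toy reading, raising the threshold models concentration, and a motive insistent enough to penetrate repeatedly even when set aside is what distinguishes an emotional state such as longing from a calm preference.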
"The Emperor's New Mind" by Roger Penrose has received a great deal of both praise and criticism. This review discusses philosophical aspects of the book that form an attack on the "strong" AI thesis. Eight different versions of this thesis are distinguished, and sources of ambiguity diagnosed, including different requirements for relationships between program and behaviour. Excessively strong versions attacked by Penrose (and Searle) are not worth defending or attacking, whereas weaker versions remain problematic. Penrose (like Searle) regards the notion (...) of an algorithm as central to AI, whereas it is argued here that for the purpose of explaining mental capabilities the architecture of an intelligent system is more important than the concept of an algorithm, using the premise that what makes something intelligent is not what it does but how it does it. What needs to be explained is also unclear: Penrose thinks we all know what consciousness is and claims that the ability to judge Go "del's formula to be true depends on it. He also suggests that quantum phenomena underly consciousness. This is rebutted by arguing that our existing concept of "consciousness" is too vague and muddled to be of use in science. This and related concepts will gradually be replaced by a more powerful theory-based taxonomy of types of mental states and processes. The central argument offered by Penrose against the strong AI thesis depends on a tempting but unjustified interpretation of Goedel's incompleteness theorem. Some critics are shown to have missed the point of his argument. A stronger criticism is mounted, and the relevance of mathematical Platonism analysed. Architectural requirements for intelligence are discussed and differences between serial and parallel implementations analysed. (shrink)
My favourite leading question when teaching Philosophy of Mind is ‘Could a goldfish long for its mother?’ This introduces the philosophical technique of ‘conceptual analysis’, essential for the study of mind (Sloman 1978, ch. 4). By analysing what we mean by ‘A longs for B’, and similar descriptions of emotional states, we see that they involve rich cognitive structures and processes, i.e. computations. Anything which could long for its mother would have to have some sort of representation of its mother, would have to believe that she is not in the vicinity, would have to be able to represent the possibility of being close to her, would have to desire that possibility, and would have to be to some extent pre-occupied or obsessed with that desire. That is, it should intrude into and interfere with other activities, like admiring the scenery, catching smaller fish, etc. If the desire were there, but could be calmly put aside whilst other interests were pursued, then it would not be truly a state of longing. It might be a state of preferring. Thus longing involves computational interrupts. The same seems to be true of all emotions.
The common view that the notion of a Turing machine is directly relevant to AI is criticised. It is argued that computers are the result of a convergence of two strands of development with a long history: development of machines for automating various physical processes and machines for performing abstract operations on abstract entities, e.g. doing numerical calculations. Various aspects of these developments are analysed, along with their relevance to AI, and the similarities between computers viewed in this way and animal brains. This comparison depends on a number of distinctions: between energy requirements and information requirements of machines, between ballistic and online control, between internal and external operations, and between various kinds of autonomy and self-awareness. The ideas are all intuitively familiar to software engineers, though rarely made fully explicit. Most of this has nothing to do with Turing machines or most of the mathematical theory of computation. But it has everything to do with both the scientific task of understanding, modelling or replicating human or animal intelligence and the engineering applications of AI, as well as other applications of computers.
It is often thought that there is one key design principle or at best a small set of design principles, underlying the success of biological organisms. Candidates include neural nets, ‘swarm intelligence’, evolutionary computation, dynamical systems, particular types of architecture or use of a powerful uniform learning mechanism, e.g. reinforcement learning. All of those support types of self-organising, self-modifying behaviours. But we are nowhere near understanding the full variety of powerful information-processing principles ‘discovered’ by evolution. By attending closely to the diversity of biological phenomena we may gain key insights into (a) how evolution happens, (b) what sorts of mechanisms, forms of representation, types of learning and development and types of architectures have evolved, (c) how to explain ill-understood aspects of human and animal intelligence, and (d) new useful mechanisms for artificial systems.
This paper offers a short and biased overview of the history of discussion and controversy about the role of different forms of representation in intelligent agents. It repeats and extends some of the criticisms of the `logicist' approach to AI that I first made in 1971, while also defending logic for its power and generality. It identifies some common confusions regarding the role of visual or diagrammatic reasoning, including confusions based on the fact that different forms of representation may be used at different levels in an implementation hierarchy. This is contrasted with the way in which the use of one form of representation (e.g. pictures) can be controlled using another (e.g. logic, or programs). Finally some questions are asked about the role of metrical information in biological visual systems.
The aim of the thesis is to show that there are some synthetic necessary truths, or that synthetic a priori knowledge is possible. This is really a pretext for an investigation into the general connection between meaning and truth, or between understanding and knowing, which, as pointed out in the preface, is really the first stage in a more general enquiry concerning meaning. After the preliminaries, in which the problem is stated and some methodological remarks made, the investigation proceeds in two stages. First there is a detailed inquiry into the manner in which the meanings or functions of words occurring in a statement help to determine the conditions in which that statement would be true. This prepares the way for the second stage, which is an inquiry concerning the connection between meaning and necessary truth. The first stage occupies Part Two of the thesis, the second stage Part Three. In all this, only a restricted class of statements is discussed, namely those which contain nothing but logical words and descriptive words, such as "Not all round tables are scarlet" and "Every three-sided figure is three-angled".
This paper outlines a design-based methodology for the study of mind as a part of the broad discipline of Artificial Intelligence. Within that framework some architectural requirements for human-like minds are discussed, and some preliminary suggestions made regarding mechanisms underlying motivation, emotions, and personality. A brief description is given of the `Nursemaid' or `Minder' scenario being used at the University of Birmingham as a framework for research on these problems. It may be possible later to combine some of these ideas with work on synthetic agents inhabiting virtual reality environments.
This paper, along with the following paper by John McCarthy, introduces some of the topics to be discussed at the IJCAI95 event `A philosophical encounter: An interactive presentation of some of the key philosophical problems in AI and AI problems in philosophy.' Philosophy needs AI in order to make progress with many difficult questions about the nature of mind, and AI needs philosophy in order to help clarify goals, methods, and concepts and to help with several specific technical problems. Whilst philosophical attacks on AI continue to be welcomed by a significant subset of the general public, AI defenders need to learn how to avoid philosophically naive rebuttals.
CONJECTURE: Alongside the innate physical sucking reflex for obtaining milk to be digested, decomposed and used all over the body for growth, repair, and energy, there is a genetically determined information-sucking reflex, which seeks out, sucks in, and decomposes information, which is later recombined in many ways, growing the information-processing architecture and many diverse recombinable competences.
This paper attempts to characterise a unifying overview of the practice of software engineers, AI designers, developers of evolutionary forms of computation, designers of adaptive systems, etc. The topic overlaps with theoretical biology, developmental psychology and perhaps some aspects of social theory. Just as much of theoretical computer science follows the lead of engineering intuitions and tries to formalise them, there are also some important emerging high-level cross-disciplinary ideas about natural information-processing architectures and evolutionary mechanisms that can perhaps be unified and formalised in the future. There is some speculation about the evolution of human cognitive architectures and consciousness.
What we have learnt in the last six or seven decades about virtual machinery, as a result of a great deal of science and technology, enables us to offer Darwin a new defence against critics who argued that only physical form, not mental capabilities and consciousness, could be products of evolution by natural selection. The defence compares the mental phenomena mentioned by Darwin’s opponents with contents of virtual machinery in computing systems. Objects, states, events, and processes in virtual machinery, which we have only recently learnt how to design and build, and could not even have been thought about in Darwin’s time, can interact with the physical machinery in which they are implemented, without being identical with their physical implementation, nor mere aggregates of physical structures and processes. The existence of various kinds of virtual machinery depends on complex webs of causal connections involving hardware and software structures, events and processes, where the specification of such causal webs requires concepts that cannot be defined in terms of concepts of the physical sciences. That indefinability, plus the possibility of various kinds of self-monitoring within virtual machinery, seems to explain some of the allegedly mysterious and irreducible features of consciousness that motivated Darwin’s critics and also more recent philosophers criticising AI. There are consequences for philosophy, psychology, neuroscience and robotics.
The author comments on Rick Grush’s statements about emulation and the embodied approach to representation. He proposes a modification of Grush’s definition of emulation, criticizing the notion of “standing in for”. He defends the notion of representation, and claims that radical embodied theories are not applicable to all cognition.
This is a contribution to the construction of a research roadmap for future cognitive systems, including intelligent robots, in the context of the euCognition network and UKCRC Grand Challenge 5: Architecture of Brain and Mind.

A meeting on the euCognition roadmap project was held at Munich Airport on 11th Jan 2007. This document was in part a response to discussions at that meeting. An explanation of why specifying requirements is a hard problem, and why it needs to be done, along with some suggestions for making progress, can be found in this presentation: http://www.cs.bham.ac.uk/research/projects/cosy/papers/#pr0701 "What's a Research Roadmap For? Why do we need one? How can we produce one?" Working on that presentation made me realise that certain deceptively familiar words and phrases frequently used in this context (e.g. "robust", "flexible", "autonomous") appear not to need explanation because everyone understands them, whereas in fact they have obscure semantics that needs to be elucidated. Only then can we understand what the implications are for research targets. In particular, they need explanation and analysis if they are to be used to specify requirements and research goals, especially for publicly funded projects.

First draft analyses are presented here. In the long term I would like to expand and clarify those analyses, and to provide many different examples to illustrate the points made. This will probably have to be a collaborative research activity.