Machine Consciousness
Summary | Machines cast the problem of explaining consciousness in a particularly interesting light. The most basic question is: Could a machine be conscious? Or, can consciousness be explained mechanically? More specifically, does consciousness have anything to do with what something is made out of, or is the only relevant issue what a thing’s parts are doing, whatever those parts are made of? Could a machine made out of gears and pulleys be conscious? Could a computer be conscious (a variant: is the internet conscious?)? Could a machine made mostly of water, carbon, and nitrogen be conscious? Are only information processors conscious (however this is defined, and however information processing is implemented)? Perhaps can openers aren’t conscious not because they are made out of steel and plastic, but because their parts aren’t processing information, or aren’t processing it in the right way. These two issues can be combined: Can only machines with neurons be conscious, because only neurons can do what has to be done to produce consciousness? Perhaps consciousness cannot be explained mechanically, but nevertheless only mechanical things can be conscious; rocks are excluded, perhaps. Is being alive necessary? Could we upload our consciousness to another kind of machine? Finally, what is the relation between behavior and the attribution of consciousness? Confronted with a non-conscious robot that behaved as if it were conscious, we would find it nearly impossible not to treat it accordingly, say, by refraining from insulting it or hitting it. Behaving as if they are conscious is in fact all we have to go on concerning our fellow carbon-based earthlings. So, it is because animals like dogs, cats, octopuses, and humans behave as if they are conscious that we naturally conclude that they are (at least today . . .
throughout history, however, many humans have been reluctant to attribute consciousness to others significantly unlike them, including other humans, dogs, cats, and octopuses) |
Key works | Leibniz, in section 17 of his Monadology, was one of the first to argue that thinking and perception could not be mechanical: he imagines a thinking machine enlarged so that one could walk around inside it, as in a mill; one would find only parts pushing one another, and nothing to explain its thinking in the workings of its gears and pulleys. An excellent modern version of Leibniz's argument is Searle 1980. Block 1978 reaches a similarly negative conclusion. Stan Franklin's Artificial Minds, MIT Press, 1995, comes to a more positive conclusion and covers a lot of interesting ground. Another positive argument is Chalmers 2011. |
Introductions | An introduction is Gamez 2008. Also good is Stan Franklin's Artificial Minds, MIT Press, 1995. |
- The Turing Test (466)
- Godelian Arguments Against AI (270)
- The Chinese Room (268)
- Machine Mentality, Misc (313)
- Philosophy of Consciousness (30,899 | 8,856)
- Cognitive Models of Consciousness (209)