Machine Ethics
Edited by Jeffrey White (Okinawa Institute of Science and Technology, Universidade Nova de Lisboa)
About this topic
Summary | In the early 2000s, James Moor set out four classes of ethical machine, advising that the near-term focus of machine ethics research should be on "explicit ethical agents": agents designed from an understanding of human theoretical ethics to operate in accordance with its principles. Beyond this class, the ultimate aim of inquiry into machine ethics is to understand human morality and natural science well enough to engineer a fully autonomous moral machine. This sub-category supports that inquiry. Work on other sorts of computer applications and their ethical impacts appears in different categories, including Ethics of Artificial Intelligence, Moral Status of Artificial Systems, Robot Ethics, Algorithmic Fairness, Computer Ethics, and others. Machine ethics is ethics, and it is also a study of machines. Machine ethicists ask why people, human beings, and other organisms do what they do when they do it, and what makes those things the right things to do; in this respect they are ethicists. In addition, machine ethicists work out how to articulate such processes in an independent artificial system (rather than by parenting a biological child or training a human minion, as in the traditional alternatives). Machine ethics researchers therefore engage directly with rapidly advancing work in cognitive science and psychology, alongside robotics and AI, applied ethics (such as medical ethics) and philosophy of mind, computer modeling and data science, and so on. Drawing on so many disciplines, each advancing rapidly and with its own impacts, machine ethics sits in the middle of a maelstrom of current research activity. Advances in materials science and physical chemistry leverage advances in cognitive science and neurology, which in turn feed advances in AI and robotics, including, for example, with regard to interpretability. Putting all of this together is the challenge for the machine ethics researcher. This sub-category is intended to support efforts to meet that challenge.
Key works | Allen et al. 2005, Wallach et al. 2008, Tonkens 2012, Tonkens 2009, Müller & Bostrom 2014, White 2013, White 2015
Introductions | Anderson & Anderson 2007, Segun 2021, Powers 2011, Moor 2006 |
Related categories
Siblings:
- Algorithmic Fairness (72)
- Artificial Intelligence Safety (189)
- Autonomous Vehicles (15)
- Autonomous Weapons (11)
- Moral Status of Artificial Systems (503)
- Robot Ethics (433)
- Ethics of Artificial Intelligence, Misc (998)
- Computer Ethics (980 | 374)