Testing for Causality in Artificial Intelligence (AI)

In Sangeetha Menon, Saurabh Todariya & Tilak Agerwala (eds.), AI, Consciousness and The New Humanism: Fundamental Reflections on Minds and Machines. Springer Nature Singapore. pp. 37-54 (2024)

Abstract

In 1950, in a landmark paper on artificial intelligence (AI), Alan Turing posed a fundamental question: “Can machines think?” Towards answering it, he devised a three-party ‘imitation game’ (now famously dubbed the Turing Test) in which a human interrogator must distinguish a machine from another human using only written questions. Turing went on to argue against all the major objections to the proposition that ‘machines can think’. In this chapter, we investigate whether machines can think causally. Having come a long way since Turing, today’s AI systems and algorithms such as deep learning (DL), machine learning (ML), and artificial neural networks (ANN) are highly efficient at finding patterns in data by means of heavy computation and sophisticated information processing via probabilistic and statistical inference, not to mention the recent stunning human-like performance of large language models (ChatGPT and others). However, they lack an inherent capacity for true causal reasoning and judgement. Heralding our transition from the information revolution to a causal revolution, Judea Pearl proposed a “Ladder of Causation” to characterize graded levels of intelligence based on the power of causal reasoning. Despite the tremendous success of today’s AI systems, Pearl placed these algorithms (DL/ML/ANN) on the lowest rung of this ladder, since they learn only by association and statistical correlation (like most animals and babies). Intelligent humans, on the other hand, are capable of interventional learning (second rung) as well as counterfactual and retrospective reasoning (third rung), aided by imagination, creativity, and intuitive reasoning. It is acknowledged that humans have a highly adaptable, rich, and dynamic causal model of reality which is non-trivial to program into machines. What specific factors make causal thinking so difficult for machines to learn? Is it possible to design an imitation game for causally intelligent machines (a causal Turing Test)? This chapter explores some possible ways to address these challenging and fascinating questions.
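The gap between the ladder's first two rungs can be made concrete with a small simulation. The sketch below is not from the chapter; the structural causal model and its probabilities are purely illustrative assumptions. It builds a toy world in which a hidden confounder Z influences both a treatment X and an outcome Y, so that estimating P(Y=1 | X=1) from observational data (rung 1, association) overstates the effect of X, while forcing X by intervention, P(Y=1 | do(X=1)) (rung 2), recovers the true causal effect.

# Rung 1 vs. rung 2 of Pearl's Ladder of Causation: a minimal sketch using a
# hypothetical structural causal model (confounder Z -> X, Z -> Y, and X -> Y).
# Observing X = 1 (association) and forcing X = 1 (intervention, do(X = 1))
# give different probabilities for Y, because Z confounds the X-Y relation.
import random

random.seed(0)
N = 200_000

def simulate(do_x=None):
    """Sample one (x, y) pair; if do_x is given, X is set by intervention, ignoring Z."""
    z = random.random() < 0.5                            # hidden common cause
    x = do_x if do_x is not None else (random.random() < (0.8 if z else 0.2))
    y = random.random() < 0.3 + 0.3 * x + 0.3 * z        # outcome depends on both X and Z
    return x, y

# Rung 1: association, P(Y=1 | X=1) estimated from purely observational data.
obs = [simulate() for _ in range(N)]
p_obs = sum(y for x, y in obs if x) / sum(x for x, _ in obs)

# Rung 2: intervention, P(Y=1 | do(X=1)) estimated by forcing X = 1 in every sample.
intv = [simulate(do_x=True) for _ in range(N)]
p_do = sum(y for _, y in intv) / N

print(f"P(Y=1 | X=1)     ~ {p_obs:.2f}")   # ~0.84: inflated by the confounder
print(f"P(Y=1 | do(X=1)) ~ {p_do:.2f}")    # ~0.75: the true causal effect of X

The design choice is deliberate: an association-only learner that sees just (X, Y) pairs cannot tell the two quantities apart, which is exactly why Pearl places such learners on the lowest rung.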

Links

PhilArchive

Similar books and articles

Intelligence, Artificial and Otherwise. Paul Dumouchel - 2019 - Forum Philosophicum: International Journal for Philosophy 24 (2):241-258.
Embodied artificial intelligence once again. Anna Sarosiek - 2017 - Philosophical Problems in Science 63:231-240.
Human and Artificial Intelligence: A Critical Comparison. Thomas Fuchs - 2022 - In Rainer M. Holm-Hadulla, Joachim Funke & Michael Wink (eds.), Intelligence - Theories and Applications. Springer. pp. 249-259.
Consciousness, intentionality, and intelligence: Some foundational issues for artificial intelligence. Murat Aydede & Guven Guzeldere - 2000 - Journal of Experimental and Theoretical Artificial Intelligence 12 (3):263-277.

Analytics

Added to PP
2024-03-21

Downloads
Total: 4 (#1,638,870)
Last 6 months: 4 (#853,525)

