

On Human-like Performance Artificial Intelligence through Causal Learning: A Demonstration Using an Atari Game
Posted by ODSC Community on September 2, 2021

This upcoming ODSC APAC 2021 talk will provide the theoretical background and a practical demonstration of how to implement causal learning, that is, the learning of causality.
Why is causality important?
Prof. Judea Pearl of UCLA, the 2011 recipient of the Turing Award (the equivalent of the Nobel Prize for Computer Science) and a pioneer of many of the key machine learning techniques used in AI today, declared in 2018: "To Build Truly Intelligent Machines, Teach Them Cause and Effect" (Communications of the ACM, May 17, 2018).
Causality is therefore at the heart of an intelligent understanding of the world, and current machine learning methods have so far not adequately addressed it.
Despite the progress made in AI, especially the successful deployment of deep learning for many useful tasks, the systems involved typically require a huge number of training instances, and hence a long time to train. As a result, they cannot rapidly adapt to changing rules and constraints in the environment, unlike humans, who are usually able to learn from only a handful of experiences. This hampers the deployment of, say, an adaptive robot that must learn and act rapidly in the ever-changing environment of a home, office, factory, or disaster area. It is therefore necessary for an AI or robotic system to achieve human performance not only in terms of the "level" or "score" attained (e.g., success rate in classification, score in Atari game playing) but also in terms of the speed with which that level or score is achieved.

In contrast with earlier efforts on Atari game learning and playing, which demonstrated that a deep reinforcement learning system can learn to play the games at human level in terms of score, we describe a system that learns causal rules rapidly in an Atari game environment and achieves human-like performance in terms of both score and time.
What is a good example of causal learning?
One example of the learning of causality is given in the figure below:
(Figure above taken from: Yang, X. and Ho, S.-B. (2018). Learning Correlations and Causalities through an Inductive Bootstrapping Process. Proceedings of the IEEE Symposium Series on Computational Intelligence, November 18-21, 2018.)
The top part of the figure shows a sequence of events F, M, N1, and N2, where N1 and N2 are noise embedded around the purported pair of causally linked events, F (e.g., "Force") and M (e.g., "Movement"). F and M could also be, say, lightning and thunder respectively, while the noisy events N1 and N2 could be wind and bird chirping. Each little bar chart in the figure represents one cycle of processing through the long chain of F, M, N1, and N2 events. It can be seen that initially the "Causal Strength" of the link between F and M, i.e., F→M, is low and is buried among the other "noisy" links between events, such as N2→M. After 4 cycles of processing, F→M begins to emerge and acquires the highest "Causal Strength", and the true causal link between the events is thus uncovered.
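To make the idea concrete, here is a minimal sketch of how a simple co-occurrence-based "causal strength" score can let a true cause–effect link such as F→M emerge from a noisy event stream over repeated cycles. This is an illustrative toy, not the inductive bootstrapping method of Yang and Ho (2018); the event generator, the window of one step, and the moving-average update rule are all assumptions made only for this example.

```python
import random
from collections import defaultdict
from itertools import product

EVENTS = ["F", "M", "N1", "N2"]

def generate_cycle(length=50):
    """Generate one cycle of events: F is always immediately followed by M,
    while the noise events N1 and N2 occur at random positions."""
    seq = []
    for _ in range(length):
        r = random.random()
        if r < 0.3:
            seq += ["F", "M"]      # the true causal pair
        elif r < 0.65:
            seq.append("N1")       # noise, e.g., wind
        else:
            seq.append("N2")       # noise, e.g., bird chirping
    return seq

def update_strengths(seq, strength, lr=0.1):
    """Nudge the causal-strength estimate of every ordered pair (A -> B)
    toward the fraction of times B immediately followed A in this cycle."""
    follows = defaultdict(int)
    counts = defaultdict(int)
    for a, b in zip(seq, seq[1:]):
        follows[(a, b)] += 1
        counts[a] += 1
    for a, b in product(EVENTS, repeat=2):
        if a == b or counts[a] == 0:
            continue
        observed = follows[(a, b)] / counts[a]   # P(B follows A) in this cycle
        strength[(a, b)] += lr * (observed - strength[(a, b)])
    return strength

strength = defaultdict(float)
for cycle in range(1, 11):
    strength = update_strengths(generate_cycle(), strength)
    best = max(strength, key=strength.get)
    print(f"cycle {cycle}: strongest link {best[0]} -> {best[1]} "
          f"({strength[best]:.2f})")
```

Running this for a few cycles typically reports F -> M as the strongest link, with the spurious links (e.g., N2 -> M) staying weak, mirroring the figure's progression from noise-dominated estimates to a clearly dominant causal link.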
Learn more at the talk
This talk will cover the techniques for achieving the above learning of causality, as well as other relevant topics in causal learning, such as how to apply it to learn to play an Atari game, Space Invaders. Fundamental theoretical concepts of perceptual causality and causal learning will also be covered.
Seng-Beng Ho is currently Senior Scientist & Deputy Director, Department of Social & Cognitive Computing, Institute of High Performance Computing, Agency for Science, Technology and Research (A*STAR), Singapore. He obtained his Ph.D. in Cognitive Science (AI, Neuroscience, Psychology, & Linguistics) and M.Sc. in Computer Science from the University of Wisconsin–Madison, U.S.A., and a B.E. in Electronic Engineering from the University of Western Australia. He is the author of a monograph published by Springer International in June 2016 entitled "Principles of Noology: Toward a Theory and Science of Intelligence", in which he presents a principled and fundamental theoretical framework that is critical for building truly general AI systems.