

How to Bring Our World Knowledge to Machine Learning?
Machine Learning | Modeling | Europe 2022
Posted by ODSC Community on April 7, 2022

Editor’s note: Oliver is a speaker for ODSC Europe 2022 this June 15th-16th. Be sure to check out his talk, “How to teach our world knowledge to a neural network?” there!
Have a look at my little magic trick in the video. Can you help but try to solve the mystery: where did the smurf go?
Obviously, this blog post is not so much about magic, and certainly not about my fairly limited magic skills, but about the knowledge of the world that such tricks seem to challenge.
Because the concept of object permanence is baked so deeply into our minds, one of the most common categories of magic tricks is the one that makes a thing disappear without a trace, appear out of nowhere, or both. A trick like this always makes you wonder: where did it go, or where did it come from? After all, you know that things don't just appear or disappear.
Our minds are full of such knowledge about the world, be it the physics of everyday objects or the properties of human beings. If an automatic system fails to display knowledge of such basic properties of the world, we are typically disappointed and our trust in the system drops immediately. This is especially true for systems based on machine learning.
Bringing in Priors
In the ideal machine learning world, we would be able to expose a learning system to such a large variety of information that the system could learn everything about the world. However, arguably even our own knowledge was not acquired in a single lifetime, but has been built up by evolution over thousands and even millions of years. Some AI researchers suggest that, in order to reach anything comparable to human performance, a system would also need to actively explore the world for quite some time.
This, however, is impractical in a real-world project. Instead, we concentrate on bringing in the prior beliefs we hold. We typically do this by setting up the machine learning approach in such a way that it reflects that knowledge. This includes choosing between supervised, self-supervised, and unsupervised learning. It also includes the choice of learning algorithm and, if you choose neural networks, their architecture and loss function.
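To make this concrete, here is a minimal sketch using TensorFlow/Keras (in the spirit of the workshop examples, though the model and shapes are purely illustrative) of how an architecture choice encodes a prior: convolutional layers bake in the belief that nearby pixels belong together and that an object is the same object wherever it appears in the frame, while a plain dense network would have to learn all of that from data.

```python
import tensorflow as tf

# A convolutional architecture encodes two priors about images:
# - locality: nearby pixels are related (small kernels)
# - translation invariance: an object is the same object anywhere
#   in the frame (weight sharing plus pooling)
conv_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Conv2D(32, kernel_size=3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, kernel_size=3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10, activation="softmax"),
])

# A dense baseline holds no such prior: it treats every pixel as an
# unrelated input and would have to learn locality from scratch.
dense_model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(64, 64, 3)),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(10, activation="softmax"),
])
```

Given the same data, the convolutional model usually needs far fewer examples to generalize, precisely because part of the "knowledge" is already in its structure.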
We can also encode certain beliefs about our scenarios into the training data. One way of doing this is to be consciously biased about the data you choose for training: you would prefer data you consider more likely to occur. Another way is to augment or adapt the training data accordingly, as in the sketch below.
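As an illustration, the following hypothetical augmentation pipeline (again TensorFlow/Keras; the exact layers and parameters are my assumptions, not part of the original post) encodes the belief that a mirrored everyday scene is still plausible while an upside-down one is not, because gravity is part of our world knowledge:

```python
import tensorflow as tf

# Augmentation encodes beliefs about which variations are plausible.
# For everyday photos, a mirrored scene is still realistic, but an
# upside-down scene is not, so we flip horizontally, never vertically.
augment = tf.keras.Sequential([
    tf.keras.layers.RandomFlip("horizontal"),
    tf.keras.layers.RandomRotation(0.05),  # small camera tilts happen, 180-degree turns don't
    tf.keras.layers.RandomZoom(0.1),       # objects appear at varying distances
])

# Applied on the fly while training, e.g. on a batch of float32 images:
# augmented = augment(images, training=True)
```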
Going one step further
You can not only encode general knowledge about the world or more specific information about your domain into a system, but also account for specific properties of your solution. Consider an image recognition solution: you might know that the lighting in production will be low and that the camera used will be of low quality. You might want to take this into account, for example by reducing the quality of the images in your training data as well.
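A hedged sketch of what that could look like follows; the function name and the specific degradations are my own assumptions, built from standard tf.image operations. The idea is to darken the clean training images, add sensor noise, and recompress them aggressively so that training conditions resemble production.

```python
import tensorflow as tf

def degrade_to_match_production(image):
    """Hypothetical preprocessing that mimics a dim scene captured by a
    low-quality camera. Expects a single image tensor in HWC layout."""
    image = tf.image.convert_image_dtype(image, tf.float32)
    # Simulate low lighting by darkening and reducing contrast.
    image = tf.image.adjust_brightness(image, delta=-0.3)
    image = tf.image.adjust_contrast(image, contrast_factor=0.7)
    # Simulate the sensor noise of a cheap camera.
    image = image + tf.random.normal(tf.shape(image), stddev=0.05)
    image = tf.clip_by_value(image, 0.0, 1.0)
    # Simulate heavy compression artifacts.
    image = tf.image.adjust_jpeg_quality(image, jpeg_quality=20)
    return image
```

Applied via something like `dataset.map(degrade_to_match_production)`, this shifts the training distribution toward what the model will actually see in production.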
Workshop at ODSC Europe 2022 in London
Our understanding of the world is interesting from the perspective of philosophy, but it is also a very practical topic. If you want to get really practical, you can join my hands-on workshop, “How to teach our world knowledge to a neural network?”, which will be held in person at ODSC Europe 2022 in London. The workshop is accompanied by well-prepared examples using TensorFlow, but the content transfers to neural networks in general.
About the Author/ODSC Europe 2022 Speaker
Oliver Zeigermann is a software developer and architect from Hamburg, Germany. He has been developing software with different approaches and programming languages for more than 3 decades. Lately, he has been focusing on Machine Learning and its interactions with humans.