Why Uncertainty in AI is Good for Business

We don’t like the word “uncertainty.” Business favors decisive action and leadership, so learning to embrace uncertainty as a good thing could be difficult. If you’re a business leader hoping to implement AI into your operations in the next few months or years, you’re going to have to make peace with that word. Here’s why uncertainty in AI is good for business.

[Related Article: Known Unknowns: Designing Uncertainty Into the AI-Powered System]

Like many words, "uncertainty" means something different in science than it does in popular usage. When we talk about learning models, we use the concept of uncertainty to help models learn more like humans do and make fewer mistakes as a result. If you aren't sure how uncertainty will fit into your AI initiative, here's what to keep in mind.

Uncertainty Helps Accuracy

When a doctor isn’t sure about the validity of a diagnosis, what does the doctor do? They pull in experts to help with evaluation and to run or rerun diagnostic tests. These actions help improve a doctor’s accuracy with the diagnosis because the doctor both learns from others for future reference and improves the current diagnosis.

Deep learning could function in the same way. In Dirk Elsinghorst's thought experiment, for example, a computer is trained to classify animals on a safari to help safari-goers stay safe. The model trains on the available data, sorting animals into a risky or a safe category, and seems to perform accurately: all kinds of snakes are classified as risky, while zebras are safe.

However, the model never encounters a tiger during training, and when it finally sees one, it classifies the animal as safe because the tiger's stripes resemble a zebra's. If the model were able to communicate its uncertainty, humans could intervene and correct the outcome.
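One simple way a model can "communicate uncertainty" is to flag predictions whose probability mass is spread across classes, so a human can review them before the system acts. This is a minimal sketch of that idea using predictive entropy; the function name, threshold value, and example numbers are illustrative assumptions, not part of Elsinghorst's experiment.

```python
import numpy as np

def flag_uncertain(probs, threshold=0.3):
    """Flag predictions whose entropy exceeds a threshold for human review.

    probs: array of shape (n_samples, n_classes) of softmax outputs.
    Returns a boolean mask where True means "route to a human".
    """
    # Predictive entropy is near zero for confident predictions and
    # grows as probability mass spreads across classes.
    entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
    return entropy > threshold

# A confident "snake = risky" call vs. a tiger the model has never seen:
preds = np.array([
    [0.98, 0.02],   # confident -> act on it
    [0.55, 0.45],   # uncertain -> ask a human
])
print(flag_uncertain(preds))
```

The threshold is a business decision: lower it and more cases go to humans; raise it and the model acts alone more often.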

When we blindly assume that these models are completely accurate, we miss a huge chunk of the picture. Uncertainty isn't a bad thing when it's communicated, and it gives your engineers the chance to improve the entire model.

Uncertainty Isn’t All the Same

Model uncertainty comes from a variety of sources. Two common ones, aleatoric and epistemic, give us clues for how to increase the accuracy of our models. Understanding both lets us build systems that account for variation in data, similar to the way humans learn.

Aleatoric uncertainty accounts for chance. Differences in the environment, the skill levels of the people capturing data, different equipment, random occurrences: all of these cause small (or sometimes significant) variations in data. Since nothing in your business happens in a vacuum, embracing this type of uncertainty can create a more accurate model than one built under perfect lab conditions.

Epistemic uncertainty comes from the model itself. Time and experience help modelers understand how modeling choices contribute to uncertainty: a model that is too simple for the data at hand can show high variance in its outcomes, and missing data can throw everything off. Embracing this kind of uncertainty gives you an edge in hiring modelers who have shown a knack for working with models in your field.
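A common way to surface epistemic uncertainty is to train several models on the same problem and measure how much they disagree: aleatoric noise stays even with more data, but ensemble disagreement shrinks as the models learn. This toy sketch assumes a five-model ensemble and made-up probabilities; it only illustrates the disagreement idea, not a production technique from the article.

```python
import numpy as np

# Pretend five models were trained on different resamples of the same
# data (a deep ensemble). Each returns P(risky) for one unfamiliar animal.
ensemble_preds = np.array([0.15, 0.80, 0.35, 0.65, 0.20])

# The mean is the ensemble's best guess; the spread between models is a
# rough measure of epistemic uncertainty -- disagreement the models could
# resolve with more data, unlike irreducible aleatoric noise.
mean_pred = ensemble_preds.mean()
epistemic_spread = ensemble_preds.std()

print(f"mean={mean_pred:.2f}, spread={epistemic_spread:.2f}")
```

A large spread relative to the mean is a signal to collect more data or revisit the model class rather than trust the average.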

Uncertainty Keeps You Humble (and Realistic)

Business is talking a lot right now about how AI is going to take jobs and make humans obsolete. With a good understanding of uncertainty, you know that this isn’t necessarily true. Humans are critical parts of the AI revolution, and uncertainty is a big part of that.

Researchers are building models that require less data and account for more uncertainty. Bayesian deep learning is gaining traction as the alternative to neural network black boxes. With powerful neural networks, the data drives the decisions, and large amounts of data disappear into the model. The model spits out conclusions, and we can’t ever really know why or how they reached these decisions.

Bayesian models rely on a scientific method style of data analysis. The hypothesis is updated based on data, and researchers continually refine and feed new information into the model to achieve higher accuracy, much like humans learn.
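The update-as-data-arrives loop described above can be shown with the simplest Bayesian model there is, a Beta-Binomial. This is a hypothetical toy, not the article's safari model: the counts and batch sizes are invented, and real Bayesian deep learning works over network weights rather than two counters.

```python
def update(alpha, beta, risky_seen, safe_seen):
    """One Bayesian update: fold new evidence into the prior counts."""
    return alpha + risky_seen, beta + safe_seen

# Flat Beta(1, 1) prior: the model starts with no opinion about how
# often a given kind of animal turns out to be dangerous.
alpha, beta = 1, 1

# Each batch of field observations refines the belief; the posterior
# from one batch becomes the prior for the next, much like a hypothesis
# revised as new data arrives.
for risky, safe in [(8, 2), (7, 3), (9, 1)]:
    alpha, beta = update(alpha, beta, risky, safe)
    print(f"estimated chance of danger: {alpha / (alpha + beta):.2f}")
```

Unlike a black-box prediction, the pair of counts tells you both the current belief and how much evidence backs it, which is exactly the kind of transparency the article credits Bayesian approaches with.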

This space leaves so much room for humans to work alongside AI by using it to fill in gaps in human labor – we aren’t good at processing extensive data by hand – and returning humans to what we do best, innovate. The uncertainty in deep learning isn’t a crutch; it’s a chance for humans to shine in those higher-order thought processes.

Building More Accurate Models

Your AI/ML engineer isn't going to eliminate uncertainty. A realistic view of what uncertainty means for your AI initiatives helps foster realistic goals. It also allows your engineer to build excellent models without the fear of failure that's so common when engineers move from research to the business world.

[Related Article: Leveraging AI For Product and Company Growth]

Embracing the uncertainty of AI is no different than the uncertainty that drove you to go into business. Just like you know that uncertainty in the market can pave the way for better products, that same uncertainty allows your data science team to build models with greater potential. Uncertainty isn’t something to fear in your models, and the sooner you understand its power, the better.

Elizabeth Wallace, ODSC

Elizabeth is a Nashville-based freelance writer with a soft spot for startups. She spent 13 years teaching language in higher ed and now helps startups and other organizations explain - clearly - what it is they do. Connect with her on LinkedIn here: https://www.linkedin.com/in/elizabethawallace/