Known Unknowns: Designing Uncertainty Into the AI-Powered System

Uncertainty may be a fearful state for many people, but for the data scientists and developers training the next wave of AI, uncertainty may be a good thing. Designing uncertainty directly into the system could help AI focus on what experts actually need, letting us leverage state-of-the-art AI to inform our world.

The Principles of Uncertainty

The entire human experience can be categorized into a series of principles. Known knowns are things you're aware that you know. Unknown unknowns are things you don't know that you don't know. For AI, known unknowns are things you know you can't predict.

Service providers don't offer much help with known knowns, because those are things you can simply Google. Unknown unknowns are similarly unhelpful, because your target audience wouldn't know to search for them, making a product built around them hard to deploy. The two categories in between, known unknowns and unknown knowns, are ripe for AI advancement.

Known unknowns can help your users make better decisions in uncertain conditions. Unknown knowns can surprise and delight your users with things you're still discovering. Sean Kruzel and his organization Astrocyte focus on known unknowns, building uncertainty into the framework to drive advancements and solve problems.

Uncertainty shows up in our tech world anyway. In Google autocomplete, guessing wrong about a user's politics while completing a query about Trump is a lot less risky than, say, a WebMD search result, where a wrong guess could be disastrous. Building uncertainty into the system deliberately, however, could be far more insightful than just absorbing these random bits of uncertainty.

We can train machines to handle this interplay of uncertainty and risk with a high level of success. One famous example of this principle is DeepMind's AlphaGo Zero, which learned purely by playing against itself and reached superhuman capability in an ancient and complicated game.

Explore Versus Utilize

In game theory, learning to play a game is all about prediction. In a simple game like tic-tac-toe, the system of probabilities is simple; in a game like Go, it's vastly more complex. Humans, and later their machine counterparts, become good at games by balancing the desire to explore, i.e., experiment with the board at the cost of immediate wins, against the desire to utilize a move they're sure wins, so sure of it that they miss other ways to play. It's precisely this type of balance that makes machine learning so suited to uncertainty.
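The explore-versus-utilize balance described above is classically illustrated with a multi-armed bandit, where an agent must split its pulls between arms it knows pay well and arms it hasn't tried enough. A minimal epsilon-greedy sketch (all names and payout values here are hypothetical, not from the article):

```python
import random

def epsilon_greedy(true_payouts, epsilon=0.1, rounds=10_000, seed=0):
    """Balance exploring unknown arms against utilizing the best-known one."""
    rng = random.Random(seed)
    n_arms = len(true_payouts)
    counts = [0] * n_arms          # pulls per arm
    estimates = [0.0] * n_arms     # running mean reward per arm

    for _ in range(rounds):
        if rng.random() < epsilon:
            arm = rng.randrange(n_arms)                    # explore: random arm
        else:
            arm = max(range(n_arms), key=lambda a: estimates[a])  # utilize
        reward = 1.0 if rng.random() < true_payouts[arm] else 0.0
        counts[arm] += 1
        estimates[arm] += (reward - estimates[arm]) / counts[arm]  # running mean

    return estimates, counts

estimates, counts = epsilon_greedy([0.2, 0.5, 0.8])
```

With enough rounds, the best arm dominates the pull counts, yet the 10% exploration budget keeps the agent from locking in early on a lucky but inferior arm.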

Designing Uncertainty

Focusing on actions and future states is more interesting than focusing on the current state. Balancing utilize mode with explore mode also allows AI to suggest best practices for whatever a user is searching for. Considering that users are notoriously bad at articulating what they really want, this could help companies build better products and services.

A two-system model could aid this balance. In the first system, the AI helps the user stay in the flow. A self-driving car, for example, knows when to brake while driving in the city, and most users won't want to override that safety function. This system handles the uncertainty and makes the decision without interference.

However, if that same car is driving someone through the countryside and encounters a plastic bag floating through the air, it can instruct the person to take the wheel until it has determined what the object is (plastic bag or person?). That concept allows humans to explore alongside the AI and provides feedback to the machine.
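The handoff in that two-system model can be sketched as a confidence threshold: act autonomously when the top prediction is confident (utilize mode), and defer to the human when the model's belief is split (explore mode). This is a minimal illustration with hypothetical labels and probabilities, assuming the model exposes a class-probability distribution; it is not the article's actual system:

```python
def decide(probabilities, confidence_threshold=0.9):
    """Given {label: probability}, return (action, who_handles_it).

    Utilize mode: act on a confident prediction without interference.
    Explore mode: defer to the human; a real system would also log the
    case as feedback for retraining.
    """
    label, p = max(probabilities.items(), key=lambda kv: kv[1])
    if p >= confidence_threshold:
        return label, "ai"                        # confident: act autonomously
    return "hand_control_to_human", "human"       # known unknown: ask for help

# City braking: the model is near-certain the obstacle is a pedestrian.
print(decide({"pedestrian": 0.97, "plastic_bag": 0.03}))
# Countryside: a floating object splits the model's belief, so it defers.
print(decide({"pedestrian": 0.55, "plastic_bag": 0.45}))
```

The threshold itself becomes a design knob: lowering it makes the system more autonomous, raising it hands more cases to the human and generates more feedback.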

Currently, utilize mode is far more common. Think back to Google autocomplete, or to pop-ups related to a user's current action. Explore mode, however, is gaining traction, with services like WebMD getting better at predicting what's wrong and improving further through feedback loops.

How Do We Apply These Principles?

Ultimately, incorporating these uncertainty principles builds empowerment downstream. Instead of throwing out models that compete with each other, designing uncertainty into the model allows companies to automate processes. It can provide quick insights and nudge people toward decisions that benefit them.

In explore mode, specific models can encourage people to take control at certain intervals, explaining what the current system is doing. If the models offered by the AI are competing, for example when discretionary investors need to make tough calls, people can be encouraged to take the wheel and decide based on all the information available.

AI can build robust information systems that nudge people, or allow them to explore information that isn't readily available until it's processed through machine knowledge. Improving those models only builds more robust AI solutions in a world already full of uncertainty. It's time we embrace the uncertainty principle, using it to further improve machines and empower users.

Elizabeth Wallace, ODSC

Elizabeth is a Nashville-based freelance writer with a soft spot for startups. She spent 13 years teaching language in higher ed and now helps startups and other organizations explain - clearly - what it is they do. Connect with her on LinkedIn here: https://www.linkedin.com/in/elizabethawallace/
