Decision Intelligence and Why Goldilocks Makes Bad Choices…
Spoiler: Goldilocks survives. Let's start by saying things could have turned out much worse. Really, much worse. Don't walk into a stranger's house, eat their food, and take a nap. This is just a no-brainer!
In the real world, when we make decisions there is a much more involved and complex inner process at work, regardless of how instantaneously the mind appears to decide how to do X, Y, or Z. In fact, humans make decisions non-stop from first breath to last, and the daily count only keeps growing. Some statistics from 2016 suggest that the average human makes around 35,000 decisions per day.
[Related Article: From Data to Process to Decision]
So given that we humans love automation, and can all agree that a computer is potentially capable of making much better decisions than we are when provided with the right information, depending more and more on computers was the only sane choice. Logically, a computer doesn't put weight on human emotion and is as honest as the collective corpus of historic data from which it can pull to make decisions under uncertainty.
The notion, however, of layering an unbiased human touch onto autonomous decision making is something truly interesting for future-forward thinkers!
Uncertainty and Bayesian Reasoning
Uncertainty is the lens of the Bayesian world. When considering how you make decisions, or how another individual would make a similar decision, you have to consider what weights and counter-weights are at play when optimizing for a particular outcome. Consider the morning commute to work: I live in the Bay Area, and we have possibly the worst traffic in America on any given day of the normal work week. How would you optimize your morning or evening commute? The answer, of course, lies within the realm of human-directed autonomous decision intelligence.
Decision Intelligence, Optimizations and Cost Functions
The morning commute is uncertain even when we are armed with historic and seasonal trend data, such as the spike in accidents that follows the first rain of the year (a count better modeled by a Poisson distribution). Given the additional complexities of the time series, there are further tactics and heuristics for optimizing how to get from point A to point B. If all you care about is the fastest path from A to B, you can use a graph-search algorithm like Dijkstra's Algorithm to run a shortest-path computation, weighting each edge with historic or seasonal observations such as the average time to cover the distance the edge encapsulates; this is what the app Waze does in real time for many Bay Area commuters. The resulting path gives your app an initial set of instructions for reaching your destination, with a time-based estimate of the trip from A to B (mine being the office) and an additional layer of real-time accident reports applied as extra weights on the cost of traveling the affected routes.
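To make the idea concrete, here is a minimal sketch of Dijkstra's Algorithm over a tiny, hypothetical commute graph. The road names and the minute weights are made up for illustration; in a real system like Waze, edge weights would come from historic travel-time observations and live traffic reports.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest-path search over a weighted road graph.

    graph: dict mapping node -> list of (neighbor, minutes) pairs,
    where minutes stands in for historic travel-time observations.
    Returns (total_minutes, path), or (inf, []) if the goal is unreachable.
    """
    # Priority queue of (cost-so-far, node, path-taken); heapq pops the
    # cheapest frontier entry first.
    frontier = [(0, start, [start])]
    visited = set()
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        if node in visited:
            continue
        visited.add(node)
        for neighbor, minutes in graph.get(node, []):
            if neighbor not in visited:
                heapq.heappush(frontier, (cost + minutes, neighbor, path + [neighbor]))
    return float("inf"), []

# Hypothetical commute graph: the highway edge carries a heavier weight
# (25 minutes vs. 12) to reflect an accident report on that route.
roads = {
    "home":     [("highway", 25), ("backroad", 12)],
    "highway":  [("office", 5)],
    "backroad": [("office", 14)],
}
print(dijkstra(roads, "home", "office"))  # (26, ['home', 'backroad', 'office'])
```

The accident penalty on the highway edge is exactly the "additional negative weight" idea from the paragraph above: the algorithm itself never changes, only the edge costs do.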
This is an obvious oversimplification, given that the path from point A to point B may be a very complex one spanning hundreds of miles and tens of thousands of choices. Running each single or continuous query (updating with changing traffic conditions) as an ad-hoc request would burn down the best servers on the market, or become computationally unrealistic due to operating costs, especially given that this problem is both temporal and geospatial in nature.
We Want to Feel in Control
As I subtly stated above, if all you care about is the fastest path, then we have a possible solution and a decision that can be made more or less autonomously. But in some cases, like mine, I would actually rather have a morning commute that optimizes for a care-free, more decision-free drive: one that may take me along back roads, away from the congestion of the packed roadways, with the fewest turns. This optimization is tougher to cost out, given that human emotion and frustration are opinionated and biased toward the individual, the time of day, and many other factors. When it comes to decision making, things aren't always cut and dried!
Decision making comes in all shapes and sizes, from simple rule-based decisions like "put on a jacket if it is cold outside" or "feed the pets to keep them alive," to more complicated optimization problems like when to service an expensive vehicle such as a commercial or military aircraft. In every scenario, a decision is made based on simple rules or on probabilities; the probabilities just tend to form more of a garden of forking paths, or a graph, and the rules can be generated automatically (as in regression), collected as domain knowledge, or both.
p(X | E) = probability of a decision or outcome X occurring given evidence of event E
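The conditional probability above can be updated with Bayes' rule as new evidence arrives. Here is a tiny sketch using made-up commute numbers (the priors and likelihood are illustrative assumptions, not real traffic statistics):

```python
def bayes_posterior(prior, likelihood, evidence_prob):
    """Bayes' rule: p(X | E) = p(E | X) * p(X) / p(E)."""
    return likelihood * prior / evidence_prob

# Hypothetical example: X = "arrive late", E = "rain this morning".
p_late = 0.2             # prior p(X): baseline chance of being late
p_rain_given_late = 0.6  # likelihood p(E | X): rain is common on late days
p_rain = 0.3             # marginal p(E): overall chance of morning rain

print(bayes_posterior(p_late, p_rain_given_late, p_rain))  # ~0.4
```

With these assumed numbers, observing rain roughly doubles the estimated chance of arriving late (from 0.2 to about 0.4), which is exactly the kind of evidence-driven weight adjustment the commute example relies on.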
Real-Time Decision Intelligence and Predictive Analytics Oh My!
The domain of Decision Intelligence has been around for a long while now, but it is trending again as companies turn to their data and integrated continuous-analytics systems to assist in making tough decisions, or to make controlled decisions that don't require a human in the middle.
If you are curious about how to build a real-time, predictive decision engine on top of Apache Spark, then come to the Open Data Science Conference in Boston in May 2020 and learn the process in a fun, hands-on workshop led by me (Scott Haines). You will learn the steps required to build semi-autonomous or fully automated decision intelligence into your 2020 road map, and how to build a simple system that leans on agriculture automation and IoT event streams to decide when to water, or skip watering, the lawn! This system can serve as a model for building fraud-detection systems or other gated, controlled rule engines.
Revisiting the Goldilocks story: if Goldilocks lived in our modern times (or lived at all), she would be more adept at making difficult decisions. The story would go more like this. Goldilocks came upon a house in the woods. She observed that the house didn't belong to her, and that while she liked to explore, was a bit famished, and could smell someone's porridge from outside, she would need to consider the probability of negative or positive outcomes from this honeypot. Given that Goldilocks had recently finished a class on probabilistic decision making, she quickly did some ad-hoc research into "walking into a stranger's house," and given p(things-going-okay | walking-into-strangers-house) = 0.3, she decided to order some take-out and watch a new movie on Netflix instead!