Data science as we think of it now is about a half-century old, but with the advent of AI, we’re reaching a turning point in how we model our AI initiatives. To move forward with AI that is not just effective but ethical, we need human-centered design principles.
[Related Article: AI Ethics: Avoiding Our Big Questions]
The Original Data Initiative
John Chambers, one of the fathers of data science as we know it, saw a move toward something known as “greater statistics”: the ability to truly learn from our data. John Tukey said, “An approximate answer to the right problem is worth a good deal more than an exact answer to an approximate problem.”
Guszcza predicts that we’re headed for “greater data science” in much the same way. AI is the implementation of greater data science, but the ways we implement AI could determine the future ethics of our products.
AI can quickly turn into artificial stupidity. If the user doesn’t understand the system, bad things can certainly happen. Self-driving cars are exciting, but if users don’t understand their capabilities, that quickly becomes dangerous. Likewise, facial recognition models that don’t recognize dark skin are similarly stupid (and dangerous). Also troubling is our propensity to retweet fake news more than real news, fooling algorithms into boosting that fake news.
Smart technologies can’t just stop at creation. They’re functionally dumb until they enable smart user adoption. Read that again. If your user can’t use AI for the highest good, your model is no good. Smart adoption leads to intelligent outcomes.
So what does that mean? Don Norman, author of The Design of Everyday Things, argues that the problem with most designs is that they’re too logical. We must account for the illogical behavior of humanity. Humans have a distinct psychology, and ignoring the ways we interact with the world renders even the most innovative and beautiful designs functionally useless.
Human-centered design begins with thinking slow. Human decision making involves a lot of internal storytelling rather than careful weighing of evidence. Ignoring the way we actually make decisions in favor of an idealized one creates products that just don’t reflect our reality. We need data to overcome our natural biases and our tendency to jump to conclusions, but we haven’t yet overcome the noise and difficulty involved in putting that data to use.
We can’t simply declare that equations are better than humans at decision making, however, because we run into the “artificial stupidity” problem above. Instead, equations combined with proper human intervention are among the best options we have for implementing truly smart, ethical AI.
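One common way to pair equations with human intervention is a confidence gate: the model acts on its own only when it is confident, and defers uncertain cases to a person. Here’s a minimal sketch of that pattern; the function name, labels, and threshold are hypothetical illustrations, not anything prescribed by the article.

```python
# Minimal sketch of "equations plus human intervention": act on high-confidence
# model predictions automatically, and escalate uncertain ones to a human.
# The names (triage, HUMAN_REVIEW) and the 0.85 threshold are illustrative.

CONFIDENCE_THRESHOLD = 0.85
HUMAN_REVIEW = "human_review"

def triage(prediction: str, confidence: float) -> str:
    """Accept the model's answer only when it is confident enough;
    otherwise defer the decision to a person."""
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction
    return HUMAN_REVIEW

# The model is trusted on clear cases and overruled on uncertain ones.
print(triage("approve_loan", 0.97))  # high confidence: automated
print(triage("deny_loan", 0.55))     # low confidence: escalated to a human
```

The design choice matters: a threshold set too high wastes human attention on easy cases, while one set too low lets the “artificial stupidity” failures above slip through unreviewed.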
AI Equals Augmented Intelligence
Considering the user in the design process creates usable AI that avoids some of the worst problems. Guszcza compares it to eyeglasses. When you have poor eyesight, glasses can suddenly make things very clear. The glasses are useless by themselves, and no one would say that glasses are better at seeing than human eyes. Together, however, they’re functionally better than incomplete eyesight.
AI is like eyeglasses for your brain. Instead of thinking of it as better than human intelligence, think of it as making decision making clearer. “Augmented intelligence” is a better framing for human-centered AI because it leads us to implement AI initiatives that genuinely help rather than hinder our efforts.
For example, after his defeat by Deep Blue, chess grandmaster Garry Kasparov went on to create a form of the game known as advanced chess, in which human grandmasters used supercomputers to push the boundaries of the competition. During one competition, an amateur chess team using ordinary laptops won the tournament in a total upset.
For Kasparov, the reason was simple: a superior process for combining human judgment with machine calculation gave ordinary players and modest computers an advantage over grandmasters and supercomputers operating with a weaker process. Creating that robust process requires not only knowledge of AI but also design thinking and psychology. Increasing human skill level in our AI initiatives will go a long way toward preventing the AI-gone-wrong horror stories the press loves to report.
[Related Article: Emphasizing Humanity in a World of AI]
Blending AI and Human Intelligence
Humans need equations to overcome biases, but algorithms alone can introduce bias of their own. What we need is AI designed for real human behavior, aiding smart adoption, and ensuring those algorithms are used to their best advantage. Pure machine learning approaches aren’t going to fulfill AI’s true potential, because data and human empathy need each other. Creating AI from the perspective of what best meets human and societal needs, instead of pushing what is technically possible, could help us finally see the benefits we’ve been promised.