

How an AI Winter Could Be Good for Data Science
Business + Management | Technology | Posted by Katherine Bailey, April 3, 2018

The term “Artificial Intelligence” has been making regular appearances in the headlines of the mainstream media lately, often with stories about robots being “better at reading than humans,” or researchers panicking after bots start communicating in their own language. But even more recently we’ve been seeing articles that question whether the hype around AI is warranted. Just last week The Financial Times published an opinion piece titled “Why we are in danger of overestimating AI” that points to examples of serious problems with current AI systems, such as how easily they can be fooled, or their lack of common sense knowledge.
An AI winter?
Gary Marcus of NYU recently wrote a critique of Deep Learning that has been covered not only in tech publications such as Wired and MIT Technology Review but also in the mainstream media — I heard a segment about it on NPR just the other day. In it Marcus says:
“One of the biggest risks in the current overhyping of AI is another AI winter, such as the one that devastated the field in the 1970’s, after the Lighthill report (Lighthill, 1973), suggested that AI was too brittle, too narrow and too superficial to be used in practice. Although there are vastly more practical applications of AI now than there were in the 1970s, hype is still a major concern.”
While the term “AI winter” usually describes any period of reduced funding in AI research, I want to be clear from the outset that I am in no way advocating for anyone’s research funding to be cut! But I do maintain that a lull in the hype surrounding AI might help to sharpen the focus of that research, moving it away from the less well-defined notion of “Artificial General Intelligence” (AGI), aka strong AI or human-level intelligence, and instead towards the notion of AI as a tool that humans use to solve problems.
As with any tool, an AI system is valued according to its usefulness. Accuracy is certainly important, but even a badly overfitting Machine Learning (ML) model can be said to be “accurate” in some sense. So can one that has been trained on biased data. There are vast improvements that need to be made in the usefulness of the ML systems that we have today, and there’s an enormous amount of work to be done by data scientists in collaboration with software engineers and UX designers to produce more usable applications. Regardless of whether there’s a lull in the conversation around AGI, this work will need to be done and data scientists will continue to be in very high demand.
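To make that concrete, here is a minimal sketch (scikit-learn on synthetic data, purely illustrative) of how an unconstrained model can look perfectly "accurate" on the data it memorized while being far less useful on data it has never seen:

```python
# Minimal sketch: an overfit model looks "accurate" on its own training
# data but is much less useful on unseen data. Synthetic data and the
# choice of model are assumptions made purely for illustration.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=200, n_features=20, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An unconstrained decision tree will happily memorize the training set.
model = DecisionTreeClassifier(random_state=0)  # no depth limit
model.fit(X_train, y_train)

print("Training accuracy:", model.score(X_train, y_train))  # close to 1.0
print("Test accuracy:    ", model.score(X_test, y_test))    # noticeably lower
```

The first number says nothing about usefulness; only the second, and ultimately the model's behavior in the hands of its users, does.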
Some of the biggest challenges in Machine Learning today can be tackled in the absence of any progress towards human-level artificial intelligence. Examples of these challenges include the explainability problem (as described in this excellent piece by Will Knight), the problems surrounding biased data (as brilliantly elucidated by Kate Crawford in her NIPS 2017 keynote), and the lack of labeled training data available for many tasks (as any seasoned data scientist will know). Let’s look at these in turn.
Explainability
Explainable or interpretable AI is a burgeoning field in its own right. Quite a lot of time was given to it at the most recent NIPS conference, which included a debate where one side, comprising Yann LeCun and Kilian Weinberger, argued that interpretability was not important. They pointed, as an example, to AlphaZero’s inhuman level of skill at chess and Go as showing that human-interpretable representations aren’t necessary for a successful algorithm. Others argue against tackling this issue for different reasons: because it’s too hard, or even impossible. Carlos Perez calls it an unsolvable problem and says that while fake explanations might be possible in some cases, “a complete explanation will, in a majority of cases, be completely inaccessible to humans.” There’s also the view that the irrationality of human decision-making means we should let machines off the hook in this regard. Everyone agrees that explainability is hard. The view that it is unnecessary seems only to be held by certain AI researchers, not by the users of ML systems. But as long as we’re focused on ML systems as tools — tools that have users — we need to take the needs of those users into account irrespective of what the explainability skeptics have to say on the matter.
There are many interesting approaches being developed, but some of them raise the question of whether the purpose of an explanation is just to reassure, rather than actually provide an accurate insight into why a system responded as it did. There is other important work going on that studies what makes an explanation interpretable or utilizable by humans at all. This work on explainability is only going to increase in importance.
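As one illustration of the kind of approach being explored (not one of the specific methods discussed at NIPS), here is a hedged sketch of a global surrogate: a small, human-readable model trained to mimic a black box's predictions. The dataset and model choices are assumptions made for illustration.

```python
# Sketch of a "global surrogate" explanation: fit a shallow decision tree
# to the *predictions* of a black-box model and read off its rules.
# Dataset and model choices are assumptions made for illustration.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_breast_cancer()
X, y = data.data, data.target

black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# The surrogate is trained on the black box's outputs, not the true labels.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))

# "Fidelity": how often the surrogate agrees with the black box.
print("Fidelity:", surrogate.score(X, black_box.predict(X)))
print(export_text(surrogate, feature_names=list(data.feature_names)))
```

Whether a readout like this genuinely explains the black box, or merely reassures us, is exactly the question raised above.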
Biased data
Explainability is partly about trust — trust that the system is producing correct and unbiased answers. When all that matters is winning a game of Go, trust doesn’t seem relevant. But a game, a “voluntary attempt to overcome unnecessary obstacles,” to quote philosopher Bernard Suits in his book The Grasshopper, is rarely a good analogy for real life situations where AI might be employed. In the areas of healthcare and criminal justice, for example, gamification might make for more tractable AI problems, yet negatively impact people’s health or freedom. These are two areas where the use of biased data for training ML models can have extremely serious consequences. Sometimes the bias is introduced by the method in which the data is collected, while at other times ML models reflect problematic biases in the world, which they then perpetuate or even amplify. This is another hard problem, and one that we will need to work on for years to come. As Kate Crawford put it in her NIPS talk, “We can’t simply boost a signal or tweak a convolutional neural network to resolve this issue, we need to have a deeper sense of what is the history of structural inequity and bias in these systems.”
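None of this can be fixed with a few lines of code, but even a basic audit of model behavior across groups is a concrete place for data scientists to start. Here is a deliberately tiny sketch with random data and a hypothetical group attribute; a real audit requires domain knowledge and far more care:

```python
# A deliberately tiny sketch of one piece of a bias audit: comparing
# false positive rates across groups. All data here is random and the
# group attribute is hypothetical -- for illustration only.
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=1000)   # hypothetical demographic attribute
y_true = rng.integers(0, 2, size=1000)      # stand-in ground-truth labels
y_pred = rng.integers(0, 2, size=1000)      # stand-in model predictions

for g in ("A", "B"):
    negatives = (y_true == 0) & (group == g)
    fpr = ((y_pred == 1) & negatives).sum() / negatives.sum()
    print(f"Group {g}: false positive rate = {fpr:.2f}")
```

A disparity in numbers like these is a prompt for investigation, not a verdict; as Crawford argues, the harder work is understanding where the disparity comes from.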
Lack of data
Along with problems inherent in ML systems themselves, there’s the broader problem of creating such systems in the first place, systems that are hungry for training data. In most cases, the ML model needs data that has been labeled with “ground truth” information. The model learns from these labeled examples, such that it can then predict the labels for future data. Companies like Facebook, Amazon, and Google have vast quantities of data available to work with, much of it helpfully labeled by their users. But most companies looking to employ ML techniques to solve problems have nothing like this kind of data available to them. How can they compensate for their lack of labeled data?
This is where creative use can be made of techniques like transfer learning, where learning from large data sets is applied to problems with smaller data sets; few-shot learning, where a system learns to classify new inputs from just a handful of labeled examples; and human-in-the-loop systems, where humans provide the labels. Transfer learning is about learning rich representations of data. It often goes hand in hand with few-shot learning because it’s the very richness of those representations that makes it possible to learn from just a few examples. Now bring in a human to supply those “few shots” and it becomes possible to go from having no labeled data at all to having a powerful classifier. There really is a lot to explore here, including how best to present the human with the most effective examples to label, as well as honing the UX to get the most out of the human in the loop.
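Here is a hedged sketch of how these pieces can fit together, using a pretrained sentence encoder (here the sentence-transformers library, one of many possible choices) as the transfer-learning component and a handful of human-provided labels as the “few shots.” The model name, example texts, and labels are all assumptions made for illustration:

```python
# Sketch: transfer learning plus a human in the loop. A pretrained encoder
# supplies rich representations; a human supplies a handful of labels;
# a simple classifier does the rest. Model name and data are assumptions.
from sentence_transformers import SentenceTransformer
from sklearn.linear_model import LogisticRegression

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # pretrained on large corpora

# A few examples labeled by a human in the loop.
texts = [
    "The delivery was late and the box was damaged",
    "Great service, arrived a day early",
    "I want a refund, this is unacceptable",
    "Really happy with my purchase",
]
labels = ["complaint", "praise", "complaint", "praise"]

clf = LogisticRegression().fit(encoder.encode(texts), labels)

# The pretrained representations do most of the work, so even a handful
# of labels can yield a usable classifier on new text.
print(clf.predict(encoder.encode(["This arrived broken, very disappointed"])))
```

The interesting design questions are exactly the ones noted above: which examples to put in front of the human, and how to make labeling them as painless as possible.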
Why an AI winter might be good
The main point I’ve been making so far is that waning interest in AGI won’t be a bad thing for data scientists. Why? Because the important and useful work that actually needs to be done just doesn’t depend on making progress towards AGI. But might waning interest in AGI actually be a good thing? Thinking back to the dichotomy between AI as human-level intelligence and AI as human tool, it’s the latter conception of AI that represents the real and most valuable opportunities for data scientists. Waning interest in AGI certainly won’t mean any loss of interest in AI-as-tool, and this shift in focus will in fact mean more progress on usability. So an “AI winter” might be a good thing, at least if the only thing being put on ice is research on AGI. In fact, that kind of winter might even result in an AI summer for work on AI-as-tool, since the problems we actually need to work on would finally have everyone’s attention.
Many prominent researchers agree that to some extent the current hype around AI is a distraction from serious work in ML. If an AI winter comes, the only thing we’ll lose is this distraction.
Katherine Bailey leads a team of engineers and data scientists working on machine learning at Acquia. She blogs at http://katbailey.github.io. Her background is in Software Engineering, but she moved into Data Science and Machine Learning in 2015. Katherine started Acquia’s Machine Learning initiative in 2016 to incorporate ML techniques into its products, and hired a team of engineers and data scientists to turn ML ideas into applications.