Artificial Intelligence and the Future of Work
Technology makes some types of jobs obsolete and creates other types of jobs — that’s been true since the Stone Age. While in the past, machines have replaced people in jobs that require physical labor, we’re increasingly seeing traditionally white-collar jobs augmented by machines: financial analysts, online marketers, and financial reporters, just to name a few. Of course, these advances also create new jobs. The electronic computers that we know today, for example, replaced human beings performing the actual calculations, but in the process created all kinds of new types of work.
Artificial intelligence seems like it might work the same way, creating jobs for artificial intelligence researchers and slowly displacing all other kinds of knowledge work. And while this might be where we end up a century from now, the path to get there won’t quite look the way people think. We can see where we’re going from AI design patterns used at Google, Facebook and other companies investing heavily in artificial intelligence. In the most common design patterns, AI can actually increase demand for exactly the kind of work that it is automating.
Design Pattern 1: Training Data
By far the most common kind of artificial intelligence used in the business world is called supervised machine learning. The “supervised” part is important: it means that an algorithm is learning from training data. Algorithms still don’t learn anywhere near as efficiently as humans, but they can make up for it by processing far, far more data.
The quantity and quality of training data are the most important factors in making a machine learning algorithm work well, and the best companies take the data collection process very, very seriously. Many people don’t realize that Google pays for tens of millions of man-hours of collecting and labeling data to feed into its machine learning algorithms.
Collecting training data is a never-ending process. Every time Twitter coins a new word or emoji, machine learning algorithms have no way of understanding it until they see many examples of its usage. Every time a company wants to expand into a new language, or even a new market with slightly different patterns, it needs to collect a new set of training data, or its machine learning algorithms will be making unreliable predictions.
As machine learning becomes better understood and high-quality algorithms become something you can buy off the shelf, training data collection has become the most labor-intensive part of launching a new machine learning system.
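To make the “learning from training data” idea concrete, here is a minimal, purely illustrative sketch: a toy text classifier trained on a handful of hand-labeled examples. The data, labels, and word-counting approach are all assumptions for illustration, not any company’s actual system — but the shape is the same: the algorithm is only as good as the labeled examples it sees.

```python
from collections import Counter, defaultdict

# Hypothetical labeled training data: (text, label) pairs.
# In practice, producing these labels is the expensive, human part.
TRAINING_DATA = [
    ("great product love it", "positive"),
    ("terrible waste of money", "negative"),
    ("love the quality", "positive"),
    ("broke after one day terrible", "negative"),
]

def train(examples):
    """Count word frequencies per label (a toy flavor of supervised learning)."""
    counts = defaultdict(Counter)
    for text, label in examples:
        counts[label].update(text.split())
    return counts

def predict(model, text):
    """Pick the label whose training-data words overlap the text the most."""
    words = text.split()
    scores = {label: sum(c[w] for w in words) for label, c in model.items()}
    return max(scores, key=scores.get)

model = train(TRAINING_DATA)
print(predict(model, "love this great quality"))  # prints "positive"
```

Notice that a word the model has never seen in training contributes nothing to the score — which is exactly why every new slang term, emoji, or market requires collecting fresh labeled examples.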
Design Pattern 2: Human-in-the-loop
Of course, some problems (like spreadsheet math) are incredibly easy for computers and some problems (like walking on two feet) are incredibly hard. It’s the same with machine learning. In every domain where machine learning works, there are situations the algorithms figure out right away and situations that are maddeningly difficult to get them to perform well on. This is why machine learning algorithms are famously easy to get to 80% accuracy and really, really tough to get to 99% accuracy.
Luckily, good machine learning algorithms can estimate which cases they are likely to handle well and which they are likely to struggle with. Machine learning models have no ego, so they’re happy to tell you when their confidence is low. This is why the “human-in-the-loop” design pattern has become so widespread: humans get passed the processes and decisions that a machine can’t confidently make.
For years people have dreamed of a robot personal assistant, and products like Facebook M and Clara Labs are making this a reality. But they don’t automate everything. Instead, they have algorithms handle emails and scheduling issues where the intent is clear to them and hand more complicated messages and requests to human beings.
This design pattern has taken off far faster than anyone expected. Self-driving cars don’t immediately replace human drivers; they take over in certain situations (like parallel parking) and hand back control to the human driver when things get complicated (such as on a busy street with construction). ATMs don’t automatically read every check you deposit, only the ones where the handwriting is clear. In both instances, machines handle a sizeable percentage of the work, but when they’re unsure whether they can perform well, human input is needed.
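The pattern described above can be sketched in a few lines. The router, threshold, and toy model here are illustrative assumptions, not any product’s real API — the point is just the routing rule: confident predictions go through automatically, and everything else lands in a queue for a person.

```python
# Confidence threshold is an illustrative choice; real systems tune it
# against the cost of machine mistakes vs. human time.
CONFIDENCE_THRESHOLD = 0.9

def route(item, model, human_queue):
    """Human-in-the-loop routing: machine answers only when confident."""
    prediction, confidence = model(item)
    if confidence >= CONFIDENCE_THRESHOLD:
        return prediction          # the machine handles the easy case
    human_queue.append(item)       # low confidence: escalate to a person
    return None

# Toy stand-in model: confident on short requests, unsure about long ones.
def toy_model(item):
    return ("schedule_meeting", 0.95 if len(item) < 30 else 0.4)

queue = []
route("lunch tuesday?", toy_model, queue)                                  # auto-handled
route("need to reshuffle everything around the offsite", toy_model, queue) # escalated
```

The division of labor falls out of the threshold: raise it and humans see more work but the machine makes fewer mistakes; lower it and the machine takes on more, errors included.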
Instead of machine learning replacing one job function at a time, machine learning actually replaces pieces of every job function. This makes the person doing the job increasingly efficient. In some cases, this can lead to fewer jobs, but in others, it can open new markets and create more jobs for the same type of work. If one personal assistant can now handle twenty customers at once, personal assistants become much less expensive, and maybe one hundred times as many people will work with one.
Design Pattern 3: Active Learning
Active learning is a design pattern that combines the first two patterns. The labels produced by the human in the loop can be fed back into the algorithm to make it better. Algorithms, like people, learn much faster from novel, complicated situations. So the examples the algorithm can’t handle, which get labeled by a human, are exactly the examples that help the algorithm improve the most.
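Putting the two earlier patterns together, one round of active learning might look like the sketch below. The model class and function names are hypothetical, invented for illustration: uncertain items go to a human, and the human’s labels become new training data.

```python
class KeywordModel:
    """Toy model: fully confident only on items it has seen labeled before."""
    def __init__(self):
        self.labels = {}                   # item -> label learned so far

    def predict(self, item):
        if item in self.labels:
            return self.labels[item], 1.0  # seen in training: confident
        return "unknown", 0.0              # never seen: zero confidence

    def retrain(self, labeled_pairs):
        self.labels.update(dict(labeled_pairs))

def active_learning_round(model, unlabeled, human_label, threshold=0.8):
    """Auto-handle confident items; have a human label the rest, then retrain."""
    newly_labeled = []
    for item in unlabeled:
        _, confidence = model.predict(item)
        if confidence < threshold:
            # The examples the model can't handle are the most informative.
            newly_labeled.append((item, human_label(item)))
    model.retrain(newly_labeled)
    return len(newly_labeled)

model = KeywordModel()
human = {"lol": "slang", "brb": "slang", "hello": "greeting"}.get
active_learning_round(model, ["lol", "brb", "hello"], human)
# On the next round, the model handles these same items without human help.
```

Each round, the human workload shrinks on the cases the model has mastered, while every hour of human labeling makes the model a little more capable.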
In the future, as we do our jobs, we may be simultaneously teaching the same system that is slowly replacing us. On the other hand, we could see it as getting more and more leverage out of our work. It’s really a matter of your point of view.
It’s coming sooner and faster than you think
Most knowledge work has been spared from the effects of artificial intelligence because the upfront cost of building a machine learning algorithm has historically been so high. Unlike ordinary software, every machine learning model has to be custom-built for each individual application. So the only business applications that machine learning automated were massively profitable or cost-saving undertakings, like predicting energy usage or targeting ads.
But all that is changing. Two trends have been rapidly bringing the cost of machine learning down. First, computing power is getting cheaper, as it always does. Second, machine learning algorithms are becoming productized. In 2015 alone, Alibaba, Microsoft, Amazon and IBM all launched general-purpose cloud machine learning platforms. Companies no longer need Google-like R&D budgets to use machine learning internally.
What this means is that many smaller scale business functions are about to feel the effects of machine learning. When it costs a million dollars to build an algorithm, only the largest companies apply machine learning to classifying their support tickets, organizing their sales database, or handling collections. But when it costs twenty dollars a month, everyone will do it. And with all of the machine learning platforms launched in the last year that moment might have just happened.
Originally posted at lukasbiewald.com/