According to Michael I. Jordan, a professor at the University of California, Berkeley, the history of AI, like the history of almost any kind of engineering, is really the history of trying to build things. In his talk, “Towards a Blend of Machine Learning and Microeconomics,” Jordan surveys AI history and the future of machine learning, and drives home the point that ML is not just an individual algorithm or tool: each algorithm is part of a larger system of intelligence.
The History of Machine Learning
Jordan begins his presentation by laying out a roadmap of how the data science industry reached its current form. Though he says many people may disagree with his timeline, it makes sense when you break it down as he does. Here’s Jordan’s timeline:
1990-2000: Data science began with fraud detection and supply chain management. The work had to start simple, but it served purposes no one had tried before, and it laid the foundation for a multi-billion-dollar industry.
2000-2010: This is when the field moved to the “human” side. Scientists and analysts began to realize that the data they were computing over wasn’t just numbers, or measurements of inanimate objects; it was data about human beings. That shift produced the recommendation systems and social media platforms we see in our everyday lives today. Building a simple recommendation system isn’t hard, but building one that could handle millions of users and millions of pieces of content was a massive development for the industry. Even so, Jordan says, we still don’t fully know how to bring humans into this digital integration.
2010-now: From those human-side developments, the field dove deeper into difficult areas such as pattern recognition, neural networks, and deep learning. People are excited about this era because there is finally enough data and computing power to build machines that mimic humans. Though we’re still far from a true humanoid, that possibility is what excites us. We can mimic humans, but we don’t yet know how to recreate them.
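To make the “simple recommendation system” idea from the timeline concrete, here is a minimal, hypothetical sketch (not from Jordan’s talk): it recommends items by counting co-occurrences with the items liked by users who overlap with the target user. Real production systems differ enormously in scale and method; this only illustrates why the basic idea is easy while scaling it is hard.

```python
from collections import defaultdict

def recommend(user_items, target_user, top_n=3):
    """Score unseen items by how often they co-occur with the
    target user's items across other users' histories."""
    target = user_items[target_user]
    scores = defaultdict(int)
    for user, items in user_items.items():
        if user == target_user:
            continue
        overlap = len(target & items)  # shared taste with this user
        if overlap:
            for item in items - target:  # only items the target hasn't seen
                scores[item] += overlap
    # Highest score first; break ties alphabetically for determinism
    ranked = sorted(scores.items(), key=lambda kv: (-kv[1], kv[0]))
    return [item for item, _ in ranked][:top_n]

# Toy viewing histories (illustrative data only)
history = {
    "alice": {"a", "b", "c"},
    "bob":   {"b", "c", "d"},
    "carol": {"a", "d", "e"},
}
print(recommend(history, "alice"))  # → ['d', 'e']
```

This runs in time proportional to the number of users times items per user, which is exactly why the naive version breaks down at millions of users and requires the engineering Jordan describes.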
We’re now, however, moving into a new kind of system: a system of markets. Rather than thinking about one computer being smart enough to replace a human at one specific task (winning at chess, driving a single car, or playing Go), we have to consider the entire network of AI systems. Take Uber’s network, for example, which is built on data about a system of supply, demand, and movement. It is a full mapping system with a huge 10-million-plus user base; it connects riders with drivers and optimizes routes, which saves time and can decrease congestion. As we can see, the network itself has to be intelligent, even more so than any single computer doing the job its programmer designed it for.
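The rider-driver connection in a network like this can be sketched, in a deliberately simplified and hypothetical form, as an assignment problem: each rider is paired with a nearby available driver. Uber’s actual dispatch system is far more sophisticated; this greedy nearest-driver toy only illustrates that the intelligence lives in the matching across the network, not in any one vehicle.

```python
def match_riders(riders, drivers):
    """Greedy matching: each rider, in turn, takes the nearest
    still-unmatched driver. Positions are (x, y) coordinates."""
    available = dict(drivers)  # driver name -> position
    matches = {}
    for rider, (rx, ry) in riders.items():
        if not available:
            break  # more riders than drivers
        nearest = min(
            available,
            key=lambda d: (available[d][0] - rx) ** 2
                        + (available[d][1] - ry) ** 2,
        )
        matches[rider] = nearest
        del available[nearest]  # a driver serves one rider at a time
    return matches

riders = {"r1": (0, 0), "r2": (5, 5)}
drivers = {"d1": (1, 1), "d2": (6, 5), "d3": (9, 9)}
print(match_riders(riders, drivers))  # → {'r1': 'd1', 'r2': 'd2'}
```

Even this toy shows the market flavor Jordan points to: a good outcome emerges from pairing many participants, and greedy local choices are only an approximation of what a globally optimized network achieves.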
The future of machine learning isn’t about downloading the right system; it’s about the relationships between producers, consumers, and the community that uses the product and the system. Markets and networks have worked for centuries, and though they’re constantly tweaked and adjusted, no single person designed them. Their capabilities, therefore, are not limited by any single person.
So the next time you’re pitching your next big project or starting to build an algorithm, remember the history of AI and the future we’ve discussed. If you stay focused on individual outcomes rather than on collective goals achieved through a massive network everyone can use, you’ll quickly be left behind. Think in terms of networks, and your impact will be greater.
Watch Jordan’s full talk here.