

At NVIDIA, Deep Learning Gets Deeper
Deep Learning | posted by Spencer Norris, ODSC | December 3, 2018

Alison Lowndes and her team at NVIDIA are finding exciting new ways to handle the demands deep learning places on machines — and even more exciting ways to use the new technology.
At ODSC London 2018, Alison Lowndes of NVIDIA gave a talk on how advances in graphics processing have helped accelerate artificial intelligence research. She said these developments have provided a platform for the most sophisticated AI approaches available, such as deep learning.
[Related Article: Visualizing Your Convolutional Neural Network Predictions With Saliency Maps]
Lowndes opened with examples of multi-agent reinforcement learning algorithms outperforming humans on complex tasks. In particular, she pointed to video games such as Dota 2, a popular online multiplayer game in which players battle each other for control of the map using avatars.
This is a very complex task to model. Lowndes explained that to seed their models, more researchers are turning to an offshoot of reinforcement learning called imitation learning, also commonly referred to as one-shot reinforcement learning.
Imitation learning feeds a network records of whatever task it is trying to model; in this case, recordings of Dota 2 matches. It’s similar to how children learn by imitation. “You can teach a child anything, and we basically do that by showing them. If you think of the average primary school classroom, we are just showing; it’s a rote method,” Lowndes said.
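For readers who want a concrete picture of what “showing” a network looks like, here is a minimal behavioral-cloning sketch in PyTorch. It is not OpenAI’s or NVIDIA’s Dota 2 pipeline; the tensors standing in for game states and player actions are made-up placeholders, and the tiny policy network is purely illustrative.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical recorded demonstrations: observations of the game state
# paired with the action a human player took in that state.
observations = torch.randn(10_000, 128)        # placeholder game-state features
actions = torch.randint(0, 10, (10_000,))      # placeholder discrete action labels
demos = DataLoader(TensorDataset(observations, actions), batch_size=256, shuffle=True)

# A small policy network that maps a game state to scores over possible actions.
policy = nn.Sequential(
    nn.Linear(128, 256), nn.ReLU(),
    nn.Linear(256, 10),
)
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Behavioral cloning: treat imitation as supervised learning on the
# demonstrator's actions, i.e. "show" the network what a human would do.
for epoch in range(5):
    for obs, act in demos:
        optimizer.zero_grad()
        loss = loss_fn(policy(obs), act)
        loss.backward()
        optimizer.step()
```

In a real imitation-learning setup the demonstrations would come from recorded matches, and the cloned policy would typically serve only as the seed for further reinforcement learning.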
This task requires massive computational power, which NVIDIA’s hardware happens to excel at. In 2006, NVIDIA introduced the Compute Unified Device Architecture, better known as CUDA. This programming model allows data scientists to repurpose graphics chipsets for hardcore mathematical operations, which makes them applicable to the range of statistics-heavy problems in machine learning.
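As a rough illustration of what CUDA makes possible, the snippet below runs a large matrix multiplication on the GPU when one is available; the same pattern underlies the linear-algebra workloads at the heart of deep learning. PyTorch is used here only because it hides the CUDA kernel launches behind a familiar API.

```python
import torch

# Run a large matrix multiplication on the GPU if CUDA is available,
# falling back to the CPU otherwise.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)

c = a @ b  # dispatched to CUDA kernels when device == "cuda"
print(c.device, c.shape)
```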
These advances blew open the gate for neural networks as a viable approach to AI. Once the genie was out of the bottle, more and more researchers began developing and sharing code bases for network configurations that they could redeploy on a wide variety of tasks.
Out of these advancements came a renaissance for reinforcement learning. All of a sudden, researchers had the raw power and code necessary to efficiently train models with thousands of parameters that could look back in time over thousands of steps.
These models were largely based on the traditional reinforcement learning approach, which was inspired by Markov decision processes: observe a state, take an action, transition to a new state, and reward the model according to how ‘good’ its decision was.
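A toy tabular Q-learning loop makes that cycle concrete. The five-state chain, its reward of 1 for reaching the final state, and the hyperparameters below are all invented for illustration; only the state, action, reward, and update structure mirrors the Markov decision process described above.

```python
import numpy as np

# A toy Markov decision process: 5 states in a row, 2 actions (left/right),
# and a reward of 1 for reaching the last state.
n_states, n_actions = 5, 2
rng = np.random.default_rng(0)
Q = np.zeros((n_states, n_actions))
alpha, gamma, epsilon = 0.1, 0.95, 0.1  # learning rate, discount, exploration

def step(state, action):
    """Move right on action 1, left on action 0; reward 1 at the last state."""
    next_state = min(state + 1, n_states - 1) if action == 1 else max(state - 1, 0)
    reward = 1.0 if next_state == n_states - 1 else 0.0
    return next_state, reward

for episode in range(500):
    state = 0
    for _ in range(20):
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        action = rng.integers(n_actions) if rng.random() < epsilon else int(Q[state].argmax())
        next_state, reward = step(state, action)
        # Q-learning update: reward the model according to how good its decision was.
        Q[state, action] += alpha * (reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state
```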
The setup for the Markov decision process is sound. But in 2016, Google DeepMind found a way of hacking it to create AlphaGo. Instead of relying only on records of human-played games to build out the model, AlphaGo was trained by competing against itself: thousands of agents played against each other simultaneously, starting with random moves and advancing to complex decision-making based on how well they performed in each game.
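The sketch below captures only the self-play idea, not AlphaGo itself (which combines deep networks with Monte Carlo tree search): a single softmax policy for rock-paper-scissors starts from near-random play, competes against a frozen copy of itself, and is nudged toward the moves that won.

```python
import numpy as np

# Self-play on rock-paper-scissors: one policy (a probability vector over the
# three moves) plays against a frozen snapshot of itself and is reinforced
# toward winning moves.
rng = np.random.default_rng(0)
PAYOFF = np.array([[0, -1, 1],   # rock vs rock/paper/scissors
                   [1, 0, -1],   # paper
                   [-1, 1, 0]])  # scissors

logits = np.zeros(3)  # current policy parameters (uniform random play at the start)
lr = 0.05

def softmax(x):
    z = np.exp(x - x.max())
    return z / z.sum()

for iteration in range(200):
    opponent = softmax(logits).copy()  # frozen copy of the current policy
    for _ in range(100):               # a batch of self-play games
        probs = softmax(logits)
        my_move = rng.choice(3, p=probs)
        opp_move = rng.choice(3, p=opponent)
        reward = PAYOFF[my_move, opp_move]
        # REINFORCE-style update: increase the probability of moves that won.
        grad = -probs
        grad[my_move] += 1.0
        logits += lr * reward * grad

print("learned move probabilities:", softmax(logits).round(3))
```

Even in this toy setting, the opponent improves as the policy does, which is the property that let AlphaGo generate ever-harder training games for itself.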
And it worked: That year, AlphaGo bested Lee Sedol, the 18-time world Go champion. In the process, researchers at Google had demonstrated that, under the right circumstances, machines can master far more difficult tasks by learning from themselves instead of from humans.
This approach is now being adapted to a wide range of other tasks, among the most famous of which is teaching a virtual agent how to walk. The video that emerged looks ridiculous, but the polish is beside the point: the agent taught itself how to walk.
According to Lowndes, these new approaches to deep learning are being applied to autonomous vehicles at Waymo, streaming video analysis, radiology, particle physics at CERN, oil exploration, video games, and other fields.
To accommodate the computational power these training processes demand, NVIDIA is reengineering a wide range of its hardware. In 2018, the company released its Xavier chipset, deployed across a 1,000-node farm in which each node is capable of a petaflop of processing; a petaflop is 10¹⁵ floating-point operations per second. This came alongside the release of the DGX-2 server, a unified interface to 16 GPUs capable of a combined two petaflops.
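As a quick sanity check on those figures (taking the talk’s numbers at face value rather than as spec-sheet values), the arithmetic works out to 125 teraflops per GPU inside a DGX-2:

```python
# Back-of-the-envelope check on the throughput figures quoted in the talk.
PETAFLOP = 10**15                # floating-point operations per second

dgx2_total_flops = 2 * PETAFLOP  # quoted DGX-2 throughput
gpus_per_dgx2 = 16
print(dgx2_total_flops / gpus_per_dgx2 / 1e12, "teraflops per GPU")  # 125.0
```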
[Related Article: Deep Learning Research in 2019: Part 2]
Lowndes concluded by pointing audience members to the NVIDIA Deep Learning Institute, the company’s training resource for getting developers up to speed on its AI ecosystem. She also directed them to the NVIDIA Inception Program, a startup incubator designed to help new companies develop their marketing plans and benefit from discounts on NVIDIA hardware.
It’s an exciting time to be a technologist. With the tremendous leaps in deep learning and the technologies being developed by teams like NVIDIA, it’s easy to see why.
Check out ODSC’s YouTube channel for Lowndes’ full talk.