Learning Deep Learning Series Part 1: Videos
Last week Open Data Science published an article about how I plan to teach myself deep learning using only free resources. After my first week, I’m here to report on my progress and share my take on the resources I’ve used so far. This piece is specifically about video learning content. I purposely decided to start with video because I wanted to ease myself into this incredibly complex subject the same way a swimmer eases into a chilly body of water.
Since this is the first leg of my journey, I sought out YouTube videos and channels offering 101-level content focused on the basic theories, concepts, and real-world uses of deep learning. I deliberately avoided coding tutorials and math-heavy content.
Even though I’m a capable data scientist, I was filled with equal parts excitement and anxiety when I took my first steps on this journey. I had tried learning deep learning before at a weekend workshop but was left almost as confounded as when I first started learning Python. Nevertheless, I wasn’t going to let the deep learning train leave me behind.
In the course of this article, I’ll review a select number of video materials and chronicle my progress at understanding specific concepts and theories of deep learning.
Deep Learning SIMPLIFIED by DeepLearning.TV

For the very first educational resource in my journey toward deep learning mastery, I decided on the YouTube channel DeepLearning.TV. Their playlist entitled “Deep Learning SIMPLIFIED” was just the first step I needed to take.
The 30-video series outlines the most basic concepts underlying deep learning and neural networks, and presents real-world applications as well. DeepLearning.TV is a suitable resource even for those with no experience in machine learning or big data; I’d go so far as to recommend it to anyone who’s even slightly curious about artificial intelligence. The style of DeepLearning.TV’s content proved to be just as valuable as the content itself. The combination of the presenter’s smooth delivery and the splashy, animated graphics made for an efficient learning environment.
When I started watching the second video, entitled “What is a Neural Network,” the foundational concepts of deep learning started to click in my mind. Layers and forward propagation were no longer elusive to me after watching this video, and it made me feel like I was headed on the right course.
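To cement the idea for myself, here’s a minimal sketch of forward propagation in NumPy. The layer sizes, random weights, and function names are my own invention for illustration, not anything from the video:

```python
import numpy as np

def sigmoid(z):
    """Squash a weighted sum into the range (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def forward(x, weights, biases):
    """Propagate an input forward through each layer in turn."""
    activation = x
    for w, b in zip(weights, biases):
        # Each layer: multiply by weights, add bias, apply the activation.
        activation = sigmoid(w @ activation + b)
    return activation

# A tiny 2-input -> 3-hidden -> 1-output network with fixed random weights.
rng = np.random.default_rng(0)
weights = [rng.normal(size=(3, 2)), rng.normal(size=(1, 3))]
biases = [np.zeros(3), np.zeros(1)]

output = forward(np.array([0.5, -1.0]), weights, biases)
```

That loop is the whole trick: the output of one layer becomes the input to the next.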
After the first several introductory videos, the series transitions to explaining the different types of deep learning algorithms (convolutional and recurrent neural nets, among others) and their applications. I felt the videos did an exceptional job of discussing why a certain neural net is used for a certain type of data, e.g., convolutional neural nets for images. They regularly bring up real-world use cases of the different types of nets, which is a good way to keep the audience’s attention.
In the second half of the playlist, DeepLearning.TV guides the viewer through various Python deep learning libraries such as Caffe, Torch, and TensorFlow, and platforms like H2O.ai and Dato GraphLab. It’s obviously necessary to touch on these tools, but I didn’t feel ready to dive into them just yet; I still wanted to hone my understanding of how a neural net operates.
As an initial coat of paint, DeepLearning.TV was the right choice for me. I may not yet be able to confidently teach how deep learning works, but I’ve developed my own way of understanding it.
A Friendly Introduction to Deep Learning
The use of the word “friendly” in the title of this video, by Luis Serrano, Machine Learning Nanodegree Lead at Udacity, is quite appropriate. In 33 minutes, Serrano deftly covers topics like gradient descent, logistic regression, activation functions, and machine learning model probabilities.
For the majority of the video, I felt a lingering confusion about how its subjects related to deep learning. I certainly know how logistic regression works, but I kept wondering why the instructor was talking about it in a video about deep learning. To me, deep learning represented something more fluid and flexible than the rigidity of a linear-based algorithm.
At the 24:09 mark, it all came together. Serrano introduces an example dataset whose two classes lack a linear boundary. His point is that the data requires two different logistic regressions to sufficiently predict the two labels. This is where neural networks come in: the two separate classifiers are combined to form the basis of a neural net, with their numerical weights (coefficients) making up the layers of the deep learning algorithm. That sense of confusion vanished from my mind.
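Serrano’s combination can be sketched in a few lines of Python. The hand-picked weights below are my own illustrative values for an XOR-style problem, not the exact numbers from the video:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def classifier(x, w, b):
    """A single logistic-regression unit: weighted sum, then sigmoid."""
    return sigmoid(np.dot(w, x) + b)

def tiny_net(x):
    """Two linear classifiers, neither sufficient alone, combined into one net."""
    h1 = classifier(x, np.array([20.0, 20.0]), -10.0)   # roughly "x1 OR x2"
    h2 = classifier(x, np.array([-20.0, -20.0]), 30.0)  # roughly "NOT (x1 AND x2)"
    # The second layer is itself a logistic regression over the two units' outputs.
    return classifier(np.array([h1, h2]), np.array([20.0, 20.0]), -30.0)

for point in [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]:
    prediction = tiny_net(np.array(point))
```

No single line can separate those four points, but the two stacked logistic units together can, which is exactly the leap from logistic regression to a neural net.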
This was as close as it gets to a Eureka moment for me. Before watching this video, I wasn’t entirely sure what neural net layers were or how they functioned within a neural net. Serrano’s explanation helped me take a huge leap in understanding this crucial part of deep learning.
How Deep Neural Networks Work
After watching the previously mentioned videos, I continued to peruse more video content. A lot of what I was seeing was too similar to material I had already watched to significantly improve my understanding. That was until I came across Facebook data scientist Brandon Rohrer’s 24:38-minute video “How Deep Neural Networks Work.”
In this tutorial, Rohrer tactfully guides the audience through the intricacies of building a neural net that classifies a four-pixel image as solid or as having vertical, diagonal, or horizontal lines. The presentation begins by visualizing how a network of neurons and layers works together to learn the features of the images. He explains how a neural net applies weights, multiplying the input neurons by coefficients. This is where I was first introduced to the sigmoid and ReLU functions. Following that segment, Rohrer delves into fine-tuning error functions in a neural network, the difficulty of calculating the gradient, and how that influences the design of a neural network.
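The two activation functions he mentions are simple to write down; this is a generic sketch rather than code from Rohrer’s video:

```python
import numpy as np

def sigmoid(z):
    """Map any weighted sum into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def relu(z):
    """Zero out negative inputs; pass positive inputs through unchanged."""
    return np.maximum(0.0, z)

z = np.array([-2.0, 0.0, 2.0])
squashed = sigmoid(z)   # every value lands strictly between 0 and 1
rectified = relu(z)     # negatives become 0, positives stay as-is
```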
Rohrer’s presentation was the first I encountered that noticeably tested my mathematical abilities, especially during the section on gradient descent. After accumulating hours of video content, I felt this was a good time to absorb more quantitative material. But what I most appreciated about the video was the precise commentary outlining the web of layers in a neural network. Before Rohrer’s video I had a decent grasp of layers and weights, but his explanation is what really solidified it for me. The other videos assume, to a certain degree, that the audience knows what a layer is, whereas Rohrer’s covers all the bases.
Status and Next Steps
The first leg of my deep learning journey is complete, and I’m feeling cautiously optimistic. The main goal of the video portion of my learning experience was to establish familiarity with the underlying principles and applications of deep learning. I’m certainly not ready to start building neural nets of my own, but I can visualize a simple net in my mind processing the features of data and returning a label. My decision to lead with video content was validated by the fact that I never felt overwhelmed or intimidated. Stay tuned for the next episode in this series, in which I take an online course in deep learning.
Feature Image modified from “In the Bleachers by Steve Moore”