- Intro: How I Plan to Teach Myself Deep Learning Using Only Free Resources
- Learning Deep Learning Series Part 1: Videos
- Learning Deep Learning Part 2: Online Courses
- Learning Deep Learning Part 3: Github Repos
This is the second in a series of articles in which Data Science Associate George McIntire catalogs his experience teaching himself deep learning while only using free resources. This post is not sponsored by any entity and the views and opinions expressed in this piece are solely attributable to the author.
In part one of this series, I began my deep learning journey with online videos. The videos I mentioned in that article did a great job of easing me into this incredibly complex subject. My goal at the time was not to dive into building a neural net, or even a simple model, but rather to establish a basic understanding of the foundations of deep learning. Though I'm not yet at a point where I could teach the material I've learned, I do feel ready to move on to the coding portion.
In this article, I'll be focusing on online courses, namely Udacity's free Deep Learning course and fast.ai's Practical Deep Learning For Coders, Part 1 course. I'll discuss how these two courses reinforced the concepts I had previously learned while introducing me to the coding side of deep learning. There are plenty of great online courses out there, and if I had the time, I would do them all. I chose these two because I'm familiar with Udacity's other data-related classes, and because fast.ai's course is relatively new and its popularity has piqued my curiosity. If you're interested in other courses besides these two, I've listed them in this Github repo, which is a collection of free deep learning resources.
For those looking for a short course, Udacity’s Deep Learning class is worth checking out. If you’re determined enough, you could easily finish the class in a single afternoon. Despite its brevity, the course is a valuable resource.
I do recommend taking Udacity's Intro to Machine Learning before their deep learning course, because it functions as a precursor. This is good advice in general: you shouldn't attempt to learn deep learning before understanding machine learning concepts and algorithms.
The course is divided into seven sections, covering the basics and the main types of neural nets (such as convolutional vs. recurrent neural networks), and ends with building a live camera app. The majority of the material is presented in video format, with quizzes and assignments interspersed between the videos. The assignments are self-evaluated Jupyter Notebooks that come mostly pre-filled. Overall, I preferred the videos and assignments to the quizzes; some quizzes asked questions about things barely covered in the videos.
The first lesson, entitled “From Machine Learning to Deep Learning,” was for the most part a refresher, one that added a necessary fresh coat of paint to my understanding of subjects like inputs, weights, and stochastic gradient descent. It also did a decent job of introducing new concepts such as softmax and cross entropy.
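To make those two new concepts concrete, here is a minimal NumPy sketch of softmax and cross entropy (the numbers are my own illustrative values, not taken from the course):

```python
import numpy as np

def softmax(logits):
    # subtract the max before exponentiating for numerical stability;
    # the result is a probability distribution that sums to 1
    exps = np.exp(logits - np.max(logits))
    return exps / np.sum(exps)

def cross_entropy(probs, true_class):
    # the loss is the negative log-probability assigned to the true class:
    # confident correct predictions give a loss near 0, wrong ones blow up
    return -np.log(probs[true_class])

logits = np.array([2.0, 1.0, 0.1])  # raw scores from a model (illustrative)
probs = softmax(logits)
loss = cross_entropy(probs, 0)      # suppose class 0 is the true label
```

Because class 0 has the highest logit, it gets the most probability mass, and the cross-entropy loss is small.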
The first assignment, which asks students to train a logistic regression model on the notMNIST data set, was a helpful step before building an actual deep learning model. I appreciated the walkthrough in the videos and the annotated Jupyter Notebooks, which made a real difference in my learning, and would again in the later assignments. For my own learning style, I made much more progress with the assignments than with the videos and quizzes.
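The shape of that first assignment can be sketched in a few lines of scikit-learn. I use sklearn's built-in digits dataset as a stand-in, since notMNIST has to be downloaded separately; the split ratio and solver settings are my own choices, not the assignment's:

```python
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# load_digits stands in for notMNIST: small grayscale images, 10 classes,
# each image flattened into a feature vector
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# a plain logistic regression is the baseline before any neural net
clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

Even this non-deep baseline scores well on simple image data, which is exactly why the assignment starts here: it sets the bar a neural net has to beat.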
One drawback for me is that as I progressed through the course, my rate of learning seemed to decrease; I didn't feel I was absorbing as much information per lesson as before. This was especially an issue during the CNN and RNN sections. I certainly understand the contexts in which those two types of neural nets are used, but I still need to work on my mastery of them.
As mentioned before, the Udacity course is meant to be brief, but in spite of that, I found its assignments to be a good foray into coding a deep learning model.
Fast.ai's Practical Deep Learning For Coders course, less than a year old, has been making waves in the AI world. It received glowing praise in the Harvard Business Review's recent cover story on the impact of AI on business.
This course is an all-encompassing treasure trove of deep learning and contains as much material as a graduate-level course, if not more. Due to its extensiveness—it's billed as a “7-week course”—I will only be discussing the first two lessons. I do, however, plan on finishing the rest and, at some point, beginning the recently announced follow-up.
This course from Jeremy Howard and Rachel Thomas is meticulous in every aspect and offers something for students of all learning styles. Thomas and Howard provide videos, Jupyter Notebooks, and a forum for students to discuss the material and ask questions.
Before I even started the course, fast.ai's commitment to teaching the material effectively was evident. Thomas and Howard both go into detail about why they created the course and map out a path to successfully completing their incredibly voluminous class.
If you prefer to learn from a single one-stop shop rather than a collection of resources, then fast.ai is right for you. However, I highly recommend becoming knowledgeable about AWS before starting this course, because the code used in the course is designed to be executed on an AWS instance and cannot be run on your local machine. The instructors are kind enough to provide a 13-minute tutorial on setting up an AWS instance for deep learning, but in my experience with AWS, there are usually some hurdles even if you follow the instructions to a T.
What I appreciated about the first lesson is that they provide two different videos: one that gives a lesson overview and one for the coding tutorial. This made a huge impact on me. The overview video sets the tone and scene for the lesson. It's the stretching and warm-up before a big workout; without it, you run the risk of pulling a muscle or suffering some other injury.
In lesson 1, they go over a lot of the basic concepts of deep learning, all of which I had encountered before in previous material. I didn’t find this redundant because I think it’s beneficial to hear certain ideas from different sources.
Then came the coding portion of lesson 1, where I learned how to use a convolutional neural net to detect dogs vs. cats. Fast.ai did an excellent job of annotating the Jupyter Notebook provided for this lesson; that, combined with the video, was very reassuring. What I found interesting in this lesson is its focus on the programming aspects of deep learning as opposed to the mathematical ones. The lesson is a meticulously detailed walkthrough of implementing a pre-trained deep learning model and of creating a model from scratch using the Keras library. Even though I didn't fully understand the entire lesson, I felt I had made one of my biggest leaps in this entire project.
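For a sense of what "creating a model from scratch using the Keras library" looks like, here is a minimal sketch of a small CNN for a two-class (cats vs. dogs) problem. The layer sizes and image dimensions are my own illustrative choices, not the course's actual architecture; the lesson's other approach, fine-tuning a pre-trained network, would instead start from a model in `keras.applications` with ImageNet weights.

```python
from tensorflow.keras import layers, models

# a tiny from-scratch CNN: conv/pool blocks extract image features,
# then dense layers classify them into 2 classes (cat vs. dog)
model = models.Sequential([
    layers.Input(shape=(64, 64, 3)),          # small RGB images (illustrative size)
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(64, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(2, activation="softmax"),    # one probability per class
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
```

The point of the lesson is that wiring up a network like this is mostly programming, not math: the layers, loss, and optimizer are picked from a library rather than derived by hand.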
In lesson 2, Jeremy Howard goes over his Kaggle submission for the cats vs. dogs competition, which helped address some lingering issues I had with the first lesson. The most important segment of the lesson for me was about the loss function. The topic had come up in my previous learning, but fast.ai really brought it home for me. Laying out the weights and activations in a spreadsheet is what made this concept click, in a way the other presentations of it hadn't.
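That spreadsheet view can be reproduced in a few lines of NumPy: activations in a row, weights in columns, a multiply-and-sum per output cell, and a loss at the end. All the numbers here are illustrative values of my own, not the ones from Howard's spreadsheet:

```python
import numpy as np

# one dense layer, laid out the way a spreadsheet shows it:
# the input activations (a row) times the weights (one column per unit)
x = np.array([0.5, -1.0, 2.0])        # input activations
W = np.array([[0.1,  0.4],
              [0.2, -0.3],
              [-0.5, 0.25]])          # weights: 3 inputs -> 2 output units
b = np.array([0.0, 0.1])              # biases
z = x @ W + b                         # each entry is a sum of products

# turn the two scores into probabilities, then take the log loss
# for a true label of class 1 (the negative log of its probability)
p = np.exp(z) / np.exp(z).sum()
loss = -np.log(p[1])
```

Every cell of `z` is just a sum of weight-times-activation products, which is exactly what the spreadsheet makes visible; the loss is then one formula applied to the final cells.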
The Fast.AI course is by far the best resource I’ve used so far in my deep learning journey. Deep learning is taught by some of the world’s smartest people, but the Fast.ai instructors don’t just know deep learning, they know how to teach it. When I get the opportunity I will definitely finish the rest of the course and the recently announced part 2 as well.
If these two courses don't satisfy your learning appetite, then I recommend Coursera's deep learning course; Coursera has proven it knows its stuff when it comes to all topics related to data science. Other ones to check out are Deep Learning Fundamentals from DeepLearningTV and Deep Learning AI from NVIDIA: the former is great for focusing on just the basics, while the latter is for those who want to tackle more complex topics.
- Solidified my understanding of the basic tenets and concepts of deep learning
- Was introduced to the programming side of things
- Learned how to use a pre-trained model as well as create one from scratch
George McIntire, ODSC
I'm a journalist turned data scientist/journalist hybrid. Looking for opportunities in data science and/or journalism. Impossibly curious and passionate about learning new things. Before completing the Metis Data Science Bootcamp, I worked as a freelance journalist in San Francisco for Vice, Salon, SF Weekly, San Francisco Magazine, and more. I've referred to myself as a 'Swiss-Army knife' journalist and have written about a variety of topics ranging from tech to music to politics. Before getting into journalism, I graduated from Occidental College with a Bachelor of Arts in Economics. I chose to do the Metis Data Science Bootcamp to pursue my goal of using data science in journalism, which inspired me to focus my final project on being able to better understand the problem of police-related violence in America. Here is the repo with my code and presentation for my final project: https://github.com/GeorgeMcIntire/metis_final_project.
ODSC’s Accelerate AI focuses on three key areas: Innovation, Expertise, and Management. Learn what the latest advances in AI and applied data science are, how they can affect your company, and how to build an effective team around their potential. Learn more here.