Deep learning models can be intimidating, and rightfully so: in their raw form they are highly complex algorithms that must be engineered with expertise. However, deep learning is accessible to anyone with a technical background, thanks to the organizations and individuals that have packaged its core concepts into more user-friendly tools. This article walks through some steps to get the ball rolling with deep learning. Getting started requires some programming background, and a knowledge of statistics will be helpful.
[Related Article: Deep Learning for Speech Recognition]
There are a variety of platforms that enable deep learning. One of the most popular APIs is Keras. On Keras’ RStudio page, the authors state that “being able to go from idea to result with the least possible delay is key to doing good research,” and that philosophy shows in the API’s ease of use. Keras builds neural networks on top of TensorFlow (a lower-level machine learning library), exposing common operations through pre-made, high-level functions. Keras is available in both R and Python; however, not all functionality is available in R. As a beginner, it is important to work with a platform that has substantial community support: the more popular the platform, the more searchable content you will find on StackOverflow and similar sites when you get stuck. The pairing of Python and Keras appears to be the most popular.
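To illustrate that ease of use, here is a minimal sketch of the Keras workflow: define a stack of layers, then compile the model. The layer sizes and input dimension are illustrative placeholders, not tied to any particular dataset.

```python
# Minimal sketch of the Keras workflow: define layers, then compile.
# All sizes here are illustrative placeholders.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(16,)),               # 16 input features (placeholder)
    layers.Dense(32, activation="relu"),    # one hidden layer
    layers.Dense(1, activation="sigmoid"),  # binary output
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```

Once data is loaded, a model defined this way is trained with a single call to `model.fit(x, y)` — a good example of how much boilerplate the API removes.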
Getting Your Hands Dirty
There are a variety of strategies for learning a new technology; personally, I favor learning the fundamentals and then trying to build simple models right away. The idea is to fail early and often, adapting after each failure. Kaggle, a company known for hosting online machine learning competitions, offers a “micro-course” in deep learning, available here. The course focuses on image recognition, one of the most developed and popular applications of deep learning. The advantage of starting with image recognition and convolutional neural networks is the wealth of beginner-friendly material available to reference along the way. The Kaggle course requires Python knowledge, but Kaggle offers another free “micro-course” on Python that translates nicely to the deep learning module.
Leaving the Nest
After the Kaggle course, are you ready to start building custom neural networks? Probably not, but if you understand the concepts from the course well, you are ready to venture out on your own with pre-trained models. A key step is to identify a task that is challenging but within reach: there should be a clear signal in the data, a small number of classes to predict, and plenty of data. One reasonable next step would be to predict whether people in headshots are wearing hats. There is a clear signal, two classes to predict (wearing a hat or not), and plenty of labeled data to model. A great resource for labeled images of people is the CelebA dataset, available here. It offers 202,599 images of celebrities annotated with 40 binary attributes (e.g. long hair, hat, smiling). Below are two useful images for the hat predictor from the CelebA dataset.
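Turning CelebA’s annotations into the two classes for the hat predictor amounts to filtering on the hat attribute. The sketch below assumes the standard `list_attr_celeba.txt` layout (one row per image, attributes coded as 1 / -1, including a `Wearing_Hat` column) and substitutes a tiny hand-built table so it runs without the dataset on disk.

```python
# Sketch: splitting CelebA images into "hat" / "no hat" classes using the
# attribute annotations. Assumes the standard 1 / -1 attribute coding.
import pandas as pd

# Stand-in for reading list_attr_celeba.txt, so this runs without the
# dataset downloaded. The real table has 202,599 rows and 40 columns.
attrs = pd.DataFrame(
    {"Wearing_Hat": [1, -1, -1, 1], "Smiling": [1, 1, -1, -1]},
    index=["000001.jpg", "000002.jpg", "000003.jpg", "000004.jpg"],
)

hat_images = attrs.index[attrs["Wearing_Hat"] == 1].tolist()
no_hat_images = attrs.index[attrs["Wearing_Hat"] == -1].tolist()
```

With the real file, you would replace the hand-built frame with a `pd.read_csv` call on the attribute file and then move or label the image files according to these two lists.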
[Related Article: Using Mobile Devices for Deep Learning]
Once you have successfully employed pre-trained models, you are ready to break loose of pre-made architectures and start customizing your own to optimize predictive accuracy. Understanding in depth how convolutional neural networks work is key to engineering a suitable architecture. Brandon Rohrer, Principal Data Scientist at iRobot, offers a great lecture on the topic below.
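A common bridge between using pre-trained models and building custom architectures is to reuse a pre-trained backbone and customize only the classification head. The sketch below uses VGG16 from `keras.applications` purely as an example; the input size and the head are illustrative assumptions, and `weights=None` is set only so the sketch runs offline.

```python
# Sketch: reusing a pre-trained backbone, customizing only the head.
# In practice, use weights="imagenet" to actually load pre-trained weights.
from tensorflow import keras
from tensorflow.keras import layers

base = keras.applications.VGG16(
    include_top=False, weights=None, input_shape=(128, 128, 3)
)
base.trainable = False  # freeze the convolutional layers

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(1, activation="sigmoid"),  # e.g. hat vs. no hat
])
```

Freezing the backbone means only the small custom head is trained at first, which is far less data-hungry than training the whole network; the frozen layers can be unfrozen later for fine-tuning.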
It is useful to display the architecture of a model you are familiar with while following along with the lecture. This can be done in Python with Keras, as shown below. It is important to understand the function of each layer shown in the summary and to visualize how the layers are arranged relative to one another.
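For instance, `summary()` prints each layer of a model along with its output shape and parameter count. The toy convolutional model below is only a placeholder; the same call works on any built Keras model.

```python
# Sketch: printing a model's architecture with Keras. summary() lists each
# layer, its output shape, and its parameter count. Sizes are placeholders.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential([
    keras.Input(shape=(64, 64, 3)),          # small RGB images
    layers.Conv2D(16, 3, activation="relu"), # feature extraction
    layers.MaxPooling2D(),                   # spatial downsampling
    layers.Flatten(),                        # 2D features -> 1D vector
    layers.Dense(1, activation="sigmoid"),   # binary prediction
])
model.summary()
```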
Everyone has their own preferred method for tackling a new concept, and there are many avenues and tools available for data scientists to wield deep learning models. I have presented some resources and a strategy that worked for me, but feel free to chart your own path!