


Two years ago, Google released its open-source deep-learning library, TensorFlow, to the public. The immensely powerful tool has been driving the popularity of deep learning since its debut and has also achieved the status of most-forked repository on GitHub.


After a spike in interest surrounding the library’s initial release, its popularity has demonstrated continuous growth, nearly doubling its search-interest score in 2017 alone.

Compared to other deep-learning libraries, TensorFlow is a class above the rest.


When it comes to GitHub commits, it is by far the most active deep-learning library, with 75% more commits than second-place CNTK.

Today, the deep-learning software powers a number of Google services such as Google Photos search, speech recognition, and Gmail’s “Smart Reply,” and it is an instrumental component of the Google Brain team’s work. It’s incredibly adept at working with a variety of data sources such as pictures, audio, text, and video. While much can be said about the popularity of TensorFlow, it’s important to understand how arguably the most significant deep-learning software came to be. In this post, we’ll delve into the origin story of TensorFlow and how it eventually came to rule the world.


TensorFlow is described by Google as its “second-generation machine learning system.” The first-generation system and TensorFlow’s precursor was another deep-learning library called DistBelief, released in 2011. Relative to the technology at the time, DistBelief was a success: it could identify cats on YouTube, it won the Large Scale Visual Recognition Challenge of 2014, and it created the popular and psychedelic DeepDream.

DistBelief was the first major release of the Google Brain project, founded in 2011 with a mission to “explore the use of very-large-scale deep neural networks, both for research and for use in Google’s products.” The founders of Google Brain were Jeff Dean, Greg Corrado, and Andrew Ng, whom you most likely know from his famous Coursera course.

While DistBelief demonstrated promising results and an ability to be used in popular Google products such as Google Search, Maps, Photos, and Translate, it could not overcome its limitations. DistBelief’s slowness and inability to scale spurred the creation of TensorFlow, which operates at twice the speed of its predecessor.

However, the biggest factor in the switch from DistBelief to TensorFlow was that DistBelief “was narrowly targeted to neural networks, it was difficult to configure, and it was tightly coupled to Google’s internal infrastructure — making it nearly impossible to share research code externally.” TensorFlow was specifically designed to fix the flaws of DistBelief, replacing its predecessor’s rigid approach with a more generalized one and releasing its power to the public. TensorFlow’s continued progress to this day owes much to the fact that it is open source.

In a follow-up post to this one, we’re going to discuss some of the most prominent applications of TensorFlow currently being used today.


  • 2011: Google Brain project begins
  • 2011: DistBelief system created
  • 2012: DistBelief learns what a cat looks like through unsupervised learning
  • 2013: Google hires leading AI researcher Geoffrey Hinton
  • 2014: Google purchases DeepMind
  • 2014: DistBelief wins Large Scale Visual Recognition Challenge
  • July 2015: DeepDream is released
  • November 2015: TensorFlow is released
  • February 2016: TensorFlow becomes the most-forked repo on GitHub, a position it has held to this day
  • February 2017: Version 1.0.0 of TensorFlow is released



George McIntire, ODSC


I'm a journalist turned data scientist/journalist hybrid. Looking for opportunities in data science and/or journalism. Impossibly curious and passionate about learning new things. Before completing the Metis Data Science Bootcamp, I worked as a freelance journalist in San Francisco for Vice, Salon, SF Weekly, San Francisco Magazine, and more. I've referred to myself as a 'Swiss-Army knife' journalist and have written about a variety of topics ranging from tech to music to politics. Before getting into journalism, I graduated from Occidental College with a Bachelor of Arts in Economics. I chose to do the Metis Data Science Bootcamp to pursue my goal of using data science in journalism, which inspired me to focus my final project on being able to better understand the problem of police-related violence in America. Here is the repo with my code and presentation for my final project: https://github.com/GeorgeMcIntire/metis_final_project.