In a previous article, we discussed the origin story and history of the Python deep learning library TensorFlow. It has experienced a monumental rise like nothing seen before: just two years after its debut, it already holds the title of the most-forked repository on GitHub.

TensorFlow’s significance doesn’t come from how many people use it but from who uses it. Some of the biggest companies in tech such as Airbnb, Snapchat, and Uber all employ the library for their AI-related needs. TensorFlow has become the deep learning tool of choice at Uber because:

“The framework is one of the most widely used open source frameworks for deep learning, which makes it easy to onboard new users. It also combines high performance with an ability to tinker with low-level model details…TensorFlow has end-to-end support for a wide variety of deep learning use cases, from conducting exploratory research to deploying models in production on cloud servers, mobile apps, and even self-driving vehicles.”

One of the main reasons TensorFlow has captured the title of the most important tool in deep learning and AI today is its versatility. TensorFlow can be applied to a variety of data forms, whether it’s text (document classification, translation, sentiment analysis), audio (voice recognition, Siri/Alexa/Google Home), or visual (image processing, computer vision, video). Pretty much any Google app or technology that uses AI employs TensorFlow. The performance of Google Translate improved remarkably when the company switched to this technology. It’s fair to say that Google, the creator of TensorFlow, has benefited from the technology as much as everyone who uses it.

Earlier this year Dropbox published a detailed analysis of how they used TensorFlow to build what they call an “Optical Character Recognition (OCR) pipeline.” Put simply, they needed a tool that could read and process text from scanned documents uploaded via the mobile document scanner in the Dropbox app. This was a complex undertaking that required deep learning techniques such as LSTMs (long short-term memory networks), CTC (connectionist temporal classification), and CNNs (convolutional neural networks). The project took eight months to come to fruition, leaving the company with what they describe as a “state-of-the-art OCR pipeline.”
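Dropbox’s full pipeline can’t be reproduced here, but one of the techniques it relies on, CTC, has a decoding step simple enough to sketch: collapse consecutive repeated labels emitted per frame, then drop the blank symbol. The blank character and example strings below are purely illustrative, not taken from Dropbox’s system:

```python
from itertools import groupby

BLANK = "-"  # illustrative blank symbol; real CTC implementations use a reserved label index

def ctc_collapse(path):
    """Collapse a per-frame CTC label path into the final transcript:
    merge consecutive repeats, then remove blanks."""
    return "".join(ch for ch, _ in groupby(path) if ch != BLANK)

# The blank between the two "ll" groups is what lets CTC emit a double letter.
print(ctc_collapse("hh-e--ll-llo--"))  # → "hello"
```

This is why CTC matters for OCR: the network can output one label per image column without knowing character boundaries in advance, and the collapse rule recovers the text.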

One company that has hugely benefited is the British online retailer Ocado. The company is the world’s largest online grocery supermarket, and as you can imagine it has some serious machine learning needs. Part of Ocado’s success comes from its great relationship with its customers; however, the company was overwhelmed by the volume of calls, emails, texts, and other communication from its loyal customer base. This is where TensorFlow comes in. Ocado needed a system that could automatically classify customer messages into categories of varying priority, which sounds just like a textbook machine learning problem. Ocado assembled a data science team to tackle this immense task, and its decision to employ TensorFlow paid huge dividends.

“Ocado was able to successfully deploy this new product in record time as a result of the close collaboration between three departments: data science, contact center systems, and quality and development. Working together allowed us to share data and update models quickly, which we could then deploy in a real-world environment.” – Pawel Domagala, product owner, Last Mile systems

The Ocado case is a perfect example of how automation can benefit companies: by easing the burden of a repetitive task, it frees employees to better serve customers and clients.

TensorFlow has been an undeniable success in its first two years, and the examples above show that any company with an AI-related need should seriously consider turning to it.

©ODSC2017

George McIntire, ODSC

I'm a journalist turned data scientist/journalist hybrid. Looking for opportunities in data science and/or journalism. Impossibly curious and passionate about learning new things. Before completing the Metis Data Science Bootcamp, I worked as a freelance journalist in San Francisco for Vice, Salon, SF Weekly, San Francisco Magazine, and more. I've referred to myself as a 'Swiss-Army knife' journalist and have written about a variety of topics ranging from tech to music to politics. Before getting into journalism, I graduated from Occidental College with a Bachelor of Arts in Economics. I chose to do the Metis Data Science Bootcamp to pursue my goal of using data science in journalism, which inspired me to focus my final project on being able to better understand the problem of police-related violence in America. Here is the repo with my code and presentation for my final project: https://github.com/GeorgeMcIntire/metis_final_project.