This summer, I had a blast speaking at Immersive A.I.—the first annual Open Data Science Conference (ODSC) event in New York. The venue was flawless, the organizers were exceptionally well-prepared, and there was a remarkable breadth of topics covered by the speakers. In particular, I was impressed by the level of engagement and the thoughtfulness of the questions asked by the audience at my seminar, Deep Learning with TensorFlow 2.0 (slides from the talk available here).
Given this standout experience, I’m terrifically excited to be delivering an expanded, 3.5-hour version of this interactive seminar at ODSC West, ODSC’s largest meeting, in San Francisco on Wednesday, October 30th.
My Deep Learning with TensorFlow 2.0 workshop will serve as a primer on deep learning theory that will bring the revolutionary machine-learning approach to life with hands-on demos. Critically, these demos will feature TensorFlow 2.0, the cutting-edge revision of the world’s most popular deep learning library (see chart below), including its now-built-in, easy-to-use Keras API.
The seminar will be broken down into three lessons:
- The Unreasonable Effectiveness of Deep Learning
- Essential Deep Learning Theory
- Deep Learning with TensorFlow 2.0
In the first lesson, I’ll:
- Introduce what artificial neural networks (ANNs) are and how they facilitate the uniquely effective deep learning models that have become ubiquitous in recent years
- Cover the range of deep learning families that are deployed across applications as diverse as machine vision, natural language processing, and super-human game-playing
- Compare and contrast the relative strengths and most valuable use cases of the most popular deep learning libraries, including TensorFlow 2.0, Keras, and PyTorch
In the second lesson, Essential Deep Learning Theory, we’ll:
- Design and train a preliminary ANN using TensorFlow 2.0 and its high-level Keras API, in a hands-on Jupyter notebook demo run within the Colab cloud environment
- Cover all of the essential theory related to deep learning, including:
- artificial neurons
- activation functions
- layer types
- cost functions
- stochastic gradient descent
- fancy optimizers (e.g., Nadam)
- performance metrics
- weight initialization
- hyperparameter tuning
- avoiding overfitting (e.g., with dropout)
- Use the TensorFlow Playground to visualize the theory of a deep learning network in action
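To give a flavor of the theory topics above, here is a minimal, purely illustrative sketch (plain Python, not one of the seminar's notebooks) of a single artificial neuron with a sigmoid activation function, updated by one stochastic gradient descent step on a squared-error cost; all of the numbers are made-up examples:

```python
import math

def sigmoid(z):
    # Activation function: squashes any real input into (0, 1)
    return 1.0 / (1.0 + math.exp(-z))

# A single artificial neuron: a weighted sum of inputs plus a bias,
# passed through the activation function
w, b = 0.5, 0.1          # weight and bias (illustrative initial values)
x, y_true = 2.0, 1.0     # one training example: input and target

# Forward pass
z = w * x + b
y_hat = sigmoid(z)

# Squared-error cost and its gradients, derived by hand with the chain
# rule (a library like TensorFlow computes these automatically)
cost = (y_hat - y_true) ** 2
dC_dyhat = 2 * (y_hat - y_true)
dyhat_dz = y_hat * (1 - y_hat)   # derivative of the sigmoid
dC_dw = dC_dyhat * dyhat_dz * x
dC_db = dC_dyhat * dyhat_dz

# One stochastic gradient descent step
lr = 0.1                 # learning rate (a hyperparameter)
w -= lr * dC_dw
b -= lr * dC_db
```

After this single step, recomputing the forward pass with the updated weight and bias yields a lower cost on this example, which is the essence of gradient-descent learning.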
The final lesson will tie all of the content from the previous lessons together. Namely, we’ll:
- Summarize all of the new functionality associated with the 2.0 release of TensorFlow, including:
- eager autodifferentiation
- just-in-time compilation
- tf.data for data pipelines
- tf.io for data processing
- TensorFlow Serving for model deployment across many servers
- TensorFlow Lite for deployment to mobile or embedded devices
- TensorFlow.js for web browsers
- Revisit our introductory ANN Jupyter notebook from the second lesson and interactively beef it up with all of the theory we covered to create a high-performance deep learning model
- Design a convolutional neural network to excel at a machine vision task
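As a taste of the style of code the final lesson involves, here is a minimal, illustrative sketch (not the seminar's actual notebooks; the layer sizes and hyperparameters are arbitrary) using TensorFlow 2.0's built-in Keras API, along with its eager autodifferentiation via `tf.GradientTape`:

```python
import tensorflow as tf

# A small dense network built with the now-built-in Keras API
# (layer sizes are illustrative, not the seminar's architecture)
model = tf.keras.Sequential([
    tf.keras.Input(shape=(784,)),                     # e.g., flattened 28x28 images
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dropout(0.2),                     # dropout to avoid overfitting
    tf.keras.layers.Dense(10, activation="softmax"),  # 10-class output
])
model.compile(optimizer="nadam",  # one of the "fancy" optimizers from Lesson 2
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Eager autodifferentiation: gradients computed on the fly,
# with no graph sessions required as in TensorFlow 1.x
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x ** 2
grad = tape.gradient(y, x)  # dy/dx = 2x = 6.0
```

From here, a call to `model.fit()` on training data is all that's needed to train the network, which is exactly the kind of hands-on workflow the demos walk through.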
In the end, you’ll come away from the seminar with an intuitive understanding of deep learning’s foundations. With tips for overcoming common pitfalls and best practices for designing and training ANNs, all provided within straightforward Jupyter notebooks (a blend of the notebooks from my book Deep Learning Illustrated and my forthcoming TensorFlow 2.0 video tutorial), you’ll have all the knowledge you need to apply state-of-the-art deep learning models to your own data.
Following the seminar, I’ll be signing copies of my newly released book (cover pictured) in a dedicated session, and I’ll be giving away free signed copies as well. I’m very much looking forward to meeting you then!