TensorFlow for Computer Vision – Transfer Learning Made Easy
Writing neural network model architectures from scratch involves a lot of guesswork. How many layers? How many nodes per layer? What activation function to use? Regularization? You won’t run out of questions any time soon. Transfer learning takes a different approach. Instead of starting from scratch... Read more
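The idea the teaser describes can be sketched in a few lines of TensorFlow/Keras. This is a minimal illustration, not the article's own code: a pretrained MobileNetV2 base is reused as a frozen feature extractor and only a small new head is trained, so the layer-count and node-count guesswork is mostly inherited from the pretrained model. (Input shape and the binary head are hypothetical choices; in practice you would pass `weights="imagenet"` to load the pretrained features, omitted here only to keep the sketch download-free.)

```python
import tensorflow as tf

# Pretrained convolutional base without its original classification head.
# In practice: weights="imagenet" to actually reuse learned features.
base = tf.keras.applications.MobileNetV2(
    input_shape=(160, 160, 3),
    include_top=False,
    weights=None,
)
base.trainable = False  # freeze the base; only the new head will train

# Attach a small task-specific head on top of the frozen features.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # e.g. a binary task
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
```

From here, `model.fit(...)` trains just the pooling-plus-dense head on your own dataset, which is typically far cheaper than training the full network from scratch.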
Melting Pot and the Reverse-Engineering Approach to Multi-Agent Artificial General Intelligence
Editor’s Note: Joel is a speaker for ODSC East 2022. Be sure to check out his talk, “Scalable Evaluation of Multi-Agent Reinforcement Learning with Melting Pot,” there! Homo sapiens are a funny species. Across most of the traditionally studied domains of intelligence, we do not outshine... Read more
Recap of the First ODSC Ai+ Deep Learning Bootcamp Session
We recently finished our first session of Jon Krohn’s Deep Learning Bootcamp, and we’re already excited for part 2. Here are a few highlights from the session, some thoughts from attendees, and what to expect from part 2 and beyond. Session 1 Recap: How Deep Learning... Read more
Guidelines for Choosing an Optimizer and Loss Functions When Training Neural Networks
There’s no one right way to train a neural network. These models serve various functions with multiple data sets, so what produces a high-performing model in one instance may not in another. As a result, effective training relies on a series of tools and strategies. Two... Read more
The ODSC Warmup Guide to Keras
Keras is a Python library for deep learning. Deep learning is a sub-branch of artificial intelligence that focuses on solving complex computational problems by emulating the workings of the human brain. Neural networks, computational graphs whose nodes represent operators that break down tasks... Read more
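As a concrete illustration of the computational-graph idea the teaser mentions, here is a minimal Keras model (a hypothetical example, not from the article): a small stack of dense layers where each layer's nodes transform the previous layer's output. The 784-dimensional input and the 10-class output are assumptions, chosen to match a flattened 28x28 image task.

```python
import tensorflow as tf

# A small feed-forward network: each Dense layer is a node in the
# computational graph, transforming the previous layer's output.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(784,)),               # e.g. flattened 28x28 image
    tf.keras.layers.Dense(64, activation="relu"),      # hidden layer of 64 nodes
    tf.keras.layers.Dense(10, activation="softmax"),   # 10-class output
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
```

Calling `model.summary()` prints the resulting graph layer by layer, which is a handy way to see how Keras composes these operators.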
Google AI Proposes Temporal Fusion Transformer for Multi-Horizon Time Series Forecasting
Time series forecasting is a useful data science tool for helping people predict what will happen in the future based on historical, time-stamped information. Google researchers recently explained how they developed and used the company’s Temporal Fusion Transformer (TFT) to achieve more progress with these types... Read more
Facebook AI and University of Guelph Open-Source GHN-2 for Fast Initialization of Deep Learning Models
Deep learning has massive potential, but it’s often hard to achieve. These models’ complexity requires extensive training before they’re ready for use, making implementation long and often expensive. Facebook AI Research (FAIR) and the University of Guelph may have found a solution. In a recent... Read more
Jon Krohn on Deep Learning Advancements, PyTorch Lightning, and Going Beyond ML
Deep learning is becoming commonplace, as more and more companies are looking to take a deeper dive into their data – and going beyond just machine learning. We recently spoke with Dr. Jon Krohn about his upcoming deep learning bootcamp, what tools he uses, what platforms... Read more
Reviewing the TensorFlow Decision Forests Library
In their paper, Tabular Data: Deep Learning is Not All You Need, the authors argue that while deep learning methods have shown tremendous success in the image and text domains, traditional tree-based methods like XGBoost continue to shine when it comes to tabular data. The authors... Read more
Get the Deep Learning Training You Need to Excel With This New Bootcamp
In the years since it first emerged, the number of applications for deep learning has increased significantly and now includes automatic speech recognition, recommendation systems, NLP, financial fraud detection, and much more. With so many applications and uses, it’s essential to have a strong foundation... Read more