As the only applied virtual training conference series, ODSC offers an immersive, engaging, and unique experience for data science practitioners. Each conference features several days of hands-on training sessions that cover both essential theory and skill-building practice. At the ODSC West Virtual Conference this October 27-30, the training sessions will focus on the topics and tools shaping the future of the industry, including self-supervised learning, recommendation systems, Spark NLP, human-machine collaboration, chatbots, transfer learning for NLP, and much more.
Below is a list of just a few of the exciting ODSC West training sessions and workshops that will be featured this October.
Reduce the amount of time you spend preparing data during this interactive workshop. Learn how to use Apache Drill to quickly explore a diverse set of data from many different sources with standard SQL, without the intermediate step of writing ETL code.
Recently there has been an increase in the use of semi-supervised and unsupervised learning methods to solve real-world problems. In particular, GANs and reinforcement learning are being used for tasks such as controlling robots and helping create marketing plans. This training will teach you both the theoretical knowledge and the hands-on skills you need to utilize state-of-the-art AI models.
Get hands-on experience using RLlib to implement reinforcement learning algorithms, as well as to add your own. RLlib is versatile and integrates easily with PyTorch, OpenAI Gym, and TensorFlow, and it handles resource management and distributed execution so you can get the most out of your reinforcement learning algorithms.
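RLlib's value is in scaling algorithms like these across frameworks and workers, but the core idea it builds on fits in a few lines. Below is a minimal, self-contained sketch of tabular Q-learning on a toy five-state corridor; the environment, function names, and hyperparameters are all illustrative assumptions, not RLlib code:

```python
import random

# Toy environment: a 1-D corridor of 5 states; reaching state 4 ends the
# episode with reward 1. Action 0 moves left, action 1 moves right.
N_STATES, ACTIONS = 5, [0, 1]
alpha, gamma, epsilon = 0.5, 0.9, 0.3   # learning rate, discount, exploration

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    """Move left/right within the corridor; reward 1 on reaching the goal."""
    nxt = max(0, min(N_STATES - 1, state + (1 if action == 1 else -1)))
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

random.seed(0)
for _ in range(300):                     # training episodes
    s, done = 0, False
    while not done:
        # epsilon-greedy action selection
        a = random.choice(ACTIONS) if random.random() < epsilon \
            else max(ACTIONS, key=lambda act: q[(s, act)])
        s2, r, done = step(s, a)
        # Q-learning update: move Q(s,a) toward r + gamma * max_a' Q(s',a')
        best_next = max(q[(s2, a2)] for a2 in ACTIONS)
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# The learned greedy policy moves right from every non-terminal state.
policy = [max(ACTIONS, key=lambda act: q[(s, act)]) for s in range(N_STATES - 1)]
print(policy)
```

In practice, RLlib runs updates like this at scale across parallel workers and neural-network function approximators rather than in a single loop over a lookup table.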
“Advanced NLP with TensorFlow and PyTorch: LSTMs, Self-attention and Transformers”
There have been significant advances in Natural Language Processing (NLP) over the past couple of years, greatly expanding its capabilities. This training combines theory with hands-on practice, giving you the knowledge and skills needed to build state-of-the-art NLP models. Specifically, you will use TensorFlow and PyTorch to create transformer-based and recurrent architectures for machine translation, predictive text, and text classification.
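The self-attention mechanism at the heart of transformer architectures can be sketched in a few lines of NumPy. This is an illustrative single-head version with hypothetical toy weights, not material from the session itself:

```python
import numpy as np

def self_attention(x, w_q, w_k, w_v):
    """Scaled dot-product self-attention over one sequence.

    x: (seq_len, d_model) token embeddings.
    w_q, w_k, w_v: (d_model, d_k) projection matrices (toy values here).
    """
    q, k, v = x @ w_q, x @ w_k, x @ w_v
    scores = q @ k.T / np.sqrt(k.shape[-1])          # (seq_len, seq_len)
    # Row-wise softmax turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output vector is a weighted mix of all value vectors.
    return weights @ v

rng = np.random.default_rng(0)
x = rng.normal(size=(4, 8))                          # 4 tokens, d_model = 8
w_q, w_k, w_v = (rng.normal(size=(8, 8)) for _ in range(3))
out = self_attention(x, w_q, w_k, w_v)
print(out.shape)                                     # one vector per token
```

TensorFlow and PyTorch provide optimized, multi-head versions of this operation; the NumPy form just shows why each token's output depends on the whole sequence.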
Because of its many features and conventions, creating an interactive data visualization in d3.js can be overwhelming. In this workshop, you will build a data visualization step by step from a CSV file. This deep dive into data visualization in d3.js will ensure that you know which aspects of the library are essential and which are less so.
Creating an NLP-driven AI model is just the first step in building a successful solution, product, or service for marketing, customer engagement, social media, or employee collaboration. Even after an accurate model is created, several challenges remain: how to build a monetized product around the model, how to scale it efficiently, and how to make it private, secure, observable, and manageable, to name just a few. This session will address these challenges and discuss the best ways of overcoming them.
The lack of flexibility that characterized the first iterations of Keras is now in the past, and Keras has become one of the most popular platforms for developing deep learning models. In this workshop, you'll learn why you should add Keras to your toolbelt. You'll build a Keras model first with the included components, then with customized components, and finally with the underlying TensorFlow platform, which allows maximum flexibility.
In this hands-on workshop, you’ll learn how to generate news headlines using a state-of-the-art summarization model. The model used in this workshop was trained partly on Reuters news data, ensuring that the training data reflected the rules of independence and integrity. The workshop will also begin to address the issue of AI explainability, taking a first step toward answering the question of how to inspire trust in the model’s output.
Learn how to use Active Learning to find knowledge gaps in your model (Diversity Sampling) and the places where the model is confused (Uncertainty Sampling). This session will start with techniques that require only a few lines of code and finish with those that take advantage of recent advances in transfer learning.
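Uncertainty Sampling really can be one of the "few lines of code" techniques. As a minimal sketch (the function name and example probabilities below are hypothetical), least-confidence sampling simply ranks unlabeled examples by the model's top predicted probability and sends the least confident ones to a human for labeling:

```python
import numpy as np

def least_confidence_sample(probs, k):
    """Pick the k examples whose top predicted probability is lowest.

    probs: (n_examples, n_classes) predicted class probabilities.
    Returns indices of the k most uncertain unlabeled examples --
    the ones most worth labeling next.
    """
    confidence = probs.max(axis=1)        # model's confidence per example
    return np.argsort(confidence)[:k]     # least confident first

# Hypothetical predictions from a 3-class model on 4 unlabeled examples:
probs = np.array([
    [0.98, 0.01, 0.01],   # very confident
    [0.40, 0.35, 0.25],   # confused -> worth labeling
    [0.70, 0.20, 0.10],
    [0.34, 0.33, 0.33],   # most confused
])
print(least_confidence_sample(probs, 2))   # -> [3 1]
```

Variants swap the ranking criterion (margin between the top two classes, entropy of the distribution), while Diversity Sampling instead looks for examples unlike anything in the current training set.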
The increased use of voice assistants over the last couple of years has created a need for tools that reduce the time required to develop Conversational AI systems. To meet this need, you can use the DeepPavlov framework to develop multi-skill conversational agents. It’s designed to facilitate the creation of dialogue systems even when you only have a limited amount of data, and to that end it focuses on modularity, extensibility, and efficiency.