Deep learning continues to be a hot topic as growing demand for AI-driven applications, greater availability of data, and the need for increased explainability push the field forward. Deep learning will not only remain a critical area of research and development today but will become even more important in the future. So let’s take a quick dive into some of the big deep learning sessions coming up at ODSC East May 9th-11th.
Unifying ML With One Line of Code
Can there be unity in machine learning frameworks? In this session, Ivy’s CEO Daniel Lenton, PhD will show that it’s possible. Ivy is a tool that unifies different machine learning frameworks by transpiling ML code to run in any other ML framework with the addition of a single function decorator. In the workshop, he will demonstrate how Ivy can be used to transpile DeepMind’s PerceiverIO implementation.
Additionally, the session will show how models can be implemented in Ivy directly, which can then be run with any ML framework in the backend without the need to transpile any code. The goal is to address concerns about creating a new incompatible standard and worsening ML fragmentation by showing that Ivy can help unify ML frameworks.
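To make the idea concrete, here is a minimal sketch of what decorator-based framework unification looks like conceptually: a unified set of ops is dispatched to whichever backend is active, so the same model code can run against any registered framework. All names here are hypothetical illustrations, not Ivy’s actual API.

```python
import numpy as np

# Toy "backends": each maps a unified op name to a concrete implementation.
# (Hypothetical illustration of the idea; not Ivy's real API.)
BACKENDS = {
    "numpy": {"matmul": np.matmul, "relu": lambda x: np.maximum(x, 0.0)},
    # a real system would also register torch / jax / tensorflow ops here
}
_active = ["numpy"]

def op(name):
    """Dispatch a unified op name to the active backend's implementation."""
    return BACKENDS[_active[0]][name]

def transpiled(backend):
    """Decorator: run the wrapped function with `backend` active, so the
    same model code executes against any registered framework."""
    def deco(fn):
        def wrapped(*args, **kwargs):
            previous, _active[0] = _active[0], backend
            try:
                return fn(*args, **kwargs)
            finally:
                _active[0] = previous
        return wrapped
    return deco

@transpiled("numpy")
def tiny_layer(x, w):
    return op("relu")(op("matmul")(x, w))

out = tiny_layer(np.array([[1.0, -2.0]]), np.array([[1.0], [2.0]]))
print(out)  # relu(1*1 + (-2)*2) = relu(-3) = 0
```

Swapping the decorator argument (once other backends are registered) is all it would take to retarget the function, which is the spirit of the one-line unification the session promises.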
Deep Learning with PyTorch and TensorFlow Parts 1 & 2
Are you ready to tackle the fundamentals of deep learning with PyTorch and TensorFlow? Then this two-part workshop is for you. Dr. Jon Krohn will introduce participants to the essential theory behind deep learning and provide interactive examples using PyTorch, TensorFlow 2, and Keras – the principal Python libraries for deep learning. These workshops are designed to give students a complete intuitive understanding of deep learning’s underlying foundations and teach them how to train deep learning models following the latest best practices.
With Dr. Jon Krohn you’ll also get hands-on code demos in Jupyter notebooks and strategic advice for overcoming common pitfalls. This is the perfect session if you have no previous understanding of artificial neural networks.
Deepfakes: How’re They Made, Detected, and How They Impact Society
You’ve likely seen viral videos of convincing deepfakes circulating online over the last few years, but what’s the whole story? Join Noah Giansiracusa, PhD as he explores both the social impact of deepfakes and the technical side of how they’re made and detected. As the technology continues to advance, deepfake photos and videos are already having an impact on many industries, particularly in cybersecurity and privacy.
In this session, you’ll get a broad overview of the concepts, so that those without a technical background will have a good understanding. There will also be a brief tour at the end of more specific resources for those interested in exploring some relevant Python tools to detect deepfakes in greater detail.
Colossal-AI: A Unified Deep Learning System For Large-Scale Parallel Training
In this tutorial, James Demmel, PhD, will speak on the challenges posed by ever-larger model sizes and how implementing complex distributed training solutions, especially model parallelism, requires domain expertise in computer systems and architecture, which poses a challenge for AI researchers.
This is where you’ll be introduced to Colossal-AI, a unified parallel training system designed to seamlessly integrate different paradigms of parallelization techniques. These include data parallelism, pipeline parallelism, multiple forms of tensor parallelism, and sequence parallelism. Colossal-AI aims to simplify the process of writing distributed models for the AI community, allowing them to focus on developing the model architecture while separating the concerns of distributed training from the development process.
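The simplest of these paradigms, data parallelism, can be sketched in a few lines of plain numpy: each worker computes the gradient on its own shard of the batch, and averaging the per-worker gradients reproduces the full-batch gradient. This is a conceptual illustration only, not Colossal-AI code.

```python
import numpy as np

# Minimal sketch of data parallelism for a linear model y = X @ w:
# each "worker" gets a shard of the batch, computes a local gradient,
# and the gradients are averaged -- equivalent to the full-batch gradient.
rng = np.random.default_rng(0)
X = rng.normal(size=(8, 3))
y = rng.normal(size=(8, 1))
w = np.zeros((3, 1))

def grad(X_shard, y_shard, w):
    # gradient of mean squared error (1/2n) * ||X w - y||^2
    n = len(X_shard)
    return X_shard.T @ (X_shard @ w - y_shard) / n

# "all-reduce" step: average the per-worker gradients
shards = np.split(np.arange(8), 2)          # two workers, 4 examples each
local = [grad(X[i], y[i], w) for i in shards]
g_parallel = sum(local) / len(local)

g_full = grad(X, y, w)                      # single-worker reference
print(np.allclose(g_parallel, g_full))      # same update, work split in two
```

Tensor, pipeline, and sequence parallelism instead split the model, its layers, or the sequence dimension across devices, which is where the systems expertise the talk mentions becomes essential.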
Building Computer Vision Models and Optimizing Hyperparameters using PyTorch and SAS Viya
In this two-part workshop, Ari Zitin & Robert Blanchard of SAS show participants how integrating PyTorch with SAS improves model development and deployment for computer vision applications using deep learning models. You’ll learn how to leverage the benefits of both technologies and how to improve model accuracy using combined global and local search strategies, such as genetic algorithms and generating set searches. The workshop emphasizes the use of the TorchScript language and SAS capabilities to develop and deploy deep learning models.
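The combined global-plus-local search idea can be illustrated with a toy genetic algorithm that narrows in on good hyperparameters, followed by a small-step local refinement. The objective function and all parameter names below are stand-ins for illustration, not SAS Viya’s actual tuner.

```python
import random

# Toy global+local hyperparameter search: a tiny genetic algorithm (global)
# followed by a local refinement with a shrinking mutation scale.
random.seed(0)

def loss(lr, reg):
    # stand-in validation loss with optimum at lr=0.1, reg=0.01
    return (lr - 0.1) ** 2 + 10 * (reg - 0.01) ** 2

def mutate(ind, scale=0.05):
    return tuple(max(1e-4, g + random.gauss(0, scale)) for g in ind)

# global phase: elitist genetic algorithm over (lr, reg)
pop = [(random.uniform(0, 1), random.uniform(0, 0.5)) for _ in range(20)]
for _ in range(30):
    pop.sort(key=lambda ind: loss(*ind))
    survivors = pop[:5]                       # keep the fittest
    pop = survivors + [mutate(random.choice(survivors)) for _ in range(15)]

best = min(pop, key=lambda ind: loss(*ind))

# local phase: small perturbations around the incumbent
for scale in (0.01, 0.001):
    for _ in range(50):
        cand = mutate(best, scale)
        if loss(*cand) < loss(*best):
            best = cand

print(best)  # converges near (0.1, 0.01)
```

The global phase explores broadly to escape poor regions; the local phase polishes the best candidate, which is the general division of labor behind the strategies the workshop covers.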
Building Recommendation Systems
Join SeMI Technologies’ Connor Shorten, PhD as he discusses the importance of personalized recommendation systems in an environment that sees generative AI creating more digital content. He’ll explain the underlying structure of most recommendation tasks and the need for efficient indexing and intuitive APIs. Here, Weaviate will be introduced as an open-source vector search database with unique features for serving millions of users worldwide. The talk also presents Ref2Vec, a new feature in Weaviate for representing users and building recommendation systems through a graph-structured interface.
Join us and you’ll also get a hands-on example of personalized search using the open-source Weaviate engine, covering the details of Collaborative Filtering, HDBSCAN clustering, and Graph Neural Networks. You’ll come away more knowledgeable about the impact of vector search technology on recommendation and how to use it to build your own applications.
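The core idea behind this style of recommendation — representing a user as the average of the vectors of items they’ve interacted with, then retrieving the nearest unseen items — can be sketched in plain numpy. The item names and two-dimensional vectors below are made up for readability; a real deployment would store learned embeddings in a vector database like Weaviate.

```python
import numpy as np

# Toy item embeddings (2-D for readability; real systems use hundreds of dims).
items = {
    "action_movie": np.array([0.9, 0.1]),
    "war_film":     np.array([0.8, 0.2]),
    "romcom":       np.array([0.1, 0.9]),
    "drama":        np.array([0.3, 0.7]),
}

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# User vector: the centroid of items the user has liked.
liked = ["action_movie", "war_film"]
user_vec = np.mean([items[k] for k in liked], axis=0)

# Recommend the most similar item the user hasn't seen.
candidates = {k: cosine(user_vec, v) for k, v in items.items() if k not in liked}
best = max(candidates, key=candidates.get)
print(best)  # "drama" -- closer to the user's action-leaning centroid
```

Updating a recommendation then reduces to recomputing one centroid and running a nearest-neighbor query, which is why efficient vector indexing matters at scale.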
Text and Code Embeddings
Arvind Neelakantan, PhD of OpenAI introduces the concept of embeddings: numerical representations of concepts encoded as sequences of numbers. In this talk, Arvind will focus on how embeddings are useful for natural language and code tasks. He’ll highlight OpenAI’s development of embeddings available through its API, and how these embeddings outperform top models on three standard benchmarks, including a 20% relative improvement in code search.
From this talk, you’ll gain insight into the importance of embeddings in enabling computers to understand language and code, and see how OpenAI’s embeddings perform in comparison to other models.
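A quick sketch of how embeddings power tasks like code search: texts become vectors, and semantic relevance becomes cosine similarity between vectors. The vectors below are hand-made for illustration; a real system would obtain them from an embedding model such as those in OpenAI’s API.

```python
import numpy as np

# Toy illustration of embedding-based search: rank corpus entries by
# cosine similarity to a query vector. (Vectors are hand-made here;
# a real system would call an embedding model to produce them.)
corpus = {
    "def add(a, b): return a + b":   np.array([0.9, 0.1, 0.0]),
    "def sub(a, b): return a - b":   np.array([0.6, 0.4, 0.1]),
    "The weather is lovely today.":  np.array([0.0, 0.1, 0.9]),
}
# pretend embedding of the query "function that sums two numbers"
query_vec = np.array([0.85, 0.15, 0.05])

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

ranked = sorted(corpus, key=lambda t: cosine(query_vec, corpus[t]), reverse=True)
print(ranked[0])  # the addition snippet ranks first for a "sum" query
```

Because the query and the code live in the same vector space, natural-language queries can retrieve code directly — the capability behind the code-search gains the talk highlights.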
Causation, Collision, and Confusion: Avoiding the most dangerous error in Statistics
Join Brilliant.org’s Allen Downey, PhD as he explains what collision bias is and why it can be one of the most dangerous errors in statistics. If these errors aren’t kept in check, the reliability and generalizability of models are at risk. During this talk, you’ll also learn how collision bias has caused several famous historical errors, and how it can be induced by accident, be subtle, and produce an error larger than the effect being measured.
He’ll also provide examples of how collision bias can be caused by biased sampling processes or inappropriate statistical controls, and introduce causal diagrams as a tool for representing causal hypotheses and diagnosing collision bias.
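Collision bias is easy to demonstrate with a short simulation: two independent traits, and a selection variable (the “collider”) that depends on both. Conditioning on the collider manufactures a correlation that doesn’t exist in the population. The variable names are illustrative, not from the talk.

```python
import numpy as np

# Simulating collision bias: two independent traits, and a "collider"
# (selection into a sample) that depends on both. Conditioning on the
# collider induces a spurious negative correlation.
rng = np.random.default_rng(42)
talent = rng.normal(size=100_000)
looks = rng.normal(size=100_000)     # generated independently of talent

famous = (talent + looks) > 2.0      # selection depends on BOTH traits

r_all = np.corrcoef(talent, looks)[0, 1]
r_famous = np.corrcoef(talent[famous], looks[famous])[0, 1]
print(f"population: {r_all:+.2f}")   # near 0: the traits are independent
print(f"selected:   {r_famous:+.2f}")  # clearly negative: collider bias
```

Among the selected group, high talent makes high looks less necessary for selection (and vice versa), so the traits appear negatively related — a biased sampling process doing exactly what the talk warns about.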
Video PreTraining (VPT): Learning to Act by Watching Unlabeled Online Videos
In this talk, Jeff Clune, PhD, Senior Research Advisor at DeepMind, will discuss extending the GPT pretraining paradigm to train a neural network to play Minecraft using a massive unlabeled video dataset of human players playing Minecraft, along with a small amount of labeled contractor data. He’ll also explain how, with a bit of fine-tuning, the model can learn to perform tasks like crafting diamond tools using the native human interface of keypresses and mouse movements. In the future, this approach could help automate much of the work humans do on computers.
Introduction to Topological Data Analysis and its Advantages in Machine Learning
Finally, join Christian Ramirez, Machine Learning Technical Leader at MercadoLibre as he introduces Topological Data Analysis (TDA) in this fascinating talk. TDA is a mathematical method for analyzing complex data sets and uncovering hidden patterns and features that traditional methods cannot easily identify. He’ll go on to cover key concepts such as topological spaces and persistent homology and discuss how TDA can be applied in machine learning using tools like the Mapper algorithm and the TDA package in R.
You’ll also learn about the advantages of TDA, including its ability to handle high-dimensional data, its robustness to noise and missing data, and its interpretability, with real-world examples and case studies to demonstrate its power.
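To give a flavor of persistent homology, here is a sketch of its simplest (0-dimensional) case: grow balls around each point and record the distance at which connected components merge. Components that survive across a wide range of scales reveal genuine cluster structure; short-lived ones are noise. This is a from-scratch illustration, not the R TDA package or Mapper.

```python
import itertools
import numpy as np

# 0-dimensional persistent homology sketch: process edges in order of
# length and record the scale at which components merge ("die").
points = np.array([[0.0], [0.1], [0.2],      # tight cluster
                   [5.0], [5.1]])            # second cluster, far away

parent = list(range(len(points)))            # union-find over components
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]        # path compression
        i = parent[i]
    return i

edges = sorted(
    (float(np.linalg.norm(points[i] - points[j])), i, j)
    for i, j in itertools.combinations(range(len(points)), 2)
)

deaths = []
for dist, i, j in edges:
    ri, rj = find(i), find(j)
    if ri != rj:                             # two components merge here
        parent[ri] = rj
        deaths.append(round(dist, 3))

print(deaths)  # three merges near 0.1, one near 4.8: two real clusters
```

The large gap between the short-lived merges and the final one is the "persistence" signal: it distinguishes the two true clusters from noise regardless of scale, which is the robustness property described above.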
Aren’t these some amazing sessions? The future will be built on applications powered by deep learning. We’re already seeing its impact in healthcare, finance, and transportation, and as more datasets become available and supporting technology advances, deep learning is poised to rock the foundations of robotics.
Learn more about Deep Learning at ODSC East’s Machine Learning/Deep Learning track. There you’ll learn from those who are driving the future.