Distributed training with PyTorch and Azure ML
By Beatriz Stollnitz, Principal Cloud Advocate at Microsoft. Suppose you have a very large PyTorch model, and you’ve already tried many common tricks to speed up training: you optimized your code, you moved training to the cloud and selected a fast GPU VM, you installed software packages that... Read more
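The usual next step the post alludes to is distributed data parallelism. As a rough sketch (not the article's own code), here is PyTorch's `DistributedDataParallel` wrapped around a model; for illustration it runs as a single CPU process with the `gloo` backend, whereas a real Azure ML job would get its rank and world size from the cluster:

```python
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

# Single-process setup so the sketch runs locally; on a cluster,
# MASTER_ADDR, rank, and world_size come from the launcher.
os.environ.setdefault("MASTER_ADDR", "127.0.0.1")
os.environ.setdefault("MASTER_PORT", "29500")
dist.init_process_group(backend="gloo", rank=0, world_size=1)

model = torch.nn.Linear(20, 2)
ddp_model = DDP(model)  # gradients are all-reduced across ranks

optimizer = torch.optim.SGD(ddp_model.parameters(), lr=0.01)
inputs, targets = torch.randn(8, 20), torch.randn(8, 2)

loss = torch.nn.functional.mse_loss(ddp_model(inputs), targets)
loss.backward()   # each rank averages gradients with its peers
optimizer.step()

dist.destroy_process_group()
```

With more than one process, each rank would also use a `DistributedSampler` so every worker sees a different shard of the data.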
Faster Training and Inference Using the Azure Container for PyTorch in Azure ML
By Beatriz Stollnitz, Principal Cloud Advocate at Microsoft. If you’ve ever wished that you could speed up the training of a large PyTorch model, then this post is for you! The Azure ML team has recently released the public preview of a new curated environment that... Read more
Training Your PyTorch Model Using Components and Pipelines in Azure ML
By Beatriz Stollnitz, Principal Cloud Advocate at Microsoft. In this post, we’ll explore how you can take your PyTorch model training to the next level, using Azure ML. In particular, we’ll see how you can split your training code into multiple steps that can be easily... Read more
Training and Deploying Your PyTorch Model in the Cloud with Azure ML
By Beatriz Stollnitz, Principal Cloud Advocate at Microsoft. You’ve been training your PyTorch models on your machine, and getting by just fine. Why would you want to train and deploy them in the cloud? Training in the cloud will allow you to handle larger ML models... Read more
AMD Joins Other Big Tech as a Founding Member of the PyTorch Foundation
Chipmaker AMD (Advanced Micro Devices) has become a founding member of the PyTorch Foundation. This non-profit, which will be a part of the Linux Foundation, will work to advance artificial intelligence tooling by creating an open-source environment that aims to help the popular framework... Read more
Daniel Voigt Godoy on Deep Learning and Starting PyTorch
PyTorch has quickly been gaining steam as a leading deep learning framework, with many practitioners lately choosing it over TensorFlow as their framework of choice. Ahead of his upcoming Ai+ Training session, PyTorch 101, coming up this August 24th, we spoke with Daniel Voigt Godoy,... Read more
Practical Quantization in PyTorch
Quantization is a cheap and easy way to make your DNN run faster and with lower memory requirements. PyTorch offers a few different approaches to quantize your model. In this blog post, we’ll lay a (quick) foundation of quantization in deep learning, and then take a... Read more
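The cheapest of those approaches is dynamic quantization, which converts a trained model's `Linear` layers to int8 weights while quantizing activations on the fly. A minimal sketch (not code from the post itself):

```python
import torch

# A small float32 model; dynamic quantization rewrites its Linear
# layers to use int8 weights, shrinking memory and speeding up CPU inference.
model = torch.nn.Sequential(
    torch.nn.Linear(128, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 10),
)

quantized = torch.ao.quantization.quantize_dynamic(
    model, {torch.nn.Linear}, dtype=torch.qint8
)

x = torch.randn(4, 128)
out = quantized(x)  # same call signature as the original model
```

Static and quantization-aware training approaches trade more setup for better accuracy, which is what the foundation laid in the post builds toward.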
Higher-level PyTorch APIs: A short introduction to PyTorch Lightning 
In recent years, the PyTorch community developed several different libraries and APIs on top of PyTorch. PyTorch Lightning (Lightning for short) is one of them, and it makes training deep neural networks simpler by removing much of the boilerplate code. However, while Lightning’s focus lies in... Read more
Optimizing Your Model for Inference with PyTorch Quantization
Editor’s Note: Jerry is a speaker for ODSC East 2022. Be sure to check out his talk, “Quantization in PyTorch,” to learn more about PyTorch quantization! Quantization is a common technique that people use to make their model run faster, with lower memory footprint and lower... Read more
The ODSC Warmup Guide to PyTorch
PyTorch is an open-source framework built for developing machine learning and deep learning models. In particular, this framework provides the stability and support required for building computational models in the development phase and deploying them in the production phase. PyTorch functionalities are extensible with other Python... Read more
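As a warmup of our own (a sketch, not code from the guide), the core pattern behind most PyTorch development is a model, a loss, an optimizer, and a gradient loop:

```python
import torch

# A toy regression model learning to sum its input features.
model = torch.nn.Sequential(
    torch.nn.Linear(3, 16),
    torch.nn.ReLU(),
    torch.nn.Linear(16, 1),
)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = torch.nn.MSELoss()

x = torch.randn(32, 3)
y = x.sum(dim=1, keepdim=True)  # target: sum of the features

for _ in range(50):
    optimizer.zero_grad()        # clear gradients from the last step
    loss = loss_fn(model(x), y)  # forward pass
    loss.backward()              # autograd computes gradients
    optimizer.step()             # update weights
```

Every more advanced workflow in the posts above, from quantization to distributed training on Azure ML, is built on this same loop.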