Quantization is a cheap and easy way to make your DNN run faster and with lower memory requirements. PyTorch offers a few different approaches to quantize your model. In this blog post, we’ll lay a (quick) foundation of quantization in deep learning, and then take a... Read more
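A minimal sketch of one of the approaches the post refers to, post-training dynamic quantization in PyTorch. The toy model below is an illustrative assumption, not the network from the article.

```python
import torch
import torch.nn as nn

# A small stand-in model; any module with Linear layers works the same way.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

# Dynamic quantization: weights of the listed layer types are stored as int8,
# and activations are quantized on the fly at inference time.
quantized_model = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

x = torch.randn(1, 128)
print(quantized_model(x).shape)  # torch.Size([1, 10])
```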
Deep learning is bringing many benefits to the world: solving the 50-year-old protein folding problem, detecting cancer, and improving the power grid. While there is so much that deep learning is powering, we also need to consider the costs. In the quest for more accurate and... Read more
Distillation is a hot research area. For distillation, you first train a deep learning model, the teacher network, to solve your task. Then, you train a student network, which can be any model. While the teacher is trained on real data, the student is trained on the teacher’s outputs. It... Read more
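A minimal sketch of the teacher/student setup the excerpt describes: the student is trained to match the teacher's softened output distribution rather than ground-truth labels. The model sizes, temperature, and random batch are assumptions for illustration only.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# A larger "teacher" and a smaller "student"; the student can be any model.
teacher = nn.Sequential(nn.Linear(32, 128), nn.ReLU(), nn.Linear(128, 10))
student = nn.Sequential(nn.Linear(32, 16), nn.ReLU(), nn.Linear(16, 10))

optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 4.0

x = torch.randn(64, 32)  # a batch of inputs; no labels are needed here

with torch.no_grad():
    teacher_logits = teacher(x)  # the teacher's outputs become the targets

student_logits = student(x)

# KL divergence between the softened teacher and student distributions.
loss = F.kl_div(
    F.log_softmax(student_logits / temperature, dim=-1),
    F.softmax(teacher_logits / temperature, dim=-1),
    reduction="batchmean",
) * temperature ** 2

loss.backward()
optimizer.step()
```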
Emerging technologies in the scientific community are helping researchers achieve more goals and make discoveries. Revolutionary technologies such as artificial intelligence (AI) and machine learning (ML) have already disrupted various industries, from manufacturing to retail and beyond. ML has expedited the discovery process, especially for grad... Read more
Editor’s note: Nicole Königstein is a speaker for ODSC Europe 2022. Be sure to check out her talk, Dynamic and Context-Dependent Stock Price Prediction Using Attention Modules and News Sentiment, there to learn more about financial time series prediction! The use of neural networks is relatively... Read more
One of the biggest challenges in building a deep learning model is choosing the right hyper-parameters. If the hyper-parameters aren’t ideal, the network may not produce optimal results, or development can become far more challenging. Perhaps the most difficult parameter to determine is... Read more
In early 2022, Google AI began releasing details about an exciting new method for training deep neural networks: DeepCTRL. Google’s AI team found a way to control rule strength and accuracy in deep neural networks, allowing for improvements in some crucial AI applications. DeepCTRL is more... Read more
Editor’s Note: Eric is a speaker for ODSC East 2022. Be sure to check out his talk, “Network Analysis Made Simple,” there! Graphs, also known as networks, are ubiquitous in our world. But did you know that graphs are also related to matrices and linear algebra?... Read more
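A minimal sketch of the graph-matrix connection the excerpt hints at: a graph's adjacency matrix lets you answer graph questions with linear algebra. The toy edge list below is an assumption for illustration.

```python
import numpy as np

edges = [(0, 1), (1, 2), (2, 0), (2, 3)]
n = 4

# Build the adjacency matrix A: A[i, j] = 1 if nodes i and j are connected.
A = np.zeros((n, n), dtype=int)
for i, j in edges:
    A[i, j] = 1
    A[j, i] = 1  # treat the graph as undirected

# (A @ A)[i, j] counts the walks of length 2 between nodes i and j.
print(A @ A)

# Node degrees fall out of a simple row sum.
print(A.sum(axis=1))
```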
Editor’s note: Laura is a speaker for ODSC East 2022. Be sure to check out her talk, “Vector Database Workshop Using Weaviate,” to learn more about vector-based search! Traditional search engines perform a keyword-based search. Such search engines return results that contain an exact match or... Read more
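A minimal sketch of the contrast the excerpt draws: instead of exact keyword matching, a vector search ranks documents by the similarity of their embeddings to a query embedding. The random vectors below stand in for embeddings from a real model; they are not part of Weaviate's API.

```python
import numpy as np

rng = np.random.default_rng(0)
doc_embeddings = rng.normal(size=(5, 8))   # 5 documents, 8-dim vectors
query_embedding = rng.normal(size=8)

def cosine_similarity(a, b):
    # Cosine similarity: 1.0 means the vectors point the same way.
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

scores = np.array([cosine_similarity(query_embedding, d) for d in doc_embeddings])
ranking = np.argsort(scores)[::-1]          # best match first
print(ranking, scores[ranking])
```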
A team of researchers from NVIDIA, including Thomas Muller, Alex Evans, Christoph Schied, and Alexander Keller, demonstrated a new method that should enable the efficient use of artificial neural networks for rendering computer graphics. Rendering is a notoriously slow process, so this is a significant development... Read more