Attention and Memory in Deep Learning and NLP
A recent trend in Deep Learning is Attention Mechanisms. In an interview, Ilya Sutskever, now the research director of OpenAI, mentioned that Attention Mechanisms are one of the most exciting advancements, and that they are here to stay. That sounds exciting. But what are Attention Mechanisms? Attention Mechanisms in... Read more
Amazon Enters The Open-Source Deep Learning Fray
The Synergy Research Group’s last report of 2015 attributed 31% of the cloud computing market to Amazon’s Amazon Web Services (AWS), nearly four times as much as its nearest competitor, Microsoft. This would come as no surprise to any programmer, Data Engineer, or Data Scientist: AWS is a mainstay... Read more
Productionizing Deep Learning from the Ground Up – Adam Gibson ODSC Boston 2015
Productionizing Deep Learning From the Ground Up from odsc Presenter Bio Adam Gibson is a deep learning specialist based in San Francisco assisting Fortune 500 companies, hedge funds, PR firms, and startup accelerators with their machine learning projects. Adam has a strong track... Read more
Recurrent Neural Networks for Text Analysis – Alec Radford ODSC Boston 2015
Recurrent Neural Networks for Text Analysis from odsc Recurrent Neural Networks hold great promise as general sequence learning algorithms. As such, they are a very promising tool for text analysis. However, outside of very specific use cases such as handwriting recognition and, recently, machine translation, they have not seen... Read more
Scalable Data Science and Deep Learning with H2O – Arno Candel ODSC Boston 2015
Scalable Data Science and Deep Learning with H2O from odsc The era of Big Data has passed, and the era of sensory overload – that is, the proliferation of sensor data – is upon us. The challenge today is how to create the next generation of business and consumer... Read more
DIY Deep Learning with Caffe Workshop – Kate Saenko ODSC Boston 2015
DIY Deep Learning with Caffe Workshop from odsc Caffe (Convolutional Architecture for Fast Feature Embedding) is a deep learning framework made with expression, speed, and modularity in mind. It is developed by the Berkeley Vision and Learning Center (BVLC) and by community contributors. Caffe’s expressive architecture encourages application and... Read more