Recurrent Neural Networks hold great promise as general sequence learning algorithms. As such, they are a very promising tool for text analysis. However, outside of very specific use cases such as handwriting recognition and, more recently, machine translation, they have not seen widespread use. Why has this been the case?
In this presentation, we will first introduce RNNs as a concept. Then we will sketch how to implement them and cover the tricks necessary to make them work well. With the basics covered, we will investigate using RNNs as general text classification and regression models, examining where they succeed and where they fail compared to more traditional text analysis models. A straightforward open-source Python and Theano library for training RNNs with a scikit-learn-style interface will be introduced, and we'll see how to use it through a tutorial on a real-world text dataset.
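To give a feel for what a scikit-learn-style interface to an RNN looks like, here is a minimal illustrative sketch in plain NumPy. This is a hypothetical `TinyRNNClassifier` written for this abstract, not the actual library's API; the training loop is elided, and only the `fit`/`predict` shape of the interface and a vanilla RNN forward pass are shown.

```python
import numpy as np

class TinyRNNClassifier:
    """Illustrative scikit-learn-style RNN classifier (hypothetical sketch,
    not the library introduced in the talk)."""

    def __init__(self, n_hidden=16, n_classes=2, seed=0):
        self.n_hidden = n_hidden
        self.n_classes = n_classes
        self.rng = np.random.RandomState(seed)

    def fit(self, X, y):
        # X: list of sequences, each a (T, n_features) array; y: labels.
        n_features = X[0].shape[1]
        scale = 0.1
        self.Wxh = self.rng.randn(n_features, self.n_hidden) * scale
        self.Whh = self.rng.randn(self.n_hidden, self.n_hidden) * scale
        self.Why = self.rng.randn(self.n_hidden, self.n_classes) * scale
        # A real implementation would run backpropagation through time
        # (BPTT) here; the training loop is omitted for brevity.
        return self

    def _forward(self, seq):
        # Vanilla RNN: h_t = tanh(x_t @ Wxh + h_{t-1} @ Whh),
        # then a linear readout from the final hidden state.
        h = np.zeros(self.n_hidden)
        for x_t in seq:
            h = np.tanh(x_t @ self.Wxh + h @ self.Whh)
        return h @ self.Why

    def predict(self, X):
        # Return the argmax class score for each sequence.
        return np.array([np.argmax(self._forward(seq)) for seq in X])
```

Usage mirrors any scikit-learn estimator: `clf = TinyRNNClassifier().fit(X_train, y_train)` followed by `clf.predict(X_test)`, which is exactly the familiarity the scikit-learn-style interface is meant to buy.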
Alec is addicted to exploring datasets and building practical products out of them, so his role as Head of Research at indico is pretty much the perfect fit. He spends a lot of his time exploring new areas of the field, leading machine learning workshops at conferences and generally teaching computers to do amazing things.
When he can be dragged away from his machine learning projects, Alec loves scuba diving (particularly off the southern coast of Thailand), destroying the competition in Dota 2, and creating computer art.