Building Neural Networks with Perceptron, One Year Later — Part III
This is the third part in a three-part series. The first part can be read here and the second part here. Inside Perceptron: each neuron in a neural network will, at some point, have a value. Each weight (the links between neurons) will also have a value, all of which the user... Read more
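The excerpt describes neurons and weights as each carrying a value that the user can set. As a rough sketch of that idea (illustrative Python, not code from the Perceptron series; the AND-gate weights and bias are hand-picked for the example), a single neuron looks like this:

```python
import numpy as np

# A minimal single neuron: inputs flow in over weighted links,
# are summed with a bias, and pass through a step activation.
def perceptron(inputs, weights, bias):
    # Each weight is the value attached to one input link.
    weighted_sum = np.dot(inputs, weights) + bias
    # The neuron's own value: 1 if the weighted sum crosses the threshold.
    return 1 if weighted_sum > 0 else 0

# Hand-set values (hypothetical example: an AND gate).
weights = np.array([1.0, 1.0])
bias = -1.5
for x in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(x, "->", perceptron(np.array(x), weights, bias))
```

Training replaces the hand-picked values with learned ones, but the layout the excerpt describes (one value per link, one value per neuron) stays the same.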
Classic Regularization Techniques in Neural Networks
Neural networks are notoriously tricky to optimize. There isn’t a way to compute a global optimum for weight parameters, so we’re left fishing around in the dark for acceptable solutions while trying to ensure we don’t overfit the data. This is a quick overview of the most popular approaches... Read more
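The excerpt cuts off before naming any technique, but a standard example of the classic approaches such an overview covers is L2 regularization (weight decay). The sketch below is my own illustration, not from the post; the function names and the penalty coefficient `lam` are made up for the example:

```python
import numpy as np

# L2 regularization (weight decay): add lam * ||w||^2 to the loss so
# the optimizer prefers smaller weights, which tend to overfit less.
def mse_loss(w, X, y):
    return np.mean((X @ w - y) ** 2)

def l2_regularized_loss(w, X, y, lam=0.01):  # lam is illustrative
    return mse_loss(w, X, y) + lam * np.sum(w ** 2)

# The penalty grows with the weights, independently of the data fit.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(10, 3)), rng.normal(size=10)
w = rng.normal(size=3)
print(mse_loss(w, X, y), l2_regularized_loss(w, X, y))
```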
Choosing the right activation function in a neural network
Activation functions are one of the many choices you must make to get optimal performance from your neural network. In this article I’m going to assume you understand the basics of how a neural network works, and will... Read more
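For concreteness, here are three activation functions that appear in almost any such comparison; the sketch below is illustrative Python, not the article's own code, evaluating each on the same inputs:

```python
import numpy as np

# Three common choices, evaluated side by side.
def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))  # squashes values into (0, 1)

def tanh(x):
    return np.tanh(x)                # squashes into (-1, 1), zero-centered

def relu(x):
    return np.maximum(0.0, x)        # identity for x > 0, zero otherwise

x = np.array([-2.0, -0.5, 0.0, 0.5, 2.0])
for f in (sigmoid, tanh, relu):
    print(f.__name__, f(x))
```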
An Overview of Multi-Task Learning in Deep Neural Networks
Note: If you are looking for a review paper, this blog post is also available as an article on arXiv. Table of contents: Introduction; Motivation; Two MTL methods for Deep Learning (Hard parameter sharing, Soft parameter sharing); Why does MTL work? (Implicit data augmentation, Attention focusing, Eavesdropping, Representation bias, Regularization)... Read more
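Of the two MTL methods the table of contents lists, hard parameter sharing is the easier to picture: every task runs through one shared trunk of hidden layers and keeps its own output head. Here is a minimal PyTorch sketch, assuming two tasks; the class name and all layer sizes are illustrative, not from the post:

```python
import torch
import torch.nn as nn

# Hard parameter sharing: all tasks run through the same hidden layers
# ("shared trunk"); each task keeps its own small output head.
class HardSharingMTL(nn.Module):  # hypothetical name and sizes
    def __init__(self, in_dim=16, hidden=32, task_dims=(3, 1)):
        super().__init__()
        self.shared = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.heads = nn.ModuleList([nn.Linear(hidden, d) for d in task_dims])

    def forward(self, x):
        h = self.shared(x)  # one forward pass, shared by all tasks
        return [head(h) for head in self.heads]

model = HardSharingMTL()
outputs = model(torch.randn(4, 16))
print([tuple(o.shape) for o in outputs])  # [(4, 3), (4, 1)]
```

Soft parameter sharing, by contrast, gives each task its own full model and encourages the per-task weights to stay close via a distance-based regularizer.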