The History of Neural Networks and AI: Part III
This is the third and final article in a three-part series about the history of neural networks and artificial intelligence. To view the first article, which dives into the earliest developments of artificial intelligence, click here. For a better picture of how neural networks and artificial intelligence technologies... Read more
The History of Neural Networks and AI: Part II
This is the second article in a three-part series about the history of neural networks and artificial intelligence. To view the first article, click here. After the beginning era of AI, a British researcher specializing in artificial intelligence, Donald Michie, designed a machine made from matchboxes in 1963... Read more
An Overview of Proxy-label Approaches for Semi-supervised Learning
Note: Parts of this post are based on my ACL 2018 paper Strong Baselines for Neural Semi-supervised Learning under Domain Shift with Barbara Plank. Table of contents: Self-training Multi-view training Co-training Democratic Co-learning Tri-training Tri-training with disagreement Asymmetric tri-training Multi-task tri-training Self-ensembling Ladder networks Virtual Adversarial Training Π model Temporal Ensembling Mean Teacher... Read more
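Self-training, the first method in the list above, can be summarized in a few lines: train on the labeled data, pseudo-label the unlabeled examples the model is most confident about, fold them into the training set, and repeat. A minimal sketch follows; the tiny 1-D nearest-class-mean classifier, the `0.9` confidence threshold, and all names are my own illustrative stand-ins, not code from the post.

```python
import math


class MeanClassifier:
    """Toy 1-D classifier: assigns a point to the nearer class mean,
    with a softmax over negative distances as a confidence score.
    A stand-in for any model exposing fit/predict_proba."""

    def fit(self, xs, ys):
        class0 = [x for x, y in zip(xs, ys) if y == 0]
        class1 = [x for x, y in zip(xs, ys) if y == 1]
        self.c0 = sum(class0) / len(class0)
        self.c1 = sum(class1) / len(class1)

    def predict_proba(self, x):
        e0 = math.exp(-abs(x - self.c0))
        e1 = math.exp(-abs(x - self.c1))
        return e0 / (e0 + e1), e1 / (e0 + e1)


def self_train(labeled, unlabeled, threshold=0.9, rounds=5):
    """Iteratively pseudo-label high-confidence unlabeled points."""
    labeled, unlabeled = list(labeled), list(unlabeled)
    clf = MeanClassifier()
    for _ in range(rounds):
        clf.fit([x for x, _ in labeled], [y for _, y in labeled])
        confident, rest = [], []
        for x in unlabeled:
            p0, p1 = clf.predict_proba(x)
            if max(p0, p1) >= threshold:
                confident.append((x, 0 if p0 > p1 else 1))
            else:
                rest.append(x)  # too uncertain; leave unlabeled
        if not confident:
            break  # nothing new to add, stop early
        labeled.extend(confident)
        unlabeled = rest
    return clf, labeled
```

For example, with labeled points `[(0.0, 0), (1.0, 0), (9.0, 1), (10.0, 1)]` and unlabeled points `[0.5, 9.5, 5.0]`, the first round confidently pseudo-labels 0.5 and 9.5, while 5.0 (equidistant from both means) stays unlabeled. The other methods in the table of contents refine this basic loop, e.g. tri-training uses the agreement of two models to label data for a third.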
Datasets Are Books, Not Houses
What’s content addressing? What does it have to do with datasets? Why am I on this site in the first place? Read on, dear reader. Read on. The world of linked data is built on shaky foundations that prevent a true data commons from emerging. The problem isn’t with... Read more
The History of Neural Networks and AI: Part I
Although machine learning has only become mainstream in the last decade, there are many essential contributors to the field dating back as far as the 1940s. In order to understand the infinite possibilities presented today in the fields of AI, deep learning and more, it is important to understand... Read more
Understanding Neural Network Bias Values
In my other articles, I have discussed the many different neural network hyperparameters that contribute to optimal success. While hyperparameters are crucial for training successful algorithms, the importance of neural network bias values is not to be forgotten either. In this article I’ll delve into the... Read more
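The role of a bias value is easy to see in a single neuron: it shifts the weighted sum before the activation, letting the neuron fire (or stay off) even when every input is zero. A minimal sketch, my own illustration rather than code from the article:

```python
import math


def neuron(inputs, weights, bias):
    """A single neuron: weighted sum of inputs plus bias,
    passed through a sigmoid activation."""
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))


# With all-zero inputs the weights contribute nothing, so the bias
# alone sets the activation: bias = 0 gives sigmoid(0) = 0.5, and a
# large negative bias pushes the output toward 0 ("off") regardless
# of the weights.
```

For instance, `neuron([0.0, 0.0], [0.5, 0.5], 0.0)` returns exactly 0.5, while `neuron([0.0, 0.0], [0.5, 0.5], -5.0)` is below 0.01.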
An Infinite Parade of Giraffes: Collaborative Cartooning with AI
What is recognizable about a particular artist’s style? What parts can be delegated to an assistant? Can AI play the role of assistant or even collaborator? How would we ever get enough data for training? How little data could we get away with? Exploring these questions using GANs, image... Read more
Pervasive Simulator Misuse with Reinforcement Learning
The surge of interest in reinforcement learning is great fun, but I often see confused choices in applying RL algorithms to solve problems. There are two purposes for which you might use a world simulator in reinforcement learning: Reinforcement Learning Research: You might be interested in creating reinforcement learning algorithms for... Read more
To solve machine learning problems, a wide range of different techniques and methods is required, some better suited than others. As a data scientist it can be difficult to keep track of all of them and choose which works best for a specific scenario. If one is starting out in this... Read more
Requests for Research
Table of contents: Task-independent data augmentation for NLP Few-shot learning for NLP Transfer learning for NLP Multi-task learning Cross-lingual learning Task-independent architecture improvements It can be hard to find compelling topics to work on and know what questions are interesting to ask when you are just starting as a... Read more