The History of Neural Networks and AI: Part II

This article is the second article in a three-part series about the history of neural networks and artificial intelligence. To view the first article, click here.

The History of Neural Networks, Continued

Following the founding era of AI, Donald Michie, a British researcher specialising in artificial intelligence, designed a machine built from matchboxes in the early 1960s that functioned on principles similar to those of neural networks.

The Start of Machine Learning in Gaming

Prior to his work in AI, Michie worked at Bletchley Park during World War II, contributing to the algorithms that deciphered the key German teleprinter cipher and working alongside other brilliant minds who paved the way for computer science, such as Alan Turing. Following his wartime work, in 1960 Michie constructed one of the first programs able to learn to play a perfect game of Noughts and Crosses, also known as Tic-Tac-Toe. The program was called the Machine Educable Noughts And Crosses Engine (MENACE). Since computers as we know them were not readily available, Michie built MENACE from 300 matchboxes and taught it to play Noughts and Crosses.

MENACE worked as follows: each of the 300 matchboxes represented a distinct Noughts and Crosses board configuration, and each box held coloured beads, with each colour corresponding to a possible move in that configuration.

MENACE learned to play Noughts and Crosses by playing hundreds of games against a human opponent. When it was MENACE's turn to move, Michie would randomly pick a bead out of the matchbox that reflected the current board's state; each bead colour represented one of the machine's possible moves in that configuration. If a bead selection led to a poor result, that bead was removed from its matchbox (or a bead of the same colour was added, if it did well). MENACE's play was optimised after hundreds of games, to the point where the program could win a game of Noughts and Crosses in as few moves as possible.

The bead distributions start out effectively random, as do the moves drawn from them (the beads acting like activation weights), and the program slowly becomes optimised. Once optimised, MENACE is like a simplified physical model that implicitly represents how to win a game of Noughts and Crosses.
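As an illustration, a MENACE-style learner can be sketched in a few lines of Python, with dictionaries standing in for matchboxes and integer counts for beads. The class name, reinforcement values, and the rule of always keeping at least one bead are illustrative assumptions, not Michie's exact scheme:

```python
import random
from collections import defaultdict

class Menace:
    """A minimal MENACE-style learner: matchboxes as dicts, beads as counts."""

    def __init__(self, initial_beads=3):
        # Each "matchbox" maps a board state to {move: bead_count};
        # boxes are created lazily the first time a state is seen.
        self.boxes = defaultdict(dict)
        self.initial_beads = initial_beads
        self.history = []  # (state, move) pairs played this game

    def choose_move(self, state, legal_moves):
        box = self.boxes[state]
        for m in legal_moves:
            box.setdefault(m, self.initial_beads)
        # Drawing a bead at random = sampling a move weighted by bead counts
        moves, counts = zip(*box.items())
        move = random.choices(moves, weights=counts)[0]
        self.history.append((state, move))
        return move

    def reinforce(self, won):
        # Reward a win by adding a bead for every move played this game;
        # punish a loss by removing one (never dropping below one bead).
        delta = 1 if won else -1
        for state, move in self.history:
            self.boxes[state][move] = max(1, self.boxes[state][move] + delta)
        self.history.clear()
```

Over many games, moves that tend to lead to wins accumulate beads and are drawn more often, while losing moves fade away, which is the optimisation described above.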

From the initially random, weight-like values that are optimised over time, to the way the system is rewarded or punished in specific areas of the model, it is interesting to consider how the principle behind MENACE aligns with that of a modern neural network. MENACE introduced the idea of machine-human interaction via gaming, revealing the possibilities of evaluating learning algorithms in a new environment. This experiment gave the field of AI a much clearer way to test many machine learning theories in question.


The Start of AI Conferences

In 1969, the first International Joint Conference on Artificial Intelligence (IJCAI) was held, bringing together machine learning and AI researchers, scientists, and others from all parts of the globe. It is now considered the most renowned conference in the field. Each year, the IJCAI presents multiple awards, such as the Best Paper Award, the Computers and Thought Award, and the Award for Research Excellence, to highlight achievements in the AI community. The creation and continued gathering of this conference moved AI toward the centre of the scientific community, as the awards and the research shared motivated further pursuit of the field's many branches.

AI’s Growth Continues: Backpropagation is Introduced

The reach and capabilities of AI continued to grow. In 1970, only a year after the first IJCAI, Finnish mathematician and computer scientist Seppo Linnainmaa devised the first backpropagation algorithm when he introduced the reverse mode of automatic differentiation. It must be noted, however, that Linnainmaa's algorithm was neither known as backpropagation nor used for it at the time. Instead, it was the underlying principle of reverse-mode automatic differentiation: recursively applying the chain rule to the elementary building blocks of a composite function. Regardless, Linnainmaa's work proved essential to the advancement of artificial intelligence.

Backpropagation is imperative to understanding the error rate of modern neural network models. For a deep learning model to correct itself, it must first find where its mistakes lie among the network's many weights. This is done by stepping backwards along the paths of the neural network to individual weights and measuring how much each contributed to the gap between the model's predictions and the actual outcomes. If a prediction is far off from the actual outcome, the model's weights are adjusted accordingly.
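To make the stepping-back idea concrete, here is a hand-worked sketch of backpropagation on a toy two-weight network with a squared-error loss. The network shape, input values, and learning rate are illustrative choices, not details from the article:

```python
def forward(x, w1, w2):
    h = w1 * x        # "hidden" value (kept linear for simplicity)
    y_hat = w2 * h    # prediction
    return h, y_hat

def backward(x, h, y_hat, y, w2):
    # For loss = 0.5 * (y_hat - y)^2, the gradient at the output is:
    d_yhat = y_hat - y
    # Step back along each path with the chain rule to reach each weight
    d_w2 = d_yhat * h        # y_hat depends on w2 via h
    d_w1 = d_yhat * w2 * x   # y_hat depends on w1 via w2 * (w1 * x)
    return d_w1, d_w2

# One gradient-descent update: predict, measure the error, adjust weights
x, y = 2.0, 10.0          # input and target
w1, w2 = 1.0, 1.0         # initial weights
lr = 0.01                 # learning rate

h, y_hat = forward(x, w1, w2)
d_w1, d_w2 = backward(x, h, y_hat, y, w2)
w1 -= lr * d_w1
w2 -= lr * d_w2
```

After this single update the prediction moves closer to the target, and repeating the loop drives the loss down further; deep learning frameworks automate exactly this gradient bookkeeping across millions of weights.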

In 1986, backpropagation as we know it today came about through a paper written by David Rumelhart, Geoffrey Hinton, and Ronald J. Williams, which demonstrated the algorithm's general use with multi-layer neural networks. Based on their insights, backpropagation has become a staple in the training of neural networks and related research.

Hinton was later dubbed the "Godfather of AI" and continued to innovate in the field. He has since taken a notable stance in the politics of AI, warning that it could be existentially dangerous if implemented by militaries in intelligent weapon systems.

Today, neural networks, with their ability to tackle nonlinear tasks, have applications that range from medical diagnosis to machine translation, from face-identification to email-filtering. Even modern game-playing and decision-making makes use of neural networks, hearkening back to the field’s early roots in Tic-Tac-Toe. While it is important to consider the potential negative implications of neural networks in AI as Hinton noted, innovation in this realm continues to advance the state of the art in our quest to endow machines with human-like learning capacity.

Check back to OpenDataScience.com later for the third part of the History of Neural Networks.

Caspar Wylie, ODSC

My name is Caspar Wylie, and I have been passionately computer programming for as long as I can remember. I am currently a teenager, 17, and have taught myself to write code with initial help from an employee at Google in Mountain View, California, who truly motivated me. I program every day and am always putting new ideas into perspective. I try to keep a good balance between jobs and personal projects in order to advance my research and understanding. My interest in computers started with very basic electronic engineering when I was only 6, before I moved on to software development at the age of about 8. Since then, I have experimented with many different areas of computing, from web security to computer vision.