Building Neural Networks with Perceptron, One Year Later — Part I

Introduction

Around one year ago now, I started writing for Open Data Science after presenting Perceptron at the ODSC conference.

Since then, a lot has changed. People have found fascinating uses for the software and have also helped contribute to it. In this series, I’ll present a fresh overview of how Perceptron works and all of its new features.

Recap of Perceptron

Perceptron is a software tool that helps researchers, students, and programmers design, compare, and test artificial neural networks. As it stands, few visual tools do this for free and so simply. The software is largely intended for educational purposes, allowing people to experiment and understand how different attributes within a neural network can lead to different performance and results.

Perceptron’s goal is to help people learn standard neural nets at a deeper level through experimentation. It is not an attempt to compete with the highly impressive, bespoke models in use today. It does not employ a machine learning library like TensorFlow, precisely because I want users to see and experiment with the deeper code that TensorFlow would otherwise handle for them.

Neural networks are a fairly adequate metaphor for the human brain, hence the name. Like a brain, a neural network is made of neurons, each of which takes in multiple inputs and produces a single output.

Because nearly all the neurons influence each other — and are therefore all connected in some way — the network is able to acknowledge and observe all aspects of the given data, and how these different bits of data may or may not relate to each other. It may find very complex patterns that would be invisible to us in a large amount of data.

[Image: diagram of an artificial neural network]

In this visualization of an artificial neural network (ANN), there are three neuron layers: the input layer (left, red), a hidden layer (blue), and the output layer (right, red).

Assume this network is meant to predict the weather. The input values would be attributes relevant to weather, such as time, humidity, air temperature, and pressure. These values would then be fed forward to the hidden layer, being manipulated by the weight values along the way. Initially, the weights are random values, one on every connection, or synapse. The hidden layer’s new values are then fed forward to the output, manipulated by the weight values once again.
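The feedforward step described above can be sketched in plain Python. The input names, layer sizes, seed, and starting weights here are illustrative assumptions, not values taken from Perceptron itself:

```python
import math
import random

def sigmoid(x):
    # Squashing activation: keeps every neuron's output between 0 and 1
    return 1.0 / (1.0 + math.exp(-x))

random.seed(42)

# Hypothetical weather inputs (time of day, humidity, temperature,
# pressure), normalised to the 0..1 range for this sketch
inputs = [0.5, 0.8, 0.6, 0.7]

# Random starting weights: one row per neuron, one value per synapse
weights_ih = [[random.uniform(-1, 1) for _ in inputs] for _ in range(3)]
weights_ho = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(1)]

# Feed forward: each layer takes a weighted sum of the previous
# layer's values, then applies the activation function
hidden = [sigmoid(sum(w * x for w, x in zip(row, inputs)))
          for row in weights_ih]
outputs = [sigmoid(sum(w * h for w, h in zip(row, hidden)))
           for row in weights_ho]
```

Because the weights start random, the single output value is meaningless at this stage; training is what makes it relevant.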

At this point, it is important to recognize that the output would be completely random and incorrect. The manipulation during the feedforward step contained no logic relevant to the problem, because the weights started out random. However, we then train the ANN with a huge dataset of previous weather forecasts containing the same attributes, along with the outcome those attributes produced (the target value).

After the feedforward stage, we can compare the incorrect output to the desired target value and calculate the margin of error. Then we can back-propagate through the network, adjusting each weight value based on how it contributed to that error. If we repeat this forward and backward pass 1,000 more times with each data item, the weights will start to manipulate future inputs in a relevant way. Often, even more success comes from training on the same dataset multiple times.

The feedforward step can be seen as guessing, and the back-propagation step educates that guess based on the margin of error. Over time, the guesses become extremely accurate.
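This guess-then-correct loop can be sketched end to end in plain Python, using the classic sigmoid-gradient weight update. The network shape, seed, learning rate, and OR-function training data are all illustrative assumptions made for this sketch:

```python
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(1)

# Toy network: 2 inputs -> 2 hidden neurons -> 1 output, no biases,
# trained on the OR function; all sizes and values are illustrative
w_ih = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
w_ho = [random.uniform(-1, 1) for _ in range(2)]
lr = 0.5  # learning rate (assumed value)

data = [([0.0, 0.0], 0.0), ([0.0, 1.0], 1.0),
        ([1.0, 0.0], 1.0), ([1.0, 1.0], 1.0)]

def forward(x):
    h = [sigmoid(sum(w * xi for w, xi in zip(row, x))) for row in w_ih]
    o = sigmoid(sum(w * hi for w, hi in zip(w_ho, h)))
    return h, o

def total_error():
    # Sum of squared error margins over the whole dataset
    return sum((t - forward(x)[1]) ** 2 for x, t in data)

error_before = total_error()
for _ in range(1000):              # repeat the forward and backward pass
    for x, target in data:
        h, o = forward(x)
        # How wrong was the guess, scaled by the sigmoid's slope?
        delta_o = (target - o) * o * (1 - o)
        # Each hidden neuron's share of the blame for that error
        delta_h = [delta_o * w_ho[j] * h[j] * (1 - h[j]) for j in range(2)]
        # Adjust every weight in proportion to its contribution
        for j in range(2):
            w_ho[j] += lr * delta_o * h[j]
            for i in range(2):
                w_ih[j][i] += lr * delta_h[j] * x[i]
error_after = total_error()
```

After training, the total error is lower than it was with random weights: the repeated educated guesses have shaped the weights toward the target values.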

Training a neural network is like drawing a maze. As the weights change, new paths are made and existing paths are connected. A fully optimized neural network is a near-perfect maze that directs all inputs to the correct outputs.

Neural networks are famously difficult to understand, and to explain in just a few paragraphs. In fact, neural networks are considered a black box: although scientists know what is required to make them work, no one really understands the actual observations the networks make. A trained network is just thousands of decimal values that somehow come to represent a function, derived through many layers of non-linear observational abstraction; they are only meaningful to the machine. There are now even tools that attempt to un-black-box neural nets.

In the next part, I will demonstrate Perceptron and its new features.

Caspar Wylie, ODSC

My name is Caspar Wylie, and I have been passionately programming computers for as long as I can remember. I am currently 17, and I taught myself to write code with initial help from an employee at Google in Mountain View, California, who truly motivated me. I program every day and am always putting new ideas into perspective. I try to keep a good balance between jobs and personal projects in order to advance my research and understanding. My interest in computers started with very basic electronic engineering when I was only 6, before I moved on to software development at around age 8. Since then, I have experimented with many different areas of computing, from web security to computer vision.
