Neural Network Programming Part 1

Overview

Lately, I've been programming a neural network. I decided to do this in PHP, of all languages, as it's convenient and familiar to me. Eventually, I plan to port it to Python, Java, and C++. For the rest of this article, I'm going to assume you have some background knowledge of what a neural network does.

How is a Neural Network trained?

A neural network is trained in much the same way a child learns while growing up. For example, if you wanted to teach a child what a chair is, you would show it a bunch of chairs and a bunch of non-chairs. For each object you show the child, you state whether or not it is a chair.

The neural network I previously created was trained on XOR. The XOR truth table is pretty basic:

A | B | O
0 | 0 | 0
0 | 1 | 1
1 | 0 | 1
1 | 1 | 0

Basically, when inputs A and B are different, the output O is 1, and when they are the same, the output O is 0.

I used the test inputs A:1, B:1, which means the output should be 0. When I ran the neural network, the output was 0.025, which is very close to 0 and thus a successful, confident result. I tested the other three XOR cases, and they all produced the correct answers.
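To make the forward pass concrete, here's a minimal PHP sketch of a 2-2-1 network evaluating all four XOR cases. The sigmoid activation and the specific weight values are illustrative stand-ins, not the weights my trained network actually learned.

```php
<?php
// Minimal sketch of a forward pass through a 2-2-1 XOR network.
// These weight values are hypothetical stand-ins chosen to solve
// XOR, not the values my trained network produced.

function sigmoid(float $x): float {
    return 1.0 / (1.0 + exp(-$x));
}

// Weights from [A, B, bias] into each of the 2 hidden neurons.
$hiddenWeights = [
    [ 5.0,  5.0, -8.0],  // hidden neuron 1
    [ 7.0,  7.0, -3.0],  // hidden neuron 2
];
// Weights from [h1, h2, bias] into the single output neuron.
$outputWeights = [-11.0, 11.0, -5.0];

function feedForward(array $inputs, array $hiddenWeights, array $outputWeights): float {
    $hidden = [];
    foreach ($hiddenWeights as $weights) {
        $sum = $weights[0] * $inputs[0] + $weights[1] * $inputs[1] + $weights[2]; // + bias
        $hidden[] = sigmoid($sum);
    }
    $sum = $outputWeights[0] * $hidden[0] + $outputWeights[1] * $hidden[1] + $outputWeights[2];
    return sigmoid($sum);
}

foreach ([[0, 0], [0, 1], [1, 0], [1, 1]] as $case) {
    printf("A:%d B:%d -> %.3f\n", $case[0], $case[1],
           feedForward($case, $hiddenWeights, $outputWeights));
}
```

Running this prints values close to 0 for the matching cases and close to 1 for the differing cases, which is exactly the behavior a trained XOR network converges to.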

XOR doesn't seem spectacular, but the framework I have set up can easily be expanded to handle large and complex data sets.

The Process Thus Far

The neural network was complex to set up, as it involves calculus, graph theory, matrix algebra, and an understanding of neural network theory and the equations that go along with it. Now that I have created one neural network, I have a much better understanding of how it works, and it should be much simpler in the future.

The general idea of a neural network, at a scientific level, is that you create a network of many neurons. D-uh! Inside the network, the neurons are organized into layers. There is a layer of input neurons; in the case of XOR there are 2 input neurons + 1 bias neuron. There are also layers of hidden neurons, which you can think of as intermediate steps. In the case of XOR there is a single hidden layer with 2 hidden neurons + 1 bias neuron. Finally, there is a layer of output neurons; in this case there is only a single output neuron. In a neural network, every neuron on a layer is connected to every neuron on the next layer.
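To illustrate that layer layout, here's a sketch of how the fully connected weight structure can be represented in PHP. This is a simplified illustration rather than my actual class design; the bias neuron is modeled as one extra column in each weight matrix.

```php
<?php
// Sketch of the layer structure described above.
// Layer sizes for the XOR network: 2 inputs, 2 hidden, 1 output.
// Bias is handled by appending a constant 1.0 to each non-output
// layer, so every weight matrix gains one extra column.

$layerSizes = [2, 2, 1];

$weights = [];
for ($l = 0; $l < count($layerSizes) - 1; $l++) {
    $rows = $layerSizes[$l + 1];  // neurons on the next layer
    $cols = $layerSizes[$l] + 1;  // neurons on this layer + bias
    $layer = [];
    for ($r = 0; $r < $rows; $r++) {
        $row = [];
        for ($c = 0; $c < $cols; $c++) {
            // Small random initial weights in [-1, 1].
            $row[] = mt_rand(-1000, 1000) / 1000.0;
        }
        $layer[] = $row;
    }
    $weights[] = $layer;
}
// $weights[0] is a 2x3 matrix (input -> hidden),
// $weights[1] is a 1x3 matrix (hidden -> output).
```

Because every row holds a weight for every neuron on the previous layer (plus bias), this structure directly encodes the "every neuron connects to every neuron on the next layer" property.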

[Figure: Network.png, a diagram of the network's input, hidden, and output layers]

You can find the original image here.

A neural network attempts to adjust the weight value on each connection between neurons, which causes some neurons to matter more than others. When the neural network is done being trained, the weights should be tuned so finely that entering your own inputs gives the correct result.
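The standard way to adjust those weights is backpropagation with gradient descent. As an illustration, here's a sketch of one update step for a single sigmoid output neuron; this is the textbook delta rule, not an excerpt from my code.

```php
<?php
// Sketch of the idea behind weight adjustment: gradient descent
// with the sigmoid delta rule for a single output neuron.

function sigmoid(float $x): float {
    return 1.0 / (1.0 + exp(-$x));
}

// One update step. $inputs are the activations feeding the neuron
// (with the bias input 1.0 included as the last element), $weights
// are the current connection weights, $target is the desired
// output, and $rate is the learning rate.
function updateOutputWeights(array $inputs, array $weights, float $target, float $rate): array {
    $sum = 0.0;
    foreach ($inputs as $i => $x) {
        $sum += $weights[$i] * $x;
    }
    $out = sigmoid($sum);
    // Error gradient: (target - output) * sigmoid'(sum),
    // where sigmoid'(sum) = out * (1 - out).
    $delta = ($target - $out) * $out * (1.0 - $out);
    foreach ($inputs as $i => $x) {
        $weights[$i] += $rate * $delta * $x;  // nudge toward the target
    }
    return $weights;
}
```

Repeating this step over the whole training set, many thousands of times, is what slowly pushes outputs like 0.5 down to values like the 0.025 seen earlier.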

The code for this version is here.

How We Improve Upon This

I added an additional parameter which allows for an arbitrary number of hidden layers. In layman's terms, the more hidden layers a neural network has, the more complex a problem it is able to solve or predict.
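As a hypothetical sketch of what that parameter looks like (the class and method names here are made up for illustration, not my actual API), the hidden layer count simply expands into a list of layer sizes:

```php
<?php
// Hypothetical sketch of a constructor taking a hidden layer count.
// Not my actual class; it only shows how the parameter expands
// into the network's layer sizes.

class NeuralNetwork {
    private array $layerSizes;

    public function __construct(int $inputs, int $hiddenLayers, int $hiddenNeurons, int $outputs) {
        // e.g. (2, 3, 2, 1) -> [2, 2, 2, 2, 1]
        $this->layerSizes = array_merge(
            [$inputs],
            array_fill(0, $hiddenLayers, $hiddenNeurons),
            [$outputs]
        );
    }

    public function getLayerSizes(): array {
        return $this->layerSizes;
    }
}

$net = new NeuralNetwork(2, 3, 2, 1);
print_r($net->getLayerSizes()); // [2, 2, 2, 2, 1]
```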

For example, a neural network with no hidden layers is called a 'Perceptron', and these are only capable of solving linearly separable problems, such as the truth table of an AND gate or an OR gate.

Once a problem requires observing relationships and patterns across the input columns, intermediate hidden layers are required. Adding n hidden layers doesn't come without a cost, though: in my implementation, the algorithm's running time grows to roughly (n * iterations * set_size)^3, whereas with only a single hidden layer it runs in roughly (iterations * set_size)^2.

And when we are talking about training with tens of thousands of iterations and set sizes in the dozens, the performance impact quickly becomes significant.

Testing With More Hidden Layers

I ran a quick, arbitrary test I made up. The objective was to determine whether a family of size x with a household income of y was living comfortably or not. I defined living comfortably as y >= x * 25000. It's a fairly simplistic objective, but more advanced than the XOR gate I talked about previously.
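Here's a sketch of how such a training set can be generated and normalized in PHP. The scaling constants and sample count are assumptions for illustration; the actual values live in the spreadsheet below.

```php
<?php
// Sketch of generating labeled "living comfortably" training data.
// The maximums used for normalization (8 people, $300k income)
// are assumptions, not values from my actual spreadsheet.

function isComfortable(int $familySize, float $income): int {
    return $income >= $familySize * 25000 ? 1 : 0;
}

$trainingSet = [];
for ($i = 0; $i < 17; $i++) {
    $size   = mt_rand(1, 8);
    $income = mt_rand(20, 300) * 1000;
    // Neural networks train better on inputs scaled into [0, 1],
    // so divide each input by its assumed maximum before storing.
    $trainingSet[] = [
        'inputs' => [$size / 8.0, $income / 300000.0],
        'output' => isComfortable($size, $income),
    ];
}
```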

Here's an Excel sheet showing the training input set I used (rows 5-21) and a few test inputs I used to check whether the neural network could correctly predict whether a family is living comfortably or not (rows 24-25). The margin of error in the answers was within the range (0.0001, 0.001), which is more than acceptable; in fact, quite extraordinary.

The code for this version is here.

Future Improvements

The nice thing about the design I've constructed is that I should be able to add an arbitrary number of inputs as well as outputs as I progress to testing more complex problems. For example, let's say the objective of a future problem is to determine whether an environment is capable of supporting 'Tree life', 'Shrub life', and/or 'Animal life'. This problem requires 3 boolean outputs. Now let's imagine the inputs are average temperature, average humidity, and the average hours of sunlight per day. If I input some training set data from various biomes across Earth and then input a set of data from an unknown biome, it should be able to return, within a small degree of error, whether that biome is capable of supporting tree, shrub, or animal life.
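A single training sample for that problem might be shaped like this (the numbers are invented purely for illustration):

```php
<?php
// Hypothetical shape of one biome training sample: three inputs
// and three boolean outputs. The values are made up.

$sample = [
    'inputs' => [
        'avgTemperatureC'  => 24.0,
        'avgHumidityPct'   => 80.0,
        'avgSunlightHours' => 12.0,
    ],
    // [tree life, shrub life, animal life]
    'outputs' => [1, 1, 1],  // e.g. a rainforest supports all three
];
```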

My next steps include rewriting the algorithm in a faster language and adding a way to specify an arbitrary number of neurons for each hidden layer. Right now, every hidden layer in a neural network has the same constant number of neurons, n, no matter what. I want each hidden layer i to be able to have its own neuron count, Ni.
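In other words, the hypothetical configuration would change from a single size to an array of sizes, something like:

```php
<?php
// Sketch of the planned improvement: each hidden layer gets its own
// neuron count, specified as an array. (Hypothetical configuration,
// not yet implemented.)

$inputs       = 3;
$hiddenLayers = [5, 4, 2];  // Ni neurons for hidden layer i
$outputs      = 3;

$layerSizes = array_merge([$inputs], $hiddenLayers, [$outputs]);
print_r($layerSizes); // [3, 5, 4, 2, 3]
```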

Thanks for reading!


Shawn Clake

Freelance Developer

Software Engineering Student - U of R

Current: Assistant to Manager of Instructional Tech - U of R

Web|Unreal|C++|Java|Python|Go

Web: http://shawnclake.com

Email: shawn.lavawater@gmail.com

Email 2: shawn.clake@gmail.com

Posted in Technology on Feb 10, 2017