Neural Networks and Backpropagation: Part Two

This article continues from Neural Networks and Backpropagation: Part One and finishes working through the example of creating a neural network to compute an OR gate.  If you have yet to read through the article and the corresponding Jupyter notebook you should take the time now as it is important for understanding the rest of the working.


Don’t forget to grab the Jupyter notebook for this article from GitHub here or by cloning:
git clone git@github.com:tinhatben/or_gate.git


Where we left off

We have just completed the first pass of forward propagation through the network, computing the activations of each of the nodes.  Now it is time to execute backpropagation through the network.

[Figure: forward propagation through the network]

What is Backpropagation?

The process of backpropagation computes the amount that each weight in the network contributes to the final error produced by the network.  This allows us to determine how much we need to adjust the weights to produce an improvement in network performance.  What we are aiming to do is determine the ideal value for each of the weights in the network, such that we get the best performance, or lowest error score, for the network.  The backpropagation process involves determining the error at the output of the network and projecting it back through each of the layers and nodes in the network.

So the first thing we need to do is select a function that determines the error for the network.

Network Cost Function

For the OR gate we are going to select the least squared error cost function:

E = \frac{1}{2}\sum{(target-output)^2}

E = \frac{1}{2}\sum{(y_{train} - a_o)^2}

Looking at this function in more detail, the reason for squaring the difference between the output and the target is simply to remove any effect of positive or negative errors from the computation.  It doesn’t matter if we are +0.5 or -0.5 away from the target, the magnitude of the error is the same: 0.5.  The \frac{1}{2} is a constant multiplier that makes differentiating the cost function easier.  For future reference, the derivative of the cost function with respect to the output is:

\frac{\partial{E}}{\partial{a_o}} = (a_o - y_{train})

In our example y_{train} is the target output of the OR gate.  As i_1 = 1 and i_2 = 0, then y_{train} = 1.

Referring back to Part One, the output of the network was:

a_o = 0.512

So,

E = \frac{1}{2}\sum{(y_{train} - a_o)^2} =\frac{1}{2}(1 - 0.512)^2 = 0.119072

The change in error with respect to the output of the network:

\frac{\partial{E}}{\partial{a_o}} = (a_o - y_{train}) = (0.512 - 1) = -0.488
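As a quick numeric check, here is the same calculation in Python (a minimal sketch; the values are those carried over from Part One):

y_train = 1.0  # target output of the OR gate for i1 = 1, i2 = 0
a_o = 0.512    # network output computed in Part One

E = 0.5 * (y_train - a_o) ** 2  # least squared error
dE_da_o = a_o - y_train         # derivative of E with respect to the output

print(E)        # 0.119072
print(dE_da_o)  # -0.488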

Output Layer Backpropagation

Now that we have the error at the output, we need to project it back to determine the error component at h_o.  In order to do this we need the chain rule of differentiation:

\frac{\partial{x}}{\partial{z}}=\frac{\partial{x}}{\partial{y}}\frac{\partial{y}}{\partial{z}}

With this simple rule we can calculate how much every weight and node in the network contributes to the error.  So, calculating the error with respect to h_o:

\frac{\partial{E}}{\partial{h_o}}=\frac{\partial{E}}{\partial{a_o}}\frac{\partial{a_o}}{\partial{h_o}}

[Figure: projecting the error back to the output node h_o]

Now we already have the first half of the above equation in \frac{\partial{E}}{\partial{a_o}}; what we need is the rate of change of the linearity of the output node with respect to its input.  Recalling from Part One, we are using the Sigmoid function as the linearity.  We will not cover the proof of the derivative of the sigmoid function in this article as it is reasonably straightforward to determine; but this derivative is exactly what we need, the rate of change of the linearity.  So:

g(z) = \frac{1}{1 + e^{-z}}

\frac{\partial{a_o}}{\partial{h_o}} = \frac{\partial{g}}{\partial{z}} = \sigma'(z)= g(z)(1 - g(z))
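In Python the Sigmoid and its derivative might be written as follows (a small sketch matching the definitions above):

import numpy as np

def sigmoid(z):
    # g(z) = 1 / (1 + e^-z)
    return 1.0 / (1.0 + np.exp(-z))

def sigmoid_prime(z):
    # g'(z) = g(z)(1 - g(z))
    s = sigmoid(z)
    return s * (1.0 - s)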

We can now combine the two halves of the chain rule to determine the contribution of h_o to the error:

\frac{\partial{E}}{\partial{h_o}}=(a_o - y_{train})g(h_o)(1 - g(h_o))

Recalling from Part One that a_o = 0.512 for i_1 = 1, i_2 = 0 we can calculate the output layer contribution to error:

\frac{\partial{E}}{\partial{h_o}} = (0.512 - 1)*0.512(1 - 0.512) = -0.122

Using the same chain rule of differentiation we can compute the error attributable to each of the weights:

\frac{\partial{E}}{\partial{w_5}}=\frac{\partial{E}}{\partial{a_o}}\frac{\partial{a_o}}{\partial{h_o}}\frac{\partial{h_o}}{\partial{w_5}}

The only part of this equation we are missing is \frac{\partial{h_o}}{\partial{w_5}}.  Again from Part One:

h_o = a_1w_5 + a_2w_6 + b_3

Thus:

\frac{\partial{h_o}}{\partial{w_5}} = a_1 and

\frac{\partial{E}}{\partial{w_5}}=(a_o - y_{train})g(h_o)(1 - g(h_o)) * a_1

\frac{\partial{E}}{\partial{w_5}}=(0.512 - 1)*0.512(1 - 0.512) * 0.646 = -0.079

Following the same pattern, let’s repeat the calculation for the other weight and bias term in the output layer:

\frac{\partial{E}}{\partial{w_6}}=\frac{\partial{E}}{\partial{a_o}}\frac{\partial{a_o}}{\partial{h_o}}\frac{\partial{h_o}}{\partial{w_6}}

Again from Neural Networks and Backpropagation: Part One:

h_o = a_1w_5 + a_2w_6 + b_3

Thus:

\frac{\partial{h_o}}{\partial{w_6}} = a_2 and

\frac{\partial{E}}{\partial{w_6}}=(a_o - y_{train})g(h_o)(1 - g(h_o)) * a_2

\frac{\partial{E}}{\partial{w_6}}=(0.512 - 1)*0.512(1 - 0.512) * 0.668 = -0.081

Finally the error with respect to the bias:

\frac{\partial{E}}{\partial{b_3}}=\frac{\partial{E}}{\partial{a_o}}\frac{\partial{a_o}}{\partial{h_o}}\frac{\partial{h_o}}{\partial{b_3}}

Thus:

\frac{\partial{h_o}}{\partial{b_3}} = 1 and

\frac{\partial{E}}{\partial{b_3}}=(a_o - y_{train})g(h_o)(1 - g(h_o)) * 1

\frac{\partial{E}}{\partial{b_3}}=(0.512 - 1)*0.512(1 - 0.512) = -0.122
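These output layer gradients can be checked numerically in a few lines of Python (using the activation values from Part One):

a1, a2 = 0.646, 0.668   # hidden layer activations from Part One
a_o, y_train = 0.512, 1.0

delta_o = (a_o - y_train) * a_o * (1.0 - a_o)  # dE/dh_o, since a_o = g(h_o)
dE_dw5 = delta_o * a1    # approx -0.079
dE_dw6 = delta_o * a2    # approx -0.081
dE_db3 = delta_o * 1.0   # approx -0.122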

Now we have completed the backpropagation process for the output layer and weights.  Moving onto the hidden layer, we are essentially repeating the same process.

Hidden Layer Backpropagation

[Figure: backpropagating the error through to the hidden layer]

Repeating the chain rule differentiation we can determine the amount of error attributed to w_1, w_2, w_3, w_4, b_1 and b_2.

\frac{\partial{E}}{\partial{w_1}}=\frac{\partial{E}}{\partial{a_1}}\frac{\partial{a_1}}{\partial{h_1}}\frac{\partial{h_1}}{\partial{w_1}}

Expanding on \frac{\partial{E}}{\partial{a_1}}:

\frac{\partial{E}}{\partial{a_1}} = \frac{\partial{E}}{\partial{h_o}}\frac{\partial{h_o}}{\partial{a_1}}

From the output layer backpropagation:

\frac{\partial{E}}{\partial{h_o}} = (a_o - y_{train})g(h_o)(1 - g(h_o))

From this point on we will refer to \frac{\partial{E}}{\partial{h_o}} as \delta_o.  Try to keep in mind that \delta_o is the product of the change in error with respect to the output activation a_o and the derivative of the Sigmoid function \sigma'.

Continuing with determining the components of the chain rule for \frac{\partial{E}}{\partial{a_1}}:

\frac{\partial{h_o}}{\partial{a_1}} = \frac{\partial{}}{\partial{a_1}}(b_3 + a_1w_5 + a_2w_6) = w_5

So,

\frac{\partial{E}}{\partial{a_1}} = (a_o - y_{train})g(h_o)(1 - g(h_o))w_5

As with all derivatives of activations with respect to their input:

\frac{\partial{a}}{\partial{h}} = g(h)(1 - g(h))

\frac{\partial{a_1}}{\partial{h_1}} = g(h_1)(1 - g(h_1))

Now:

\frac{\partial{h_1}}{\partial{w_1}}= \frac{\partial{}}{\partial{w_1}} (b_2 + i_1w_1 + i_2w_3) = i_1

So putting it all together:

\frac{\partial{E}}{\partial{w_1}}=\frac{\partial{E}}{\partial{a_1}}\frac{\partial{a_1}}{\partial{h_1}}\frac{\partial{h_1}}{\partial{w_1}}

\frac{\partial{E}}{\partial{w_1}}= \delta_ow_5\sigma'(h_1)i_1

The process described above forms the pattern for deriving all of the error components for the hidden layer weights.  The next obvious weight is w_3 which only differs from \frac{\partial{E}}{\partial{w_1}} in that the input is different, so:

\frac{\partial{E}}{\partial{w_3}}= \delta_ow_5\sigma'(h_1)i_2

Looking at the bias weight b_2 which is connected to h_1:

\frac{\partial{h_1}}{\partial{b_2}} = 1

\therefore \frac{\partial{E}}{\partial{b_2}}= \delta_ow_5\sigma'(h_1)

Now let’s repeat the process to compute \frac{\partial{E}}{\partial{w_2}} , \frac{\partial{E}}{\partial{w_4}} and \frac{\partial{E}}{\partial{b_1}}:

\frac{\partial{E}}{\partial{w_2}}=\frac{\partial{E}}{\partial{a_2}}\frac{\partial{a_2}}{\partial{h_2}}\frac{\partial{h_2}}{\partial{w_2}}

Again, from the output layer and as per above:

\frac{\partial{E}}{\partial{a_2}} = \frac{\partial{E}}{\partial{h_o}}\frac{\partial{h_o}}{\partial{a_2}} = (a_o - y_{train})g(h_o)(1 - g(h_o))w_6

So:

\frac{\partial{E}}{\partial{w_2}} = (a_o - y_{train})g(h_o)(1 - g(h_o))w_6g(h_2)(1 - g(h_2))i_1

We can repeat this process for \frac{\partial{E}}{\partial{w_4}}

\frac{\partial{E}}{\partial{w_4}} = (a_o - y_{train})g(h_o)(1 - g(h_o))w_6g(h_2)(1 - g(h_2))i_2

and:

\frac{\partial{E}}{\partial{b_1}} = (a_o - y_{train})g(h_o)(1 - g(h_o))w_6g(h_2)(1 - g(h_2))

Calculating the corresponding values:

\frac{\partial{E}}{\partial{h_o}} = \delta_o = (a_o - y_{train})g(h_o)(1 - g(h_o)) = -0.122

The output layer values as above:

\frac{\partial{E}}{\partial{b_3}} = -0.122

\frac{\partial{E}}{\partial{w_5}} =-0.084

\frac{\partial{E}}{\partial{w_6}} = -0.087

\frac{\partial{E}}{\partial{w_1}} = \delta_ow_5\sigma'(h_1)i_1 = -0.122(0.01)(0.229)(1) = -0.00028

\frac{\partial{E}}{\partial{w_2}} = \delta_ow_6\sigma'(h_2)i_1 = -0.122(0.02)(0.222)(1) = -0.00054

\frac{\partial{E}}{\partial{w_3}} = \delta_ow_5\sigma'(h_1)i_2 = 0

\frac{\partial{E}}{\partial{w_4}} = \delta_ow_6\sigma'(h_2)i_2 = 0

\frac{\partial{E}}{\partial{b_2}} = \delta_ow_5\sigma'(h_1) = -0.00028

\frac{\partial{E}}{\partial{b_1}} = \delta_ow_6\sigma'(h_2) = -0.00054
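Again, these values can be verified with a short Python check (continuing on from the snippet above):

w5, w6 = 0.01, 0.02
i1, i2 = 1.0, 0.0

sig_p_h1 = a1 * (1.0 - a1)  # sigma'(h1), since a1 = g(h1)
sig_p_h2 = a2 * (1.0 - a2)  # sigma'(h2)

dE_dw1 = delta_o * w5 * sig_p_h1 * i1  # approx -0.00028
dE_dw2 = delta_o * w6 * sig_p_h2 * i1  # approx -0.00054
dE_dw3 = delta_o * w5 * sig_p_h1 * i2  # 0
dE_dw4 = delta_o * w6 * sig_p_h2 * i2  # 0
dE_db2 = delta_o * w5 * sig_p_h1       # approx -0.00028
dE_db1 = delta_o * w6 * sig_p_h2       # approx -0.00054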

Generalised Form

Reviewing the details of the backpropagation process above, it can be observed that there is a general pattern.  Let’s say that we are completing the process for layer l, and l + 1 is the next layer in the network closer to the output layer.  We can calculate:

\frac{\partial{E}}{\partial{h_l}} = \delta_l = \delta_{l+1}w_{l+1}\sigma'(h_l)

\frac{\partial{E}}{\partial{w_l}} =\delta_la_{l-1}

Where a_{l-1} are the activations of the layer before the current layer l.

In coding this process for a network with any number of layers, we can now simply loop through the layers applying the generalised form.
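As a rough illustration, such a loop might look like the following (a sketch only, ignoring bias terms and using illustrative names):

import numpy as np

def backprop(weights, activations, delta_out):
    # weights[l] maps layer l's activations to layer l+1 (biases omitted for brevity)
    # activations[l] is the column vector a_l; activations[0] is the input
    # delta_out is dE/dh at the output layer
    grads = [None] * len(weights)
    delta = delta_out
    for l in reversed(range(len(weights))):
        grads[l] = delta @ activations[l].T  # dE/dW_l = delta_l a_{l-1}^T
        if l > 0:
            # delta_{l-1} = (W_l^T delta_l) * sigma'(h_{l-1}), using sigma' = a(1 - a)
            delta = (weights[l].T @ delta) * activations[l] * (1.0 - activations[l])
    return grads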

Update Weights

Now that we have computed each of the derivatives of the error with respect to the weights, we can update the values of the weights, hopefully reducing the error produced by the network.  To do this we need to introduce another term, the learning rate \eta; this determines how much of the derivative is applied during the update to the weights.  This is an important term in neural networks: a learning rate that is too small will lead to a network that is very slow to train, while one that is too large can cause instability, and the network may never find a solution.  The weights are updated as follows:

W \leftarrow W - \eta\frac{\partial{E}}{\partial{W}}

where \leftarrow indicates a simultaneous update of all weights in the network.
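In code the update is a single step (continuing the sketch above; the learning rate here is an illustrative value only):

eta = 0.5  # learning rate; illustrative value only
for l in range(len(weights)):
    weights[l] -= eta * grads[l]  # W <- W - eta * dE/dW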

Training Process

The training process for a network is essentially a repeated loop through the forward propagation, backpropagation and weight update processes, adjusting the weights until, hopefully, a minimum error is achieved.  This is a very simplistic view; there are a number of technical aspects that must be correct for training to be successful, as well as techniques that can speed the process up.  The details of training are being left for a later post as they lie outside the exact workings of backpropagation.
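Put together, one version of that loop might be structured like this (a sketch only; forward() is an assumed helper that returns the activations of each layer, and backprop() and eta are the sketches above):

for epoch in range(10000):
    activations = forward(weights, x_train)  # forward propagation
    a_out = activations[-1]
    delta_out = (a_out - y_train) * a_out * (1.0 - a_out)  # dE/dh at the output
    grads = backprop(weights, activations, delta_out)
    for l in range(len(weights)):
        weights[l] -= eta * grads[l]  # update the weights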

In Closing

To re-iterate, this process may take a few reads to understand and get a good grasp of.  The effort is well worth it, particularly if trying to debug issues with training neural networks.  I also recommend that you read the Jupyter notebook that partners this article as it demonstrates the practical application of this working.


Don’t forget to grab the Jupyter notebook for this article from GitHub here or by cloning:
git clone git@github.com:tinhatben/or_gate.git



Neural Networks and Backpropagation: Part One

Lately I’ve been implementing my own neural network code base.  I’ve had a play around with some of the common libraries such as Lasagne and TensorFlow and while these libraries are great, it is important to understand the content itself.  The first reason is a thorough understanding of what exactly the network is doing; which in my opinion is certainly worth the effort!  At times using pre-existing libraries is a lot like a black box, where you are not sure of exactly what it is doing but are pretty confident it is working correctly.  For your standard use cases the black box can be ok, but you really should have an understanding of what the library is doing.  The second, and one of the most important reasons, is that you have complete control over your implementation and have the ability to add, remove or modify unique features and network architectures as you see fit.  This is not as easily done in pre-written libraries, as you may be constrained by a particular framework or need a thorough understanding of a potentially large and broad code base.


Don’t forget to grab the Jupyter notebook for this article from GitHub here or by cloning:
git clone git@github.com:tinhatben/or_gate.git


Approach

For the purposes of this article we are going to examine a simple neural network that computes an OR gate.  In computer science / engineering an OR gate is one of the most common logic structures.  An OR gate takes two binary values (1 or 0) as an input and will return a 1 if at least one of the input binary values is 1.  The following truth table demonstrates the possible combinations of values:

Input 1 Input 2 Output
0 0 0
0 1 1
1 0 1
1 1 1

We are going to construct a neural network that, when fed with the inputs above, returns the correct results, and we will do this from two perspectives.  Firstly we will implement the math by hand, calculating the various values along the way; then we will recreate these calculations in Python.  Using this combined method we should get a good understanding of how to implement the network and complete backpropagation.
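In Python the truth table translates directly into training data (the variable names here are illustrative):

import numpy as np

# Each row of X_train is a pair of inputs; y_train holds the OR gate outputs
X_train = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
y_train = np.array([0, 1, 1, 1])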

It should be noted that this post does not cover the intuitions behind neural networks as a concept.  It does not cover the typical analogy with biology regarding myelin sheaths, action potentials etc.

A Word About Backprop

One of the most common statements written in various backpropagation tutorials is: don’t expect to understand the concept or process immediately.  The basics are reasonably intuitive but it takes a few reads / re-reads of the content to get a really thorough understanding and be confident in your implementation.  One of the reasons I am writing this article is to solidify my own understanding after reading the content over the last week.

Part One: Forward Propagation

Getting Started

The first step is to decide on an initial network architecture.  I say initial as your network performance will vary depending on the choices made and may require tweaking; having implemented a network, there may be specific reasons for changing the number of layers, nodes, bias units etc.  In deciding the architecture for the OR gate we know two things:

  1. As per the truth table above we provide the network with 2 input values i.e. 0, 0 or 0, 1 or 1, 0 or 1, 1
  2. The network returns a single value 1 or 0

These two facts will help in the design of the network as they need to be accommodated.  Quite often you can get very good network performance using a single hidden layer, so it is a good choice here as well.  One of the most commonly varied aspects of the network architecture is the number of hidden units in the hidden layer.  This will vary depending on the application, and to be confident you have made the right selection you will just have to try some.  For now we are going to choose 2 hidden units; this is the same as the number of input units, which is often an indicator that the network is capable of learning the training set.  We can visualise our network right now as the following:

[Figure: network with inputs i_1 and i_2, hidden units h_1 and h_2, and output h_o]

The two inputs e.g. 1, 0 are denoted as i_1 and i_2, the hidden units are denoted as h_1 and h_2, while the output layer is denoted as h_o.  Other applications will probably have differing numbers of input and output units, while as stated before the number of hidden units should be actively investigated and varies with the application at hand.  Between each of the units are weights (w); these are the values that the neural network needs to determine in order to make accurate predictions.  The whole point of training a neural network is to determine the values for the weights that produce the best predictions.  Now there are two other units missing from our network right now: the bias units.  Remembering back to the post on Linear Regression, the bias units act like the y-intercepts in our linear regression model.  These units adjust the “firing threshold” of the neurons by applying another set of weights to a constant input of 1.  Note that the value of 1 is not unique to the OR gate problem; all bias inputs have a fixed input of 1 as this is how the offset is applied.

[Figure: network with the bias units added]

 

So now we have the additional weights for the bias units; to assist in clarity they are labelled b_1, b_2 and b_3.

Initialising Weights

Now normally, when we initialise the weights of a network we typically want to use random values close to zero, say -0.5 to 0.5.  There are 2 reasons why we wish to do this:

  1. Initialising with random values prevents any unintentional trends from occurring during training.  Weights with the same values may increase or decrease together in a pattern which would lead to errors.
  2. Having values close to zero allows for smaller changes at the start of the training process.  Large changes in weight values can lead to an unstable network and may prevent convergence.

Initialising weights is a critical part of the network design process.  If we choose weights that are too large, then it may take a really long time during the training process to approach the optimal values (if at all).  We will examine this in more detail when we look at linearity functions.

For the purposes of this example we do not want to use random values for the weights but rather known values.  This will help in understanding the training process and back propagation.  We are going to start with the following values:

  • w_1 = 0.1
  • w_2 = 0.2
  • w_3 = 0.3
  • w_4 = 0.4
  • b_1 = 0.5
  • b_2 = 0.5 (Note this does differ from point 1 above.  In real applications b1 and b2 should have different random values)
  • w_5 = 0.01
  • w_6 = 0.02
  • b_3 = 0.03
  • i_1 = 1
  • i_2 = 0

[Figure: network annotated with the initial weight values]

Linearities

Up to now we have selected the network topology and initialised the weights.  We now need to select a linearity or activation function for each of the layers.  The linearity function is applied to the sum of the product of the weights and the previous activation values; in the case of the hidden layer the previous activation is the input to the network.  This function will determine how the neuron “fires” at each stage.  For this example we are going to select the Sigmoid function, however we could select other functions such as tanh, square or linear.

The Sigmoid Function

g(z) = \frac{1}{1 + e^{-z}}

The Sigmoid function effectively caps the output of the neurons between 0 and 1, as can be seen in the graph below; as z approaches +infinity the sigmoid approaches 1, and vice versa with -infinity and 0.  This is exactly what we want for a neural network that computes the OR gate: we want the output to be either 0 or 1.  Using the Sigmoid function in the hidden layer also provides stability by preventing the activations from becoming too large or too small, capping them at 0 and 1.  Again, referring to the graph below but also to what we said previously about initialising weights around 0: the sigmoid function given z = 0 is g(z = 0) = 0.5.  This is half way between the capped values but is also at the point of greatest gradient within the function.  By initialising the weights close to 0 we are giving the neurons a roughly equal chance of approaching 0 or 1, with the high gradient allowing them to get there faster.
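A quick sketch of this behaviour in Python:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

print(sigmoid(0.0))    # 0.5 -- the midpoint, and the point of greatest gradient
print(sigmoid(10.0))   # ~0.99995 -- saturates towards 1
print(sigmoid(-10.0))  # ~0.00005 -- saturates towards 0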

[Figure: the Sigmoid function]

Forward Propagation

Finally we can combine all of the work above to determine what the initial outputs of the network will be.  Firstly let’s calculate the activations of the hidden layer:

[Figure: the complete network with weights and bias units]

Summing the previous activations (input values):

h_1 = i_1w_1 + i_2w_3 + b_2

h_2 = i_1w_2 + i_2w_4 + b_1

We can also represent these equations in matrix form:

H = \begin{bmatrix}  w_1 & w_3 & b_2 \\  w_2 & w_4 & b_1 \\  \end{bmatrix}  \begin{bmatrix}  i_1 \\  i_2 \\  1 \\  \end{bmatrix}

Substituting the values:

h_1 = 1(0.1) + 0(0.3) + 0.5 = 0.6

h_2 = 1(0.2) + 0(0.4) + 0.5 = 0.7

H = \begin{bmatrix}  0.6 \\  0.7 \\  \end{bmatrix}

 

Now the activations a_x :

a_1 = g(h_1) = 0.646

a_2 = g(h_2) = 0.668

A = \begin{bmatrix}  0.646 \\  0.668 \\  \end{bmatrix}

Now we can continue this process to calculate the activation value at the output neuron:

h_o = a_1w_5 + a_2w_6 + b_3

Substituting the values:

h_o = 0.646(0.01) + 0.668(0.02) + 0.03 = 0.04982

Again calculating the activations:

a_o = g(0.04982) = 0.512
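The whole forward pass can be reproduced in a few lines of numpy (a sketch mirroring the hand calculation above):

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

W_hidden = np.array([[0.1, 0.3, 0.5],    # w1, w3, b2 -> h1
                     [0.2, 0.4, 0.5]])   # w2, w4, b1 -> h2
W_out = np.array([0.01, 0.02, 0.03])     # w5, w6, b3 -> h_o

x = np.array([1.0, 0.0, 1.0])            # i1, i2 and the constant bias input of 1

H = W_hidden @ x                         # [0.6, 0.7]
A = sigmoid(H)                           # [0.646, 0.668]
h_o = W_out @ np.append(A, 1.0)          # 0.04982
a_o = sigmoid(h_o)                       # 0.512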

 

Next Up: Part Two: Backpropagation

Linear Regression

Other than being useful in its own right, understanding and being able to implement linear regression is an important first stepping stone to understanding and executing some more complicated algorithms such as perceptrons and neural networks.  So let’s take that first step to bigger things!


Don’t forget to grab the Jupyter notebook for this blog post from GitHub here or by cloning:
git clone git@github.com:tinhatben/linear_regression.git


 

Machine learning problems in general fall into one of two different camps:

  • regression problems involve the use of continuous data e.g. real numbers (1, 2, 3, 8, 1.2, 34.5 etc.) and aim to make a model of the observed system such that predictions can be made which are also continuous in nature.
  • classification problems involve the use of discrete data and attempt to group the information.  In classification problems, observations are of a particular type, in a particular group, or they are not.

So obviously linear regression is a regression problem, and what we are trying to achieve is a straight line model of a set of continuous data such that we can make predictions for other input values.  As mentioned above, linear regression is a stepping stone towards perceptrons and neural networks, tools that are generally used to solve classification problems.  So we will be using regression techniques as a smaller part in solving larger classification problems.

Linear regression is also a very common tool for statisticians, and you can even execute basic linear regression in spreadsheets such as Excel or LibreOffice Calc.  In these programs linear regression models are called trend lines.

One very important difference between spreadsheet trend lines and the linear regression techniques shown in this blog post is the ability to fit lines across more than 2 dimensions.  The techniques we use can be applied across n dimensions, not just 2!

Revision: Equation of a Straight Line

Remembering back to high school maths, at least in Australia (fyi: we do not have middle school; upon completion of primary you graduate into high school), the equation of a straight line is defined as follows:

y = mx + b

Where:

m is the gradient or slope and can be calculated as m = \frac{\Delta y}{\Delta x}; and

b is the y intercept (where the line intersects the y axis)

 

As an example y = 10x + 20 would produce the following line:

[Figure: anatomy of a straight line for y = 10x + 20]

In linear regression, given a set of data such as that shown below, we are trying to find the best values for m and b that give the least error on the predictions.  So let’s try and find the linear model for the following data.

[Figure: scatter plot of the sample data (uni vs sleep)]

The Hypothesis

We are trying to find the gradient and y-intercept of a straight line that best matches the observed data we are interested in.  So let’s define a hypothesis function to describe our best guess:

h(x^{(i)}) = \theta_0 + \theta_1x^{(i)}

This format should look quite familiar as it is essentially y = mx + b but to simplify the function a little more we can define x_0 = 1 so:

h(x^{(i)}) = \theta_0x_0^{(i)} + \theta_1x_1^{(i)} = \sum_{j=0}^{1}\theta_jx_j^{(i)}

Cost Function & Gradient Descent

Now in order to find the best fit line for the data, we need to know what the error or cost of the fit is.  To define the cost of the fit we will look at the distance between the hypothesis function for a given point and the observed value.  We will be using the least squares method of error estimation which examines the square of the difference between two values, so the cost function:

J(\theta) = \frac{1}{2}\sum_{i=0}^n(h(x^{(i)}) - y^{(i)})^2

is the sum of the square of differences between the guessed and observed value for each point in the training set.  The one-half fraction is included to make the gradient descent calculation easier later on, but is simply used as a scaling factor here.
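As a sketch in Python (assuming X carries a leading column of ones for the bias term):

import numpy as np

def cost(theta, X, y):
    # J(theta) = 1/2 * sum((h(x) - y)^2), with h = X @ theta
    residuals = X @ theta - y
    return 0.5 * np.sum(residuals ** 2)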

Gradient Descent

The process of gradient descent is the means by which we determine the values of \theta that minimise the cost function; during gradient descent we start with an initial guess for \theta, take a step down the slope of the cost function and re-evaluate \theta.  This process is repeated until the minimum of the cost function has been found and there are no more steps that can reduce the cost.  The image below graphically represents the steps taken in gradient descent.

 

[Figure: gradient descent stepping down the cost function.  By Olegalexandrov (original) and Zerodamage (derivative work), Public Domain, https://commons.wikimedia.org/w/index.php?curid=20569355]

For each iteration of gradient descent we need to execute the following:

\theta_j := \theta_j - \left(\frac{\alpha}{n}\right) \frac{d}{d\theta_j}J(\theta)

Where:
\frac{d}{d\theta_j}J(\theta) = \sum_{i=1}^n(h(x^{(i)}) - y^{(i)})x_j^{(i)} and;

\alpha is defined as the learning rate

The learning rate is the size of the step taken down the slope of the cost function, and needs to be balanced.  A large \alpha will take large steps down the slope, while a smaller \alpha will produce smaller steps.  The definition of \alpha does more than just adjust the speed of convergence of gradient descent.  While a small \alpha will take longer to converge, it can also get stuck at local minima.  Conversely values for \alpha that are large will take bigger steps but may never converge, stepping over the minima each time and never actually reaching it.
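A minimal implementation sketch of the update loop (the defaults here are illustrative; the github notebook is the reference implementation):

import numpy as np

def gradient_descent(X, y, alpha=0.01, iterations=400):
    # X is n x 2 with a leading column of ones (the x0 = 1 bias term)
    n = len(y)
    theta = np.zeros(X.shape[1])
    for _ in range(iterations):
        gradient = X.T @ (X @ theta - y)  # d/dtheta_j of J(theta)
        theta -= (alpha / n) * gradient   # simultaneous update of all theta_j
    return theta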

The appropriate value for \alpha will vary depending upon the data and the model; the github example uses a learning rate of 0.01 and converges after approximately 400 iterations.  After convergence we have a model and are able to make predictions using:

h(x^{(i)}) = \theta_0 + \theta_1x^{(i)}

[Figure: the fitted linear regression model]

As we can see the model is a reasonable fit for the data; it covers most of the points well, while not trying to fit all of the points exactly, allowing for generalisation.  In a later post we will discuss the characteristics of a good fit in more detail, but for the moment this is a good description of fit.

Using Linear Regression to Fit Polynomial Models

Now we can also use the technique of gradient descent to determine polynomial models for the data.  Say we have a data set that looked far less linear, and more cubic e.g.

[Figure: data with a cubic trend]

Now we want to fit a model that is cubic:

h(x) = \theta_0x_0 + \theta_1x_1 + \theta_2x_1^2 + \theta_3x_1^3

Firstly we need to set up the feature vector including the bias term x_0:

X = \begin{bmatrix} 1 & x_{11} & x_{11}^2 & x_{11}^3 \\ 1 & x_{12} & x_{12}^2 & x_{12}^3 \\ \vdots & \vdots & \vdots& \vdots\\ 1 & x_{1n} & x_{1n}^2 & x_{1n}^3\\ \end{bmatrix}

To prevent the polynomial terms from dominating the fit of the model inappropriately we need to scale each of the terms.  There are a number of different scaling factors one could use, however the one we will select is as follows, subtracting the mean and dividing by the standard deviation (ignoring the first column of X):

x = \frac{X - \mu_X}{\sigma_X}

We will need to store the values for \mu_X and \sigma_X as these will be used to make predictions.
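A sketch of building and scaling the polynomial features in Python (function and variable names are illustrative):

import numpy as np

def polynomial_features(x, degree=3):
    x = np.asarray(x, dtype=float)
    X = np.column_stack([x ** d for d in range(degree + 1)])  # [1, x, x^2, x^3]
    mu = X[:, 1:].mean(axis=0)
    sigma = X[:, 1:].std(axis=0)
    X[:, 1:] = (X[:, 1:] - mu) / sigma  # scale the polynomial columns
    return X, mu, sigma                 # keep mu and sigma for predictions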

Now we can train the model using gradient descent as we have above, determining the appropriate values for \theta and the model for the data.  This time, instead of producing 2 values for \theta, we will determine 4.  When we make a prediction we need to normalise the data first by subtracting \mu_X and dividing by \sigma_X.  As before, the prediction can be made using:

h(x) = \theta'x

[Figure: the fitted polynomial model]

Coming Up

Logistic Regression & Assessing Model Fit

Mahalanobis Distance

Sorry for the delay between posts.  I have been busy preparing a paper for an upcoming conference.  This post will be covering Mahalanobis Distance, something which is somewhat related to the previous post on PCA.

You can find the Python code for this post on github

If you have yet to prepare your development environment, please see this post

The Mahalanobis distance is a unitless measure of the distance of a data point p from a distribution of points D.  If p is positioned at the mean (μ) of the distribution then the Mahalanobis distance is zero.  As the point moves along each principal axis of the distribution the Mahalanobis distance begins to increase, indicating the number of standard deviations the point is away from the mean of D.  It is through the relationship with the principal axes that the Mahalanobis distance takes into account the correlation within the data set.  It is important to note that the Mahalanobis distance is not the Euclidean (ordinary straight line) distance.

One common use of the Mahalanobis distance within machine learning is as a cost function in learning algorithms; many algorithms need to determine how similar or ‘close’ a data point is to a set of other points.  This is where Mahalanobis can help; by minimising the Mahalanobis distance, an algorithm is maximising the likelihood that a test point is from the distribution.

So let’s look at how to compute the Mahalanobis distance.  Say we had a distribution as shown by the gray points in the image below and we wanted to know the distance of each of the test points from the distribution.

[Figure: distribution with test points]

To compute the distance of a point p from a distribution D, we need to know the following statistics:

  • The mean (μ) of the distribution
  • The covariance matrix (S) of the distribution (D)
  • The location of point (p)

To compute the Mahalanobis Distance:

D_m = \sqrt{(p - \mu)S^{-1}(p - \mu)^T}

It is here we can see that if p = μ then the distance given would be zero.  The other important aspect to note is that the INVERSE of the covariance matrix is used, not the covariance matrix itself.  When computing the inverse matrix in numpy, it may be useful / necessary to use the pseudoinverse, which allows an inverse to be computed in scenarios where it may otherwise fail, such as singular (non-invertible) matrices.
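A small numpy sketch of the calculation (np.linalg.pinv provides the pseudoinverse mentioned above):

import numpy as np

def mahalanobis(p, D):
    # p: a 1-D point; D: the distribution, with one sample per row
    mu = D.mean(axis=0)
    S_inv = np.linalg.pinv(np.cov(D.T))  # pseudoinverse of the covariance matrix
    diff = p - mu
    return np.sqrt(diff @ S_inv @ diff)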

So as per the code example on github, if we had the following points:

  • p1 = (17.5, 27.3)
  • p2 = μ
  • p3 = (24, 20)

Then the corresponding Mahalanobis distances are:

  • Dm1 = 5.77
  • Dm2 = 0
  • Dm3 = 4.01

Principal Component Analysis Tutorial

Recently I have been reading up on Principal Component Analysis (PCA).  PCA is essentially a compression technique, providing a means of representing larger or high dimensional data sets with less information.  Using PCA you can take a large m x n matrix and provide a means of reconstructing the data with a smaller matrix and a vector of principal components.

You can find the Python code for this post on github

If you have yet to prepare your development environment, please see this post

Before we get started on PCA, there are a couple of more basic statistics concepts that we need to ensure we understand first.    Assuming absolutely no prior knowledge:

Basic Statistics

Mean

The mean is simply the average value for the data set and is calculated as:

 

\mu = \frac{1}{n}\sum_{k=1}^n x_k

 

Where n is the number of values in the sample and x_k represents each of the individual values in the set.

When we think of calculating the mean, we typically think of applying the calculation to a single 1-D array of data.  The mean however can also be computed across a matrix:

 

\overline{\mu} = \frac{1}{n}\sum_{k=1}^n \overline{x_k}

Standard Deviation

The standard deviation is the spread of the data set or the average distance of a data point from the mean of the data.  Data that has a low standard deviation is clumped around the mean, while data that has a large standard deviation is generally positioned further from the mean.

 

 

[Figure: data spread illustrating the standard deviation]

The standard deviation can be easily calculated using the mean, and again the standard deviation can also be calculated across a matrix:

 

\sigma = \sqrt{\frac{\sum_{k=1}^n(x_k - \mu)^2}{n-1}}

 

Interestingly, though the standard deviation is the average distance from the mean, the sum of differences is divided by n-1 and not n.  The reason for this lies in the fact that most data sets are a sample from a greater population, not the entire population itself; dividing by n-1 gives a more accurate estimate of the population standard deviation.  If you happened to have data for the entire population, feel free to use n, however almost always you will use n-1.

One of the more useful statistical relationships is that in a normally distributed data set, 68.2% of the data lies within one standard deviation of the mean.

[Figure: normally distributed data.  Image sourced from: https://en.wikipedia.org/wiki/File:Standard_deviation_diagram.svg#metadata]

Another term, commonly associated with the standard deviation is the variance.  The variance is simply the square of the standard deviation and is denoted by \sigma^2.

 

Covariance

So now we know how to calculate the variance of a data set, let’s look at covariance.  The variance we calculated above is only for one dimensional data; even if we compute the variance on a 2-dimensional data set, each row or column will be considered independent of the other.  If we want to look at the variance of one data set with respect to another we use covariance.  Say, for example, we have two data sets X and Y; we can compute the covariance:

 

cov(X,Y) = \frac{\sum_{k=1}^n(X_k - \mu_X)(Y_k - \mu_Y)}{n-1}

As we can see from the covariance equation, if we were to compute the covariance of X with respect to itself, we would simply arrive at the variance of X.  One significant difference between variance and covariance is that covariance can be negative whereas variance only gives positive values.  When a negative covariance is calculated, it indicates that one data set decreases in value while the other increases.  Conversely a positive covariance value indicates that the data sets increase together or decrease together.

Now say, we had more than 2 dimensions in our data set and we wanted to investigate the relationships between them.  We can compute a series of covariances and put them into a covariance matrix.  This is particularly useful if we have more than 3 dimensions  and wish to investigate the trends between them as we lose the ability to plot the data in a simple way.  Consider an example where we wanted to look at the relationship between X, Y, and Z data sets.  We would construct a covariance matrix using a combination of covariances:

\overline{cov} = \begin{bmatrix} cov(x,x) & cov(x, y) & cov(x, z) \\ cov(y, x) & cov(y, y) & cov(y, z) \\ cov(z, x) & cov(z, y) & cov(z, z) \end{bmatrix}

A couple of interesting things to note about the covariance matrix:

  1. The diagonal is the variance of the individual columns of the set
  2. The matrix is symmetrical about the diagonal i.e. cov(X, Y) = cov(Y, X)
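In numpy the covariance matrix can be computed directly; a quick illustrative example with fabricated data:

import numpy as np

# Three illustrative data sets; y is correlated with x, z is independent
x = np.random.randn(100)
y = 2 * x + 0.5 * np.random.randn(100)
z = np.random.randn(100)

cov_matrix = np.cov(np.vstack([x, y, z]))  # 3 x 3, symmetric, variances on the diagonal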

Eigenvectors and Eigenvalues

In order to complete PCA we need to use and understand eigenvalues and eigenvectors.  These are two important concepts in linear algebra and the proofs are quite difficult to say the least.  As such we will not cover the proofs of eigenvectors and eigenvalues but rather enough to get an understanding of what they do and how you can use them.  Please pay attention to the Jupyter notebook provided with the post, in particular to the section demonstrating how to calculate the eigenvalues and eigenvectors in Python.

A vector v is said to be an eigenvector of a linear transform if it is a non zero vector that does not change its direction when the linear transformation T is applied to it.  The eigenvalue λ is a scalar that multiplies the eigenvector to give the transformed eigenvector i.e.

T(\mathbf{v}) = \lambda\mathbf{v}

If the linear transform T is a square matrix then:

\mathbf{Av} = \lambda\mathbf{v}

Wikipedia has an excellent visual representation to help explain the physical properties of eigenvalues and eigenvectors.  Consider the following image of the Mona Lisa: the image on the right has been sheared, as you can see from the change in direction of the red arrow and the effect on the image.  The blue arrow is an eigenvector as its direction has not changed between the left and right images; the corresponding eigenvalue is 1 since the length of the eigenvector has not changed.  When we consider eigenvalues and eigenvectors we should consider them as pairs, as they are almost always used in conjunction with each other.

[Figure: the sheared Mona Lisa; the blue arrow is an eigenvector]

Smith (2002) provides a good summary of the properties of eigenvalues and eigenvectors:

  1. You can only find eigenvectors for square matrices
  2. Not all square matrices have eigenvectors
  3. Multiplying an eigenvector by a scalar does not change its direction, only its length
  4. All eigenvectors of a symmetric matrix (such as the covariance matrix) are perpendicular to one another, irrespective of how many dimensions are in the matrix.  This is useful because it is impossible to visually represent more than 3 orthogonal axes.

You can calculate eigenvectors and eigenvalues of small matrices e.g. 3 x 3 by hand, however anything larger requires special techniques.  In practice though you would rarely do this, instead a software package such as Python or Octave would be used.  The Jupyter notebook for the post demonstrates how to calculate the eigenvalues and eigenvectors in Python.
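In Python, numpy does the heavy lifting; a minimal example:

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Each column of eigenvectors pairs with the eigenvalue at the same index
eigenvalues, eigenvectors = np.linalg.eig(A)
print(eigenvalues)   # [3. 1.]
print(eigenvectors)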

The PCA Process

So now we have all of the components required to complete principal component analysis.  So let’s go through the process.  In the Jupyter notebook I have added some fake data, corresponding to the number of coffees consumed vs the number of pages of thesis written.  This data has been loaded into the variable coffee_v_thesis though feel free to use other data sets as you wish.

  1.  Once we have our data we first need to subtract the mean for the corresponding axes.

X = X - \overline{x}

or

x = x - \mu_x

y = y - \mu_y

2.  Next we calculate the covariance matrix for the mean adjusted data.

3.  Calculate the eigenvalues (\lambda) and eigenvectors (E) of the covariance matrix.  Order the eigenvalues such that they are in decreasing order.  Each eigenvalue provides the variance of the data about the mean for the corresponding eigenvector.  The eigenvector with the largest eigenvalue is known as the principal component as it contributes the most variance about the mean for the data set.  The total variance can be determined by summing the eigenvalues:

V_T = \sum_{i}\lambda_i

4.  Now we have the total variance of the data and the proportion of variance provided by each eigenvalue.  This is where we can begin to “compress” the data; first we choose the amount of variance we wish to represent in our model or how much we wish to “compress” the data.  Say we want to represent 98% of variance (f_v) in our model for example.  The next step is to take a cumulative sum of the individual eigenvalues divided by V_T and select  the first eigenvalues that add up to give at least the specified variance f_v.  Once the eigenvalues have been selected take note of the corresponding eigenvectors as these will be needed in the next step.

5. With the selected eigenvalues, take the corresponding selected eigenvectors and assign them to P

\mathbf{P} \subset \mathbf{E}

6. Now with P we can transform the mean adjusted data set.  Once we execute the transform we can no longer consider the data as parameterized by the original x and y parameters, or in the case of the github example coffees consumed vs pages written.  The data now lies in this separate space that is parameterized by P. The data can be transformed by:

x_{transformed} = \mathbf{P}X^T
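Putting the six steps together, a minimal PCA sketch in numpy might look like the following (names are illustrative; see the notebook for the worked example):

import numpy as np

def pca(X, f_v=0.98):
    # X: one observation per row
    mean = X.mean(axis=0)
    X_centred = X - mean                    # step 1: subtract the mean
    cov = np.cov(X_centred.T)               # step 2: covariance matrix
    eigvals, eigvecs = np.linalg.eigh(cov)  # step 3: eigenvalues / eigenvectors
    order = np.argsort(eigvals)[::-1]       # sort into decreasing order
    eigvals, eigvecs = eigvals[order], eigvecs[:, order]
    explained = np.cumsum(eigvals) / eigvals.sum()
    k = np.searchsorted(explained, f_v) + 1  # step 4: keep at least f_v of the variance
    P = eigvecs[:, :k].T                     # step 5: the selected eigenvectors
    X_transformed = P @ X_centred.T          # step 6: transform the data
    return X_transformed, P, mean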

Recovering the Data

Say we have done some work with the data and we now want to return to the original parameterized space.  To do this we need to transform back with P and add back the mean values we subtracted earlier.

x = \overline{x} + (\mathbf{P}^Tx_{transformed})^T
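Continuing the sketch above:

# Return to the original space; lossy unless every eigenvector was kept
X_recovered = (P.T @ X_transformed).T + mean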

One thing to note is that unless you have used all of the eigenvectors during the selection process (step 4 above) you will not be able to completely recreate the original data set.  By not using all of the eigenvectors some of the information has been lost.  The image below demonstrates how much information can be lost if only the principal component is used.

[Figure: data recovered using only the principal component]

This brings us to the end of an initial look into PCA.  If you have any questions or comments regarding this blog post or the github code please leave a comment and I will do my best to answer them.  Also, any corrections / suggestions for improving the post are more than welcome.

Cheers,

tinhatben

References

L. I. Smith, “A tutorial on Principal Components Analysis,” 2002.

Development Environment

Before we delve into the technical details of various machine learning tools I will describe the development environment I am using and what software packages will be needed to run any of the code posted on the tinhatben GitHub account.  That being said, data science and machine learning techniques are not specific to any programming language or tools and can be adapted to any language.  Throughout the course of this blog I will be using Python, specifically Python 2.7.  This blog is not intended to teach Python and requires some knowledge of how to use it.  If you are interested in learning I highly encourage it!

For those familiar with Python: I completely understand that Python 2.7 is destined for obsolescence and that Python 3 is the future! I am using 2.7 out of pure laziness as it is the default Python interpreter installed on Linux Ubuntu. Most of the code should work on Python 3 as well, however I will not be testing to confirm this is the case and there are some compatibility issues between the two versions (Python2vs3).  If you are learning Python for the first time I strongly encourage you to start with Python 3.

Irrespective of your operating system you can install all of the required packages using the pip package manager via the command line if you already have it installed.

pip install scipy numpy matplotlib jupyter pandas scikit-learn

During the Python Windows installation process there is a checkbox to tick if you would like pip to be installed.  If however you would prefer to simply use a Windows installer, some of the packages make them available.

On Debian based Linux distributions you can run (as root):

apt-get install python-pip (there is a similar command using Yum for Fedora systems)

Alternatively if you prefer to use the apt-get package manager you can install by:
apt-get install python-scipy python-numpy python-matplotlib python-pandas python-scikits-learn
pip install jupyter

Note that the pip package manager tends to carry more recent versions of the packages than apt-get.

Assuming each of the above requirements installed correctly you should be able to launch jupyter using the command:
jupyter notebook

This command will launch a web page running locally on your machine.  Jupyter is a very handy tool for running data analytics and provides an easy way to run only specific segments of code.  If the Jupyter page launched correctly you should see something similar to:

[Figure: the Jupyter notebook landing page]

You can start a new notebook, by clicking on New and selecting Python notebook.

While not mandatory, installing Git will make downloading code from our GitHub account a lot easier.  To install Git visit https://git-scm.com/ or on a Debian system: apt-get install git-all

 

If you have got this far, you are good to go! So let’s jump in.  If you are having issues, feel free to post comments and I will help where I can.  Otherwise stackoverflow is a great resource for technical help!

 

See you soon

Ben

print("Hello World")


Hi all, I’m Ben, a PhD student from Sydney, Australia. I am currently researching the use of predictive data analytics in the field of medical science; developing predictive models to help in the treatment of disease. This topic captures two things I am greatly passionate about: data science and using these skills to help people! There are many examples of using Big Data and machine / deep learning to make a billion dollars or power big brother (hence the tin hat part of tinhatben.com), but it is my belief that we can use these tools to do a whole lot more for society than that.

I am writing this blog for two reasons, starting with the most selfish: I would like to use the blog as a record of my notes on particular topics in data science, as well as to improve my understanding of the content itself.  It is well known that by teaching others you improve your understanding of the subject matter, so hopefully I can benefit from the process.  Considering this, I cannot guarantee that the content is error free; I too am learning, so I am sure I will make some mistakes along the way.  Any corrective and constructive feedback is more than welcome.

Secondly, I am an advocate of open source programming and open publishing.  By writing this blog I hope to help anyone else trying to understand the topics at hand; even if only one person finds a post useful then tinhatben.com has been a success.

I hope you find this blog useful and interesting!

Ben
