
Implementation of Perceptron Algorithm using Python

Perceptron

The Perceptron is the first step towards learning about Neural Networks. It is a model inspired by the brain, following the concept of the neurons present in it: several inputs are sent to a neuron along with some weights, and the neuron fires depending on whether the resulting value crosses the threshold set in that neuron. For the theory of the Perceptron, please go through our article on The Perceptron: A theoretical approach.
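In code, a single neuron is just a weighted sum pushed through a threshold. Here is a minimal sketch of that idea (the inputs, weights and threshold below are made-up illustrative values):

# a neuron with three inputs: it fires (outputs 1) when the weighted sum crosses the threshold
inputs = [0.5, 1.0, -0.8]
weights = [0.4, 0.6, 0.2]
threshold = 0.0

weighted_sum = sum(x * w for x, w in zip(inputs, weights))   # 0.2 + 0.6 - 0.16 = 0.64
output = 1 if weighted_sum > threshold else 0
print(output)   # 1 -- the neuron fires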

Algorithm of Perceptron

The perceptron algorithm is divided into three phases:

  • Initialization phase: In this phase, we set the weights to any random values. They can be positive or negative.
  • Training phase: 
    • For T iterations
      • For each input vector
        • Forward Propagation: Compute the activation and predicted output of each neuron j using the following equation:

y_j = g\left(\sum_{i=0}^{m} w_{ij} x_i\right) = \begin{cases} 1 & \text{if } \sum_{i=0}^{m} w_{ij} x_i > 0 \\ 0 & \text{otherwise} \end{cases}

        • Backward Propagation: Update the weights using the following equation:

w_{ij} \leftarrow w_{ij} + \eta \, (t_j - y_j) \, x_i

  • Recall Phase: Compute the activation of neuron j using the final weights and the same thresholded equation as in Forward Propagation:

y_j = g\left(\sum_{i=0}^{m} w_{ij} x_i\right) = \begin{cases} 1 & \text{if } \sum_{i=0}^{m} w_{ij} x_i > 0 \\ 0 & \text{otherwise} \end{cases}
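Putting the three phases together, here is a compact NumPy sketch of the whole algorithm (a batch version for brevity, with an assumed small-weight initialization; the step-by-step implementation below updates the weights per input vector instead):

import numpy as np

def train_perceptron(inputs, targets, eta=0.25, T=4):
    # inputs: (n_samples, m+1) array that already contains a bias column
    # targets: (n_samples, 1) array of 0/1 labels
    weights = np.random.rand(inputs.shape[1], 1) * 0.1 - 0.05       # small random weights
    for _ in range(T):                                              # T iterations
        activations = np.where(np.dot(inputs, weights) > 0, 1, 0)   # forward propagation
        weights += eta * np.dot(inputs.T, targets - activations)    # backward propagation
    return weights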

 

Backpropagation

It is a method for calculating the error when the value of the Activation function and the Target value mismatch. By using Backpropagation we automatically update the weights. For example:

  1. Consider the output of the Activation function to be A(i) and the Target value to be T(i).
  2. Calculate the error between T and A, that is, ‘T(i) – A(i)’.
  3. Let the input value corresponding to the above A(i) be x(k).
  4. Weighted Error: W_error(k,i) = (T(i) – A(i)) * x(k)
  5. New_weight = Old_weight + (eta) * W_error

‘eta’ is the Learning Rate. It helps in controlling the rate of change of the weights: if the weights change very rapidly the system can become unstable and never settle down, so in such scenarios the Learning Rate (eta) keeps the updates small. Typically we take eta in the range 0.1 < eta < 0.4.
Now whenever we get any mismatch between the Activation and Target values, we do Backpropagation to update our weights.
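As a quick worked example of steps 1–5 (the numbers here are illustrative, not taken from the dataset used later):

A, T = 1, 0                 # the neuron fired, but the target says it should not have
x = 2.0                     # the input value feeding this weight
eta = 0.25                  # learning rate
old_weight = 0.5
w_error = (T - A) * x       # (0 - 1) * 2.0 = -2.0
new_weight = old_weight + eta * w_error
print(new_weight)           # 0.0 -- the weight shrinks, so the neuron is less likely to fire here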

Implementation of Perceptron

Creating the Input Matrix

In [49]:
import numpy as np

values = ([2.7810836, 2.550537003],
          [1.465489372, 2.362125076],
          [3.396561688, 4.400293529],
          [1.38807019, 1.850220317],
          [3.06407232, 3.005305973],
          [7.627531214, 2.759262235],
          [5.332441248, 2.088626775],
          [6.922596716, 1.77106367],
          [8.675418651, -0.242068655],
          [7.673756466, 3.508563011])

print("Training input values without Bias\n", values)
Training input values without Bias
 ([2.7810836, 2.550537003], [1.465489372, 2.362125076], [3.396561688, 4.400293529], [1.38807019, 1.850220317], [3.06407232, 3.005305973], [7.627531214, 2.759262235], [5.332441248, 2.088626775], [6.922596716, 1.77106367], [8.675418651, -0.242068655], [7.673756466, 3.508563011])

Adding Bias to the Input Matrix

The bias value is like a cheat code that makes training more reliable. Now we will add the bias value to the Input matrix using the ‘concatenate()’ method of the numpy library.

In [50]:
test2 = [[-1]] * len(values)                        # one bias value of -1 per input row
values = np.concatenate((test2, values), axis = 1)  # prepend the bias column
print("Training input values with bias in it\n",values)
Training input values with bias in it
 [[-1.          2.7810836   2.550537  ]
 [-1.          1.46548937  2.36212508]
 [-1.          3.39656169  4.40029353]
 [-1.          1.38807019  1.85022032]
 [-1.          3.06407232  3.00530597]
 [-1.          7.62753121  2.75926224]
 [-1.          5.33244125  2.08862677]
 [-1.          6.92259672  1.77106367]
 [-1.          8.67541865 -0.24206865]
 [-1.          7.67375647  3.50856301]]

Creating Random Weights

We will generate random values of weights for the Weight vector using the ‘rand()’ method of the numpy library. The size of the Weight vector is equal to the total number of values in a row of the Input matrix; in our case it is ‘3’, including the bias value.

In [51]:
m = 3     # number of elements in each row of inputs, including the bias
n = 1     # number of output neurons
weights = np.random.rand(m, n)*0.1 - 0.5   # random starting weights
print("Initial random weights\n",weights)
Initial random weights
 [[-0.45303395]
 [-0.46630376]
 [-0.47939269]]

Target values Matrix

In [52]:
final = ([[0],[0],[0],[0],[0],[1],[1],[1],[1],[1]])   # first five inputs belong to class 0, the last five to class 1
print("Training data target values are\n", final)
Training data target values are
 [[0], [0], [0], [0], [0], [1], [1], [1], [1], [1]]

Method for updating Weights

If the value of the Activation function is not equal to the Target value, then we use this method to update the weights. It uses the Learning Rate (eta) introduced above to control the rate of change of the weights; in our case we take eta = ‘0.25’.

In [53]:
def updateWeights(weights, inputs, activation, targets):
    eta = 0.25   # learning rate
    # move each weight by eta * (target - activation) * input
    weights += eta*np.dot(np.transpose(inputs), targets - activation)
    return weights
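As a quick sanity check of the shapes involved (a standalone toy example using the ‘updateWeights()’ method just defined, not part of the training run):

X = np.array([[-1.0, 2.0], [-1.0, 4.0]])   # two input rows with a bias column
t = np.array([[1], [0]])                   # target values
a = np.array([[0], [1]])                   # activations that both miss their targets
w = np.zeros((2, 1))                       # two weights, starting at zero
w = updateWeights(w, X, a, t)              # adds eta * X.T.dot(t - a)
print(w)                                   # [[ 0. ] [-0.5]]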

Creating method for Learning

This method calculates the Activation value of each input row and compares it with the Target value; if they mismatch, the ‘updateWeights()’ method is called to update the weights, and the remaining Activation values are then calculated with the new weights.

In [54]:
def prediction(inputs, weights, targets):
    # 'ack' stores the thresholded activation of each input row
    ack = [0] * len(inputs)
    for i in range(0, len(inputs)):
        activation = 0
        for j in range(0, len(weights)):
            activation += inputs[i][j] * weights[j]
        ack[i] = np.where(activation > 0, 1, 0)   # fire only if the weighted sum is positive
        # compare with the target and update the weights on a mismatch
        if(targets[i] != ack[i]):
            weights = updateWeights(weights, inputs, ack[i], targets)
        print(ack[i])
    return weights

Training our model and extracting stable weights

Now we will train our model. To do so we will use all the above data and methods and will execute the ‘prediction()’ method ‘4’ times, by which point the Activation values equal the Target values and the weights have become stable.

In [55]:
iterations = 4
for temp in range(0, iterations):
    print("\nIteration ",temp+1,"\n")
    weights = prediction(values, weights, final)
    
print("\nTrained Weights\n", weights)
Iteration  1 

[0]
[0]
[0]
[0]
[0]
[0]
[1]
[1]
[1]
[1]

Iteration  2 

[1]
[1]
[0]
[0]
[0]
[1]
[1]
[1]
[1]
[1]

Iteration  3 

[0]
[0]
[0]
[0]
[0]
[1]
[1]
[1]
[1]
[1]

Iteration  4 

[0]
[0]
[0]
[0]
[0]
[1]
[1]
[1]
[1]
[1]

Trained Weights
 [[ 0.79696605]
 [ 2.54399373]
 [-5.09227188]]
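To see what these trained weights mean, the neuron now fires exactly when -0.797 + 2.544*x1 - 5.092*x2 > 0 (remember that the bias input is -1, so the first weight acts as a threshold). A quick check in the same session, reusing the ‘values’, ‘final’ and ‘weights’ variables from above, confirms that this rule reproduces the training targets:

preds = np.where(np.dot(values, weights) > 0, 1, 0)
print(np.array_equal(preds, np.array(final)))   # expect True: iteration 4 already matched every target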

Testing our own data (Recall)

We will provide some new data and our Perceptron will give us the result corresponding to it. For this we will use the Trained Weights that we extracted above and a new method, ‘perceptronPredict()’.

In [57]:
def perceptronPredict(weights, newInput):
    activation = np.dot(newInput, weights)        # weighted sum for each new input row
    activation = np.where(activation > 0, 1, 0)   # threshold into class 0 or 1
    print(activation)


newInput = ([-1.0, 1.786745, 2.94569],
            [-1.0, 7.023323, 1.9999])
perceptronPredict(weights, newInput)
[[0]
 [1]]

Hence we got a result corresponding to each input vector that we supplied in the ‘newInput’ matrix.

The above post gave us an insight into how a basic Neural Network works and how it learns. In our next post we will build a Multi-layer Neural Network!!
So stay tuned and keep learning!!!
