
Ridge Regression Using Python

Hi everyone! Today, we will learn about ridge regression: the mathematics behind it and how to implement it using Python!

Foundation for implementation

Sum of squares function
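
Linear regression picks the weights that minimize the sum of squared residuals. Written out (with N observations and predictions \hat{y}_i = w^T x_i), the sum of squares error is:

E(w) = \sum_{i=1}^{N} (y_i - \hat{y}_i)^2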

Gaussian distribution and probability
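
The probabilistic view treats each observation as a draw from a Gaussian. The density of a Gaussian with mean µ and variance σ² is:

p(y \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\frac{(y - \mu)^2}{2\sigma^2} \right)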

Likelihood function
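
For N independent samples drawn from this Gaussian (with σ² held fixed), the likelihood is the product of the individual densities, and setting the derivative of its logarithm with respect to µ to zero gives the sample mean:

L(\mu) = \prod_{i=1}^{N} p(y_i \mid \mu, \sigma^2), \qquad \frac{\partial}{\partial \mu} \log L(\mu) = 0 \;\Rightarrow\; \hat{\mu} = \frac{1}{N} \sum_{i=1}^{N} y_i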

Hence, at this value of µ, the likelihood function is maximized.

Why do we need L2 regularization?
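
Maximum likelihood (plain least squares) treats every data point equally, so a couple of outliers can drag the fitted line away from the underlying trend. L2 regularization counteracts this by penalizing large weights, which is exactly what the demo below illustrates by injecting two artificial outliers into the data.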

Cost function and penalties
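
Ridge regression adds a squared L2 penalty on the weights to the sum-of-squares cost, with λ controlling how strongly large weights are punished. In the notation above, the regularized objective is:

J(w) = \sum_{i=1}^{N} (y_i - w^T x_i)^2 + \lambda \|w\|^2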

Implementation of Ridge Regression using Python

Now, let’s see how to implement Ridge regression or L2 regularization in Python.

Importing the libraries

In [14]:
import numpy as np              #importing the numpy package with alias np
import matplotlib.pyplot as plt #importing the matplotlib.pyplot as plt

Setting number of observations

In [15]:
No_of_observations = 50                         #Setting the number of observations to 50

Defining input and output

In [16]:
X_input = np.linspace(0,10,No_of_observations)        #Generating 50 equally-spaced data points between 0 and 10
Y_output = 0.5*X_input + np.random.randn(No_of_observations)   #Y_output[i] = 0.5*X_input[i] + standard Gaussian noise
In [17]:
Y_output[-1]+=30   #adding 30 to the last element of Y_output to create an outlier
Y_output[-2]+=30   #adding 30 to the second-last element of Y_output to create an outlier

Visualizing input and output

In [18]:
plt.scatter(X_input, Y_output)
plt.title('Relationship between Y and X[:, 1]')
plt.xlabel('X[:, 1]')
plt.ylabel('Y')
plt.show()
In [19]:
X_input = np.vstack([np.ones(No_of_observations), X_input]).T       #adding a column of ones (bias term) to X

Finding weights

In [20]:
w_maxLikelihood = np.linalg.solve(np.dot(X_input.T, X_input), np.dot(X_input.T, Y_output))     #finding weights for maximum likelihood estimation
Y_maxLikelihood = np.dot(X_input, w_maxLikelihood)                                     #Finding predicted Y corresponding to w_maxLikelihood
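
The call above solves the normal equations directly rather than inverting X^T X; the maximum likelihood (ordinary least squares) solution it computes is:

w_{ML} = (X^T X)^{-1} X^T y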

Visualizing maximum likelihood function

In [21]:
plt.scatter(X_input[:,1], Y_output)
plt.plot(X_input[:,1],Y_maxLikelihood, color='red')
plt.title('Graph of maximum likelihood method (red line: predictions)')
plt.xlabel('X[:, 1]')
plt.ylabel('Y')
plt.show()

Defining the L2 coefficient

In [23]:
L2_coeff = 1000    #setting L2 regularization parameter to 1000
w_maxAPosterior = np.linalg.solve(np.dot(X_input.T, X_input)+L2_coeff*np.eye(2), np.dot(X_input.T, Y_output))     #Finding weights for MAP estimation
Y_maxAPosterior = np.dot(X_input, w_maxAPosterior)            #Finding predicted Y corresponding to w_maxAPosterior
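
Adding λI to X^T X before solving (here λ = 1000 and I is the 2×2 identity) yields the closed-form ridge/MAP estimate:

w_{MAP} = (X^T X + \lambda I)^{-1} X^T y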

MAP v/s Maximum Likelihood

In [28]:
plt.scatter(X_input[:,1], Y_output)
plt.plot(X_input[:,1],Y_maxLikelihood, color='red',label="maximum likelihood")
plt.plot(X_input[:,1],Y_maxAPosterior, color='green', label="map")
plt.title('Graph of MAP v/s ML method')
plt.legend()
plt.xlabel('X[:, 1]')
plt.ylabel('Y')
plt.show()
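
As a quick sanity check, the same MAP weights can be recovered with scikit-learn's Ridge estimator. This is a minimal sketch, assuming scikit-learn is installed; fit_intercept=False is needed because the bias column is already part of X_input, so, like the manual solution above, the bias weight is penalized too.

from sklearn.linear_model import Ridge               #assumes scikit-learn is installed

ridge = Ridge(alpha=L2_coeff, fit_intercept=False)   #alpha plays the role of lambda
ridge.fit(X_input, Y_output)                         #fit on the same design matrix
print(ridge.coef_)                                   #should closely match w_maxAPosterior
print(w_maxAPosterior)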

Ridge regression using Python: Conclusion

Thus, the green line (MAP) fits the underlying trend well and doesn't bend towards the outliers, while the red line (ML) fails to do so.

So, guys, with this I conclude this tutorial. In the next tutorial, I will talk about L1 regularization or Lasso Regression. Stay tuned!
