
A detailed explanation on Ordinary least square regression


In this tutorial we are going to learn about ordinary least squares (OLS) regression, which is usually considered the most basic type of regression. It is the pillar on which all the other regression algorithms are built.


Introduction

Let us understand it with an example. In the given graph, the x-axis shows the price of petrol and the y-axis shows the quantity of petrol purchased.

In this case we are using the price of petrol as the explanatory variable, whereas the quantity of petrol is the response variable.

In this graph we see a downward trend, or negative slope, which means that when the price of petrol is high, people buy less petrol.
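To make the example concrete, here is a minimal sketch that plots some made-up petrol prices against quantities purchased; the numbers are purely hypothetical and only serve to show the negative trend.

import matplotlib.pyplot as plt

# Hypothetical data: petrol price (explanatory) vs. quantity bought (response)
price = [1.0, 1.2, 1.4, 1.6, 1.8, 2.0]   # price per litre
quantity = [48, 45, 40, 36, 31, 27]      # litres purchased

plt.scatter(price, quantity)
plt.xlabel("Price of petrol")
plt.ylabel("Quantity of petrol")
plt.title("Higher price, lower quantity (negative slope)")
plt.show()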

The main idea behind ordinary least squares is that we are trying to find the best-fit line, that is, the line that minimizes the distance between the line and the data points.

So the question arises: how do we find the best-fit line? As the name suggests, we use squares to find it.

We take the square of the distance between the line and each data point, and the goal is for the total area of these squares to be as small as possible.

If the total area of the squares is small, it means the line is very close to each and every data point, whereas if the area of the squares is very big, our approximating function is a poor fit.

So the main task is to find the slope and intercept values for which the total area of the squares is minimum.
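As a quick illustration, here is a minimal Python sketch (with made-up data) that computes the total squared error for two candidate lines; the line that follows the points more closely gets the smaller total.

# Minimal sketch: total squared error for a candidate line y = m*x + b
# The data below are made up purely for illustration.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # roughly y = 2x

def total_squared_error(m, b, xs, ys):
    # sum of (actual - predicted)^2 over all data points
    return sum((y - (m * x + b)) ** 2 for x, y in zip(xs, ys))

print(total_squared_error(2.0, 0.0, xs, ys))  # good candidate line -> small error
print(total_squared_error(0.5, 1.0, xs, ys))  # poor candidate line -> large error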

The math behind Ordinary least squares regression

Here ε(y, y') = (y − y')² tells us the error between the actual output y and the predicted output y'. We will refer to this as equation 1.

Note that ε(y, y') is never negative: it is either 0, which means no error, or positive.

Let’s do the calculation. Suppose that the data are the N points (x1, y1), (x2, y2), (x3, y3), …, (xN, yN), and we are going to try to approximate them with the line y = mx + b. We want to find the values of m and b that minimize the total error.

The total error is the sum of the errors at each data point: E(m, b) = ε(y1, m·x1 + b) + ε(y2, m·x2 + b) + … + ε(yN, m·xN + b). We will refer to this as equation 2.

In other words, for each data point we take the actual value yi, compute where the line y = mx + b predicts it should be (that is m·xi + b), and calculate the error of that prediction. We add up the error for each point that the line was supposed to predict, and that is the total error for the line.

Now recall the first equation, ε(y, y') = (y − y')². Substituting equation 1 into equation 2, the total error becomes E(m, b) = Σ (yi − (m·xi + b))², which we will call equation 3.

Now let us expand equation 3. Expanding each term (yi − m·xi − b)², we get yi² − 2m·xi·yi − 2b·yi + m²·xi² + 2m·b·xi + b², that is, six components for every data point.

Now we add up each of the six components separately over all N data points. The final result that we get is E(m, b) = Σ yi² − 2m Σ xi·yi − 2b Σ yi + m² Σ xi² + 2m·b Σ xi + N·b².
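If you would like to verify the algebra, here is a small optional sketch using the sympy library to expand one squared error term symbolically; the symbols x, y, m and b below stand for a single data point and the line's parameters.

import sympy as sp

# Symbols for one data point (x, y) and the line parameters m, b
x, y, m, b = sp.symbols("x y m b")

# Expand the squared error for a single point: (y - (m*x + b))**2
expanded = sp.expand((y - (m * x + b)) ** 2)
print(expanded)
# b**2 + 2*b*m*x - 2*b*y + m**2*x**2 - 2*m*x*y + y**2
# (the six components, possibly printed in a different order)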

Now let us write the sums appearing in these six components in short form: Syy = Σ yi², Sxy = Σ xi·yi, Sy = Σ yi, Sxx = Σ xi², and Sx = Σ xi.

With these short forms we can write the total error as E(m, b) = Syy − 2m·Sxy − 2b·Sy + m²·Sxx + 2m·b·Sx + N·b².

Now we find the values of m and b that minimize this error by differentiating the equation with respect to m and b and setting the derivatives to zero: ∂E/∂m = −2Sxy + 2m·Sxx + 2b·Sx = 0 and ∂E/∂b = −2Sy + 2m·Sx + 2N·b = 0. Solving these two equations gives m = (N·Sxy − Sx·Sy) / (N·Sxx − Sx²) and b = (Sy − m·Sx) / N.
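Here is a minimal sketch (again with made-up data) that computes m and b directly from these sums and cross-checks the result against numpy.polyfit:

import numpy as np

# Made-up data, for illustration only
xs = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
ys = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

N = len(xs)
Sx, Sy = xs.sum(), ys.sum()
Sxx, Sxy = (xs ** 2).sum(), (xs * ys).sum()

# Closed-form solution from the derivation above
m = (N * Sxy - Sx * Sy) / (N * Sxx - Sx ** 2)
b = (Sy - m * Sx) / N
print(m, b)

# Cross-check with numpy's degree-1 polynomial fit
print(np.polyfit(xs, ys, 1))  # returns [slope, intercept]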

Now we have found the values of m and b. But what if all the points happen to lie on a (non-vertical) straight line? Then there are some numbers a and c such that yi = a·xi + c for all i. We hope that the formulas for m and b give m = a and b = c, and the short check below illustrates that they do.
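Continuing the sketch above, here is a small check with perfectly collinear (made-up) points, where the recovered slope and intercept match the line they were generated from:

import numpy as np

# Points generated exactly from the line y = 3x + 1 (a = 3, c = 1)
xs = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
ys = 3 * xs + 1

N = len(xs)
Sx, Sy = xs.sum(), ys.sum()
Sxx, Sxy = (xs ** 2).sum(), (xs * ys).sum()

m = (N * Sxy - Sx * Sy) / (N * Sxx - Sx ** 2)
b = (Sy - m * Sx) / N
print(m, b)  # 3.0 1.0, i.e. m = a and b = c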

Ordinary least squares regression in Python using statsmodels

We are going to implement ordinary least squares regression in Python using the statsmodels library.

import pandas as pd
import statsmodels.api as sm
from sklearn.datasets import load_boston  # note: removed in scikit-learn 1.2, so this needs an older version

# Load the Boston housing data into a DataFrame
boston = load_boston()
bos = pd.DataFrame(boston.data, columns=boston.feature_names)
bos["PRICE"] = boston.target

X = bos["RM"]               # explanatory variable: average number of rooms
y = bos["PRICE"]            # response variable: median house price
X = sm.add_constant(X)      # add the intercept term b
model = sm.OLS(y, X).fit()  # note: sm.OLS takes the response y first, then X
print(model.summary())
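Continuing from the code above, once the model is fitted you can pull the estimated intercept and slope out of model.params and use model.predict to get fitted values, for example:

print(model.params)             # the fitted intercept (const) and slope for RM
predictions = model.predict(X)  # fitted prices for the data used above
print(predictions[:5])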

Wrap up this session

In this tutorial, we learned the math behind OLS regression and how to implement it in Python using statsmodels.

So if you like this blog post, please like it and subscribe to our data spoof community to get real-time updates. You can follow our Facebook page to get notifications whenever we upload a post, so that you never miss any update from us.