
Multicollinearity in Data

Last Updated : 27 Sep, 2021

In a regression model, the dependent variable should have a strong relationship with the independent variables, but the independent variables should not have strong correlations among themselves. Collinearity is a linear association between explanatory variables: two variables are perfectly collinear if there is an exact linear relationship between them.
Multicollinearity refers to a situation in which two or more explanatory variables in a multiple regression model are highly linearly related. We have perfect multicollinearity if the correlation between independent variables is exactly 1 or -1. In practice, we rarely face perfect multicollinearity in a data set. More commonly, the issue arises when there is an approximately linear relationship between two or more independent variables.
In simple words, multicollinearity is an event in which one or more of the independent variables are strongly correlated with one another. In such cases, we should usually use only one of the correlated independent variables.
VIF (Variance Inflation Factor) is an indicator of the existence of multicollinearity. statsmodels provides a function to calculate the VIF for each explanatory variable, and a value greater than 10 is the rule of thumb for the possible existence of high multicollinearity. The general guideline for VIF values is as follows: VIF = 1 means no correlation exists; 1 < VIF < 5 means moderate correlation exists; VIF > 10 suggests high multicollinearity.
 

$VIF_{i} = \frac{1}{1 - R_{i}^{2}}$, where $R_{i}^{2}$ is the coefficient of determination obtained by regressing the $i$-th independent variable on all the other independent variables.
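To make the formula concrete, here is a minimal sketch on synthetic data (not from this article's dataset): each column is regressed on the remaining columns, and the resulting R² is plugged into the expression above.

python3

import numpy as np
import statsmodels.api as sm

# synthetic predictors: x2 is deliberately almost a multiple of x1
rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = 2 * x1 + rng.normal(scale=0.1, size=200)
x3 = rng.normal(size=200)
X = np.column_stack([x1, x2, x3])

# VIF_i = 1 / (1 - R_i^2), with R_i^2 from regressing column i on the rest
for i in range(X.shape[1]):
    others = np.delete(X, i, axis=1)
    r2 = sm.OLS(X[:, i], sm.add_constant(others)).fit().rsquared
    print(f"VIF for x{i + 1}: {1.0 / (1.0 - r2):.2f}")

Here x1 and x2 get very large VIFs while x3 stays close to 1, matching the guideline above.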
 


What Causes Multicollinearity? 
The principal types are: 

  • Data-based multicollinearity: caused by poorly designed experiments, data that is purely observational, or data collection methods that cannot be manipulated. In some cases, variables may simply be highly correlated (usually because the data come from purely observational studies), and there is no error on the researcher's part. For this reason, you should conduct experiments whenever possible, setting the levels of the predictor variables in advance.
  • Structural multicollinearity: caused by you, the researcher, when creating new predictor variables from existing ones, for example adding x² to a model that already contains x (see the sketch below).
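The sketch below illustrates the structural case with a hypothetical derived term: x² computed from an existing positive-valued predictor x is almost perfectly correlated with x itself.

python3

import numpy as np

# structural multicollinearity: a predictor derived from an existing one
# (here x**2 from x) is strongly correlated with the original over this range
rng = np.random.default_rng(1)
x = rng.uniform(1, 10, size=500)
x_sq = x ** 2
print(np.corrcoef(x, x_sq)[0, 1])  # close to 1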


Causes of multicollinearity can also include:

  • Insufficient data. In some cases, collecting more data can resolve the issue.
  • Dummy variables may be used incorrectly. For example, the researcher may fail to exclude one category, or may add a dummy variable for every category (e.g. spring, summer, autumn, winter); see the encoding sketch after this list.
  • Including a variable in the regression that is a combination of other variables. For example, including "total investment income" when total investment income = income from stocks and bonds + income from savings interest.
  • Including identical (or almost identical) variables. For example, weight in pounds and weight in kilograms, or investment income and savings/bond income.
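To illustrate the dummy-variable point, here is a minimal pandas sketch (the column name and categories are hypothetical): encoding every category produces columns that sum to 1 in every row, which is perfectly collinear with the model's intercept, so one category should be dropped.

python3

import pandas as pd

# encoding all four seasons makes the dummy columns sum to 1 in every row,
# which is perfectly collinear with an intercept; drop_first avoids this
df = pd.DataFrame({"season": ["spring", "summer", "autumn", "winter"] * 3})
print(pd.get_dummies(df["season"]).sum(axis=1).unique())     # always 1
print(pd.get_dummies(df["season"], drop_first=True).head())  # 3 columns, not 4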

Example: You may also find that multicollinearity is a feature of the design of the experiment.
In the fabric manufacturer case, we can easily see that advertising and volume are correlated predictor variables, leading to major swings in the estimated impact of advertising depending on whether volume is or is not included in the model. In such a test, the product manufacturer may have introduced multicollinearity between volume and advertising as part of the experimental design, by assigning a high ad budget to cities with smaller stores and a low ad budget to cities with larger stores.
If you are able to re-do the market test, you can address this issue by restructuring the experiment to ensure a fair mix of high ad/low volume, high ad/high volume, low ad/high volume, and low ad/low volume stores. This would let you remove the multicollinearity from the data set. It is often not possible, though, to re-do an experiment; that is why it is important to analyze the design of a controlled experiment very carefully before starting, so that you can avoid accidentally causing such problems. If you have found multicollinearity caused by the experimental design and you cannot re-do the experiment, you can deal with it by including controls. In the case of the fabric manufacturer, it would be essential to include volume in the model as a control in order to get a better estimate of the true impact of advertising. Other approaches to addressing multicollinearity in cases like this include shrinkage estimators such as principal components regression or partial least-squares regression; a principal components regression sketch follows.
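As a brief illustration of one such remedy, here is a minimal principal components regression sketch using scikit-learn on synthetic data (the data, the single-component choice, and the library are assumptions for illustration, not part of the original example).

python3

import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline

# two almost-identical predictors (synthetic, illustrative only)
rng = np.random.default_rng(2)
x1 = rng.normal(size=300)
X = np.column_stack([x1, x1 + rng.normal(scale=0.05, size=300)])
y = 3 * x1 + rng.normal(size=300)

# principal components regression: project the correlated predictors onto
# orthogonal components, then fit an ordinary regression on those components
pcr = make_pipeline(PCA(n_components=1), LinearRegression())
pcr.fit(X, y)
print("Training R^2:", pcr.score(X, y))

In practice the number of components would be tuned, e.g. by cross-validation, rather than fixed at one.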
Code: Python code to remove multicollinearity from the dataset using the VIF factor.

python3

# read the dataset
import pandas as pd
data = pd.read_csv('https://docs.google.com/spreadsheets/d/e/2PACX-1vQRtMKSAzDVoUFeP_lvpxSPt0pb7YR3_SPBdnq0_2nIgfZUMB8fMgJXaMETqLmrV3uw2yOqkZLEcTvt/pub?output=csv')
data.head(3)
 
# separate the target (price) from the independent variables
Y = data["price"]
iv = data.columns
iv = iv.delete(0)  # drop the first column, price
X = data[iv]
 
# calculate the variance inflation factor
from statsmodels.stats.outliers_influence import variance_inflation_factor as vif
 
# compute the VIF for each independent variable
[vif(data[iv].values, index) for index in range(len(iv))]
 
# remove multicollinearity from the dataset using VIF:
# repeatedly drop the variable with the highest VIF until none exceeds 10
while True:
    vif_list = [vif(data[iv].values, index) for index in range(len(iv))]
    maxvif = max(vif_list)
    drop_index = vif_list.index(maxvif)
    print("Max VIF value is ", maxvif)
    print("For Independent variable", iv[drop_index])

    if maxvif > 10:
        print("Deleting", iv[drop_index])
        iv = iv.delete(drop_index)
        print("Final Independent_variables ", iv)
    else:
        break


Output: 
 

Max VIF value is  15.213540834822062
For Independent variable bedrooms
Deleting bedrooms
Final Independent_variables  Index(['lotsize', 'bathrms', 'stories', 'driveway', 'recroom', 'fullbase',
       'gashw', 'airco', 'garagepl', 'prefarea'],
      dtype='object')
Max VIF value is  7.738793387948324
For Independent variable bathrms


We can notice that the VIF analysis has eliminated bedrooms, since its VIF was greater than 10, while the remaining variables, whose VIFs fall below the threshold, have been retained. To test how the model performs, the common practice is to split the dataset 80/20 (or 70/30) into train/test sets, build the model on the training dataset, and then apply the trained model to the test dataset to evaluate its performance. We can also evaluate the performance of the model by finding the R² score, as sketched below.
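A minimal sketch of that evaluation, assuming data, Y, and the pruned iv from the VIF loop above are still in scope (the split ratio and random seed are arbitrary choices):

python3

from sklearn.linear_model import LinearRegression
from sklearn.metrics import r2_score
from sklearn.model_selection import train_test_split

# 80/20 train/test split on the variables retained by the VIF loop
X_train, X_test, y_train, y_test = train_test_split(
    data[iv], Y, test_size=0.2, random_state=42)

model = LinearRegression().fit(X_train, y_train)
print("Test R^2:", r2_score(y_test, model.predict(X_test)))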
 


