Logistic Regression using Statsmodels
  • Last Updated : 28 Jul, 2020

Prerequisite: Understanding Logistic Regression

Logistic regression is a type of regression analysis used to estimate the probability of a certain event occurring. It is the best-suited type of regression for cases where the dependent variable is categorical and can take only discrete values.
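
Under the hood, logistic regression models the log-odds of the event as a linear combination of the independent variables and converts that score into a probability with the logistic (sigmoid) function. A minimal sketch of this mapping, using made-up coefficient values:

    import numpy as np

    def sigmoid(z):
        # maps any real-valued score to a probability in (0, 1)
        return 1 / (1 + np.exp(-z))

    # hypothetical linear score: coefficient * feature + intercept
    score = 0.5 * 3.2 - 1.0
    print(sigmoid(score))   # probability of the event, about 0.65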

The dataset :
In this article, we will predict whether a student will be admitted to a particular college, based on their gmat and gpa scores and their work experience. The dependent variable here is binary: it takes strictly one of two values, i.e. admitted or not admitted.

Building the Logistic Regression model :

Statsmodels is a Python module which provides various functions for estimating different statistical models and performing statistical tests.

  • First, we define the set of dependent (y) and independent (X) variables. If the dependent variable is in non-numeric form, it is first converted to numeric using dummies. The file used in the example for training the model can be downloaded here.
  • Statsmodels provides a Logit() function for performing logistic regression. The Logit() function accepts y and X as parameters and returns a Logit object. The model is then fitted to the data.

    # importing libraries
    import statsmodels.api as sm
    import pandas as pd 
    # loading the training dataset 
    df = pd.read_csv('logit_train1.csv', index_col = 0)
    # defining the dependent and independent variables
    Xtrain = df[['gmat', 'gpa', 'work_experience']]
    ytrain = df[['admitted']]
    # building the model and fitting the data
    log_reg = sm.Logit(ytrain, Xtrain).fit()

    Output :

    Optimization terminated successfully.
             Current function value: 0.352707
             Iterations 8

    In the output, ‘Iterations’ refers to the number of times the model iterates over the data, trying to optimise the coefficients. By default, a maximum of 35 iterations are performed, after which the optimisation fails.
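
    If the optimiser hits that cap before converging, the limit can be raised through the maxiter argument of fit() (the value below is illustrative):

    # raising the iteration cap beyond the default of 35
    log_reg = sm.Logit(ytrain, Xtrain).fit(maxiter=100)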

    The summary table :

    The summary table below gives us a descriptive summary of the regression results.

    # printing the summary table
    print(log_reg.summary())

    Output :

                               Logit Regression Results                           
    ==============================================================================
    Dep. Variable:               admitted   No. Observations:                   30
    Model:                          Logit   Df Residuals:                       27
    Method:                           MLE   Df Model:                            2
    Date:                Wed, 15 Jul 2020   Pseudo R-squ.:                  0.4912
    Time:                        16:09:17   Log-Likelihood:                -10.581
    converged:                       True   LL-Null:                       -20.794
    Covariance Type:            nonrobust   LLR p-value:                 3.668e-05
    ===================================================================================
                          coef    std err          z      P>|z|      [0.025      0.975]
    -----------------------------------------------------------------------------------
    gmat               -0.0262      0.011     -2.383      0.017      -0.048      -0.005
    gpa                 3.9422      1.964      2.007      0.045       0.092       7.792
    work_experience     1.1983      0.482      2.487      0.013       0.254       2.143
    ===================================================================================
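
    Note that, unlike some libraries, statsmodels' Logit() does not add an intercept automatically, which is why no constant term appears in the table above. If an intercept is wanted, it can be added with sm.add_constant (a minimal sketch):

    # optionally add an intercept column before fitting
    Xtrain_const = sm.add_constant(Xtrain)
    log_reg_const = sm.Logit(ytrain, Xtrain_const).fit()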

    Explanation of some of the terms in the summary table:

    • coef : the coefficients of the independent variables in the regression equation.
    • Log-Likelihood : the natural logarithm of the Maximum Likelihood Estimation (MLE) function. MLE is the optimisation process of finding the set of parameters which results in the best fit.
    • LL-Null : the value of the log-likelihood of the model when no independent variable is included (only an intercept is included).
    • Pseudo R-squ. : a substitute for the R-squared value of least squares linear regression. It is computed as 1 minus the ratio of the log-likelihood of the full model to that of the null model (McFadden's pseudo R-squared): here, 1 − (−10.581)/(−20.794) ≈ 0.4912.
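
    As a sanity check, the fitted results object exposes these quantities directly, so the pseudo R-squared can be reproduced by hand (a quick sketch using standard statsmodels result attributes):

    # McFadden's pseudo R-squared: 1 - llf / llnull
    print(log_reg.llf)       # -10.581  (Log-Likelihood)
    print(log_reg.llnull)    # -20.794  (LL-Null)
    print(1 - log_reg.llf / log_reg.llnull)   # 0.4912, matches Pseudo R-squ.
    print(log_reg.prsquared)                  # same value, computed by statsmodels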

    Predicting on New Data :

    Now we shall test our model on new test data. The test data is loaded from this csv file.

    The predict() function is used for performing predictions. The predictions obtained are fractional values (between 0 and 1) which denote the probability of getting admitted. These values are hence rounded to obtain the discrete values of 1 or 0.

    # loading the testing dataset  
    df = pd.read_csv('logit_test1.csv', index_col = 0)
    # defining the dependent and independent variables
    Xtest = df[['gmat', 'gpa', 'work_experience']]
    ytest = df['admitted']
    # performing predictions on the test dataset
    yhat = log_reg.predict(Xtest)
    prediction = list(map(round, yhat))
    # comparing original and predicted values of y
    print('Actual values :', list(ytest.values))
    print('Predictions :', prediction)

    Output :

    Actual values : [0, 0, 0, 0, 0, 1, 1, 0, 1, 1]
    Predictions : [0, 0, 0, 0, 0, 0, 0, 0, 1, 1]
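
    The round() call above implicitly uses a 0.5 probability cut-off. If a different trade-off between false positives and false negatives is desired, the threshold can be set explicitly (0.6 below is purely illustrative):

    # classifying with a custom probability threshold instead of rounding
    prediction = (yhat > 0.6).astype(int).tolist()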

    Testing the accuracy of the model :

    from sklearn.metrics import confusion_matrix, accuracy_score
    # confusion matrix
    cm = confusion_matrix(ytest, prediction) 
    print ("Confusion Matrix : \n", cm) 
    # accuracy score of the model
    print('Test accuracy = ', accuracy_score(ytest, prediction))

    Output :

    Confusion Matrix : 
     [[6 0]
     [2 2]]
    Test accuracy =  0.8
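
    Reading the confusion matrix: 6 not-admitted students were classified correctly, 2 admitted students were classified correctly, and 2 admitted students were wrongly predicted as not admitted, giving an accuracy of 8/10 = 0.8.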

