
Demonstration of multi-metric evaluation on cross_val_score and GridSearchCV in Scikit Learn

Last Updated : 03 Jan, 2024

In Scikit Learn, we can demonstrate multi-metric evaluation with the help of two utilities: cross_val_score and GridSearchCV. They let you check a model's performance against several metrics in a single call rather than writing repetitive code. In this article, we will first discuss cross_val_score, then GridSearchCV, and finally see how they can work together.

What is cross_val_score?

cross_val_score is a function in Scikit Learn that performs cross-validated scoring of an estimator. In general, cross-validation helps us understand how well a model generalizes to an independent dataset.

You need to provide the following parameters as an input:

  • estimator
  • input features
  • target values
  • other optional parameters

An estimator is the machine learning model that you train on your dataset. Input features are the independent variables, and the target is the dependent variable we want to predict. There are other optional parameters such as cv, scoring, and n_jobs, which you can check in the Scikit Learn documentation.

When we pass these parameters to the function, it performs k-fold cross-validation: the dataset is split into k subsets (folds), and the model is trained and evaluated k times. Each time, a different fold is held out as the test set and the remaining folds are used as the training set.

As a result, you get an array of k values, where each value tells you how the model performed on that fold according to the scoring metric.
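
As a quick illustration, here is a minimal sketch of cross_val_score on the Iris dataset (the same data and classifier family used later in this article). Note that cross_val_score accepts a single scoring metric; for evaluating several metrics in one pass, Scikit Learn provides the closely related cross_validate function, shown alongside it for completeness.

Python3
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, cross_validate

X, y = load_iris(return_X_y=True)
clf = RandomForestClassifier(random_state=0)

# 5-fold cross-validation with a single metric: one score per fold
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(scores, scores.mean())

# Multi-metric evaluation uses cross_validate with a dict of scorers;
# each key appears in the result as "test_<key>"
results = cross_validate(clf, X, y, cv=5,
                         scoring={"Accuracy": "accuracy", "F1": "f1_macro"})
print(results["test_Accuracy"], results["test_F1"])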

What is GridSearchCV?

The GridSearchCV class in Scikit Learn performs an exhaustive search over a specified parameter grid and, as a result, gives you the best hyperparameters for your model. It combines cross-validation with the grid search algorithm, so you can easily evaluate a model's performance across different combinations of hyperparameter values, as shown in the sketch below.
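
To see what "exhaustive search over a parameter grid" means, here is a minimal sketch using Scikit Learn's ParameterGrid helper with a hypothetical two-parameter grid (the parameter names and values here are purely illustrative):

Python3
from sklearn.model_selection import ParameterGrid

# A 2 x 2 grid expands into 4 candidate combinations,
# each of which GridSearchCV would evaluate with cross-validation
grid = {"n_estimators": [50, 100], "max_depth": [3, 5]}
for params in ParameterGrid(grid):
    print(params)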

Implementation of multi-metric evaluation on cross_val_score and GridSearchCV

Import Libraries

We import NumPy, Matplotlib, and the required classes and functions from Scikit Learn.

Python3
import numpy as np
from matplotlib import pyplot as plt
 
from sklearn.datasets import load_iris
from sklearn.metrics import accuracy_score, make_scorer
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import RandomForestClassifier


Loading Dataset

We load the Iris dataset, which has 150 samples, 4 features, and 3 target classes.

Python3
# Load the Iris dataset
iris = load_iris()
X, y = iris.data, iris.target
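
A quick optional check of the data (the expected output is shown in the comments):

Python3
# 150 samples, 4 features, 3 target classes
print(X.shape)            # (150, 4)
print(iris.target_names)  # ['setosa' 'versicolor' 'virginica']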


Setting Evaluation Metrics

We set the evaluation metrics to AUC and accuracy. Because Iris is a 3-class dataset, we use the one-vs-rest multiclass AUC scorer ("roc_auc_ovr"); the plain "roc_auc" scorer only supports binary targets. Each dictionary key becomes part of the corresponding column names in cv_results_ (e.g., mean_test_AUC, rank_test_Accuracy).

Python3
# Multiclass AUC (one-vs-rest) and accuracy
scoring = {"AUC": "roc_auc_ovr", "Accuracy": make_scorer(accuracy_score)}


Setting Classifier and Grid Search

We create a RandomForestClassifier and set up GridSearchCV to search for the optimal value of the min_samples_split hyperparameter, using the scoring dictionary defined above. This is a common approach to tune hyperparameters and find the best configuration for your model. Because more than one metric is supplied, the refit parameter must name the metric used to select the final model that is refit on the whole dataset; here we refit on "AUC".

Python3
# Create a RandomForestClassifier
rf_classifier = RandomForestClassifier()

# Grid search over min_samples_split, evaluated with both scorers;
# refit="AUC" picks the final model by the best mean test AUC
grid_search = GridSearchCV(
    rf_classifier,
    param_grid={"min_samples_split": range(2, 403, 20)},
    scoring=scoring,
    refit="AUC",
    n_jobs=2,
    return_train_score=True,
)
grid_search.fit(X, y)
results = grid_search.cv_results_
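
Before plotting, it can help to confirm that cv_results_ contains one set of score columns per metric (a quick optional check, not part of the original output):

Python3
# Each scorer contributes its own mean/std/rank columns
print([key for key in results if key.endswith("_AUC")])
print([key for key in results if key.endswith("_Accuracy")])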


Visualization

  • Set up the figure size, title, and axis labels, and define the plot limits.
  • Plotting:
    • Loop over each scorer (e.g., AUC, Accuracy) and its associated color.
    • For each scorer, plot the mean training and test scores, with shaded regions representing one standard deviation.
    • Use different line styles for training and test scores.
    • Plot a vertical dash-dot line at the best test score for each scorer.
    • Annotate the best test score on the plot.
  • Add a legend with scorer names and types, and display the plot.

Python3
plt.figure(figsize=(7, 7))
plt.title("GridSearchCV evaluating using multiple scorers simultaneously", fontsize=16)

plt.xlabel("min_samples_split")
plt.ylabel("Score")

ax = plt.gca()
# Zoom in on the low end of the parameter grid, where the scores vary
ax.set_xlim(0, 55)
ax.set_ylim(0.8, 1.05)

# Get the regular numpy array from the MaskedArray
X_axis = np.array(results["param_min_samples_split"].data, dtype=float)

for scorer, color in zip(sorted(scoring), ["g", "k"]):
    for sample, style in (("train", "--"), ("test", "-")):
        sample_score_mean = results["mean_%s_%s" % (sample, scorer)]
        sample_score_std = results["std_%s_%s" % (sample, scorer)]
        # Shade one standard deviation around the mean test score
        ax.fill_between(
            X_axis,
            sample_score_mean - sample_score_std,
            sample_score_mean + sample_score_std,
            alpha=0.1 if sample == "test" else 0,
            color=color,
        )
        ax.plot(
            X_axis,
            sample_score_mean,
            style,
            color=color,
            alpha=1 if sample == "test" else 0.7,
            label="%s (%s)" % (scorer, sample),
        )

    # Index of the best candidate for this scorer (rank 1)
    best_index = np.nonzero(results["rank_test_%s" % scorer] == 1)[0][0]
    best_score = results["mean_test_%s" % scorer][best_index]

    # Plot a dash-dot vertical line at the best score for that scorer, marked by x
    ax.plot(
        [X_axis[best_index]] * 2,
        [0, best_score],
        linestyle="-.",
        color=color,
        marker="x",
        markeredgewidth=3,
        ms=8,
    )

    # Annotate the best score for that scorer
    ax.annotate("%0.2f" % best_score, (X_axis[best_index], best_score + 0.005))

plt.legend(loc="best")
plt.grid(False)
plt.show()


Output:

[Figure: "GridSearchCV evaluating using multiple scorers simultaneously"; mean train/test AUC and Accuracy versus min_samples_split, with each scorer's best test score marked by a vertical dash-dot line and annotated.]

This visualization gives a comprehensive view of model performance across different values of the min_samples_split hyperparameter for multiple scoring metrics, and it helps in identifying the optimal hyperparameter value under different evaluation criteria.
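
Since refit="AUC" was used, the fitted grid search also exposes the winning configuration directly. A small sketch (the exact values depend on the run):

Python3
# best_params_ and best_score_ refer to the refit metric ("AUC")
print(grid_search.best_params_)  # e.g. {'min_samples_split': 2}
print(grid_search.best_score_)   # mean cross-validated AUC of the best candidate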


