Recursive Feature Elimination with Cross-Validation in Scikit Learn

In this article, we will learn how to implement Recursive Feature Elimination with Cross-Validation (RFECV) using the scikit-learn package in Python.

What is Recursive Feature Elimination (RFE)?

Recursive Feature Elimination (RFE) is a feature selection algorithm that is used to select a subset of the most relevant features from a dataset. It is a recursive process that starts with all the features in the dataset and then iteratively removes the least essential features until the desired number of features is reached.

The main logic behind RFE is that the most relevant features have the greatest impact on the target variable and are therefore the most useful for predicting it. RFE uses a model (such as linear regression or a support vector machine) to rank the importance of each feature, and the features with the lowest importance are eliminated in each iteration.

The role of recursion in RFE is to repeat this elimination step: in each iteration, the algorithm removes the least important features and refits the model with the remaining ones. The process stops when the desired number of features is reached or the performance of the model no longer improves.

RFE is a useful algorithm for feature selection because it is simple to implement and can be applied to a variety of models. It is especially useful for datasets with a large number of features, as it can help to reduce the dimensionality of the dataset and improve the performance of the model.
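As a minimal sketch of plain RFE (without cross-validation), the snippet below uses scikit-learn's RFE class to keep a fixed number of features; the dataset, estimator, and feature count here are illustrative choices rather than part of the main example.

Python3

from sklearn.datasets import load_iris
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

# Load a small illustrative dataset
X, y = load_iris(return_X_y=True)

# Rank features with a logistic regression model and keep
# the 2 most important ones (an arbitrary choice here)
rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=2)
rfe = rfe.fit(X, y)

# Boolean mask of the kept features and the elimination
# ranking (rank 1 means the feature was selected)
print(rfe.support_)
print(rfe.ranking_)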

What is Cross Validation?

Cross-validation is a technique for evaluating the performance of a machine learning model by training it on a subset of the data and then testing it on a different subset. It is a way to assess the generalization ability of a model, i.e., how well the model is able to make predictions on unseen data.

There are several types of cross-validation techniques, but the most common is k-fold cross-validation. In k-fold cross-validation, the dataset is randomly partitioned into k equal-sized subsets. The model is trained on k-1 subsets and then tested on the remaining subset. This process is repeated k times, with a different subset being used as the test set in each iteration. The performance of the model is then averaged across all k iterations.

Cross-validation is a useful technique for a number of reasons. It allows you to evaluate the performance of a model using a large portion of the data, rather than just a single train-test split. It also helps to reduce the risk of overfitting, as the model is trained and tested on different subsets of the data.

Cross-validation is often used to tune the hyperparameters of a model, as it provides a more reliable estimate of the model’s performance. It is also commonly used to compare the performance of different models or the performance of a model with and without certain features.
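As a quick sketch of k-fold cross-validation using scikit-learn's cross_val_score helper (with k=5 chosen arbitrarily for illustration):

Python3

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)

# 5-fold cross-validation: train on 4 folds, test on the
# remaining fold, rotating the test fold each time
scores = cross_val_score(DecisionTreeClassifier(), X, y, cv=5)
print("Fold scores:", scores)
print("Mean accuracy:", scores.mean())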

In scikit-learn, RFE with cross-validation can be performed using the RFECV class. This class is a meta-estimator that wraps an estimator and performs RFE with cross-validation to find the optimal number of features.

Here is an example of how to use the RFECV class in scikit-learn to perform RFE with cross-validation for a decision tree model:

Python3

from sklearn.datasets import load_iris
from sklearn.feature_selection import RFECV
from sklearn.tree import DecisionTreeClassifier

# Load the iris dataset
X, y = load_iris(return_X_y=True)

# Create a decision tree classifier
estimator = DecisionTreeClassifier()

# Use RFE with cross-validation to
# find the optimal number of features
selector = RFECV(estimator, cv=5)
selector = selector.fit(X, y)

# Print the optimal number of features
print("Optimal number of features: %d" % selector.n_features_)

# Print the selected features
print("Selected features: %s" % selector.support_)


Output:

Optimal number of features: 3
Selected features: [False  True  True  True]

In this example, we first load the iris dataset using the load_iris function from scikit-learn. We then create a decision tree classifier using the DecisionTreeClassifier class.

Next, we create an instance of the RFECV class and pass the decision tree classifier to it as the estimator. We also specify the number of folds for cross-validation using the cv parameter. We then call the fit method on the RFECV instance to perform RFE with cross-validation on the iris dataset. This finds the optimal number of features and selects the most relevant ones. Finally, we print the optimal number of features and the selected features.

Once RFE with cross-validation has been performed, you can use the selected features to train your final model. In the example above, you can use the support_ attribute of the RFECV instance to get a boolean array of the selected features. You can then use this array to select only the relevant features from the dataset, and then train your final model using these features.

Python3

# Select only the relevant features
X_rfe = X[:, selector.support_]

# Train the final model using the
# selected features
estimator.fit(X_rfe, y)


In this code, we use the boolean array from the support_ attribute to select only the relevant features from the dataset. We then use these features to train the final decision tree model.

You can also use the predict method of the RFECV instance to make predictions on new data using the selected features. For example:

Python3

# Make predictions on new samples; X_new is assumed to have the
# same columns as the original X (selection is applied internally)
y_pred = selector.predict(X_new)


In this code, we use the predict method of the RFECV instance to make predictions on the X_new dataset using the selected features; RFECV reduces X_new to the selected features internally before predicting with the underlying estimator, so X_new must contain the full original set of columns. The predictions are returned in the y_pred array.

RFE with cross-validation is a useful technique for identifying the most relevant features for a given model. It can help improve the performance and interpretability of your machine learning models.
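As a final sketch (assuming scikit-learn 1.0 or later, where RFECV exposes a cv_results_ dictionary; older versions exposed grid_scores_ instead), you can also inspect the mean cross-validation score obtained at each candidate number of features:

Python3

# Mean CV score for each candidate feature count; entry i
# corresponds to min_features_to_select + i features
mean_scores = selector.cv_results_["mean_test_score"]
for n, score in enumerate(mean_scores, start=selector.min_features_to_select):
    print("%d features: mean CV score = %.3f" % (n, score))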



Last Updated : 23 Jan, 2023