sklearn.model_selection.StratifiedShuffleSplit in Python
In this article, we'll learn about the StratifiedShuffleSplit cross-validator from the sklearn library, which yields train/test indices for splitting data into train and test sets.
What is StratifiedShuffleSplit?
StratifiedShuffleSplit is a combination of ShuffleSplit and StratifiedKFold: each split preserves the proportion of class labels almost exactly in both the train and test sets. The major difference between StratifiedShuffleSplit and StratifiedKFold (with shuffle=True) is that in StratifiedKFold, the dataset is shuffled only once at the beginning and then partitioned into the specified number of folds, so the test folds across iterations never overlap.
In StratifiedShuffleSplit, by contrast, the data is reshuffled before every split, so the test sets from different iterations can overlap (the same sample may appear in several test sets).
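The proportion-preserving behaviour can be sketched with a tiny toy example (the labels and split sizes below are made up purely for illustration):

```python
import numpy as np
from sklearn.model_selection import StratifiedShuffleSplit

# Toy labels: 8 samples of class 0 and 4 of class 1 (a 2:1 ratio).
y = np.array([0] * 8 + [1] * 4)
X = np.zeros((len(y), 1))  # features are irrelevant for the split itself

sss = StratifiedShuffleSplit(n_splits=3, test_size=0.25, random_state=0)
for train_idx, test_idx in sss.split(X, y):
    # Each 3-sample test set keeps the 2:1 ratio: 2 of class 0, 1 of class 1.
    # Note: because the data is reshuffled before every split, the same
    # index may appear in the test set of more than one iteration.
    print(np.bincount(y[test_idx]))
```

Every iteration prints `[2 1]`, confirming that stratification holds on each split.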
Syntax: sklearn.model_selection.StratifiedShuffleSplit(n_splits=10, *, test_size=None, train_size=None, random_state=None)
n_splits: int, default=10
Number of re-shuffling & splitting iterations.
test_size: float or int, default=None
If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the test split.
train_size: float or int, default=None
If float, should be between 0.0 and 1.0 and represent the proportion of the dataset to include in the train split.
random_state: int, RandomState instance or None, default=None
Controls the randomness of the training and testing indices produced.
Below is the implementation.
Step 1) Import required modules.
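A minimal sketch of the imports the later steps rely on (pandas and scikit-learn are assumed to be installed):

```python
# Imports used throughout this walkthrough.
import pandas as pd
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
```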
Step 2) Load the dataset and identify the dependent and independent variables.
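Since the article's dataset is not available here, the sketch below uses scikit-learn's built-in breast cancer dataset as a stand-in binary-classification dataset; the `target` column name is specific to that stand-in:

```python
import pandas as pd
from sklearn.datasets import load_breast_cancer

# Load a stand-in dataset as a DataFrame (assumption: any binary-labelled
# tabular dataset would work the same way here).
data = load_breast_cancer(as_frame=True)
df = data.frame
X = df.drop(columns=["target"])  # independent variables (features)
y = df["target"]                 # dependent variable (0/1 class labels)
print(X.shape, y.value_counts().to_dict())
```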
Step 3) Pre-process data.
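As one possible pre-processing step (an assumption, since the article does not specify which pre-processing it applies), the features can be standardised to zero mean and unit variance:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)  # stand-in dataset

# Standardise each feature column: subtract the mean, divide by the std.
X_scaled = StandardScaler().fit_transform(X)
print(np.allclose(X_scaled.mean(axis=0), 0))
```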
Step 4) Create object of StratifiedShuffleSplit Class.
Step 5) Call split() on the instance to divide the data frame into training and testing samples. The split() function yields index arrays for the train and test samples. Fit a classifier such as logistic regression on each training sample and compare the accuracy across splits.
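Putting steps 4 and 5 together, here is a hedged end-to-end sketch (again using the built-in breast cancer data as a stand-in dataset and logistic regression as the classifier, both assumptions not fixed by the article):

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import StratifiedShuffleSplit
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

X, y = load_breast_cancer(return_X_y=True)
X = StandardScaler().fit_transform(X)

# Step 4: create the StratifiedShuffleSplit object.
sss = StratifiedShuffleSplit(n_splits=5, test_size=0.2, random_state=42)

# Step 5: split() yields train/test index arrays; slice the data with them,
# fit a classifier on each split, and collect the test accuracy.
scores = []
for train_idx, test_idx in sss.split(X, y):
    X_train, X_test = X[train_idx], X[test_idx]
    y_train, y_test = y[train_idx], y[test_idx]
    model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
    scores.append(accuracy_score(y_test, model.predict(X_test)))

print([round(s, 3) for s in scores])  # one accuracy per split
```

Comparing the five accuracies gives a sense of how stable the model is across stratified resamples.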