
Curse of Dimensionality in Machine Learning

The curse of dimensionality in machine learning arises when working with high-dimensional data, leading to increased computational cost, overfitting, and spurious correlations. Techniques like dimensionality reduction, feature selection, and careful model design are essential for mitigating these effects and improving algorithm performance. Navigating this challenge is crucial for unlocking the potential of high-dimensional datasets and building robust machine learning solutions.

What is the Curse of Dimensionality?

The curse of dimensionality refers to the set of problems that emerge as the number of features (dimensions) grows: the data becomes increasingly sparse, distances between points become less informative, and the number of samples needed to cover the feature space grows exponentially. As a consequence, models trained on many features with comparatively few samples tend to overfit and generalize poorly.
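A quick way to see this is to measure how pairwise distances behave as dimensionality grows. The short sketch below (an illustrative addition using random data, not part of the dataset used later in this article) shows that in high dimensions the farthest neighbor is barely farther than the nearest one, which undermines distance-based reasoning:

import numpy as np

rng = np.random.default_rng(42)
for d in [2, 10, 100, 1000]:
    # Sample 500 points uniformly from the d-dimensional unit cube
    points = rng.random((500, d))
    # Distances from the first point to all the others
    dists = np.linalg.norm(points[1:] - points[0], axis=1)
    # Relative contrast: how much farther the farthest point is than the nearest
    contrast = (dists.max() - dists.min()) / dists.min()
    print(f'd={d:4d}  relative contrast={contrast:.3f}')

As d grows, the printed contrast shrinks toward zero: nearest and farthest neighbors become almost equally distant, which is why neighborhood-based methods degrade in high dimensions.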

How to Overcome the Curse of Dimensionality?

To overcome the curse of dimensionality, you can consider the following strategies:

Dimensionality Reduction Techniques: Methods such as PCA (Principal Component Analysis) and feature selection reduce the number of features while retaining most of the useful signal, which lowers computational cost and the risk of overfitting; a sketch for choosing the number of PCA components follows this list.

Data Preprocessing: Removing constant or redundant features, imputing missing values, and standardizing feature scales make the remaining dimensions more informative and prepare the data for distance-based models and PCA.
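A common practical question for PCA is how many components to keep. Below is a minimal sketch of one standard heuristic, keeping just enough components to explain a chosen fraction of the variance (the 90% threshold and the synthetic data are illustrative assumptions, not part of the article's pipeline):

import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler

# Illustrative synthetic data; in the pipeline below this would be the
# standardized training features
rng = np.random.default_rng(0)
X_demo = StandardScaler().fit_transform(rng.normal(size=(200, 50)))

# Fit PCA with all components and inspect the cumulative explained variance
pca_full = PCA().fit(X_demo)
cumulative = np.cumsum(pca_full.explained_variance_ratio_)

# Smallest number of components that explains at least 90% of the variance
n_components = int(np.searchsorted(cumulative, 0.90) + 1)
print(f'Components needed for 90% of the variance: {n_components}')

scikit-learn can also do this directly: passing a float such as PCA(n_components=0.90) keeps just enough components to reach that variance threshold.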

Python Implementation for Mitigating the Curse of Dimensionality

Here we use the UCI SECOM dataset (uci-secom), a high-dimensional semiconductor manufacturing dataset with several hundred sensor readings per sample.

Import Necessary Libraries

Import required libraries including scikit-learn modules for dataset loading, model training, data preprocessing, dimensionality reduction, and evaluation.

import numpy as np
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif, VarianceThreshold
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.impute import SimpleImputer

Loading the dataset

The dataset is stored in a CSV file named 'your_dataset.csv' and has a timestamp column named 'Time' and a target variable column named 'Pass/Fail'.

df = pd.read_csv('your_dataset.csv')

# Assuming 'X' contains your features and 'y' contains your target variable
X = df.drop(columns=['Time', 'Pass/Fail'])
y = df['Pass/Fail']

Remove Constant Features and Impute Missing Values

VarianceThreshold drops constant (zero-variance) features, which carry no information about the target; SimpleImputer then fills the remaining missing values with the column mean.

# Remove constant features
selector = VarianceThreshold()
X_selected = selector.fit_transform(X)

# Impute missing values
imputer = SimpleImputer(strategy='mean')
X_imputed = imputer.fit_transform(X_selected)

Splitting the data and standardizing

# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X_imputed, y, test_size=0.2, random_state=42)

# Standardize the features
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

Feature Selection and Dimensionality Reduction

  1. Feature Selection: SelectKBest is used to select the top k features based on a specified scoring function (f_classif in this case). It selects the features that are most likely to be related to the target variable.
  2. Dimensionality Reduction: PCA (Principal Component Analysis) is then used to further reduce the dimensionality of the selected features. It transforms the data into a lower-dimensional space while retaining as much variance as possible.

# Perform feature selection
selector_kbest = SelectKBest(score_func=f_classif, k=20)
X_train_selected = selector_kbest.fit_transform(X_train_scaled, y_train)
X_test_selected = selector_kbest.transform(X_test_scaled)

# Perform dimensionality reduction
pca = PCA(n_components=10)
X_train_pca = pca.fit_transform(X_train_selected)
X_test_pca = pca.transform(X_test_selected)

Training the classifiers

  1. Training Before Dimensionality Reduction: Train a Random Forest classifier (clf_before) on the original scaled features (X_train_scaled) without dimensionality reduction.
  2. Evaluation Before Dimensionality Reduction: Make predictions (y_pred_before) on the test set (X_test_scaled) using the classifier trained before dimensionality reduction, and calculate the accuracy (accuracy_before) of the model.
  3. Training After Dimensionality Reduction: Train a new Random Forest classifier (clf_after) on the reduced feature set (X_train_pca) after dimensionality reduction.
  4. Evaluation After Dimensionality Reduction: Make predictions (y_pred_after) on the test set (X_test_pca) using the classifier trained after dimensionality reduction, and calculate the accuracy (accuracy_after) of the model.

# Train a classifier (e.g., Random Forest) without dimensionality reduction
clf_before = RandomForestClassifier(n_estimators=100, random_state=42)
clf_before.fit(X_train_scaled, y_train)

# Make predictions and evaluate the model before dimensionality reduction
y_pred_before = clf_before.predict(X_test_scaled)
accuracy_before = accuracy_score(y_test, y_pred_before)
print(f'Accuracy before dimensionality reduction: {accuracy_before}')

# Train a classifier (e.g., Random Forest) on the reduced feature set
clf_after = RandomForestClassifier(n_estimators=100, random_state=42)
clf_after.fit(X_train_pca, y_train)

# Make predictions and evaluate the model after dimensionality reduction
y_pred_after = clf_after.predict(X_test_pca)
accuracy_after = accuracy_score(y_test, y_pred_after)
print(f'Accuracy after dimensionality reduction: {accuracy_after}')

Complete Code:

import numpy as np
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif, VarianceThreshold
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.impute import SimpleImputer

df = pd.read_csv('your_dataset.csv')

# Assuming 'X' contains your features and 'y' contains your target variable
X = df.drop(columns=['Time', 'Pass/Fail'])
y = df['Pass/Fail']

# Remove constant features
selector = VarianceThreshold()
X_selected = selector.fit_transform(X)

# Impute missing values
imputer = SimpleImputer(strategy='mean')
X_imputed = imputer.fit_transform(X_selected)

# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X_imputed, y, test_size=0.2, random_state=42)

# Standardize the features
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Perform feature selection
selector_kbest = SelectKBest(score_func=f_classif, k=20)
X_train_selected = selector_kbest.fit_transform(X_train_scaled, y_train)
X_test_selected = selector_kbest.transform(X_test_scaled)

# Perform dimensionality reduction
pca = PCA(n_components=10)
X_train_pca = pca.fit_transform(X_train_selected)
X_test_pca = pca.transform(X_test_selected)

# Train a classifier (e.g., Random Forest) without dimensionality reduction
clf_before = RandomForestClassifier(n_estimators=100, random_state=42)
clf_before.fit(X_train_scaled, y_train)

# Make predictions and evaluate the model before dimensionality reduction
y_pred_before = clf_before.predict(X_test_scaled)
accuracy_before = accuracy_score(y_test, y_pred_before)
print(f'Accuracy before dimensionality reduction: {accuracy_before}')

# Train a classifier (e.g., Random Forest) on the reduced feature set
clf_after = RandomForestClassifier(n_estimators=100, random_state=42)
clf_after.fit(X_train_pca, y_train)

# Make predictions and evaluate the model after dimensionality reduction
y_pred_after = clf_after.predict(X_test_pca)
accuracy_after = accuracy_score(y_test, y_pred_after)
print(f'Accuracy after dimensionality reduction: {accuracy_after}')

Output:

Accuracy before dimensionality reduction: 0.8745
Accuracy after dimensionality reduction: 0.9235668789808917


The accuracy before dimensionality reduction is 0.8745, while the accuracy after dimensionality reduction is about 0.9236. This improvement suggests that the combination of feature selection and dimensionality reduction (SelectKBest followed by PCA) helped the model generalize better to unseen data by discarding noisy, uninformative dimensions.
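As a follow-up, the same steps can be chained with scikit-learn's Pipeline so that every transform (variance filtering, imputation, scaling, selection, PCA) is fit on the training split only, avoiding any leakage of test-set statistics. This is a minimal sketch of that pattern, reusing the X and y defined above, not a rewrite of the code in this article:

from sklearn.pipeline import Pipeline
from sklearn.model_selection import train_test_split

# Raw split; all preprocessing is fit inside the pipeline on the training data
X_train_raw, X_test_raw, y_train_raw, y_test_raw = train_test_split(
    X, y, test_size=0.2, random_state=42)

pipe = Pipeline([
    ('variance', VarianceThreshold()),
    ('impute', SimpleImputer(strategy='mean')),
    ('scale', StandardScaler()),
    ('kbest', SelectKBest(score_func=f_classif, k=20)),
    ('pca', PCA(n_components=10)),
    ('clf', RandomForestClassifier(n_estimators=100, random_state=42)),
])

pipe.fit(X_train_raw, y_train_raw)
print('Pipeline accuracy:', pipe.score(X_test_raw, y_test_raw))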
