
Curse of Dimensionality in Machine Learning

Last Updated : 03 Apr, 2024

The Curse of Dimensionality in Machine Learning arises when working with high-dimensional data, leading to increased computational complexity, overfitting, and spurious correlations. Techniques like dimensionality reduction, feature selection, and careful model design are essential for mitigating its effects and improving algorithm performance. Navigating this challenge is crucial for unlocking the potential of high-dimensional datasets and ensuring robust machine-learning solutions.

What is the Curse of Dimensionality?

  • The Curse of Dimensionality refers to the phenomenon where the efficiency and effectiveness of algorithms deteriorate as the dimensionality of the data increases, because the volume of the feature space grows exponentially with each added dimension.
  • In high-dimensional spaces, data points become sparse, making it challenging to discern meaningful patterns or relationships due to the vast amount of data required to adequately sample the space.
  • The Curse of Dimensionality significantly impacts machine learning algorithms in various ways. It leads to increased computational complexity, longer training times, and higher resource requirements. Moreover, it escalates the risk of overfitting and spurious correlations, hindering the algorithms’ ability to generalize well to unseen data.
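
A quick way to see the sparsity problem is to look at how pairwise distances behave as the number of dimensions grows. The short sketch below is purely illustrative (random uniform data, not the SECOM dataset used later): as the dimensionality increases, the relative gap between the nearest and the farthest point shrinks, which is why distance-based algorithms lose their discriminative power in high dimensions.

import numpy as np

rng = np.random.default_rng(42)

for d in [2, 10, 100, 1000]:
    # Sample 500 points uniformly from the d-dimensional unit hypercube
    points = rng.random((500, d))
    # Distances from the first point to all the others
    dists = np.linalg.norm(points[1:] - points[0], axis=1)
    # Relative spread between the farthest and the nearest neighbour
    spread = (dists.max() - dists.min()) / dists.min()
    print(f'd={d:5d}  relative distance spread: {spread:.3f}')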

How to Overcome the Curse of Dimensionality?

To overcome the curse of dimensionality, you can consider the following strategies:

Dimensionality Reduction Techniques:

  • Feature Selection: Identify and select the most relevant features from the original dataset while discarding irrelevant or redundant ones. This reduces the dimensionality of the data, simplifying the model and improving its efficiency.
  • Feature Extraction: Transform the original high-dimensional data into a lower-dimensional space by creating new features that capture the essential information. Techniques such as Principal Component Analysis (PCA) and t-distributed Stochastic Neighbor Embedding (t-SNE) are commonly used for feature extraction.
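
As a minimal, self-contained illustration (using scikit-learn's built-in digits dataset rather than the SECOM data used later), both techniques can compress a 64-dimensional feature space down to two components:

from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE

# 64-dimensional digit images stand in for any high-dimensional dataset
X, y = load_digits(return_X_y=True)

# Linear feature extraction: project onto the top 2 principal components
X_pca = PCA(n_components=2, random_state=42).fit_transform(X)

# Non-linear feature extraction: embed the same data into 2 dimensions with t-SNE
X_tsne = TSNE(n_components=2, random_state=42).fit_transform(X)

print(X.shape, '->', X_pca.shape, 'and', X_tsne.shape)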

Data Preprocessing:

  • Normalization: Scale the features to a similar range to prevent certain features from dominating others, especially in distance-based algorithms.
  • Handling Missing Values: Address missing data appropriately through imputation or deletion to ensure robustness in the model training process.
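
As a toy sketch (independent of the SECOM workflow below, with made-up values purely for illustration), scikit-learn provides ready-made transformers for both steps:

import numpy as np
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Two features on very different scales, with one missing value
X = np.array([[1.0, 2000.0],
              [2.0, np.nan],
              [3.0, 1000.0]])

# Fill the missing value with the column mean
X_filled = SimpleImputer(strategy='mean').fit_transform(X)

# MinMaxScaler maps each feature to [0, 1];
# StandardScaler rescales to zero mean and unit variance
print(MinMaxScaler().fit_transform(X_filled))
print(StandardScaler().fit_transform(X_filled))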

Python Implementation of Mitigating the Curse of Dimensionality

Here we are using the UCI SECOM dataset (uci-secom), which contains semiconductor manufacturing sensor readings along with a 'Time' timestamp column and a 'Pass/Fail' target.

Import Necessary Libraries

Import required libraries including scikit-learn modules for dataset loading, model training, data preprocessing, dimensionality reduction, and evaluation.

import numpy as np
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif, VarianceThreshold
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.impute import SimpleImputer

Loading the dataset

The dataset is stored in a CSV file (here referred to as 'your_dataset.csv') and has a timestamp column named 'Time' and a target variable column named 'Pass/Fail'.

df = pd.read_csv('your_dataset.csv')

# Separate the features (X) from the target variable (y)
X = df.drop(columns=['Time', 'Pass/Fail'])
y = df['Pass/Fail']

Remove Constant Features

  • We are using VarianceThreshold to remove constant features and SimpleImputer to impute missing values with the mean.
# Remove constant features
selector = VarianceThreshold()
X_selected = selector.fit_transform(X)

# Impute missing values
imputer = SimpleImputer(strategy='mean')
X_imputed = imputer.fit_transform(X_selected)

Splitting the data and standardizing

# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X_imputed, y, test_size=0.2, random_state=42)

# Standardize the features
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

Feature Selection and Dimensionality Reduction

  1. Feature Selection: SelectKBest is used to select the top k features based on a specified scoring function (f_classif in this case). It selects the features that are most likely to be related to the target variable.
  2. Dimensionality Reduction: PCA (Principal Component Analysis) is then used to further reduce the dimensionality of the selected features. It transforms the data into a lower-dimensional space while retaining as much variance as possible.
# Perform feature selection
selector_kbest = SelectKBest(score_func=f_classif, k=20)
X_train_selected = selector_kbest.fit_transform(X_train_scaled, y_train)
X_test_selected = selector_kbest.transform(X_test_scaled)

# Perform dimensionality reduction
pca = PCA(n_components=10)
X_train_pca = pca.fit_transform(X_train_selected)
X_test_pca = pca.transform(X_test_selected)

Training the classifiers

  1. Training Before Dimensionality Reduction: Train a Random Forest classifier (clf_before) on the original scaled features (X_train_scaled) without dimensionality reduction.
  2. Evaluation Before Dimensionality Reduction: Make predictions (y_pred_before) on the test set (X_test_scaled) using the classifier trained before dimensionality reduction, and calculate the accuracy (accuracy_before) of the model.
  3. Training After Dimensionality Reduction: Train a new Random Forest classifier (clf_after) on the reduced feature set (X_train_pca) after dimensionality reduction.
  4. Evaluation After Dimensionality Reduction: Make predictions (y_pred_after) on the test set (X_test_pca) using the classifier trained after dimensionality reduction, and calculate the accuracy (accuracy_after) of the model.
# Train a classifier (e.g., Random Forest) without dimensionality reduction
clf_before = RandomForestClassifier(n_estimators=100, random_state=42)
clf_before.fit(X_train_scaled, y_train)

# Make predictions and evaluate the model before dimensionality reduction
y_pred_before = clf_before.predict(X_test_scaled)
accuracy_before = accuracy_score(y_test, y_pred_before)
print(f'Accuracy before dimensionality reduction: {accuracy_before}')

# Train a classifier (e.g., Random Forest) on the reduced feature set
clf_after = RandomForestClassifier(n_estimators=100, random_state=42)
clf_after.fit(X_train_pca, y_train)

# Make predictions and evaluate the model after dimensionality reduction
y_pred_after = clf_after.predict(X_test_pca)
accuracy_after = accuracy_score(y_test, y_pred_after)
print(f'Accuracy after dimensionality reduction: {accuracy_after}')

Complete Code:

Python3
import numpy as np
import pandas as pd
from sklearn.feature_selection import SelectKBest, f_classif, VarianceThreshold
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.impute import SimpleImputer

df = pd.read_csv('your_dataset.csv')

# Separate the features (X) from the target variable (y)
X = df.drop(columns=['Time', 'Pass/Fail'])
y = df['Pass/Fail']

# Remove constant features
selector = VarianceThreshold()
X_selected = selector.fit_transform(X)

# Impute missing values
imputer = SimpleImputer(strategy='mean')
X_imputed = imputer.fit_transform(X_selected)

# Split the data into training and test sets
X_train, X_test, y_train, y_test = train_test_split(X_imputed, y, test_size=0.2, random_state=42)

# Standardize the features
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)

# Perform feature selection
selector_kbest = SelectKBest(score_func=f_classif, k=20)
X_train_selected = selector_kbest.fit_transform(X_train_scaled, y_train)
X_test_selected = selector_kbest.transform(X_test_scaled)

# Perform dimensionality reduction
pca = PCA(n_components=10)
X_train_pca = pca.fit_transform(X_train_selected)
X_test_pca = pca.transform(X_test_selected)

# Train a classifier (e.g., Random Forest) without dimensionality reduction
clf_before = RandomForestClassifier(n_estimators=100, random_state=42)
clf_before.fit(X_train_scaled, y_train)

# Make predictions and evaluate the model before dimensionality reduction
y_pred_before = clf_before.predict(X_test_scaled)
accuracy_before = accuracy_score(y_test, y_pred_before)
print(f'Accuracy before dimensionality reduction: {accuracy_before}')

# Train a classifier (e.g., Random Forest) on the reduced feature set
clf_after = RandomForestClassifier(n_estimators=100, random_state=42)
clf_after.fit(X_train_pca, y_train)

# Make predictions and evaluate the model after dimensionality reduction
y_pred_after = clf_after.predict(X_test_pca)
accuracy_after = accuracy_score(y_test, y_pred_after)
print(f'Accuracy after dimensionality reduction: {accuracy_after}')

Output:

Accuracy before dimensionality reduction: 0.8745
Accuracy after dimensionality reduction: 0.9235668789808917


The accuracy before dimensionality reduction is 0.8745, while the accuracy after feature selection and dimensionality reduction is about 0.9236. This improvement indicates that reducing the feature space (SelectKBest followed by PCA) helped the model generalize better to unseen data.
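
When choosing n_components, it can also help to check how much of the original variance the retained components preserve. A quick check (assuming the fitted pca object from the code above is still available) is:

import numpy as np

# Fraction of variance explained by each of the 10 retained components
print(pca.explained_variance_ratio_)
print('Total variance retained:', np.sum(pca.explained_variance_ratio_))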


