
Comparison of LDA and PCA 2D projection of Iris dataset in Scikit Learn

LDA and PCA are both dimensionality reduction techniques: they reduce the number of features in a dataset while preserving as much of its structure and information as possible. In this article, we will use the Iris dataset along with scikit-learn's pre-implemented classes to perform LDA and PCA, each with a single line of code. Projecting the data down to 2D and visualizing it helps us identify the patterns that separate the different classes of the dataset.

Implementing PCA using Scikit Learn




from sklearn import datasets
import pandas as pd
import numpy as np
import seaborn as sb
import matplotlib.pyplot as plt
  
# Load the Iris dataset bundled with scikit-learn
iris = datasets.load_iris()
iris.keys()

Output:



dict_keys(['data', 'target', 'frame', 'target_names', 'DESCR',
 'feature_names', 'filename', 'data_module'])

There are several keys in the dataset that we can use to access particular parts of the data. For instance, iris['data'] returns the sepal and petal length and width measurements of the iris flowers.
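
For instance, the feature matrix iris['data'] is a 150 × 4 NumPy array, one row per flower and one column per measurement:

# Shape of the raw feature matrix and the names of its columns
print(iris['data'].shape)       # (150, 4)
print(iris['feature_names'])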

Pandas is a fantastic tool for preprocessing and exploring datasets. So let's transform our data, which is currently a NumPy array, into a DataFrame with named columns.






# Combine the feature matrix and target vector into one DataFrame
iris = pd.DataFrame(
    data=np.c_[iris['data'], iris['target']],
    columns=iris['feature_names'] + ['target']
)
iris.head()

Output:

Iris dataset first five rows

Now, let’s separate the features and the target variable.




# As we only require the measurements,
# we drop the target column.
X = iris.drop(['target'], axis=1)
Y = iris['target']
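
As a quick sanity check, X should now hold the 150 rows of four measurements and Y the corresponding class labels:

print(X.shape)   # (150, 4)
print(Y.shape)   # (150,)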

Now, from the sklearn.decomposition module, we will import PCA and use it to project our dataset from 4D down to 2D.




from sklearn.decomposition import PCA

# Keep the two components that capture the most variance
pca = PCA(n_components=2)
iris_pca = pca.fit_transform(X)
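
Note that PCA is sensitive to the scale of the features. The Iris measurements are all in centimetres, so the raw values work well here, but on datasets with mixed units it is common to standardize first; a minimal sketch (the plots below use the unscaled features):

from sklearn.preprocessing import StandardScaler

# Standardize each feature to zero mean and unit variance before PCA
X_scaled = StandardScaler().fit_transform(X)
iris_pca_scaled = PCA(n_components=2).fit_transform(X_scaled)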

Now iris_pca holds the data in the desired 2D format. Let's plot it on a 2D plane to visualize the pattern between the classes.




# Pass x and y as keyword arguments (required by seaborn 0.12+)
sb.scatterplot(x=iris_pca[:, 0],
               y=iris_pca[:, 1],
               hue=iris['target'])
plt.show()

Output:

Visualising data obtained by using PCA

Let's check what percentage of the variance, i.e. the information of the original dataset, is retained after reducing its dimensionality.




# Fraction of the total variance explained by the first principal component
ret_variance = pca.explained_variance_ratio_[0]
ret_variance

Output:

0.9246187232017271
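
So the first principal component alone retains about 92% of the variance. Summing the ratios over both components gives the total retained by the 2D projection, which for Iris comes to roughly 98%:

# Total variance retained by the two principal components together
pca.explained_variance_ratio_.sum()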

Implementing LDA using Scikit Learn

In this step, we import the LDA model from the scikit-learn library. Note that for LDA, n_components can be at most n_classes - 1, which for the three Iris classes is 2.




from sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA

# Unlike PCA, LDA is supervised and needs the class labels to fit
lda = LDA(n_components=2)
iris_lda = lda.fit_transform(X, iris['target'])
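
As with PCA, we can check how much of the between-class variance each discriminant captures; with the default 'svd' solver, the fitted model exposes an explained_variance_ratio_ attribute:

# Fraction of between-class variance captured by each discriminant
lda.explained_variance_ratio_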

Now let's plot the lower-dimensional data on a 2D plane and try to visualize the distinction between the three classes.




# Again, pass x and y as keyword arguments
sb.scatterplot(x=iris_lda[:, 0],
               y=iris_lda[:, 1],
               hue=iris['target'])
plt.show()

Output:

Visualising data obtained by using LDA

LDA maximizes the separation between the different classes, whereas PCA maximizes the variance of the data without using the class labels at all. Because PCA is unsupervised, it tends to perform better when there are only a few samples per class; LDA, however, generally performs better on large datasets with many well-represented classes.
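
To compare the two projections directly, we can draw them side by side; here is a minimal sketch reusing the iris_pca and iris_lda arrays computed above:

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(12, 5))

# PCA: unsupervised, axes point along directions of maximum variance
sb.scatterplot(x=iris_pca[:, 0], y=iris_pca[:, 1],
               hue=iris['target'], ax=ax1)
ax1.set_title('PCA projection')

# LDA: supervised, axes maximize separation between the three classes
sb.scatterplot(x=iris_lda[:, 0], y=iris_lda[:, 1],
               hue=iris['target'], ax=ax2)
ax2.set_title('LDA projection')

plt.show()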

