Gaussian Discriminant Analysis

There are two types of Supervised Learning algorithms used for classification in Machine Learning.

  1. Discriminative Learning Algorithms
  2. Generative Learning Algorithms

Discriminative Learning Algorithms include Logistic Regression, the Perceptron Algorithm, etc., which try to find a decision boundary between different classes during the learning process. For example, given a classification problem to predict whether a patient has malaria or not, a Discriminative Learning Algorithm will try to create a classification boundary to separate the two types of patients, and when a new example is introduced, the side of the boundary on which it lies determines its class. Such algorithms try to model P(y|X), i.e. given a feature set X for a data sample, what is the probability that it belongs to the class ‘y’.

On the other hand, Generative Learning Algorithms follow a different approach: they try to capture the distribution of each class separately instead of finding a decision boundary between classes. Considering the previous example, a Generative Learning Algorithm will look at the distributions of infected patients and healthy patients separately and try to learn each distribution’s features; when a new example is introduced, it is compared against both distributions and assigned to the class whose distribution it resembles most. Such algorithms try to model P(X|y) for a given P(y), where P(y) is known as the class prior.

The predictions for generative learning algorithms are made using Bayes Theorem as follows:

P(y|X) = \dfrac{P(X|y)\,P(y)}{P(X)}, \quad \text{where } P(X) = P(X|y=1)P(y=1) + P(X|y=0)P(y=0)



Using only the values of P(X|y) and P(y) for a particular class, we can calculate P(y|X), i.e. given the features of a data sample, the probability that it belongs to the class ‘y’.
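As a minimal numeric sketch of this Bayes-rule prediction, the snippet below uses made-up density and prior values (all numbers are hypothetical, chosen only to show the arithmetic):

# Hypothetical values, only to illustrate the Bayes-rule arithmetic above
p_x_given_y1 = 0.30   # P(X|y=1): density of the features under class 1
p_x_given_y0 = 0.05   # P(X|y=0): density of the features under class 0
p_y1 = 0.40           # class prior P(y=1)
p_y0 = 1.0 - p_y1     # class prior P(y=0)

# Evidence: P(X) = P(X|y=1)P(y=1) + P(X|y=0)P(y=0)
p_x = p_x_given_y1 * p_y1 + p_x_given_y0 * p_y0

# Posterior P(y=1|X) via Bayes theorem
p_y1_given_x = (p_x_given_y1 * p_y1) / p_x
print(p_y1_given_x)   # 0.12 / 0.15 = 0.8, so this sample would be assigned class 1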

Gaussian Discriminant Analysis is a Generative Learning Algorithm, and in order to capture the distribution of each class it fits a separate Gaussian Distribution to every class of the data. The images below depict the difference between Discriminative and Generative Learning Algorithms. The probability of a prediction in the case of a Generative Learning Algorithm is high if the example lies near the centre of the contour corresponding to its class, and it decreases as we move away from the centre of the contour.

[Figure: Generative Learning Algorithm (GDA)]

[Figure: Discriminative Learning Algorithm]

Let’s consider a binary classification problem in which all the data samples are IID (Independently and Identically Distributed). To model P(X|y) we can use a Multivariate Gaussian Distribution to form a probability density function for each individual class, and to model P(y), the class prior, we can use a Bernoulli distribution, since every label in binary classification takes either the value 1 or 0.

Therefore, the probability distribution and the class prior of a data sample can be defined using the general forms of the Gaussian and Bernoulli distributions respectively:

P(x|y=0) = \dfrac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}} \exp\!\left(-\dfrac{1}{2}(x-\mu_0)^{T}\Sigma^{-1}(x-\mu_0)\right) \quad \textbf{Eq 1} \\
P(x|y=1) = \dfrac{1}{(2\pi)^{n/2}|\Sigma|^{1/2}} \exp\!\left(-\dfrac{1}{2}(x-\mu_1)^{T}\Sigma^{-1}(x-\mu_1)\right) \quad \textbf{Eq 2} \\
P(y) = \phi^{y}(1-\phi)^{1-y} \quad \textbf{Eq 3}
In the above equations:
\mu_0 is the mean of the data samples belonging to class 0, of dimensions \mathbb{R}^{n \times 1}
\mu_1 is the mean of the data samples belonging to class 1, of dimensions \mathbb{R}^{n \times 1}
\Sigma is the covariance matrix, of dimensions \mathbb{R}^{n \times n}
\phi is the probability that a data sample belongs to class 1
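A minimal NumPy sketch of Eq 1–3 is given below; the helper names gaussian_density and class_prior are our own, chosen only for illustration:

import numpy as np

def gaussian_density(x, mu, sigma):
    """Multivariate Gaussian density (Eq 1 / Eq 2); x, mu are (n,) arrays, sigma is (n, n)."""
    n = x.shape[0]
    diff = x - mu
    norm = 1.0 / (np.power(2 * np.pi, n / 2) * np.sqrt(np.linalg.det(sigma)))
    return norm * np.exp(-0.5 * diff @ np.linalg.inv(sigma) @ diff)

def class_prior(y, phi):
    """Bernoulli class prior (Eq 3): P(y) = phi^y * (1 - phi)^(1 - y)."""
    return (phi ** y) * ((1 - phi) ** (1 - y))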



In order to view the probability distributions as a function of the parameters mentioned above, we can define a Likelihood function, which is equal to the product of the probability distribution and class prior of each data sample (taking the product of the probabilities is reasonable as all the data samples are assumed to be IID).

L(\phi, \mu_0, \mu_1, \Sigma) = \prod_{i=1}^{m} P(x^{(i)}, y^{(i)}; \phi, \mu_0, \mu_1, \Sigma) = \prod_{i=1}^{m} P(x^{(i)}|y^{(i)}) \, P(y^{(i)}) \quad \textbf{Eq 4}
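The sketch below spells out Eq 4 in code, reusing the hypothetical gaussian_density and class_prior helpers defined earlier; in practice the logarithm of this product is maximized instead, to avoid numerical underflow:

def likelihood(X, y, phi, mu0, mu1, sigma):
    """Product over all m samples of P(x_i | y_i) * P(y_i), as in Eq 4."""
    value = 1.0
    for x_i, y_i in zip(X, y):
        mu = mu1 if y_i == 1 else mu0
        value *= gaussian_density(x_i, mu, sigma) * class_prior(y_i, phi)
    return value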

 

According to the principle of Maximum Likelihood Estimation, we have to choose the values of the parameters so as to maximize the likelihood function given in Eq 4. To do so, instead of maximizing the Likelihood function we can maximize the Log-Likelihood function, since the logarithm is a strictly increasing function.

Therefore, the Log-Likelihood function = \log(L(\phi, \mu_0, \mu_1, \Sigma)). On maximizing the Log-Likelihood, the following parameters are obtained:

\phi = \dfrac{1}{m}\sum_{i=1}^{m}\mathbb{1}\{y^{(i)} = 1\} \\
\mu_0 = \dfrac{\sum_{i=1}^{m}\mathbb{1}\{y^{(i)} = 0\}\,x^{(i)}}{\sum_{i=1}^{m}\mathbb{1}\{y^{(i)} = 0\}} \\
\mu_1 = \dfrac{\sum_{i=1}^{m}\mathbb{1}\{y^{(i)} = 1\}\,x^{(i)}}{\sum_{i=1}^{m}\mathbb{1}\{y^{(i)} = 1\}} \\
\Sigma = \dfrac{1}{m}\sum_{i=1}^{m}(x^{(i)} - \mu_{y^{(i)}})(x^{(i)} - \mu_{y^{(i)}})^{T}

In the above equations, the function \mathbb{1}\{\text{condition}\} is the indicator function, which returns 1 if the condition is true and 0 otherwise. For example, \mathbb{1}\{y=1\} returns 1 only when the class of that data sample is 1; similarly, \mathbb{1}\{y=0\} returns 1 only when the class of that data sample is 0.
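These closed-form estimates translate directly into NumPy; the function name fit_gda below is our own for illustration, assuming X is an (m, n) feature matrix and y an (m,) array of 0/1 labels:

import numpy as np

def fit_gda(X, y):
    """Closed-form maximum-likelihood estimates of phi, mu0, mu1 and Sigma."""
    m, n = X.shape
    phi = np.mean(y == 1)                  # fraction of samples labelled 1
    mu0 = X[y == 0].mean(axis=0)           # mean of class-0 samples
    mu1 = X[y == 1].mean(axis=0)           # mean of class-1 samples
    # Shared covariance: average outer product of the centred samples
    mus = np.where((y == 1)[:, None], mu1, mu0)
    diff = X - mus
    sigma = (diff.T @ diff) / m
    return phi, mu0, mu1, sigma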

The values of the parameters obtained can be plugged into Eq 1, 2, and 3 to find the probability distribution and class prior for each data sample. These values can then be multiplied to compute the likelihood function given in Eq 4. As mentioned earlier, the likelihood, i.e. P(X|y).P(y), can be plugged into the Bayes formula to predict P(y|X) (i.e. predict the class ‘y‘ of a data sample given its features ‘X‘).
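Putting the pieces together, a minimal prediction sketch (again reusing the hypothetical helpers defined above) looks like this:

def predict_gda(x, phi, mu0, mu1, sigma):
    """Return the predicted class for a single sample x via Bayes theorem."""
    joint1 = gaussian_density(x, mu1, sigma) * class_prior(1, phi)  # P(x|y=1)P(y=1)
    joint0 = gaussian_density(x, mu0, sigma) * class_prior(0, phi)  # P(x|y=0)P(y=0)
    posterior = joint1 / (joint1 + joint0)                          # P(y=1|x)
    return 1 if posterior >= 0.5 else 0

# Usage sketch: phi, mu0, mu1, sigma = fit_gda(X, y); predict_gda(x_new, phi, mu0, mu1, sigma)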

NOTE: The data samples in this model are assumed to be IID. Gaussian Discriminant Analysis will perform poorly if the data does not follow a Gaussian distribution; therefore, it is always suggested to visualize the data to check whether it is normally distributed, and if it is not, attempts can be made to make it so using methods such as a log-transform. (Do not confuse Gaussian Discriminant Analysis with the Gaussian Mixture Model, which is an unsupervised learning algorithm.)
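One informal way to carry out such a check is sketched below, assuming a hypothetical (m, n) feature matrix X with positive-valued features; heavily non-normal features are log-transformed:

import numpy as np
from scipy import stats

for j in range(X.shape[1]):
    _, p_value = stats.normaltest(X[:, j])   # D'Agostino-Pearson normality test
    if p_value < 0.05:                       # feature looks clearly non-normal
        X[:, j] = np.log1p(X[:, j])          # log(1 + x) transform (assumes x >= 0)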

Therefore, Gaussian Discriminant Analysis works quite well for a small amount of data (say a few thousand examples) and can be more robust than Logistic Regression if our underlying assumptions about the distribution of the data are true.

Reference: http://cs229.stanford.edu/notes2020spring/cs229-notes2.pdf

