ML | Face Recognition Using Eigenfaces (PCA Algorithm)

Last Updated : 24 Sep, 2021
In 1991, Turk and Pentland proposed an approach to face recognition that uses dimensionality reduction and linear algebra concepts to recognize faces. The approach is computationally inexpensive and easy to implement, so at the time it was used in a variety of applications such as handwriting recognition, lip-reading, and medical image analysis. PCA (Principal Component Analysis) is a dimensionality reduction technique proposed by Pearson in 1901; it uses eigenvalues and eigenvectors to reduce dimensionality and project training samples onto a small feature space. Let’s look at the algorithm in more detail, from a face recognition perspective.

Training Algorithm (a NumPy sketch of these steps follows the list):
  • Consider a set of M training images, each of dimension N × N.
    [Figure: Training images with true labels (LFW people dataset)]

  • We first convert these images into vectors of size N²:
     x_{1}, x_{2}, x_{3}, \dots, x_{M}
  • Now we calculate the average ψ of all these face vectors and subtract it from each vector:
     \psi = \dfrac{1}{M}\sum_{i=1}^{M} x_i \qquad a_{i} = x_{i} - \psi
    [Figure: The average face]

  • Now we collect all the mean-subtracted face vectors as the columns of a matrix A of size N² × M:
     A = \begin{bmatrix} a_{1} & a_{2} & a_{3} & \dots & a_{M} \end{bmatrix}
  • Next we need a covariance matrix. A has dimensions N² × M, so Aᵀ has dimensions M × N², and the product AAᵀ is an N² × N² matrix; computing its N² eigenvectors of size N² is not computationally feasible. So instead we form the covariance matrix as AᵀA, an M × M matrix with only M eigenvectors of size M (assuming M ≪ N²):
     Cov = A^{T}A
  • In this step we calculate the eigenvalues and eigenvectors of the small covariance matrix and relate them to those of the full one:
     A^{T}A\nu_{i} = \lambda_{i}\nu_{i} \\ AA^{T}A\nu_{i} = \lambda_{i}A\nu_{i} \\ C'u_{i} = \lambda_{i}u_{i}
    where C' = AA^{T} and u_{i} = A\nu_{i}. From this it follows that AᵀA and C' have the same (non-zero) eigenvalues, and that their eigenvectors are related by u_{i} = A\nu_{i}. Thus the M eigenvalues (and eigenvectors) of AᵀA give the M largest eigenvalues (and eigenvectors) of C'.
  • So we compute the eigenvalues and eigenvectors of this reduced covariance matrix and map them into C' using u_{i} = A\nu_{i}.
  • Now we select the K eigenvectors of C' corresponding to the K largest eigenvalues (where K < M). These eigenvectors have size N².
  • Using the eigenvectors from the previous step, we represent each normalized training face (face − average face) as a linear combination of the K best eigenvectors (as shown in the diagram below):
      x_{i} - \psi = \sum_{j=1}^{K} w_{j}u_{j}
    These u_{j} are called Eigenfaces.
    [Figure: Eigenfaces]

  • Finally, we represent each training face by the vector of its eigenface coefficients:
     \Omega_{i} = \begin{bmatrix} w_{1}^{i}\\ w_{2}^{i}\\ w_{3}^{i}\\ \vdots\\ w_{K}^{i} \end{bmatrix}
    [Figure: Linear combination of eigenfaces]
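The training steps above translate almost line-for-line into NumPy. Below is a minimal sketch, assuming the faces arrive as an (M, N, N) array; the function name train_eigenfaces and the array layout are illustrative choices, not from the original article.

```python
import numpy as np

def train_eigenfaces(images, K):
    """Compute the mean face, top-K eigenfaces, and training weights.

    images : ndarray of shape (M, N, N) -- the M training faces
    K      : number of eigenfaces to keep (K < M)
    """
    M = images.shape[0]
    # Flatten each N x N image into a length-N^2 vector: X is (M, N^2)
    X = images.reshape(M, -1).astype(np.float64)

    # Average face psi, and mean-subtracted faces a_i = x_i - psi
    psi = X.mean(axis=0)
    A = (X - psi).T                       # (N^2, M), column i is a_i

    # Covariance trick: eigendecompose the small M x M matrix A^T A
    # instead of the huge N^2 x N^2 matrix A A^T
    eigvals, V = np.linalg.eigh(A.T @ A)  # eigenvalues in ascending order

    # Keep the K largest, and map back with u_i = A v_i
    top = np.argsort(eigvals)[::-1][:K]
    U = A @ V[:, top]                     # (N^2, K): the eigenfaces
    U /= np.linalg.norm(U, axis=0)        # normalize each eigenface

    # Weights w_j^i = u_j . a_i express face i as a combination of eigenfaces
    weights = A.T @ U                     # (M, K): row i is Omega_i
    return psi, U, weights
```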

Testing/Detection Algorithm (a code sketch follows this list as well):
[Figure: Test images with true labels]

  • Given an unknown face y, we first preprocess it so that it is centered in the image and has the same dimensions as the training faces.
  • Now we subtract the average face ψ from the test face:
     \phi = y - \psi
    [Figure: Test images − average face]

  • Now we project the normalized vector onto the eigenspace to obtain its linear combination of eigenfaces; since the eigenfaces are orthonormal, each weight is w_{i} = u_{i}^{T}\phi:
     \phi = \sum_{i=1}^{K} w_{i}u_{i}
  • From this projection we form the vector of coefficients:
     \Omega = \begin{bmatrix} w_{1}\\ w_{2}\\ w_{3}\\ \vdots\\ w_{K} \end{bmatrix}
  • We then compare Ω with the coefficient vector Ω_l of every training image and take the minimum distance:
     e_{r} = \min_{l}\left\| \Omega - \Omega_{l} \right\|
  • If e_r is below a tolerance threshold T_r, the face is recognized as training face l; otherwise it does not match any face in the training set.
    [Figure: Test images with predictions]
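Recognition then reduces to one projection and a nearest-neighbour search in weight space. This continues the hypothetical sketch above: psi, U, and weights are as returned by train_eigenfaces, while labels and tol (the threshold T_r) are assumed inputs.

```python
import numpy as np

def recognize(y, psi, U, weights, labels, tol):
    """Classify one flattened test face y (length N^2).

    labels[l] is the identity of training face l; tol is the threshold T_r.
    Returns the matched label, or None if no training face is close enough.
    """
    phi = y.astype(np.float64) - psi     # phi = y - psi
    omega = U.T @ phi                    # project into eigenspace: (K,)

    # Distance from Omega to every training weight vector Omega_l
    dists = np.linalg.norm(weights - omega, axis=1)
    l = int(np.argmin(dists))
    return labels[l] if dists[l] < tol else None
```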

Advantages:
  • Easy to implement and computationally inexpensive.
  • No prior knowledge of the images (such as facial features) is required, only the identity labels.
Limitations:
  • A properly centered face is required for training/testing.
  • The algorithm is sensitive to lighting, shadows, and the scale of the face in the image.
  • A frontal view of the face is required for the algorithm to work properly.
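For a quick sanity check on the LFW dataset shown in the figures, the whole pipeline can also be driven by scikit-learn, whose PCA components are exactly the eigenfaces. This is a rough sketch under assumptions of my own: n_components=100, whitening, and a plain nearest-neighbour match are illustrative choices, not from the article.

```python
import numpy as np
from scipy.spatial.distance import cdist
from sklearn.datasets import fetch_lfw_people
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split

# Fetch LFW faces (downloads on first run); keep well-represented people
lfw = fetch_lfw_people(min_faces_per_person=70)
X_train, X_test, y_train, y_test = train_test_split(
    lfw.data, lfw.target, random_state=0)

# PCA components are the eigenfaces; transform() yields the weight vectors
pca = PCA(n_components=100, whiten=True).fit(X_train)
W_train = pca.transform(X_train)     # Omega_l for each training face
W_test = pca.transform(X_test)       # Omega for each test face

# Recognize each test face as its nearest neighbour in weight space
pred = y_train[cdist(W_test, W_train).argmin(axis=1)]
print("accuracy:", (pred == y_test).mean())
```

Accuracy will be modest, reflecting the sensitivity to lighting, scale, and pose noted in the limitations above.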