Let’s see how to train a support vector machine (SVM) model, save the trained model, and test it to check its prediction accuracy using OpenCV.
Using autocrop, we collect data from the web, then crop faces and resize them to smaller sizes in bulk. The collected data needs to be organized meaningfully so we can access it both programmatically and manually. Use the folder structure below:
FFR_dataset/
|-- Age
|   |-- adult
|   |-- child
|   |-- old
|   |-- teen
|-- Emotion
|   |-- anger
|   |-- contempt
|   |-- happy
|   |-- neutral
|   |-- sad
|   |-- surprise
|-- Gender
    |-- female
    |-- male
We use the same directory names in the code to access the images for training, saving and prediction. A minimum of 50 images in each folder is required to train the models to get good prediction results. Training on more images can improve the results, but it is not recommended here because it takes much longer to execute and yields only marginal improvement.
Building on train_HOG.cpp, the sample provided in the official OpenCV repo for training an SVM with HOG features, we implement the C++ code to train, save and predict the facial features on an image with multiple faces.
There are three feature types: Age, Emotion and Gender, with four age groups, six emotions and two gender classes. Hence an n-class classifier is implemented to recognize each feature type on a face.
Step #1: For each feature type (Age, Emotion or Gender), run the training pipeline once.
Step #2: In each run, iterate through the feature values of that feature type and load the images into a vector, e.g. get all the images from the folders Gender->female and Gender->male.
Step #3 to #6:
- Crop each image in the vector to its face rectangle and update the images vector with the new faces list.
- Perform any pre-processing tasks, such as resizing each face to a smaller size (64×64).
- Shuffle the pre-processed face images to randomize the input order.
- Split the dataset into training (80%) and prediction (20%) data.
Step #7: Compute the HOG descriptor for each image in the training data.
Step #8: Convert the training data vector to an OpenCV Mat object to train the SVM.
Step #9: Pass the training data Mat object to the SVM train function along with a vector of labels for the training data.
Step #10: Save the trained model.
Step #11: Test the model: compute the HOG descriptor for each prediction image, convert the prediction dataset to an OpenCV Mat object, and call the SVM predict function, storing the results in a vector of labels.
Steps #12 and #13: Calculate the percentage accuracy by comparing the expected prediction labels with the predicted labels.
Run the executable with the command line arguments below.
./train_hog --test --in=
Note: due to the long hair of all three people in the image, the gender is detected as ‘female’, which is a false positive. False positives are common in machine learning algorithms when the input image has ambiguous features.