Feature Scaling is a technique for standardizing the independent features of a dataset to a fixed range. It is performed during data pre-processing to handle features whose magnitudes, values, or units vary widely. If feature scaling is not done, a machine learning algorithm tends to treat larger values as more important and smaller values as less important, regardless of the unit of the values.
Example: If an algorithm does not use feature scaling, it can treat the value 3000 (metres) as greater than 5 (kilometres), which is actually not true, and in that case the algorithm will give wrong predictions. So, we use Feature Scaling to bring all values onto comparable magnitudes and thus tackle this issue.
Techniques to perform Feature Scaling
Consider the two most important ones:
- Min-Max Normalization: This technique re-scales a feature so that its values lie between 0 and 1, using x' = (x − min(x)) / (max(x) − min(x)).
- Standardization: This is a very effective technique that re-scales a feature so that its distribution has mean 0 and variance 1, using x' = (x − mean(x)) / std(x).
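The two formulas above can be sketched directly with NumPy (the sample values are the Age column from the dataset used below; this is an illustration, not the article's own code):

```python
import numpy as np

# Age values from the example dataset used later in the article
x = np.array([44, 27, 30, 38, 40, 35, 78, 48, 50, 37], dtype=float)

# Min-Max normalization: x' = (x - min) / (max - min) -> values in [0, 1]
min_max = (x - x.min()) / (x.max() - x.min())

# Standardization: z = (x - mean) / std -> mean 0, variance 1
z = (x - x.mean()) / x.std()

print(min_max.round(4))
print(z.round(4))
```

Note that after standardization the values are no longer bounded to a fixed interval, but outliers distort the result far less than they do under min-max scaling.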
Download the dataset:
Go to the link and download Data_for_Feature_Scaling.csv
Below is the Python Code:
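A minimal sketch using scikit-learn's MinMaxScaler and StandardScaler that reproduces the printed output; the Age/Salary matrix is inlined here instead of being read from Data_for_Feature_Scaling.csv so the snippet is self-contained (the article's original listing presumably loads the CSV with pandas):

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

# Age and Salary columns from Data_for_Feature_Scaling.csv,
# inlined here so the example runs without the file
x = np.array([[44, 72000], [27, 48000], [30, 54000], [38, 61000],
              [40, 1000], [35, 58000], [78, 52000], [48, 79000],
              [50, 83000], [37, 67000]], dtype=float)

print("Original data values :\n", x)

# Min-Max scaling: each column is mapped independently to [0, 1]
min_max_scaled = MinMaxScaler(feature_range=(0, 1)).fit_transform(x)
print("\nAfter min max Scaling :\n", min_max_scaled)

# Standardisation: each column is re-scaled to mean 0, variance 1
standardised = StandardScaler().fit_transform(x)
print("\nAfter Standardisation :\n", standardised)
```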
Output:

   Country  Age  Salary  Purchased
0   France   44   72000          0
1    Spain   27   48000          1
2  Germany   30   54000          0
3    Spain   38   61000          0
4  Germany   40    1000          1

Original data values :
[[   44 72000]
 [   27 48000]
 [   30 54000]
 [   38 61000]
 [   40  1000]
 [   35 58000]
 [   78 52000]
 [   48 79000]
 [   50 83000]
 [   37 67000]]

After min max Scaling :
[[0.33333333 0.86585366]
 [0.         0.57317073]
 [0.05882353 0.64634146]
 [0.21568627 0.73170732]
 [0.25490196 0.        ]
 [0.15686275 0.69512195]
 [1.         0.62195122]
 [0.41176471 0.95121951]
 [0.45098039 1.        ]
 [0.19607843 0.80487805]]

After Standardisation :
[[ 0.09536935  0.66527061]
 [-1.15176827 -0.43586695]
 [-0.93168516 -0.16058256]
 [-0.34479687  0.16058256]
 [-0.1980748  -2.59226136]
 [-0.56487998  0.02294037]
 [ 2.58964459 -0.25234403]
 [ 0.38881349  0.98643574]
 [ 0.53553557  1.16995867]
 [-0.41815791  0.43586695]]
Improved By : Vijay Sirra