Decision Tree with Gini Index as Impurity Measure

Machine learning is the study of algorithms that improve automatically through experience, using training data. When building a machine learning model, that data should be cleaned and properly labeled. In this video, we discuss the Decision Tree with the Gini Impurity measure.

A decision tree is a supervised learning algorithm used for both classification and regression problems. Generally, we use a tree representation to solve a given problem: each leaf node corresponds to a class label, and attributes are tested at the internal nodes of the tree.
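For a concrete starting point, here is a minimal scikit-learn sketch (not the video's code) that fits a small classification tree using Gini impurity, the library's default splitting criterion; the iris dataset and the max_depth value are illustrative choices:

```python
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

# Fit a shallow tree that splits nodes by Gini impurity
X, y = load_iris(return_X_y=True)
clf = DecisionTreeClassifier(criterion="gini", max_depth=3, random_state=0)
clf.fit(X, y)
print(clf.predict(X[:3]))  # predicted class labels for the first three samples
```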

Decision trees underpin many classical machine learning algorithms, such as Random Forests, Bagging, and Boosted Decision Trees. They are now widely used in machine learning for predictive modeling, covering both classification and regression.

To estimate the impurity at a decision node, we have two common measures: Entropy and Gini Impurity.

Entropy - A measure of randomness, rooted in information theory, that quantifies the impurity in a group of observations. It guides how the decision tree chooses where to split the data so we can reach pure leaf nodes. We discuss the entropy formula in the video with an example.
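As a rough sketch of that formula (H = -sum(p_i * log2(p_i)) over the class proportions p_i), an entropy helper in Python might look like the following; the function name and the 9-vs-5 example node are illustrative, not from the video:

```python
import numpy as np

def entropy(labels):
    """Entropy H = -sum(p_i * log2(p_i)) over the class proportions p_i."""
    _, counts = np.unique(labels, return_counts=True)
    probs = counts / counts.sum()
    return -np.sum(probs * np.log2(probs))

# Example: a node holding 9 samples of class 0 and 5 of class 1
node = [0] * 9 + [1] * 5
print(round(entropy(node), 3))  # ~0.940
```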

Information Gain - It helps determine the order in which attributes are placed at the nodes of a decision tree. Information gain measures the reduction in entropy achieved by a split.

Gini Impurity - It is used when building decision trees to determine how the features of a dataset should split nodes to form the tree.
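To make the two measures concrete, here is a small Python sketch computing Gini impurity (G = 1 - sum(p_i^2)) and information gain for a candidate binary split; the helper names and the example label counts are assumptions for illustration:

```python
import numpy as np

def entropy(labels):
    """Entropy H = -sum(p_i * log2(p_i)) over the class proportions p_i."""
    _, counts = np.unique(labels, return_counts=True)
    probs = counts / counts.sum()
    return -np.sum(probs * np.log2(probs))

def gini(labels):
    """Gini impurity G = 1 - sum(p_i^2) over the class proportions p_i."""
    _, counts = np.unique(labels, return_counts=True)
    probs = counts / counts.sum()
    return 1.0 - np.sum(probs ** 2)

def information_gain(parent, left, right):
    """Parent entropy minus the size-weighted entropy of the two children."""
    n = len(parent)
    child = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - child

# Example: a mixed 9-vs-5 node split into two purer children
parent = [0] * 9 + [1] * 5
left, right = [0] * 6 + [1], [0] * 3 + [1] * 4
print(round(gini(parent), 3))                           # 0.459
print(round(information_gain(parent, left, right), 3))  # ~0.152
```

A split with higher information gain (equivalently, a larger drop in impurity) is preferred when choosing which feature to test at a node.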

Related Articles: https://www.geeksforgeeks.org/gini-impurity-and-entropy-in-decision-tree-ml/
