LightGBM is a gradient-boosting framework based on decision trees, designed to increase model efficiency and reduce memory usage.
It uses two novel techniques:
- Gradient-based One-Side Sampling (GOSS)
- Exclusive Feature Bundling (EFB)
These techniques address limitations of the histogram-based algorithm that is primarily used in all GBDT (Gradient Boosting Decision Tree) frameworks. GOSS and EFB, described below, form the defining characteristics of the LightGBM algorithm. Together they make the model work efficiently and give it a cutting edge over other GBDT frameworks.
Gradient-based One-Side Sampling Technique for LightGBM:
Different data instances play different roles in the computation of information gain. Instances with larger gradients (i.e., under-trained instances) contribute more to the information gain. GOSS keeps the instances with large gradients (e.g., larger than a predefined threshold, or among the top percentiles) and randomly drops only instances with small gradients, so the accuracy of the information gain estimation is retained. This treatment can lead to a more accurate gain estimate than uniformly random sampling at the same sampling rate, especially when the information gain values span a large range.
Algorithm for GOSS:
Input:
I: training data
d: number of iterations
a: sampling ratio of large gradient data
b: sampling ratio of small gradient data
loss: loss function
L: weak learner
Models <- {} # a list of weak models
fact <- (1-a)/b
topN <- a * len(I) # number of top samples to be included
randN <- b * len(I) # number of random samples to be included
for i = 1 to d do:
    preds <- Models.predict(I)
    g <- loss(I, preds)
    w <- {1, 1, ...}                                   # initialize sample weights
    sorted <- GetSortedIndices(abs(g))
    topSet <- sorted[1:topN]
    randSet <- RandomPick(sorted[topN:len(I)], randN)
    usedSet <- topSet + randSet                        # combine the top and random samples
    w[randSet] <- w[randSet] * fact                    # up-weight the small-gradient samples
    newModel <- L(I[usedSet], g[usedSet], w[usedSet])  # train a new model on the used samples
    Models.append(newModel)                            # add the new model to the model list
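To make the sampling step concrete, here is a minimal NumPy sketch of one GOSS iteration's selection and re-weighting. It is an illustrative translation of the pseudocode above, not LightGBM's internal implementation; the function name goss_sample and the toy gradients are assumptions for the example.

```python
import numpy as np

def goss_sample(grad, a=0.2, b=0.1, seed=0):
    """One GOSS selection step: keep all large-gradient instances,
    sample the small-gradient ones, and re-weight the sampled part.

    grad : per-instance gradients from the current ensemble
    a    : sampling ratio of large-gradient data
    b    : sampling ratio of small-gradient data
    """
    rng = np.random.default_rng(seed)
    n = len(grad)
    top_n = int(a * n)                 # number of large-gradient instances kept
    rand_n = int(b * n)                # number of small-gradient instances sampled
    fact = (1 - a) / b                 # compensates for the down-sampling

    order = np.argsort(-np.abs(grad))  # indices sorted by |gradient|, descending
    top_set = order[:top_n]
    rand_set = rng.choice(order[top_n:], size=rand_n, replace=False)

    used = np.concatenate([top_set, rand_set])
    weights = np.ones(len(used))
    weights[top_n:] *= fact            # up-weight the small-gradient samples
    return used, weights

# toy gradients for 1,000 instances
g = np.random.default_rng(1).normal(size=1000)
idx, w = goss_sample(g)
print(len(idx), w.min(), w.max())      # 300 selected rows, weights 1.0 and 8.0
```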
Mathematical Analysis of the GOSS Technique (Calculation of the Variance Gain when Splitting Feature j)
GOSS (Gradient-based One-Side Sampling) is applied in gradient boosting to a training set of n instances {x_1, ..., x_n}, where each instance x_i is a vector of dimension s in the space X^s. In each iteration of gradient boosting, the negative gradients of the loss function with respect to the output of the model are denoted {g_1, ..., g_n}. The training instances are ranked in descending order by the absolute values of their gradients, and the top a × 100% instances with the largest gradients are selected to form a subset A.
For the remaining set A^c, consisting of the (1 − a) × 100% instances with smaller gradients, a random subset B of size b × |A^c| is sampled. The instances are then split according to the estimated variance gain $\tilde{V}_j(d)$ over the subset A ∪ B, where

$$
\tilde{V}_j(d) = \frac{1}{n}\left(\frac{\Big(\sum_{x_i \in A_l} g_i + \frac{1-a}{b}\sum_{x_i \in B_l} g_i\Big)^2}{n_l^j(d)} + \frac{\Big(\sum_{x_i \in A_r} g_i + \frac{1-a}{b}\sum_{x_i \in B_r} g_i\Big)^2}{n_r^j(d)}\right)
$$

Here, $A_l = \{x_i \in A : x_{ij} \le d\}$, $A_r = \{x_i \in A : x_{ij} > d\}$, $B_l = \{x_i \in B : x_{ij} \le d\}$, $B_r = \{x_i \in B : x_{ij} > d\}$, and $n_l^j(d)$, $n_r^j(d)$ are the numbers of instances of A ∪ B falling on the left and right of the split, respectively.
The coefficient (1 − a)/b is used to normalize the sum of the gradients over B back to the size of A^c.
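As a worked illustration of this estimator, the following sketch computes the estimated variance gain on toy data. The feature values, gradients, and threshold d are made up for the example; the code mirrors the formula above rather than LightGBM's internal implementation.

```python
import numpy as np

def estimated_variance_gain(x_j, g, A, B, d, a, b, n):
    """Estimate the variance gain of splitting feature j at threshold d,
    using the retained set A (large gradients) and the sampled set B."""
    fact = (1 - a) / b
    A_l, A_r = A[x_j[A] <= d], A[x_j[A] > d]
    B_l, B_r = B[x_j[B] <= d], B[x_j[B] > d]
    n_l = len(A_l) + len(B_l)          # instances of A ∪ B on the left of the split
    n_r = len(A_r) + len(B_r)          # instances of A ∪ B on the right of the split
    left = (g[A_l].sum() + fact * g[B_l].sum()) ** 2 / max(n_l, 1)
    right = (g[A_r].sum() + fact * g[B_r].sum()) ** 2 / max(n_r, 1)
    return (left + right) / n

rng = np.random.default_rng(0)
n, a, b = 1000, 0.2, 0.1
x_j = rng.uniform(size=n)              # toy values of feature j
g = rng.normal(size=n)                 # toy negative gradients
order = np.argsort(-np.abs(g))
A = order[: int(a * n)]                                            # top-gradient instances
B = rng.choice(order[int(a * n):], size=int(b * n), replace=False) # sampled small-gradient instances
print(estimated_variance_gain(x_j, g, A, B, d=0.5, a=a, b=b, n=n))
```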
Exclusive Feature Bundling Technique for LightGBM
High-dimensional data are usually very sparse, which makes it possible to design a nearly lossless approach to reduce the number of features. Specifically, in a sparse feature space, many features are mutually exclusive, i.e., they never take nonzero values simultaneously. Exclusive features can be safely bundled into a single feature (called an Exclusive Feature Bundle). The complexity of histogram building thus drops from O(data × feature) to O(data × bundle), where bundle << feature, so training speed improves without hurting accuracy.
Algorithm for Exclusive Feature Bundling Technique:
Input:
numData: the number of data points in the dataset
F: a bundle of exclusive features
Output:
newBin: a new feature vector obtained from bundling the input features in F
binRanges: a list of bin ranges used to map the original feature values to the new feature values
Algorithm:
1. Initialize binRanges as [0] and totalBin as 0.
2. For each feature f in F, add f.numBin to totalBin and append the running total to binRanges.
3. Create a new feature vector newBin with numData elements.
4. For each data point i in the dataset:
   a. Initialize newBin[i] to 0.
   b. For each feature j in F:
      i. If F[j].bin[i] is not equal to 0, add F[j].bin[i] + binRanges[j] to newBin[i].
5. Return newBin and binRanges as the output.
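Below is a small Python sketch of this merge procedure, assuming each feature has already been discretized into bins. The Feature container and the toy data are illustrative stand-ins, not LightGBM internals.

```python
from dataclasses import dataclass

@dataclass
class Feature:
    bin: list          # pre-binned values; 0 means "feature not present" for that row
    num_bin: int       # number of bins used by this feature

def merge_exclusive_features(num_data, F):
    """Merge a bundle of mutually exclusive binned features into one feature."""
    bin_ranges, total_bin = [0], 0
    for f in F:                                   # give each feature its own bin offset
        total_bin += f.num_bin
        bin_ranges.append(total_bin)
    new_bin = [0] * num_data
    for i in range(num_data):
        for j, f in enumerate(F):
            if f.bin[i] != 0:                     # only the feature that "owns" this row contributes
                new_bin[i] += f.bin[i] + bin_ranges[j]
    return new_bin, bin_ranges

# two exclusive features: they are never nonzero on the same row
f1 = Feature(bin=[1, 0, 2, 0], num_bin=3)
f2 = Feature(bin=[0, 2, 0, 1], num_bin=4)
print(merge_exclusive_features(4, [f1, f2]))      # ([1, 5, 2, 4], [0, 3, 7])
```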
Architecture of LightGBM
LightGBM grows trees leaf-wise, as opposed to other boosting algorithms that grow trees level-wise. It chooses the leaf with the maximum delta loss to grow. For a fixed number of leaves, the leaf-wise algorithm tends to achieve a lower loss than the level-wise algorithm. However, leaf-wise growth can increase model complexity and may lead to overfitting on small datasets.
Below is a diagrammatic representation of leaf-wise tree growth:

[Figure: Leaf-wise tree growth in LightGBM]
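As an illustration, leaf-wise growth is typically kept in check through parameters such as num_leaves and max_depth. A minimal sketch follows; the specific values are arbitrary examples, not tuned recommendations.

```python
from lightgbm import LGBMClassifier

# Leaf-wise growth is LightGBM's default behaviour; capping num_leaves and
# max_depth limits model complexity, which helps against overfitting on
# small datasets.
model = LGBMClassifier(
    num_leaves=31,         # maximum number of leaves per tree
    max_depth=7,           # optional hard cap on tree depth
    min_child_samples=20,  # minimum number of samples required in a leaf
)
# model.fit(x_train, y_train) would then grow each tree leaf by leaf,
# always splitting the leaf with the largest loss reduction
# (x_train/y_train as built in the implementation section below).
```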
Python Implementation of LightGBM Model
The dataset used for this example is the Breast Cancer Prediction dataset (cancer_prediction.csv).
```python
# Install LightGBM first if needed:
# pip install lightgbm

import pandas as pd
from lightgbm import LGBMClassifier

# Load the breast cancer prediction dataset and drop unused columns
data = pd.read_csv("cancer_prediction.csv")
data = data.drop(columns=['Unnamed: 32', 'id'])

# Encode the diagnosis labels (M = malignant, B = benign) as 1/0
data['diagnosis'] = data['diagnosis'].map({'M': 1, 'B': 0})

# Simple train/test split by row position
train = data[0:400]
test = data[400:568]
x_train = train.drop(columns=['diagnosis'])
y_train = train['diagnosis']
x_test = test.drop(columns=['diagnosis'])
y_test = test['diagnosis']

# Train the LightGBM classifier and evaluate it on the held-out rows
model = LGBMClassifier()
model.fit(x_train, y_train)
pred = model.predict(x_test)
print(pred)

accuracy = model.score(x_test, y_test)
print(accuracy)
```
Output
Prediction array :
[0 1 1 1 1 1 1 1 0 1 1 1 1 0 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 1 0 1 0 0 1 0 1
1 1 1 1 0 1 1 0 1 0 1 1 0 1 0 1 1 1 1 1 1 1 1 0 0 1 1 1 1 1 1 0 1 1 1 1 1
1 1 1 1 1 0 1 1 1 1 1 1 1 0 1 0 1 0 0 1 1 1 1 1 0 0 1 0 1 0 1 1 1 1 1 0 1
1 0 1 0 1 0 0 1 1 1 0 1 1 1 1 1 1 1 1 1 1 1 0 1 0 0 1 1 1 1 0 0 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 0 0 0 0 0 0]
Accuracy Score :
0.9702380952380952
Parameter Tuning of the LightGBM Model
A few important parameters and their usage are listed below; a short usage sketch follows the list:
- max_depth : It sets a limit on the depth of the tree. The default value is -1, which places no limit on depth. It is effective in controlling overfitting.
- categorical_feature : It specifies the categorical features used for training the model.
- bagging_fraction : It specifies the fraction of data to be considered for each iteration.
- num_iterations : It specifies the number of iterations to be performed. The default value is 100.
- num_leaves : It specifies the maximum number of leaves in a tree. The default value is 31, and it should be kept smaller than 2^(max_depth).
- max_bin : It specifies the maximum number of bins to bucket the feature values.
- min_data_in_bin : It specifies the minimum amount of data in one bin.
- task : It specifies the task to perform, either train (the default) or prediction.
- feature_fraction : It specifies the fraction of features to be considered in each iteration. The default value is one.
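As a short sketch, several of these parameters can be passed through LightGBM's native training API. The values below are arbitrary placeholders, and the toy X, y arrays stand in for a real feature matrix and label vector; note that bagging_freq must be set for bagging_fraction to take effect.

```python
import numpy as np
import lightgbm as lgb

# toy data standing in for a real feature matrix and label vector
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))
y = rng.integers(0, 2, size=500)

params = {
    "objective": "binary",
    "max_depth": 7,            # limit tree depth to control overfitting
    "num_leaves": 31,          # keep this below 2**max_depth
    "bagging_fraction": 0.8,   # fraction of data used per iteration
    "bagging_freq": 1,         # bagging must be enabled for bagging_fraction to apply
    "feature_fraction": 0.9,   # fraction of features used per iteration
    "max_bin": 255,            # maximum number of bins per feature
    "min_data_in_bin": 3,      # minimum amount of data in one bin
    "verbosity": -1,           # silence training logs
}

train_set = lgb.Dataset(X, label=y)
booster = lgb.train(params, train_set, num_boost_round=100)  # num_iterations
print(booster.num_trees())
```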
Advantages of the LightGBM Model
- Faster training speed and higher efficiency
- Lower memory usage
- Better accuracy than many other GBDT frameworks
- Support for parallel, distributed, and GPU learning
- Capable of handling large-scale data