**Gradient Descent** is an optimization technique used in Machine Learning frameworks to train models. Training revolves around an objective function (or error function), which measures the error a model makes on a given dataset.

While training, the parameters of the model are initialized to random values. As the algorithm iterates, the parameters are updated so that they move closer and closer to the minimum of the objective function.
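The update described above can be sketched in a few lines of plain (non-adaptive) gradient descent. The function, initial value, and learning rate below are purely illustrative:

```python
# A minimal sketch of plain gradient descent on a single-variable
# function f(x) = (x - 3)^2, whose minimum is at x = 3.

def grad(x):
    # Gradient of f(x) = (x - 3)^2
    return 2 * (x - 3)

x = 0.0          # initial value of the parameter
alpha = 0.1      # learning rate
for _ in range(100):
    x = x - alpha * grad(x)   # step against the gradient

print(x)  # approaches the minimizer x = 3
```

Each step moves the parameter a fraction `alpha` of the way along the negative gradient, so the iterates approach the minimum.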

However, **Adaptive Optimization Algorithms** are gaining popularity because they converge more swiftly. In contrast to conventional Gradient Descent, these algorithms use statistics from previous iterations to make the convergence process more robust.

### Momentum-based Optimization:

Momentum-based optimization is an Adaptive Optimization Algorithm that uses exponentially weighted averages of gradients over previous iterations to stabilize convergence, resulting in quicker optimization. In most real-world applications of Deep Neural Networks, for example, training is carried out on noisy data fed in batches, so it is necessary to reduce the effect of this noise during optimization. This problem can be tackled using **Exponentially Weighted Averages** (also called Exponentially Weighted Moving Averages).

**Implementing Exponentially Weighted Averages:**

In order to approximate the trends in a noisy dataset of size N:

```
θ_0, θ_1, θ_2, ..., θ_N
```

we maintain a set of parameters v_0, v_1, ..., v_N. As we iterate through all the values in the dataset, we calculate the parameters as below:

```
On iteration t:
    Get next θ_t
    v_t = β · v_(t−1) + (1 − β) · θ_t
```

This recurrence averages each value of θ with its values from previous iterations, weighting older samples exponentially less. The averaging ensures that only the trend is retained and the noise is averaged out. This method is used as a strategy in momentum-based gradient descent to make it robust against noise in data samples, resulting in faster training.
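The recurrence above can be sketched directly in Python. The noisy data here are synthetic and purely illustrative:

```python
# A minimal sketch of exponentially weighted averaging over a noisy
# sequence; the underlying trend and noise are invented for illustration.
import random

random.seed(0)
beta = 0.9

# Noisy samples theta_t around an underlying linear trend
data = [t + random.uniform(-1, 1) for t in range(50)]

v = 0.0
averages = []
for theta in data:
    # v_t = beta * v_(t-1) + (1 - beta) * theta_t
    v = beta * v + (1 - beta) * theta
    averages.append(v)
```

Note that the early values of `v` are biased toward zero because `v_0 = 0`; implementations sometimes correct for this by dividing `v_t` by `(1 − β^t)`.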

As an example, if you were to optimize a function on the parameter W, the following pseudocode illustrates the algorithm:

```
On iteration t:
    On the current batch, compute dW
    v = β · v + (1 − β) · dW
    W = W − α · v
```

The hyperparameters of this optimization algorithm are α, called **the learning rate**, and β, which plays a role similar to momentum in mechanics.

Following is an implementation of Momentum-based Gradient Descent on the function f(x) = x² − 4x + 4, whose minimum is at x = 2:

```python
# Hyperparameters of the optimization algorithm
alpha = 0.01   # learning rate
beta = 0.9     # momentum coefficient

# Objective function: f(x) = x^2 - 4x + 4
def obj_func(x):
    return x * x - 4 * x + 4

# Gradient of the objective function: f'(x) = 2x - 4
def grad(x):
    return 2 * x - 4

x = 0            # parameter of the objective function
iterations = 0   # number of iterations
v = 0            # exponentially weighted average of gradients

while True:
    iterations += 1
    v = beta * v + (1 - beta) * grad(x)   # update the moving average
    x_prev = x
    x = x - alpha * v                     # step against the averaged gradient
    print("Value of x on iteration", iterations, "is", x)
    if x_prev == x:                       # stop once updates no longer change x
        print("Done optimizing the objective function.")
        break
```


