Tensorflow.js tf.train.adam() Function
The Adam optimizer (Adaptive Moment Estimation) is a stochastic gradient descent method based on adaptive estimation of first-order and second-order moments. The technique is highly efficient when working with large sets of data and parameters.
In Tensorflow.js, the tf.train.adam() function creates a tf.AdamOptimizer, which uses the Adam algorithm.
tf.train.adam(learningRate?, beta1?, beta2?, epsilon?)
- learningRate: The learning rate to use for the Adam gradient descent algorithm. It is optional.
- beta1: The exponential decay rate for the 1st moment estimates. It is optional.
- beta2: The exponential decay rate for the 2nd moment estimates. It is optional.
- epsilon: A small constant for numerical stability. It is optional.
Return Value: AdamOptimizer.
Example 1: A quadratic function is defined over an input tensor x, with l, m, and n as randomly initialized coefficients. We compute the mean squared error between the prediction and the target values y, then pass that loss to the Adam optimizer, which minimizes it by adjusting the coefficients at each step.
Output:

l: 0.5212615132331848, m: 0.4882013201713562, n: 0.9879841804504395
l: 0.5113212466239929, m: 0.49809587001800537, n: 0.9783468246459961
l: 0.5014950633049011, m: 0.5077731013298035, n: 0.969675600528717
l: 0.49185076355934143, m: 0.5170749425888062, n: 0.9630305171012878
l: 0.48247095942497253, m: 0.5257879495620728, n: 0.9595866799354553
l: 0.47345229983329773, m: 0.5336435437202454, n: 0.9596782922744751
l: 0.4649032950401306, m: 0.5403363704681396, n: 0.9626657962799072
l: 0.4569399356842041, m: 0.5455683469772339, n: 0.9677067995071411
l: 0.4496782124042511, m: 0.5491118431091309, n: 0.9741682410240173
l: 0.44322386384010315, m: 0.5508641004562378, n: 0.9816395044326782
x: 0, pred: 0.9816395044326782
x: 1, pred: 0.8739992380142212
x: 2, pred: 3.4257020950317383
x: 3, pred: 11.29609203338623
Example 2: Below, we design a simple model, define an optimizer with tf.train.adam() and a learning rate of 0.01, and pass it to the model's compile() call.