Python PyTorch – backward() Function
In this article, we discuss the backward() method in PyTorch with detailed examples.
The backward() method in PyTorch computes gradients during the backward pass through a neural network. If backward() is never called, no gradients are computed. Gradients are accumulated only for tensors whose requires_grad attribute is set to True, and they can be accessed through the tensor's .grad attribute. If backward() has not been called, or if a tensor has requires_grad=False, its .grad attribute is None.
Syntax: tensor.backward()
In the code below, we define a tensor and a function of that tensor, then check the gradient value without calling the backward() method. Because backward() was never called, the gradient of the tensor is None.
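The original code for this example did not survive; the following is a minimal sketch reconstructed from the output shown, assuming the function is x = A³ (consistent with the PowBackward0 and value 125 in the output).

```python
import torch

# Tensor with gradient tracking enabled
A = torch.tensor(5.0, requires_grad=True)

# Define a function of A: x = A^3
x = A ** 3

# backward() is NOT called, so no gradient is computed yet
print("Tensor-A:", A)
print("x:", x)
print("A.grad:", A.grad)  # None, since backward() was never called
```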
Output:
Tensor-A: tensor(5., requires_grad=True)
x: tensor(125., grad_fn=<PowBackward0>)
A.grad: None
In this example, we define a tensor A with requires_grad=True and create a function x from it. We then call the backward() method on x to compute the gradient value.
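Again, the original code is missing; this sketch reconstructs it from the output, assuming x = A³ so that dx/dA = 3·A² = 75 at A = 5.

```python
import torch

A = torch.tensor(5.0, requires_grad=True)
x = A ** 3          # x = A^3, so dx/dA = 3 * A^2

x.backward()        # compute gradients during the backward pass

print("Tensor-A:", A)
print("x:", x)
print("A.grad:", A.grad)  # tensor(75.), since 3 * 5^2 = 75
```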
Output:
Tensor-A: tensor(5., requires_grad=True)
x: tensor(125., grad_fn=<PowBackward0>)
A.grad: tensor(75.)
In this example, we consider two tensors, one with requires_grad set to True and the other with it set to False. We create a function from the two tensors and compute the gradient values. Since tensor B does not track gradients, B.grad returns None.
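The code for this example is likewise missing; this sketch reconstructs it from the output, assuming x = A · B (consistent with MulBackward0 and the value 10).

```python
import torch

A = torch.tensor(2.0, requires_grad=True)
B = torch.tensor(5.0)   # requires_grad defaults to False

x = A * B               # dx/dA = B; B's gradient is not tracked
x.backward()

print("Tensor-A:", A)
print("Tensor-B:", B)
print("x:", x)
print("A.grad:", A.grad)  # tensor(5.)
print("B.grad:", B.grad)  # None, because requires_grad is False for B
```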
Output:
Tensor-A: tensor(2., requires_grad=True)
Tensor-B: tensor(5.)
x: tensor(10., grad_fn=<MulBackward0>)
A.grad: tensor(5.)
B.grad: None