
What is “with torch no_grad” in PyTorch?

Last Updated : 05 Jun, 2022

In this article, we will discuss what the with torch.no_grad() context manager does in PyTorch.

torch.no_grad() method

torch.no_grad() is a context manager, used with the with statement, that disables gradient calculation for everything executed inside its block. Any tensor produced by an operation inside the block has requires_grad set to False, even if its inputs have requires_grad=True, so the result is not attached to the current computational graph and gradients cannot be computed with respect to it. Tensors created before the block keep their own requires_grad settings, and once the block exits, new operations are tracked again as usual. Because no graph is recorded inside the block, gradient calculation is skipped, which reduces memory consumption and speeds up computations.
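
Because no computational graph is recorded inside the block, torch.no_grad() is most often wrapped around model inference, where gradients are never needed. The sketch below illustrates this pattern; the nn.Linear model and random input are placeholders introduced here for illustration, not part of the examples that follow.

Python3

# a minimal inference sketch (placeholder model and input)
import torch
import torch.nn as nn

# a tiny stand-in model; any trained model is used the same way
model = nn.Linear(4, 2)
model.eval()

sample = torch.randn(1, 4)

# the forward pass inside the block records no graph
with torch.no_grad():
    prediction = model(sample)

print("prediction.requires_grad=", prediction.requires_grad)  # False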

Example 1

In this example, we define a tensor A with requires_grad=True and then compute a new tensor B from A inside the torch.no_grad() block. Because B is created inside the block, its requires_grad is False even though A requires gradients.

Python3

# import necessary libraries
import torch
  
# define a tensor
A = torch.tensor(1., requires_grad=True)
print("Tensor-A:", A)
  
# compute a new tensor B from A
# inside the no_grad() block
with torch.no_grad():
    B = A + 1
print("B:-", B)
  
# check gradient
print("B.requires_grad=", B.requires_grad)


Output

Tensor-A: tensor(1., requires_grad=True)
B:- tensor(2.)
B.requires_grad= False

Example 2

In this example, we compute two tensors from A and B: x outside the torch.no_grad() block and y inside it, and then compare their requires_grad values. In the output, requires_grad is False for y because it is computed inside the block, where gradient tracking is disabled, whereas requires_grad is True for x because it is computed outside the block from B, which requires gradients.

Python3

# import necessary libraries
import torch
  
# define tensors
A = torch.tensor(1., requires_grad=False)
print("Tensor-A:", A)
B = torch.tensor(2.2, requires_grad=True)
print("Tensor-B:", B)
  
# compute x outside the no_grad() block and
# check its requires_grad attribute
x = A+B
print("x:-", x)
print("x.requires_grad=", x.requires_grad)
  
# compute y inside the no_grad() block and
# check its requires_grad attribute
with torch.no_grad():
    y = B - A
print("y:-", y)
print("y.requires_grad=", y.requires_grad)


Output

Tensor-A: tensor(1.)
Tensor-B: tensor(2.2000, requires_grad=True)
x:- tensor(3.2000, grad_fn=<AddBackward0>)
x.requires_grad= True
y:- tensor(1.2000)
y.requires_grad= False

Example 3

In this example, we define a tensor A with requires_grad=True, compute x = A**2 inside the torch.no_grad() block, and then check the requires_grad value of x. In the output, x has requires_grad set to False because it is computed inside the block, where gradient calculation is disabled.

Python3

# import necessary libraries
import torch
  
# define a tensor
A = torch.tensor(5., requires_grad=True)
print("Tensor-A:", A)
  
# compute x inside the no_grad() block and
# check its requires_grad attribute
with torch.no_grad():
    x = A**2
print("x:-", x)
print("x.requires_grad=", x.requires_grad)


Output

Tensor-A: tensor(5., requires_grad=True)
x:- tensor(25.)
x.requires_grad= False
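
As noted earlier, gradient tracking is only suspended while execution is inside the block; operations performed after the block exits are recorded again. The short check below (a continuation of this example, added here for illustration) makes that visible.

Python3

import torch

A = torch.tensor(5., requires_grad=True)

# computed inside the block: not tracked
with torch.no_grad():
    x = A**2

# computed after the block has exited: tracked again
y = A**2

print("x.requires_grad=", x.requires_grad)  # False
print("y.requires_grad=", y.requires_grad)  # True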

