
How to Check if Tensorflow is Using GPU

Last Updated : 28 Oct, 2022

In this article, we are going to see how to check whether TensorFlow is using a GPU or not.

GPUs are the new norm for deep learning. A GPU has far more logical cores than a CPU, which lets it parallelize work aggressively and finish large computations much faster. In deep learning tasks the number of trainable parameters can run into the billions, and adjusting the weights of a neural network relies on matrix operations such as matrix multiplication, which are expensive to perform on a CPU. Speeding them up requires many operations to run simultaneously, which is exactly what a GPU provides; GPUs also typically offer the large, fast memory that deep learning models need.

You may have a GPU, but your model might not be using it; in that case, training will be done on the CPU. Hence it is necessary to check whether TensorFlow is actually running on the GPU it has been provided.

If you want to know whether TensorFlow is using GPU acceleration, you can simply run the following command to check.

Python3
import tensorflow as tf
tf.config.list_physical_devices('GPU')


Output:

[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]

The output should mention a GPU device. If a GPU is available, tf.keras models run on a single GPU by default. If you want to use multiple GPUs, you can use a distribution strategy, as sketched below.
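As an illustration (not from the original command set), here is a minimal sketch of using tf.distribute.MirroredStrategy to spread training across all visible GPUs; the tiny Dense model is only a placeholder.

Python3

import tensorflow as tf

# MirroredStrategy replicates the model on every GPU TensorFlow can
# see and keeps the copies of the weights in sync during training.
strategy = tf.distribute.MirroredStrategy()
print("Number of devices:", strategy.num_replicas_in_sync)

# Variables and models must be created inside the strategy's scope
# so that they are mirrored across the GPUs.
with strategy.scope():
    model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
    model.compile(optimizer="sgd", loss="mse")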

Once you get this output, go to the terminal and type "nvidia-smi". It is a command-line utility from NVIDIA for monitoring its GPU devices, built on top of the NVIDIA Management Library (NVML). Running it should yield something similar to this.

nvidia-smi


This command returns a table with information about the GPU that TensorFlow is running on: the type of GPU you are using, its performance, its memory usage, and the processes it is running. To know whether your ML model is being trained on the GPU, note down the process id of your training run and compare it with the Processes section of the table; if the model is running on the GPU, its process id will be listed there.
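One way to find that process id from inside your training script is shown below (os.getpid simply reports the id of the current Python process; it is an illustration, not something nvidia-smi requires):

Python3

import os

# Print the id of the current Python process; compare it with the
# PID column in the Processes section of the nvidia-smi table.
print("Training process id:", os.getpid())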

You can also enable device placement logging to see which devices your operations and tensors are assigned to. This will tell you which GPU is performing your operations and storing the results.

tf.debugging.set_log_device_placement(True)

This should print a line like "Executing op _EagerConst in device /job:localhost/replica:0/task:0/device:GPU:0".
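Putting it together, here is a minimal runnable sketch (assuming at least one GPU is visible; the exact op name in the log line may differ between TensorFlow versions):

Python3

import tensorflow as tf

# Log the device that every operation is placed on.
tf.debugging.set_log_device_placement(True)

# A simple matrix multiplication; the log line printed for this op
# should name a GPU device such as /device:GPU:0 if one is in use.
a = tf.constant([[1.0, 2.0], [3.0, 4.0]])
b = tf.constant([[1.0, 1.0], [1.0, 1.0]])
print(tf.matmul(a, b))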

Check devices available to TensorFlow

Python3
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())


Output:

[name: "/device:CPU:0"
device_type: "CPU"
memory_limit: 268435456
locality {
}
incarnation: 13414683734401945856
xla_global_id: -1
]
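Note that the sample output above lists only a CPU device; on a machine where TensorFlow can see a GPU, a /device:GPU:0 entry would appear as well. If no GPU shows up, one common cause is a CPU-only TensorFlow build, which the following sketch (an addition, not part of the original article) checks for:

Python3

import tensorflow as tf

# True only if this TensorFlow build was compiled with CUDA support;
# a CPU-only build will never list a GPU, whatever the hardware.
print("Built with CUDA:", tf.test.is_built_with_cuda())
print("GPUs visible:", tf.config.list_physical_devices('GPU'))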
