Python – tensorflow.GradientTape.jacobian()

  • Last Updated : 10 Jul, 2020

TensorFlow is an open-source Python library designed by Google for developing machine learning models and deep learning neural networks.

jacobian() computes the Jacobian of a target tensor with respect to a source tensor, using the operations recorded in the context of this tape.

Syntax: jacobian( target, source, unconnected_gradients, parallel_iterations, experimental_use_pfor )

Parameters:

  • target: The Tensor to be differentiated.
  • source: The Tensor (or nested structure of Tensors/Variables) with respect to which the Jacobian is computed.
  • unconnected_gradients (optional): Controls the value returned when target and source are not connected; it can be tf.UnconnectedGradients.NONE or tf.UnconnectedGradients.ZERO. The default is NONE.
  • parallel_iterations (optional): Controls how many iterations are dispatched in parallel, trading memory usage against speed.
  • experimental_use_pfor (optional): A boolean, True by default. When True, the Jacobian is computed with pfor (parallel-for); otherwise tf.while_loop is used.

Returns: A Tensor (or nested structure of Tensors) whose shape is the shape of target concatenated with the shape of source.
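Both examples below leave the optional arguments at their defaults. As a rough sketch of how they could be passed explicitly (the tensors a and b here are illustrative and not part of the examples that follow):

# Illustrative sketch: passing the optional arguments explicitly
import tensorflow as tf

a = tf.constant([1.0, 2.0, 3.0])

with tf.GradientTape() as tape:
  tape.watch(a)
  b = tf.sin(a)

# Unconnected gradients come back as zeros instead of None,
# and pfor is disabled in favour of tf.while_loop.
jac = tape.jacobian(b, a,
                    unconnected_gradients=tf.UnconnectedGradients.ZERO,
                    experimental_use_pfor=False)

print(jac)  # 3 x 3 tensor with cos(a) on the diagonal, zeros elsewhere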



Example 1:

Python3




# Importing the library
import tensorflow as tf
  
x = tf.constant([[4, 2],[1, 3]], dtype=tf.dtypes.float32)
  
# Using GradientTape
with tf.GradientTape() as gfg:
  gfg.watch(x)
  y = x * x * x
  
# Computing jacobian
res = gfg.jacobian(y, x)
  
# Printing result
print("res: ", res)

Output:



res:  tf.Tensor(
[[[[48.  0.]
   [ 0.  0.]]

  [[ 0. 12.]
   [ 0.  0.]]]


 [[[ 0.  0.]
   [ 3.  0.]]

  [[ 0.  0.]
   [ 0. 27.]]]], shape=(2, 2, 2, 2), dtype=float32)
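
Since y = x * x * x acts elementwise, the Jacobian entry ∂y[i, j]/∂x[k, l] equals 3·x[i, j]² when (i, j) = (k, l) and 0 otherwise. That is why the result has shape (2, 2, 2, 2) and its nonzero entries are 48 = 3·4², 12 = 3·2², 3 = 3·1² and 27 = 3·3². A quick sanity check (this snippet is an addition to the example above, not part of it):

import numpy as np

# Entries where (i, j) == (k, l) should equal 3 * x**2,
# the elementwise derivative of x**3.
diag = np.einsum('ijij->ij', res.numpy())
print(diag)                  # [[48. 12.]
                             #  [ 3. 27.]]
print((3 * x ** 2).numpy())  # same values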


Example 2:

Python3




# Importing the library
import tensorflow as tf
  
x = tf.constant([[4, 2],[1, 3]], dtype=tf.dtypes.float32)
  
# Using GradientTape
with tf.GradientTape() as gfg:
  gfg.watch(x)
  
  # Using nested GradientTape for calculating higher order jacobian
  with tf.GradientTape() as gg:
    gg.watch(x)
    y = x * x * x
  # Computing first order jacobian
  first_order = gg.jacobian(y, x)
  
# Computing second-order jacobian
second_order = gfg.jacobian(first_order, x)
  
# Printing result
print("first_order: ",first_order)
print("second_order: ",second_order)

Output:



first_order:  tf.Tensor(
[[[[48.  0.]
   [ 0.  0.]]

  [[ 0. 12.]
   [ 0.  0.]]]


 [[[ 0.  0.]
   [ 3.  0.]]

  [[ 0.  0.]
   [ 0. 27.]]]], shape=(2, 2, 2, 2), dtype=float32)
second_order:  tf.Tensor(
[[[[[[24.  0.]
     [ 0.  0.]]

    [[ 0.  0.]
     [ 0.  0.]]]


   [[[ 0.  0.]
     [ 0.  0.]]

    [[ 0.  0.]
     [ 0.  0.]]]]



  [[[[ 0.  0.]
     [ 0.  0.]]

    [[ 0. 12.]
     [ 0.  0.]]]


   [[[ 0.  0.]
     [ 0.  0.]]

    [[ 0.  0.]
     [ 0.  0.]]]]]




 [[[[[ 0.  0.]
     [ 0.  0.]]

    [[ 0.  0.]
     [ 0.  0.]]]


   [[[ 0.  0.]
     [ 6.  0.]]

    [[ 0.  0.]
     [ 0.  0.]]]]



  [[[[ 0.  0.]
     [ 0.  0.]]

    [[ 0.  0.]
     [ 0.  0.]]]


   [[[ 0.  0.]
     [ 0.  0.]]

    [[ 0.  0.]
     [ 0. 18.]]]]]], shape=(2, 2, 2, 2, 2, 2), dtype=float32)
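
Differentiating the first-order Jacobian (whose nonzero entries are 3·x²) once more gives 6·x, so the nonzero values here are 24 = 6·4, 12 = 6·2, 6 = 6·1 and 18 = 6·3, and the result's shape is the first-order shape (2, 2, 2, 2) concatenated with the shape of x. A quick sanity check (again an addition, not part of the original example):

import numpy as np

# Entries where (i, j) == (k, l) == (m, n) should equal 6 * x.
diag = np.einsum('ijijij->ij', second_order.numpy())
print(diag)               # [[24. 12.]
                          #  [ 6. 18.]]
print((6 * x).numpy())    # same values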



