Python – tensorflow.GradientTape.batch_jacobian()
Last Updated: 10 Jul, 2020
TensorFlow is an open-source Python library designed by Google for developing machine learning models and deep learning neural networks.
batch_jacobian() is used to compute and stack the per-example Jacobian: for each example i in the batch, it returns the Jacobian of target[i] with respect to source[i].
Syntax: batch_jacobian(target, source, unconnected_gradients, parallel_iterations, experimental_use_pfor)
Parameters:
- target: A Tensor of rank 2 or higher, i.e. with shape [batch_size, ...].
- source: A Tensor of rank 2 or higher, i.e. with shape [batch_size, ...].
- unconnected_gradients (optional): Determines the value returned when the target and source are unconnected. Its value can be either 'none' or 'zero'; the default is 'none'.
- parallel_iterations (optional): Controls how many iterations are dispatched in parallel, trading memory usage for speed.
- experimental_use_pfor (optional): A boolean with default value True. When True, pfor is used to compute the Jacobian; otherwise tf.while_loop is used.
Returns: A Tensor of shape [b, y_1, ..., y_n, x_1, ..., x_m], where the target has shape [b, y_1, ..., y_n] and the source has shape [b, x_1, ..., x_m]; the entry at index [i, ...] is the Jacobian of target[i] with respect to source[i], as illustrated in the sketch below.
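To make the returned shape concrete, here is a minimal sketch (the tensor names and sizes below are chosen purely for illustration and are not part of the examples that follow): a batch of 4 examples, each mapped from 2 input features to 3 outputs through a fixed weight matrix, yields a stacked Jacobian of shape (4, 3, 2).
Python3
import tensorflow as tf

# Illustrative batch: 4 examples with 2 features each (names are hypothetical)
x = tf.random.normal((4, 2))        # source, shape (batch, 2)
w = tf.constant([[1., 2., 3.],
                 [4., 5., 6.]])     # fixed weights, not watched by the tape

with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.matmul(x, w)             # target, shape (batch, 3)

jac = tape.batch_jacobian(y, x)
print(jac.shape)                    # (4, 3, 2): one 3x2 Jacobian per example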
Example 1:
Python3
import tensorflow as tf

# input batch of two examples, each with two features
x = tf.constant([[4, 2], [1, 3]], dtype=tf.dtypes.float32)

with tf.GradientTape() as gfg:
    gfg.watch(x)       # track x so gradients with respect to it can be taken
    y = x * x * x      # element-wise cube

# one Jacobian per example, stacked along the batch dimension
res = gfg.batch_jacobian(y, x)
print("res: ", res)
Output:
res: tf.Tensor(
[[[48. 0.]
[ 0. 12.]]
[[ 3. 0.]
[ 0. 27.]]], shape=(2, 2, 2), dtype=float32)
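To see where these numbers come from: y = x * x * x is applied element-wise, so each per-example Jacobian is a diagonal matrix with the derivative 3 * x**2 on its diagonal (48 and 12 for the first example, 3 and 27 for the second). The following check is a sketch that re-runs the example and compares the result against that closed form.
Python3
import tensorflow as tf

x = tf.constant([[4., 2.], [1., 3.]])
with tf.GradientTape() as tape:
    tape.watch(x)
    y = x * x * x
res = tape.batch_jacobian(y, x)

# element-wise cube => each per-example Jacobian should be diag(3 * x**2)
expected = tf.linalg.diag(3.0 * x * x)
print(tf.reduce_max(tf.abs(res - expected)))   # prints a value of (roughly) 0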
Example 2:
Python3
import tensorflow as tf

x = tf.constant([[4, 2], [1, 3]], dtype=tf.dtypes.float32)

with tf.GradientTape() as gfg:
    gfg.watch(x)
    with tf.GradientTape() as gg:
        gg.watch(x)
        y = x * x * x                      # element-wise cube
    # first-order per-example Jacobian, recorded by the outer tape gfg
    first_order = gg.batch_jacobian(y, x)

# second-order per-example Jacobian, taken with the outer tape
second_order = gfg.batch_jacobian(first_order, x)

print("first_order: ", first_order)
print("second_order: ", second_order)
Output:
first_order: tf.Tensor(
[[[48. 0.]
[ 0. 12.]]
[[ 3. 0.]
[ 0. 27.]]], shape=(2, 2, 2), dtype=float32)
second_order: tf.Tensor(
[[[[24. 0.]
[ 0. 0.]]
[[ 0. 0.]
[ 0. 12.]]]
[[[ 6. 0.]
[ 0. 0.]]
[[ 0. 0.]
[ 0. 18.]]]], shape=(2, 2, 2, 2), dtype=float32)
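Because each example in the batch depends only on its own input row, the full Jacobian returned by tape.jacobian() would contain nothing but zeros in its cross-example blocks; batch_jacobian() skips those blocks and returns only the per-example part. The comparison below is a sketch of that difference (it uses a persistent tape so that both Jacobians can be taken from the same recording).
Python3
import tensorflow as tf

x = tf.constant([[4., 2.], [1., 3.]])

with tf.GradientTape(persistent=True) as tape:
    tape.watch(x)
    y = x * x * x

full = tape.jacobian(y, x)               # shape (2, 2, 2, 2): includes zero cross-example blocks
per_example = tape.batch_jacobian(y, x)  # shape (2, 2, 2): only the per-example Jacobians
print(full.shape, per_example.shape)
del tape  # release the resources held by the persistent tape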