This tutorial demonstrates how to generate images of handwritten digits by training an AutoEncoder in TensorFlow 2.0, using graph-mode execution via tf.function.
An AutoEncoder is a data compression and decompression algorithm implemented with neural networks and/or convolutional neural networks. The encoder compresses the input into a bottleneck of lower dimension than the original input, and the decoder uses this intermediate representation to reconstruct the input image. Let us code up an AutoEncoder in TensorFlow 2.0, which is eager by default, to understand the mechanism of this algorithm. AutoEncoders are considered a good prerequisite for more advanced generative models such as GANs and CVAEs.
Firstly, install TensorFlow 2.0 according to the available hardware. If you are using Google Colab, follow along with this IPython Notebook or this Colab demo. For GPU installs, make sure that the appropriate versions of CUDA and cuDNN are available. Visit the official download instructions on the TensorFlow page here.
Code: Importing libraries
After confirming the appropriate TF installation, import the other dependencies for data augmentation and define the custom functions shown below. The standard scaler scales the data by transforming each column. The get_random_block_from_data function is useful when using tf.GradientTape to perform AutoDiff (automatic differentiation) and obtain the gradients.
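A minimal sketch of the imports and helpers described above. The helper name standard_scale and its exact signature are assumptions for illustration; get_random_block_from_data follows the description in the text.

```python
import numpy as np
import tensorflow as tf
from sklearn.preprocessing import StandardScaler


def standard_scale(X_train, X_test):
    # Fit the scaler on the training columns only, then apply the
    # same transform to the test set so both share one distribution.
    scaler = StandardScaler().fit(X_train)
    return scaler.transform(X_train), scaler.transform(X_test)


def get_random_block_from_data(data, batch_size):
    # Sample a contiguous random block of `batch_size` rows.
    start = np.random.randint(0, len(data) - batch_size)
    return data[start:start + batch_size]
```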
AutoEncoders may have a lossy intermediate representation, also known as a compressed representation. This dimensionality reduction is useful in the many use cases where lossless compression is not required. Thus we can say that the encoder part of the AutoEncoder learns a dense representation of the data. Here we will use the TensorFlow subclassing API to define custom layers for the encoder and decoder, as sketched below.
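One possible encoder/decoder pair written with the Keras subclassing API. The layer sizes (a 64-unit hidden layer and a 32-unit bottleneck) and the dense architecture are assumptions for illustration.

```python
class Encoder(tf.keras.layers.Layer):
    def __init__(self, hidden_dim=64, bottleneck_dim=32):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(hidden_dim, activation='relu')
        self.bottleneck = tf.keras.layers.Dense(bottleneck_dim, activation='relu')

    def call(self, inputs):
        # Compress the input down to the low-dimensional bottleneck.
        return self.bottleneck(self.hidden(inputs))


class Decoder(tf.keras.layers.Layer):
    def __init__(self, hidden_dim=64, output_dim=784):
        super().__init__()
        self.hidden = tf.keras.layers.Dense(hidden_dim, activation='relu')
        self.out = tf.keras.layers.Dense(output_dim, activation='sigmoid')

    def call(self, inputs):
        # Expand the bottleneck representation back to pixel space.
        return self.out(self.hidden(inputs))
```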
We then extend tf.keras.Model to define a custom model that uses our previously defined custom layers to form the AutoEncoder. The overridden call method is the forward pass that runs when data is made available to the model object. Notice the @tf.function decorator: it ensures that the function executes as a graph, which speeds up execution.
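A sketch of the model class, assuming the Encoder and Decoder layers defined above:

```python
class AutoEncoder(tf.keras.Model):
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.decoder = Decoder()

    @tf.function  # traces the forward pass into a graph for speed
    def call(self, inputs):
        # Forward pass: encode to the bottleneck, then reconstruct.
        return self.decoder(self.encoder(inputs))
```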
The following code block prepares the dataset and gets the data ready to be fed into the pre-processing pipeline before training the AutoEncoder.
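A minimal sketch of the data preparation, assuming MNIST as the handwritten-digit dataset. Flattening each 28x28 digit to a 784-vector matches the decoder's output size above; scaling pixels to [0, 1] is an assumption that pairs with the sigmoid output layer.

```python
# Labels are discarded: an autoencoder reconstructs its own input.
(x_train, _), (x_test, _) = tf.keras.datasets.mnist.load_data()
x_train = x_train.reshape(-1, 784).astype('float32') / 255.0
x_test = x_test.reshape(-1, 784).astype('float32') / 255.0
```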
It is TensorFlow best practice to use tf.data.Dataset to quickly draw shuffled batches of tensor slices from the dataset for training. The following code block demonstrates the use of tf.data and also defines the hyperparameters for training the AutoEncoder model.
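A sketch of the input pipeline. The hyperparameter values are illustrative defaults, not prescribed by the article; tune them for your hardware.

```python
BATCH_SIZE = 128
EPOCHS = 10
LEARNING_RATE = 1e-3
SHUFFLE_BUFFER = 10000

# The target of each example is the example itself, hence (x, x) pairs.
train_ds = (tf.data.Dataset.from_tensor_slices((x_train, x_train))
            .shuffle(SHUFFLE_BUFFER)
            .batch(BATCH_SIZE)
            .prefetch(tf.data.experimental.AUTOTUNE))
```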
We have completed every prerequisite to train our AutoEncoder model! All that is left is to define an AutoEncoder object and compile the model with an optimizer and loss before calling model.fit on it with the hyperparameters defined above. Voila! You can watch the loss decrease as the AutoEncoder improves its reconstructions!
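A minimal compile-and-fit sketch under the assumptions above. Mean squared error is one common reconstruction loss; binary cross-entropy is another reasonable choice with a sigmoid output.

```python
model = AutoEncoder()
model.compile(optimizer=tf.keras.optimizers.Adam(LEARNING_RATE),
              loss='mse')
model.fit(train_ds, epochs=EPOCHS)

# Reconstruct a few held-out digits to inspect the result.
reconstructions = model(x_test[:10])
```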