Generative Adversarial Networks (GANs) | An Introduction

Generative Adversarial Networks (GANs) were first introduced by Ian Goodfellow in 2014. GANs are a powerful class of neural networks used for unsupervised learning. A GAN can generate new data resembling whatever you feed to it, following a Learn-Generate-Improve cycle. To understand GANs, you should first have a basic understanding of Convolutional Neural Networks (CNNs). A CNN is trained to classify images with respect to their labels: when an image is fed to a CNN, it is analyzed pixel by pixel, passed through the nodes in the CNN's hidden layers, and the network outputs what the image is about. For example, if a CNN is trained to classify dogs and cats, it can tell whether a given image contains a dog or a cat. A CNN can therefore be called a classification algorithm.

How are GANs different? A GAN can be divided into two parts: the Generator and the Discriminator.

Discriminator – This part of a GAN is similar to what a CNN does. The discriminator is a convolutional neural network consisting of many hidden layers and one output layer; the major difference is that its output layer has only two possible outputs, unlike a standard CNN, which has as many outputs as the labels it is trained on. The discriminator's output is either 1 or 0 because of a specifically chosen activation function: an output of 1 means the provided data is classified as real, and an output of 0 means it is classified as fake. The discriminator is trained on real data, so it learns what actual data looks like and which features data must have to be classified as real.

Generator – From the name itself, we can tell that this is a generative algorithm. The generator is an inverse convolutional neural network: it does exactly the opposite of what a CNN does. In a CNN, an actual image is given as input and a classified label is expected as output, but in the generator, random noise (a vector of random values, to be precise) is given as input and an actual image is expected as output. In simple terms, it generates new data from noise using what it has learned. The random value vector is passed through the hidden layers and activation functions, and an image is received as the output.

Working of the Generator and Discriminator together: As already discussed, the discriminator is trained on actual data to classify whether given data is real or not, so its job is to tell what's real and what's fake. The generator starts generating data from random input, and that generated data is passed to the discriminator, which analyzes it and checks how close it is to being classified as real. If the generated data does not contain enough features to be classified as real by the discriminator, this feedback is sent back to the generator via backpropagation, so that it can readjust its weights and create new data better than the previous batch. The freshly generated data is passed to the discriminator again, and the cycle continues.

This process keeps repeating as long as the discriminator keeps classifying the generated data as fake. Every time data is classified as fake, the subsequent backpropagation pass improves the quality of the generated data, until eventually the generator becomes so accurate that it is tough to distinguish real data from data produced by the generator. In simple terms, the discriminator is a trained expert who can tell what's real and what's fake, and the generator tries to fool the discriminator into believing that the generated data is real; with each unsuccessful attempt, the generator learns and improves itself to produce more realistic data. This can also be described as a competition between the generator and the discriminator.

Sample code for the generator and the discriminator:

1. Building the generator

a. The input to the first layer of the generator in the initial stage is random_normal_dimensions, a hyperparameter that defines how many random numbers the input vector fed into the generator contains as a starting point for generating images.

b. Note that we use the "selu" activation function here instead of "relu", because "relu" has the effect of removing noise when classifying data by preventing negative values from cancelling out positive ones, but in a GAN we don't want to throw that information away.

Python3
import tensorflow as tf
from tensorflow import keras

# You'll pass random_normal_dimensions to the first dense layer of the generator
random_normal_dimensions = 32

generator = keras.models.Sequential([
    # Project the noise vector into a 7x7x128 feature map
    keras.layers.Dense(7 * 7 * 128, input_shape=[random_normal_dimensions]),
    keras.layers.Reshape([7, 7, 128]),
    keras.layers.BatchNormalization(),
    # Upsample 7x7 -> 14x14
    keras.layers.Conv2DTranspose(64, kernel_size=5, strides=2, padding="SAME",
                                 activation="selu"),
    keras.layers.BatchNormalization(),
    # Upsample 14x14 -> 28x28; tanh keeps pixel values in [-1, 1]
    keras.layers.Conv2DTranspose(1, kernel_size=5, strides=2, padding="SAME",
                                 activation="tanh")
])
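
To sanity-check the generator, you can feed it a batch of random noise and look at the output shape (a minimal sketch; the (28, 28, 1) output assumes the architecture above, and an untrained generator will of course produce noise-like images):

Python3

# Sample a batch of 16 random noise vectors and generate images
noise = tf.random.normal(shape=[16, random_normal_dimensions])
fake_images = generator(noise)
print(fake_images.shape)  # (16, 28, 28, 1)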


2. Building the discriminator:

Python3
discriminator = keras.models.Sequential([
    # Downsample 28x28 -> 14x14
    keras.layers.Conv2D(64, kernel_size=5, strides=2, padding="SAME",
                        activation=keras.layers.LeakyReLU(0.2),
                        input_shape=[28, 28, 1]),
    keras.layers.Dropout(0.4),
    # Downsample 14x14 -> 7x7
    keras.layers.Conv2D(128, kernel_size=5, strides=2, padding="SAME",
                        activation=keras.layers.LeakyReLU(0.2)),
    keras.layers.Dropout(0.4),
    keras.layers.Flatten(),
    # Single sigmoid output: the probability that the input image is real
    keras.layers.Dense(1, activation="sigmoid")
])
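
As a quick check, the untrained discriminator can score the images produced by the generator sketch above (reusing the fake_images batch from that snippet):

Python3

# Each score is the predicted probability that the input image is real
scores = discriminator(fake_images)
print(scores.shape)  # (16, 1)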


3. Compiling the discriminator:

Here we compile the discriminator with binary_crossentropy loss and the rmsprop optimizer.
Then we set the discriminator to not train on its weights (by setting its "trainable" field to False); this matters when the discriminator is later trained as part of the combined GAN model.

Python3
# Binary classification: real (1) vs. fake (0)
discriminator.compile(loss="binary_crossentropy", optimizer="rmsprop")
# Freeze the discriminator's weights inside the combined GAN model;
# this flag takes effect when the GAN is compiled below
discriminator.trainable = False


4. Build and compile the GAN model:
Build the sequential model for the GAN, passing a list containing the generator and the discriminator.
Compile the model with binary cross-entropy loss and the rmsprop optimizer.

Python3
# The GAN chains the generator and the (frozen) discriminator:
# noise -> generator -> fake image -> discriminator -> real/fake score
gan = keras.models.Sequential([generator, discriminator])
gan.compile(loss="binary_crossentropy", optimizer="rmsprop")
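
End to end, the stacked model maps a noise vector straight to a real/fake score (a quick sketch reusing the noise batch from the earlier generator check):

Python3

# Equivalent to discriminator(generator(noise))
scores = gan(noise)
print(scores.shape)  # (16, 1)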


5. Train the GAN:
Phase 1 – train the discriminator:

real_batch_size: Get the batch size of the input batch (it's the zeroth dimension of the tensor).
noise: Generate the noise using tf.random.normal. The shape is real_batch_size x random_normal_dimensions.
fake_images: Use the generator that you just created. Pass in the noise to produce fake images.
mixed_images: Concatenate the fake images with the real images, with the axis set to 0.
discriminator_labels: Set to 0. for fake images and 1. for real images.
Set the discriminator to be trainable.
Use the discriminator's train_on_batch() method to train on the mixed images and the discriminator labels.

Phase 2 – train the generator through the GAN:

noise: Generate random normal values with dimensions real_batch_size x random_normal_dimensions.
generator_labels: Set to 1. to mark the fake images as real.
The generator will generate fake images that are labeled as real images and attempt to fool the discriminator.
Set the discriminator to NOT be trainable.
Train the GAN on the noise and the generator labels.

A sketch putting both phases together follows below.
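
Below is a minimal training-loop sketch implementing both phases. It assumes the generator, discriminator, and gan models defined above, plus a dataset yielding batches of 28x28x1 images scaled to [-1, 1] to match the generator's tanh output; train_gan is a name introduced here for illustration:

Python3

def train_gan(gan, dataset, random_normal_dimensions, n_epochs=50):
    """Train the GAN in two phases per batch, as described above."""
    generator, discriminator = gan.layers
    for epoch in range(n_epochs):
        for real_images in dataset:
            # Phase 1 - train the discriminator on fake + real images
            real_batch_size = real_images.shape[0]
            noise = tf.random.normal(shape=[real_batch_size, random_normal_dimensions])
            fake_images = generator(noise)
            mixed_images = tf.concat([fake_images, real_images], axis=0)
            # 0. for fake images, 1. for real images
            discriminator_labels = tf.constant([[0.]] * real_batch_size +
                                               [[1.]] * real_batch_size)
            discriminator.trainable = True
            discriminator.train_on_batch(mixed_images, discriminator_labels)

            # Phase 2 - train the generator, with the discriminator frozen
            noise = tf.random.normal(shape=[real_batch_size, random_normal_dimensions])
            # Label the fakes as real so the generator learns to fool the discriminator
            generator_labels = tf.constant([[1.]] * real_batch_size)
            discriminator.trainable = False
            gan.train_on_batch(noise, generator_labels)

Note that Keras captures a model's trainable state when compile() is called, so because the discriminator was compiled before being frozen and the GAN after, discriminator.train_on_batch() updates the discriminator's weights while gan.train_on_batch() updates only the generator's; the in-loop flag flips mirror the steps listed above.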

Further Read: https://www.geeksforgeeks.org/generative-adversarial-network-gan/


