Deep Convolutional GAN (DCGAN) was proposed by researchers from indico Research and Facebook AI Research. It is widely used in many convolution-based image generation techniques. The focus of the paper was to make the training of GANs stable, so the authors proposed a set of architectural guidelines for convolutional GANs in computer vision. In this article we will use DCGAN on the Fashion-MNIST dataset to generate images of clothing items.
The generator of the DCGAN architecture takes 100 random values sampled from a uniform distribution as input. It first projects and reshapes this vector to 4x4x1024 and then applies a fractionally strided convolution four times with a stride of 1/2 (every time this is applied, it doubles the spatial dimensions while reducing the number of output channels). The generated output has dimensions of (64, 64, 3). The paper also proposes architectural changes in the generator, such as removing all fully connected layers and using Batch Normalization, which helps stabilize training. The authors use the ReLU activation in all layers of the generator except the output layer, which uses Tanh. We will implement a generator that follows similar guidelines but not exactly the same architecture.
The role of the discriminator is to determine whether an image comes from the real dataset or from the generator. The discriminator can be designed like an ordinary convolutional neural network performing an image classification task. However, the authors suggested some changes to the discriminator architecture: instead of fully connected layers, they used only strided convolutions with LeakyReLU as the activation function. The input of the discriminator is a single image, either from the dataset or produced by the generator, and the output is a score indicating whether the image is real or generated.
In this section we will discuss the implementation of DCGAN in Keras. Since our dataset is the Fashion-MNIST dataset, which contains images of size (28, 28) with 1 color channel instead of (64, 64) with 3 color channels, we need to make some changes to the architecture; we will discuss these changes as we go along.
- In the first step, we import the necessary libraries such as TensorFlow, Keras, matplotlib, etc. We will be using TensorFlow 2, which ships with Keras as its default high-level API.
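The imports used in the following steps might look like this (tqdm and matplotlib are only needed for progress display and plotting):

```python
import tensorflow as tf
from tensorflow import keras
import numpy as np
import matplotlib.pyplot as plt
from tqdm import tqdm

# confirm we are running TensorFlow 2.x
print(tf.__version__)
```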
- Now we load the Fashion-MNIST dataset. Conveniently, it can be imported directly from the tf.keras.datasets API, so we don't need to download and copy files manually. This dataset contains 60k training images and 10k test images, each of dimensions (28, 28, 1). Since the value of each pixel is in the range (0, 255), we divide these values by 255 to normalize them.
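For example:

```python
# Fashion-MNIST ships with tf.keras; no manual download is needed
(x_train, y_train), (x_test, y_test) = keras.datasets.fashion_mnist.load_data()

# scale pixel values from [0, 255] down to [0, 1]
x_train = x_train.astype(np.float32) / 255.0
x_test = x_test.astype(np.float32) / 255.0

x_train.shape, x_test.shape
```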
((60000, 28, 28), (10000, 28, 28))
- In the next step, we visualize some of the images from the Fashion-MNIST dataset using the matplotlib library.
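For instance, a 5x5 grid of training images can be plotted as follows:

```python
# plot the first 25 training images in a 5x5 grid
plt.figure(figsize=(10, 10))
for i in range(25):
    plt.subplot(5, 5, i + 1)
    plt.imshow(x_train[i], cmap='binary')
    plt.axis('off')
plt.show()
```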
- Now, we define the training parameters such as the batch size and build an input pipeline that splits the dataset into batches, filling each batch by randomly sampling from the training data; a small helper for this is sketched below.
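A minimal sketch of such a pipeline using the tf.data API (the helper name create_batch and the shuffle buffer size of 1000 are our own choices):

```python
batch_size = 32

def create_batch(x_train):
    # shuffle fills a 1000-element buffer and draws samples from it at random
    dataset = tf.data.Dataset.from_tensor_slices(x_train).shuffle(1000)
    # drop_remainder keeps every batch at exactly batch_size elements
    dataset = dataset.batch(batch_size, drop_remainder=True).prefetch(1)
    return dataset
```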
- Now, we define the generator architecture. This generator takes a vector of size 100, first reshapes it into a (7, 7, 128) tensor, and then applies transpose convolutions in combination with batch normalization. The output of the generator is an image of dimension (28, 28, 1).
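Below is a sketch of a generator consistent with this description and with the parameter counts in the summary that follows; the 5x5 kernels, stride-2 transpose convolutions, ReLU hidden activations, and tanh output are assumptions in line with the DCGAN guidelines:

```python
num_features = 100

generator = keras.models.Sequential([
    keras.layers.Dense(7 * 7 * 128, input_shape=[num_features]),
    keras.layers.Reshape([7, 7, 128]),
    keras.layers.BatchNormalization(),
    # fractionally strided (transpose) convolution: 7x7 -> 14x14
    keras.layers.Conv2DTranspose(64, (5, 5), (2, 2),
                                 padding="same", activation="relu"),
    keras.layers.BatchNormalization(),
    # 14x14 -> 28x28; tanh output as suggested by the DCGAN paper
    keras.layers.Conv2DTranspose(1, (5, 5), (2, 2),
                                 padding="same", activation="tanh"),
])
generator.summary()
```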
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= dense (Dense) (None, 6272) 633472 _________________________________________________________________ reshape (Reshape) (None, 7, 7, 128) 0 _________________________________________________________________ batch_normalization (BatchNo (None, 7, 7, 128) 512 _________________________________________________________________ conv2d_transpose (Conv2DTran (None, 14, 14, 64) 204864 _________________________________________________________________ batch_normalization_1 (Batch (None, 14, 14, 64) 256 _________________________________________________________________ conv2d_transpose_1 (Conv2DTr (None, 28, 28, 1) 1601 ================================================================= Total params: 840, 705 Trainable params: 840, 321 Non-trainable params: 384 _________________________________________________________________
- Now, we define our discriminator architecture. The discriminator takes an image of size 28x28 with 1 color channel and outputs a scalar value that indicates whether the image comes from the dataset or was generated.
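Here is a sketch of a discriminator that matches the shapes and parameter counts in the summary that follows; the 5x5 kernels, LeakyReLU slope of 0.2, and dropout rate of 0.3 are our assumptions:

```python
discriminator = keras.models.Sequential([
    # strided convolutions downsample the image instead of pooling layers
    keras.layers.Conv2D(64, (5, 5), (2, 2), padding="same",
                        input_shape=[28, 28, 1]),
    keras.layers.LeakyReLU(0.2),
    keras.layers.Dropout(0.3),
    keras.layers.Conv2D(128, (5, 5), (2, 2), padding="same"),
    keras.layers.LeakyReLU(0.2),
    keras.layers.Dropout(0.3),
    keras.layers.Flatten(),
    # single sigmoid unit: probability that the input image is real
    keras.layers.Dense(1, activation="sigmoid"),
])
discriminator.summary()
```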
Model: "sequential_1" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) (None, 14, 14, 64) 1664 _________________________________________________________________ leaky_re_lu (LeakyReLU) (None, 14, 14, 64) 0 _________________________________________________________________ dropout (Dropout) (None, 14, 14, 64) 0 _________________________________________________________________ conv2d_1 (Conv2D) (None, 7, 7, 128) 204928 _________________________________________________________________ leaky_re_lu_1 (LeakyReLU) (None, 7, 7, 128) 0 _________________________________________________________________ dropout_1 (Dropout) (None, 7, 7, 128) 0 _________________________________________________________________ flatten (Flatten) (None, 6272) 0 _________________________________________________________________ dense_1 (Dense) (None, 1) 6273 ================================================================= Total params: 212, 865 Trainable params: 212, 865 Non-trainable params: 0 _________________________________________________________________
- Now we compile our DCGAN model (the combination of generator and discriminator). We first compile the discriminator on its own and then set its trainable attribute to False, so that training the combined model updates only the generator's weights.
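One way to wire this up is shown below; the binary cross-entropy loss and RMSprop optimizer are assumptions on our part (the DCGAN paper itself recommends Adam with a learning rate of 0.0002):

```python
# compile the discriminator on its own first
discriminator.compile(loss="binary_crossentropy", optimizer="rmsprop")
# freeze the discriminator inside the combined model so that
# training the GAN only updates the generator's weights
discriminator.trainable = False

gan = keras.models.Sequential([generator, discriminator])
gan.compile(loss="binary_crossentropy", optimizer="rmsprop")
```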
- Now, we define the training procedure for this GAN model. We will use the tqdm package, which we imported earlier, to visualize training progress.
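A training loop in this spirit is sketched below (the function name train_dcgan and the two-phase update per batch are our own formulation, not the article's exact code); it calls the image-saving helper defined in the next step:

```python
# fixed noise vector so that saved sample images are comparable across epochs
seed = tf.random.normal(shape=[batch_size, 100])

def train_dcgan(gan, dataset, batch_size, num_features, epochs=10):
    generator, discriminator = gan.layers
    for epoch in tqdm(range(epochs)):
        print(f"Epoch {epoch + 1}/{epochs}")
        for X_batch in dataset:
            # phase 1: train the discriminator on a half-fake, half-real batch
            noise = tf.random.normal(shape=[batch_size, num_features])
            generated_images = generator(noise)
            X_fake_and_real = tf.concat([generated_images, X_batch], axis=0)
            y1 = tf.constant([[0.]] * batch_size + [[1.]] * batch_size)
            discriminator.trainable = True
            discriminator.train_on_batch(X_fake_and_real, y1)
            # phase 2: train the generator through the frozen discriminator,
            # asking it to make the discriminator output "real" (label 1)
            noise = tf.random.normal(shape=[batch_size, num_features])
            y2 = tf.constant([[1.]] * batch_size)
            discriminator.trainable = False
            gan.train_on_batch(noise, y2)
        # save sample images for this epoch (helper defined in the next step)
        generate_and_save_images(generator, epoch + 1, seed)
```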
- Now we define a function that generates and saves images from the generator during training. We will use these generated images to build a GIF later.
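A simple version of such a helper, assuming a fixed noise tensor test_input and a file name pattern of our own choosing:

```python
def generate_and_save_images(model, epoch, test_input):
    # training=False runs BatchNormalization in inference mode
    predictions = model(test_input, training=False)
    plt.figure(figsize=(10, 10))
    for i in range(25):
        plt.subplot(5, 5, i + 1)
        # map the tanh output range [-1, 1] back to [0, 255] for display
        plt.imshow(predictions[i, :, :, 0] * 127.5 + 127.5, cmap='binary')
        plt.axis('off')
    plt.savefig('image_at_epoch_{:04d}.png'.format(epoch))
    plt.show()
```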
- Now we need to train the model, but before that we also need to create batches of the training data and add a dimension that represents the number of color channels.
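For example (the scaling to [-1, 1] assumes the generator ends with a tanh activation, as in the sketch above):

```python
# add a channel dimension and rescale pixels to [-1, 1]
# to match the tanh activation of the generator's output layer
x_train_dcgan = x_train.reshape(-1, 28, 28, 1) * 2. - 1.

dataset = create_batch(x_train_dcgan)
train_dcgan(gan, dataset, batch_size, num_features, epochs=10)
```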
  0%|          | 0/10 [00:00<?, ?it/s]
Epoch 1/10
 10%|█         | 1/10 [01:04<09:39, 64.37s/it]
Epoch 2/10
 20%|██        | 2/10 [02:10<08:39, 64.99s/it]
Epoch 3/10
 30%|███       | 3/10 [03:14<07:33, 64.74s/it]
Epoch 4/10
 40%|████      | 4/10 [04:19<06:27, 64.62s/it]
Epoch 5/10
 50%|█████     | 5/10 [05:23<05:22, 64.58s/it]
Epoch 6/10
 60%|██████    | 6/10 [06:27<04:17, 64.47s/it]
Epoch 7/10
 70%|███████   | 7/10 [07:32<03:13, 64.55s/it]
Epoch 8/10
 80%|████████  | 8/10 [08:37<02:08, 64.48s/it]
Epoch 9/10
 90%|█████████ | 9/10 [09:41<01:04, 64.54s/it]
Epoch 10/10
100%|██████████| 10/10 [10:46<00:00, 64.61s/it]
CPU times: user 7min 4s, sys: 33.3 s, total: 7min 37s
Wall time: 10min 46s
- Finally, we define a function that takes the saved images and converts them into a GIF.
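One possible implementation uses the imageio package (the file name pattern matches the helper above, and the output name dcgan_results.gif is arbitrary):

```python
import glob
import imageio

anim_file = 'dcgan_results.gif'

with imageio.get_writer(anim_file, mode='I') as writer:
    filenames = sorted(glob.glob('image_at_epoch_*.png'))
    for filename in filenames:
        # append each per-epoch sample grid as one frame of the GIF
        writer.append_data(imageio.imread(filename))
    # repeat the final frame so the GIF lingers on the last epoch
    writer.append_data(imageio.imread(filenames[-1]))
```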
Results and Conclusion:
- To evaluate the quality of the representations learned by DCGANs for supervised tasks, the authors train the model on ImageNet-1k and then use the discriminator's convolutional features from all layers, max-pooling each layer's representation to produce a 4 × 4 spatial grid. These features are then flattened and concatenated to form a 28672-dimensional vector, and a regularized linear L2-SVM classifier is trained on top of them. This model is then evaluated on the CIFAR-10 dataset without ever being trained on it, and it reports an accuracy of 82%, which also demonstrates the robustness of the learned features.
- On the Street View House Numbers (SVHN) dataset, the model achieved a validation error of 22%, which was state-of-the-art at the time; even the same discriminator architecture trained in a supervised fashion as a CNN had a higher validation error.