Colorize Images Using Deoldify

  • Last Updated : 09 Nov, 2021

DeOldify is a project, developed by Jason Antic, for colorizing and restoring old black-and-white images. DeOldify uses a GAN architecture to colorize the image: it contains a generator that adds color and a critic (discriminator) whose goal is to criticize the coloring produced by the generator. It also introduces a special GAN training method called NoGAN.

Architectural Details

The authors use the following deep learning concepts in these models:

  • Self-attention: The authors use a U-Net architecture for the generator, modified to use spectral normalization and self-attention.
  • Two-Time-Scale Update Rule: A way of training the GAN architecture with one-to-one generator/critic iterations and a higher critic learning rate. DeOldify modifies it with a threshold on the critic loss that makes sure the critic is “caught up” before moving on to generator training. This is particularly useful for NoGAN training.
  • NoGAN: This GAN training method was developed by the authors of the model. The main idea is to get the benefits of GAN training while spending minimal time doing direct GAN training. We discuss NoGAN in more detail below.
  • Generator Loss: Two kinds of loss are used during NoGAN learning:
    • Perceptual loss: used in the generator to report and minimize losses caused by bias in the model.
    • Critic loss: the loss used in the discriminator/critic.
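The self-attention layer with spectral normalization can be sketched in PyTorch roughly as follows. This is a SAGAN-style illustration, not DeOldify's exact implementation; the class and layer names here are our own:

```python
import torch
import torch.nn as nn
from torch.nn.utils import spectral_norm


class SelfAttention(nn.Module):
    """SAGAN-style self-attention over spatial positions,
    with spectral normalization on the 1x1 projections."""

    def __init__(self, channels: int):
        super().__init__()
        self.query = spectral_norm(nn.Conv1d(channels, channels // 8, 1))
        self.key   = spectral_norm(nn.Conv1d(channels, channels // 8, 1))
        self.value = spectral_norm(nn.Conv1d(channels, channels, 1))
        # learned blend weight, starts at 0 so the layer is initially identity
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        flat = x.view(b, c, h * w)                       # (b, c, hw)
        q, k, v = self.query(flat), self.key(flat), self.value(flat)
        attn = torch.softmax(torch.bmm(q.transpose(1, 2), k), dim=-1)  # (b, hw, hw)
        out = torch.bmm(v, attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x
```

Because `gamma` starts at zero, the block behaves as an identity mapping at initialization and the network learns how much attention to mix in.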

No-GAN

This is a new type of GAN training developed by the authors of DeOldify. It provides the benefits of GAN training while spending minimal time doing direct GAN training. Instead, most of the time is spent training the generator and critic separately with more straightforward, fast, and reliable conventional methods.



The steps are as follows:  

  • First, we train the generator in a conventional way by itself with just the feature loss.
  • Next, we generate images from the trained generator and train the critic on distinguishing between those outputs and real images as a basic binary classifier.
  • Finally, train the generator and critic together in a GAN setting (starting right at the target size of 192px in this case).

All the important GAN training takes place in a very small fraction of the total time. There is an inflection point at which the critic appears to have transferred all its useful knowledge to the generator; beyond it, no productive training seems to occur. The hard part is finding that inflection point, and the model is quite unstable around it, so the authors create a lot of checkpoints. Another key property of NoGAN is that the cycle can be repeated: pre-train the critic again on newly generated images after the initial GAN training, then repeat the GAN training itself in the same fashion.
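The NoGAN schedule, including the optional repeat cycles, can be written out as an ordered list of phases. The function and phase names below are illustrative stand-ins, not DeOldify's API:

```python
def nogan_schedule(repeat_cycles: int = 0) -> list:
    """Return the ordered NoGAN training phases (illustrative names only)."""
    phases = ["pretrain_generator: feature (perceptual) loss only"]
    # one initial critic/GAN pass, plus any number of repeat cycles
    for _ in range(1 + repeat_cycles):
        phases.append("pretrain_critic: real-vs-generated binary classifier")
        phases.append("gan_training: brief, checkpoint often near the inflection point")
    return phases


for phase in nogan_schedule(repeat_cycles=1):
    print(phase)
```

With `repeat_cycles=0` this reduces to the three steps listed above; each extra cycle re-runs the critic pre-training and the short GAN phase.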

DeOldify trains three types of models:

  • Artistic: This model achieves the best results in terms of image coloration, detail, and vibrancy. It uses a ResNet34 backbone in a U-Net, with an emphasis on the depth of layers on the decoder side. Its drawbacks are that it is not stable on common subjects such as natural scenes and portraits, and that it takes a lot of time and parameter tuning to obtain the best results.
  • Stable: This model achieves the best results on landscapes and portraits, and gives human faces proper coloring instead of leaving them gray. It uses a ResNet101 backbone in a U-Net, again with an emphasis on decoder depth. It generally shows fewer odd miscolorations than the artistic model but is also less colorful.
  • Video: This model is optimized for smooth, consistent, flicker-free video and is the least colorful of the three. Its architecture is the same as ‘stable’ but it is trained differently.
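The three variants can be summarized in code. The backbone names come from the descriptions above; the weight-file names follow the repo's naming scheme (the artistic file name appears in the download step below, while the stable/video file names should be treated as assumptions):

```python
# DeOldify model variants; weight-file names assumed from the repo's naming scheme
MODEL_VARIANTS = {
    "artistic": {"weights": "ColorizeArtistic_gen.pth", "backbone": "ResNet34"},
    "stable":   {"weights": "ColorizeStable_gen.pth",   "backbone": "ResNet101"},
    "video":    {"weights": "ColorizeVideo_gen.pth",    "backbone": "ResNet101"},
}


def weights_for(variant: str) -> str:
    """Weight file to download into ./models for a given variant."""
    return MODEL_VARIANTS[variant]["weights"]
```

For images, pick between the first two via `get_image_colorizer(artistic=True/False)`; the video model has its own entry point in the repo.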

Implementation

Python3




# Clone the DeOldify repository
!git clone https://github.com/jantic/DeOldify.git DeOldify
 
# change directory to the DeOldify repo
%cd DeOldify
 
# For Colab:
!pip install -r colab_requirements.txt
# For a local script, use instead:
# pip install -r requirements.txt
 
# import the pytorch library
import torch
# check for a GPU; colorization is very slow on CPU
if not torch.cuda.is_available():
    print('GPU not available.')
# necessary imports
import fastai
from deoldify.visualize import *
import warnings
warnings.filterwarnings("ignore", category=UserWarning,
                        message=".*?Your .*? set is empty.*?")
# download the pre-trained artistic model weights
!mkdir 'models'
!wget https://data.deepai.org/deoldify/ColorizeArtistic_gen.pth -O ./models/ColorizeArtistic_gen.pth
 
# use the get_image_colorizer function with the artistic model
colorizer = get_image_colorizer(artistic=True)
 
# Here, we provide the parameters such as source URL, render factor etc.
# (the URL below is truncated; supply a full image URL)
source_url = '&crop=smart&auto=webp&s=a5f2523513bb24648737760369d2864eb1f57118' #@param {type:"string"}
render_factor = 39  #@param {type: "slider", min: 7, max: 40}
watermarked = False #@param {type:"boolean"}
 
if source_url is not None and source_url != '':
    image_path = colorizer.plot_transformed_image_from_url(
        url=source_url, render_factor=render_factor,
        compare=True, watermarked=watermarked)
    show_image_in_notebook(image_path)
else:
    print('Provide a valid image URL.')
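One knob worth understanding is render_factor. To our reading of the DeOldify source, the model colorizes a downscaled square copy of the input whose side is render_factor * 16 pixels, then re-applies the resulting color to the full-resolution image, so higher values give more color detail at a higher memory cost (treat the factor of 16 as an assumption):

```python
def render_resolution(render_factor: int, base: int = 16) -> int:
    """Side length (px) of the internal square the model colorizes.
    base=16 matches our reading of the DeOldify source (an assumption)."""
    return render_factor * base


# the slider range 7..40 used above spans these internal resolutions
for rf in (7, 21, 39):
    print(rf, "->", render_resolution(rf), "px")
```

Low render factors tend to give bolder, more saturated color; high factors give finer detail but can wash colors out, which is why the default sits near the top of the slider for high-quality stills.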

 
 

DeOldify Results (Original B/W Image Credit here)

DeOldify Stable Results
