# Building Artistic Paintings

Neural Style Transfer using PyTorch


## Description

NST (Neural Style Transfer) works on the concept that a pretrained convolutional neural network (CNN) can be used to take the content from one image and the style from another image and blend them together into a new image that has the content of the content image and the style of the style image.

The code has been implemented in PyTorch, based on the paper:

Neural Style Transfer

Below are the main steps for implementing NST; short code sketches for each step follow the list. The key points are:

1. Loading images and applying transforms:

   - Loading the content image and the style image, resizing them for faster processing, and converting them into tensors.
2. Loading the pretrained VGG19 model:

   - Loading the pretrained VGG19 model and freezing its layers so that no gradients are computed or updated for the network itself.
   - We only need the convolution (features) layers of the model, as we are not training a classifier.
   - Since the model has already been trained on a large number of images, we use its convolution layers to extract the content and style of the images.
3. Getting the content from different layers and creating the Gram matrix:

   - To get the content of an image, we pass it through the convolution layers of the VGG model and record the activations of selected layers.
   - Style refers to the colour and texture present in the image. To get the style of an image, we compute the correlations between the feature maps of selected convolution layers, which are captured by a Gram matrix.
4. Defining content and style loss, and comparing them with the target image:

   - The content loss compares the content-layer activations of the target image with those of the content image; the style loss compares the Gram matrices of the target image and the style image.
5. Initializing style weights for each layer, and content and style weights to balance the losses:

   - We define a style weight for each individual style layer, giving the earlier (shallower) layers a higher value than the deeper layers.

   - We define a content weight (alpha) and a style weight (beta) to balance the two losses. Usually alpha is small and beta is large; as beta increases, more style appears in the target image.

6. Various content and style images tried with different learning rates and different values of the style and content weights:

*(Examples of images generated with different content/style combinations.)*
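
The snippets below are minimal sketches of the steps above, not the exact code of this repository; helper names, layer choices, and hyperparameter values are illustrative assumptions. Step 1, loading the images and applying transforms (the `load_image` helper and the 400-pixel size are assumptions):

```python
from PIL import Image
import torch
from torchvision import transforms

def load_image(path, size=400):
    """Load an image, resize it for faster processing, and convert it to a normalized tensor."""
    image = Image.open(path).convert("RGB")
    transform = transforms.Compose([
        transforms.Resize(size),                            # shorter edge -> `size`, keeps aspect ratio
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],    # ImageNet statistics,
                             std=[0.229, 0.224, 0.225]),    # as expected by VGG19
    ])
    return transform(image).unsqueeze(0)                     # add a batch dimension

content = load_image("content.jpg")   # hypothetical paths, replace with your own images
style = load_image("style.jpg")
```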
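Step 2, loading the pretrained VGG19 model and freezing its parameters (newer torchvision versions replace `pretrained=True` with a `weights=` argument):

```python
import torch
from torchvision import models

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Only the convolutional part (vgg19.features) is needed; the classifier is discarded.
vgg = models.vgg19(pretrained=True).features.to(device).eval()

# Freeze every parameter so no gradients are computed or updated for the network itself.
for param in vgg.parameters():
    param.requires_grad_(False)
```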
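Step 3, extracting activations from selected layers and computing the Gram matrix. The layer indices below follow the commonly used choice for torchvision's VGG19 (`conv4_2` for content, the `conv*_1` layers for style); treating them as this repository's exact choice is an assumption:

```python
def get_features(image, model, layers=None):
    """Run the image through vgg.features and record activations at the chosen layers."""
    if layers is None:
        layers = {"0": "conv1_1", "5": "conv2_1", "10": "conv3_1",
                  "19": "conv4_1", "21": "conv4_2",   # conv4_2 is used for content
                  "28": "conv5_1"}
    features = {}
    x = image
    for name, layer in model._modules.items():
        x = layer(x)
        if name in layers:
            features[layers[name]] = x
    return features

def gram_matrix(tensor):
    """Correlation between feature maps: flatten the spatial dimensions and multiply."""
    _, c, h, w = tensor.size()
    t = tensor.view(c, h * w)
    return torch.mm(t, t.t())
```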
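Step 4, defining the content and style losses against the target image (initializing the target as a copy of the content image is a common choice and an assumption here):

```python
content = content.to(device)
style = style.to(device)

content_features = get_features(content, vgg)
style_features = get_features(style, vgg)
style_grams = {layer: gram_matrix(style_features[layer]) for layer in style_features}

# The target image is the tensor we optimize, starting from the content image.
target = content.clone().requires_grad_(True)

def content_loss(target_features, content_features):
    # Mean squared difference of the content-layer activations
    return torch.mean((target_features["conv4_2"] - content_features["conv4_2"]) ** 2)

def style_loss(target_features, style_grams, style_weights):
    loss = 0
    for layer, weight in style_weights.items():
        target_feature = target_features[layer]
        _, c, h, w = target_feature.shape
        target_gram = gram_matrix(target_feature)
        # Mean squared difference of the Gram matrices, normalized by feature-map size
        loss += weight * torch.mean((target_gram - style_grams[layer]) ** 2) / (c * h * w)
    return loss
```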
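Steps 5 and 6, weighting the individual style layers, balancing the two losses with alpha and beta, and running the optimization loop. The particular weights, learning rate, and step count are placeholder values, not the combinations reported in this repository:

```python
import torch.optim as optim

# Earlier (shallower) layers get higher weights than deeper layers.
style_weights = {"conv1_1": 1.0, "conv2_1": 0.8, "conv3_1": 0.5,
                 "conv4_1": 0.3, "conv5_1": 0.1}
alpha = 1      # content weight
beta = 1e6     # style weight; increasing it puts more style into the target image

optimizer = optim.Adam([target], lr=0.003)

for step in range(2000):
    target_features = get_features(target, vgg)
    c_loss = content_loss(target_features, content_features)
    s_loss = style_loss(target_features, style_grams, style_weights)
    total_loss = alpha * c_loss + beta * s_loss

    optimizer.zero_grad()
    total_loss.backward()
    optimizer.step()
```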

## Usage

Code for all the various combinations tried: