PyTorch-AdaIN-StyleTransfer

This project is an unofficial PyTorch implementation, built as Google Colab notebooks, of the paper Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization.

All credit for the method goes to the paper's authors: Xun Huang and Serge Belongie.

Description

The paper presents a style transfer algorithm that uses a fixed, pretrained VGG-19 (up to relu4_1) to encode a style image and a content image, then transfers the style of the style image onto the content image. The novel contribution is an adaptive instance normalisation (AdaIN) layer: it first normalises the content features to zero mean and unit standard deviation, then rescales them so that their mean and standard deviation match those of the style features. The resulting features are decoded back into an image by a decoder that mirrors the VGG-19 encoder.
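For illustration, here is a minimal sketch of the AdaIN operation described above, not code taken from this repository; the (N, C, H, W) tensor layout and the epsilon term are assumptions.

```python
import torch

def adain(content_feat, style_feat, eps=1e-5):
    """Adaptive instance normalisation over (N, C, H, W) feature maps."""
    # Per-channel statistics of each image in the batch
    c_mean = content_feat.mean(dim=(2, 3), keepdim=True)
    c_std = content_feat.std(dim=(2, 3), keepdim=True) + eps
    s_mean = style_feat.mean(dim=(2, 3), keepdim=True)
    s_std = style_feat.std(dim=(2, 3), keepdim=True)
    # Normalise content features to zero mean / unit std,
    # then rescale them to the style statistics
    return (content_feat - c_mean) / c_std * s_std + s_mean

if __name__ == "__main__":
    c = torch.randn(1, 512, 32, 32)  # e.g. content features from relu4_1
    s = torch.randn(1, 512, 32, 32)  # e.g. style features from relu4_1
    print(adain(c, s).shape)  # torch.Size([1, 512, 32, 32])
```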

Requirements

  • A Google Drive account to run the notebooks.
  • A pretrained VGG-19 .pth file. I used the file provided by Naoto Inoue in his implementation of the same paper. Link: vgg_normalised.pth (a loading sketch follows below).
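A rough sketch of how such a checkpoint might be loaded and sliced into the fixed encoder. Here build_vgg19() is a hypothetical helper, and the slice index depends on your exact layer layout, which must match the checkpoint:

```python
import torch
import torch.nn as nn

# Hypothetical: build_vgg19() must define the exact nn.Sequential
# layer layout the checkpoint was saved from.
vgg = build_vgg19()
vgg.load_state_dict(torch.load("vgg_normalised.pth"))

# Keep only the layers up to relu4_1 for the encoder and freeze them
encoder = nn.Sequential(*list(vgg.children())[:31])
for p in encoder.parameters():
    p.requires_grad = False
```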

To train:

A note on the datasets: the free version of Google Colab only provides about 30 GB of usable storage while a GPU runtime is attached, so you may have to reduce the size of the datasets. In this implementation I used 40,000 images from each dataset.
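If your copies of the datasets exceed the available storage, a throwaway helper along these lines can copy a random subset; the function name and the directory arguments are hypothetical, not part of this repository:

```python
import os
import random
import shutil

def subsample_dataset(src_dir, dst_dir, n=40_000, seed=0):
    """Copy a random n-image subset of src_dir into dst_dir."""
    os.makedirs(dst_dir, exist_ok=True)
    files = sorted(os.listdir(src_dir))
    random.Random(seed).shuffle(files)
    for name in files[:n]:
        shutil.copy(os.path.join(src_dir, name), os.path.join(dst_dir, name))
```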

Trained Model

You can download my model from here. It was trained for 120,000 iterations and produces image quality close to the official implementation. The style weight (gamma) used was 2.0.

Manual

  • Copy the contents of this repository into a Google Drive folder, then download the pretrained VGG-19 file and place it in the same folder. If you just want to experiment with the network, add the pretrained model file as well. If you want to train the network from scratch, e.g. to change the style weight, also download the datasets.

Inference

  • Open the Style Transfer Test notebook. In the first cell you need to specify the directory of the folder in your Google Drive. Additionally, if you changed the image folder, you also need to change the img_dir variable accordingly.
  • The next cell loads the network.
  • Then the images are loaded. Here you can choose your own images.
  • In the next cell you can change the alpha variable, which controls how strongly the style image influences the content image (see the sketch below).
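As a rough illustration of what alpha does (following the paper's content-style trade-off; the encoder, decoder, and adain names are assumptions carried over from the sketch in the Description):

```python
def style_transfer(encoder, decoder, content_img, style_img, alpha=1.0):
    # Encode both images up to relu4_1 with the fixed VGG-19 encoder
    content_feat = encoder(content_img)
    style_feat = encoder(style_img)
    # Stylised target features (adain as sketched in the Description)
    t = adain(content_feat, style_feat)
    # alpha = 0.0 decodes the plain content features,
    # alpha = 1.0 applies the full style transfer
    t = alpha * t + (1 - alpha) * content_feat
    return decoder(t)
```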

Training

  • Open the Style Transfer Train notebook. In the first cell you need to specify the directory of the folder in your Google Drive.
  • Then you will have to download/import the datasets into the Colab instance. I have not implemented this step, as depending on your storage you will need to reduce the number of images used from each dataset.
  • Change pathStyleImages and pathContentImages to the folders containing the images. Note that each folder must contain only images; nested folders are not supported (see the loader sketch after this list).
  • Then run the rest of the cells.
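For reference, a flat-folder dataset along these lines satisfies the "images only, no nesting" constraint; the class name and the resize/crop sizes (short edge to 512, random 256x256 crop, as in the paper's training setup) are assumptions, not this repository's code:

```python
import os
from PIL import Image
from torch.utils.data import Dataset
from torchvision import transforms

class FlatImageFolder(Dataset):
    """Loads every file in a single, non-nested folder of images."""

    def __init__(self, root):
        self.paths = sorted(os.path.join(root, f) for f in os.listdir(root))
        # Resize the short edge to 512, then take a random 256x256 crop
        self.transform = transforms.Compose([
            transforms.Resize(512),
            transforms.RandomCrop(256),
            transforms.ToTensor(),
        ])

    def __len__(self):
        return len(self.paths)

    def __getitem__(self, idx):
        img = Image.open(self.paths[idx]).convert("RGB")
        return self.transform(img)
```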

Results

[Table of style images and the corresponding generated images; see the repository for the images.]

As can be seen above, the results are not quite as good as those presented in the paper. This can be explained by the paper's model being trained for 160,000 iterations, whereas I only trained mine for 120,000. Additionally, the original model was trained on 80,000 images from each dataset, whereas I only trained on 40,000.
