TensorFlow 2 implementation of Arbitrary Style Transfer in Real-time with Adaptive Instance Normalization, which introduces the adaptive instance normalization (AdaIN) layer and enables style transfer with arbitrary style images.
This implementation is based on the original Torch implementation as well as the excellent unofficial PyTorch implementation.
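For reference, AdaIN aligns the channel-wise mean and standard deviation of the content features to those of the style features. A minimal TF2 sketch of the operation (the function name and epsilon value here are illustrative, not taken from this repository):

import tensorflow as tf

def adain(content, style, epsilon=1e-5):
    # Per-sample, per-channel statistics over the spatial axes (H, W).
    c_mean, c_var = tf.nn.moments(content, axes=[1, 2], keepdims=True)
    s_mean, s_var = tf.nn.moments(style, axes=[1, 2], keepdims=True)
    c_std = tf.sqrt(c_var + epsilon)
    s_std = tf.sqrt(s_var + epsilon)
    # Normalize the content features, then rescale them with the style statistics.
    return s_std * (content - c_mean) / c_std + s_mean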
Create a Python 3.7 virtual environment and activate it:
virtualenv -p python3.7 venv
source ./venv/bin/activate
Next, install the required dependencies:
pip install -r requirements.txt
To style an image using a pre-trained model, specify the content and style images as well as the directory of the model checkpoint:
python style.py \
--log-dir model/ \
--content-image images/content/avril_cropped.jpg \
--style-image images/style/impronte_d_artista_cropped.jpg \
--output-image images/output/avril_stylized.jpg \
--alpha 1.0
The alpha parameter controls the degree of stylization of the content image; it can be varied between 0 and 1 (the default is 1.0).
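Internally, alpha interpolates between the content features and the AdaIN-transformed features before decoding. A sketch reusing the adain function above (decoder and the feature variables are illustrative placeholders):

t = adain(content_features, style_features)
t = alpha * t + (1.0 - alpha) * content_features
# alpha = 0 reproduces the content image, alpha = 1 gives full stylization.
stylized_image = decoder(t)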
Training requires both the MSCOCO and WikiArt datasets. The former is downloaded automatically and converted to TFRecords using TensorFlow Datasets; the WikiArt style images, however, need to be downloaded from here.
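For context, loading MSCOCO via TensorFlow Datasets looks roughly like the sketch below; the exact dataset name, split, and preprocessing used by train.py may differ:

import tensorflow_datasets as tfds

# The first call downloads MSCOCO and converts it to TFRecords under
# ~/tensorflow_datasets; subsequent runs reuse the cached files.
content_ds = tfds.load("coco/2014", split="train", shuffle_files=True)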
To start training, simply run:
python train.py \
--style-dir WIKIART_IMAGE_DIR \
--log-dir model/
where WIKIART_IMAGE_DIR is the location of the WikiArt images.
Training 160,000 steps with the default parameters takes about 6 hours on a Tesla P100 GPU.
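Training follows the objective from the paper: a content loss between the encoder features of the decoded image and the AdaIN target t, plus a style loss matching the mean and standard deviation of VGG features across several layers. A hedged sketch of the style term (function and variable names are illustrative; this repository's implementation may differ in detail):

import tensorflow as tf

def style_loss(output_feats, style_feats, epsilon=1e-5):
    # Match channel-wise mean and standard deviation of VGG features at each
    # chosen layer (the paper uses relu1_1 through relu4_1).
    loss = 0.0
    for f_out, f_sty in zip(output_feats, style_feats):
        mu_o, var_o = tf.nn.moments(f_out, axes=[1, 2])
        mu_s, var_s = tf.nn.moments(f_sty, axes=[1, 2])
        loss += tf.reduce_mean(tf.square(mu_o - mu_s))
        loss += tf.reduce_mean(tf.square(tf.sqrt(var_o + epsilon) - tf.sqrt(var_s + epsilon)))
    return loss

# The content term compares encoder features of the stylized output to the
# AdaIN target t (encode and stylized are placeholders):
#   content_loss = tf.reduce_mean(tf.square(encode(stylized) - t))
#   total_loss = content_loss + style_weight * style_loss(output_feats, style_feats)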
To track metrics and see style progress, start TensorBoard:
tensorboard --logdir model/
and navigate to localhost:6006.