PyTorch implementation of ProjectedGAN (https://arxiv.org/abs/2111.01007)
ProjectedGAN establishes new state-of-the-art results on many datasets, both in metrics such as FID and in training convergence. Rather than judging images directly, it feeds the features of a pretrained feature network into multiple discriminators that operate at different resolutions. This paradigm of using multiple discriminators on projected features yields large performance gains. The authors use the generator from FastGAN, EfficientNet-Lite as the feature network, and Differentiable Augmentation.
(Results at 0, 8k, 40k and 100k images shown to the generator)
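For intuition, here is a minimal sketch of the multi-scale discriminator idea, not this repository's code: the names `FeatureBackbone`, `ScaleDiscriminator`, and `MultiScaleDiscriminator` are illustrative, and torchvision's `efficientnet_b0` stands in for the paper's EfficientNet-Lite so the example is self-contained. The paper additionally mixes features across channels and scales (the CCM/CSM modules), which is omitted here for brevity.

```python
import torch
import torch.nn as nn
from torchvision import models

class FeatureBackbone(nn.Module):
    """Frozen pretrained network returning feature maps at four scales."""
    def __init__(self):
        super().__init__()
        # efficientnet_b0 as a stand-in for the paper's EfficientNet-Lite.
        net = models.efficientnet_b0(weights="IMAGENET1K_V1").features
        self.stages = nn.ModuleList([net[:2], net[2], net[3], net[4:6]])
        for p in self.parameters():
            p.requires_grad_(False)

    def forward(self, x):
        feats = []
        for stage in self.stages:
            x = stage(x)
            feats.append(x)
        return feats  # channels 16, 24, 40, 112 at strides 2, 4, 8, 16

class ScaleDiscriminator(nn.Module):
    """Small patch discriminator for one feature resolution."""
    def __init__(self, in_ch):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 3, stride=1, padding=1),  # per-patch logits
        )

    def forward(self, f):
        return self.net(f)

class MultiScaleDiscriminator(nn.Module):
    """One independent discriminator per feature scale."""
    def __init__(self):
        super().__init__()
        self.backbone = FeatureBackbone()
        self.discs = nn.ModuleList(ScaleDiscriminator(c) for c in (16, 24, 40, 112))

    def forward(self, img):
        return [d(f) for d, f in zip(self.discs, self.backbone(img))]
```

The per-scale logit maps are then combined in the loss; the paper simply sums hinge losses over all discriminators.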
usage: projected_gan.py [-h] [--batch-size BATCH_SIZE] [--epochs N] [--lr LR] [--beta1 lambda] [--beta2 lambda]
[--latent-dim LATENT_DIM] [--diff-aug DIFF_AUG] [--checkpoint-path Path] [--save-all SAVE_ALL]
[--checkpoint-efficient-net Path] [--log-every LOG_EVERY] [--dataset-path Path]
[--image-size IMAGE_SIZE]
--batch-size BATCH_SIZE input batch size for training (default: 32)
--epochs N number of epochs to train (default: 50)
--lr LR learning rate (default: 0.0002)
--beta1 lambda First Adam beta parameter (default: 0.0)
--beta2 lambda Second Adam beta parameter (default: 0.999)
--latent-dim LATENT_DIM Latent dimension for generator (default: 100)
--diff-aug DIFF_AUG Apply differentiable augmentation to both real and generated images before the discriminator (default: True); see the sketch after this option list
--checkpoint-path Path Path for checkpointing (default: /checkpoints)
--save-all SAVE_ALL Save all discriminators, all CSMs, and the generator if True; only the generator otherwise (default: False)
--checkpoint-efficient-net Path Path for EfficientNet checkpoint (default: efficientnet_lite1.pth)
--log-every LOG_EVERY How often (in steps) the model and generated sample images are saved (default: 100)
--dataset-path Path Path to data (default: /data)
--image-size IMAGE_SIZE Size of images in dataset (default: 256)
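The `--diff-aug` option refers to Differentiable Augmentation: the same family of random, differentiable transforms is applied to both real and generated images before the discriminator sees them, so the discriminator never observes un-augmented images and gradients still flow back into the generator. Below is a minimal sketch of the idea; the actual DiffAugment policy uses color, translation, and cutout, while `diff_augment` here is an illustrative stand-in showing brightness and translation only.

```python
import torch
import torch.nn.functional as F

def diff_augment(x):
    """Toy differentiable augmentation: every op is plain tensor arithmetic,
    so D(diff_augment(G(z))) backpropagates into the generator G."""
    # Random per-sample brightness shift.
    x = x + (torch.rand(x.size(0), 1, 1, 1, device=x.device) - 0.5)
    # Random translation by up to 1/8 of the image size: zero-pad, roll, crop.
    n, c, h, w = x.shape
    shift = h // 8
    tx = int(torch.randint(-shift, shift + 1, (1,)))
    ty = int(torch.randint(-shift, shift + 1, (1,)))
    x = F.pad(x, [shift, shift, shift, shift])
    x = torch.roll(x, shifts=(ty, tx), dims=(2, 3))
    return x[:, :, shift:shift + h, shift:shift + w]

# Applied on both branches of the discriminator loss, e.g.:
#   d_real = D(diff_augment(real_images))
#   d_fake = D(diff_augment(G(z)))
```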
@inproceedings{Sauer2021NEURIPS,
  author    = {Axel Sauer and Kashyap Chitta and Jens M{\"{u}}ller and Andreas Geiger},
  title     = {Projected GANs Converge Faster},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year      = {2021},
}
@misc{2101.04775,
  author = {Bingchen Liu and Yizhe Zhu and Kunpeng Song and Ahmed Elgammal},
  title  = {Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis},
  year   = {2021},
  eprint = {arXiv:2101.04775},
}
@inproceedings{1905.11946,
  author    = {Mingxing Tan and Quoc V. Le},
  title     = {EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks},
  booktitle = {International Conference on Machine Learning (ICML)},
  year      = {2019},
  eprint    = {arXiv:1905.11946},
}
@misc{2006.10738,
  author = {Shengyu Zhao and Zhijian Liu and Ji Lin and Jun-Yan Zhu and Song Han},
  title  = {Differentiable Augmentation for Data-Efficient GAN Training},
  year   = {2020},
  eprint = {arXiv:2006.10738},
}