This is a Keras implementation of a new volume-preserving flow using a series of Householder transformations as described in the following paper:
- Jakub M. Tomczak, Max Welling, Improving Variational Auto-Encoders using Householder Flow, NIPS Workshop on Bayesian Deep Learning, arXiv preprint, 2016
There are two datasets available:
- MNIST: downloaded automatically;
- Histopathology: must be unpacked before running the experiment.
To run an experiment:
- Unpack the histopathology data.
- Set up your experiment in `run_experiment.py` (additional changes may be needed in `commons/configuration.py`); a hypothetical configuration sketch follows this list.
- Run the experiment: `python run_experiment.py`
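The exact option names and structure used by `run_experiment.py` and `commons/configuration.py` may differ from what is shown here; the snippet below is only an illustrative guess at the kind of settings the steps above refer to (dataset choice, number of Householder transformations, warm-up), not the repository's actual code.

```python
# Hypothetical configuration sketch -- real variable names may differ.
config = {
    'dataset': 'MNIST',            # or 'Histopathology' (unpack it first)
    'number_of_Householders': 2,   # 0 = vanilla VAE, 1, 2, ... = VAE + HF
    'warm_up': 100,                # number of warm-up epochs, or 0 for none
}
```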
You can run a vanilla VAE or a VAE with the Householder Flow (HF) by setting the `number_of_Householders` variable to `0` (vanilla VAE) or `1, 2, ...` (VAE + HF).
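The flow itself is a chain of Householder reflections applied to the latent code; each reflection is an orthogonal map, which is why the flow is volume preserving. Below is a minimal sketch of one such step in Keras, assuming the reflection vector is predicted by a `Dense` layer from a conditioning input `h` (e.g. the encoder's hidden state, or the previous reflection vector). The layer name and wiring are illustrative, not the repository's actual code.

```python
# Minimal sketch of one Householder flow step (not the repository's code):
# z_new = z - 2 * v * (v^T z) / ||v||^2, a Householder reflection of z,
# where the reflection vector v is predicted from a conditioning input h.
import tensorflow as tf
from tensorflow import keras


class HouseholderStep(keras.layers.Layer):
    def __init__(self, latent_dim, **kwargs):
        super().__init__(**kwargs)
        # Linear map producing the reflection vector v_t from h
        # (e.g. the encoder's hidden state, or the previous v_{t-1}).
        self.v_layer = keras.layers.Dense(latent_dim)

    def call(self, inputs):
        z, h = inputs                                             # z: (batch, d)
        v = self.v_layer(h)                                       # reflection vector
        v_norm_sq = tf.reduce_sum(v * v, axis=-1, keepdims=True) + 1e-8
        v_dot_z = tf.reduce_sum(v * z, axis=-1, keepdims=True)
        # Householder reflection: orthogonal, hence volume preserving.
        z_new = z - 2.0 * v * v_dot_z / v_norm_sq
        return z_new, v


# Usage sketch: chain `number_of_Householders` such steps after sampling z_0.
# z, v = z0, h_encoder
# for step in [HouseholderStep(latent_dim) for _ in range(number_of_Householders)]:
#     z, v = step([z, v])
```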
Additionally, you can choose whether to run the experiment with warm-up (Sønderby, Casper Kaae, et al. "Ladder Variational Autoencoders." NIPS 2016; Bowman, Samuel R., et al. "Generating Sentences from a Continuous Space." arXiv 2015) or without it.
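Warm-up here refers to KL annealing: the KL term of the VAE objective is multiplied by a coefficient that grows from 0 to 1 over the first training epochs, so the model first learns to reconstruct before the prior regularization kicks in fully. A minimal sketch of such a schedule is shown below; the function name and the way it enters the loss are illustrative, and the repository may implement the schedule differently.

```python
# Illustrative sketch of warm-up (KL annealing), not the repository's code.
def kl_warmup_coefficient(epoch, warmup_epochs):
    """Weight on the KL term: ramps linearly from 0 to 1 over `warmup_epochs`."""
    if warmup_epochs <= 0:                 # "none": full KL weight from the start
        return 1.0
    return min(1.0, (epoch + 1) / float(warmup_epochs))

# Typical use inside the training loop (hypothetical variable names):
# loss = reconstruction_loss + kl_warmup_coefficient(epoch, 100) * kl_divergence
```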
Please cite our paper if you use this code in your research:
@article{TW:2016,
  title={Improving Variational Auto-Encoders using Householder Flow},
  author={Tomczak, Jakub M and Welling, Max},
  journal={arXiv preprint arXiv:1611.09630},
  year={2016}
}
The research conducted by Jakub M. Tomczak was funded by the European Commission within the Marie Skłodowska-Curie Individual Fellowship (Grant No. 702666, ”Deep learning and Bayesian inference for medical imaging”).
I am very grateful to Szymon Zaręba who helped me to develop the framework at its early stage.
Our previous implementation of the Householder Flow contained a bug. To avoid similar issues, this version is implemented in Keras. Please see the updated version of the paper for the corrected results.