This repository implements the algorithms and experiments described in [Learning with Stochastic Orders](https://openreview.net/forum?id=P3PJokAqGW).
To get started, create and activate the conda environment below:

```bash
conda env create -f gmorder_env.yml
conda activate gmorder_env
```
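As a quick sanity check that the environment is usable (assuming it installs PyTorch, which the WGAN-GP training code relies on), you can run:

```bash
# Prints the PyTorch version and whether CUDA is visible (False is expected on macOS)
python -c "import torch; print(torch.__version__, torch.cuda.is_available())"
```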
Note: if installing this environment on macOS, remove the `cudatoolkit` dependency from `gmorder_env.yml`, as CUDA no longer supports macOS. Additionally, for running generative modeling training on larger datasets, such as CIFAR-10, you will need to ensure that `device` is set to `cpu`, e.g., in `run_wgan_train_images.sh` and `run_wgan_dominate_images.sh`.
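To locate the relevant setting, something like the following can help (this is just a convenience; the exact variable or flag name used inside the scripts may differ, so check the matching lines before editing):

```bash
# List every line that mentions "device" in the two training scripts
grep -n "device" run_wgan_train_images.sh run_wgan_dominate_images.sh
```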
To run the 1D portfolio optimization experiment, open and execute the `portfolio_optimization` notebook.
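For example, with a standard Jupyter installation and assuming the notebook file carries the usual `.ipynb` extension:

```bash
jupyter notebook portfolio_optimization.ipynb
```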
To run GAN training using the Choquet-Toland (CT) distance, use the shell script below:

```bash
sh run_choquet_train_distributions.sh
```

Open this script and change `data` (Line 7) to one of `circle_of_gaussians`, `swiss_roll`, or `image_point_cloud`.
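For instance, after editing, Line 7 might read as follows (the assignment syntax here is a sketch; keep whatever form the script already uses):

```bash
# Line 7 of run_choquet_train_distributions.sh (illustrative)
data="swiss_roll"  # alternatives: "circle_of_gaussians", "image_point_cloud"
```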
FID results reported in the paper (lower is better):

| Model | FID |
|---|---|
| g0: WGAN-GP | 69.67 |
| g*: WGAN-GP + VDC | 67.317 ± 0.776 |
To train a baseline WGAN-GP model, run:

```bash
sh run_wgan_train_images.sh
```

Once training is complete, to reproduce the WGAN-GP + VDC results from the paper, execute:

```bash
sh run_wgan_dominate_images.sh
```

If needed, change the file paths in this script to point to where the WGAN-GP checkpoint file and hyperparameter args are saved.
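For example, the relevant assignments might be edited along these lines (the variable names below are placeholders for illustration; use the names that actually appear in `run_wgan_dominate_images.sh`):

```bash
# Illustrative placeholders: point these at the saved WGAN-GP baseline outputs
checkpoint_path="/path/to/wgan_gp_checkpoint.pt"
args_path="/path/to/wgan_gp_args"
```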
For several of our generator, discriminator, and Choquet critic architectures, we draw inspiration from and leverage code in the following public GitHub repositories:
- https://github.com/caogang/wgan-gp
- https://github.com/ozanciga/gans-with-pytorch
- https://github.com/CW-Huang/CP-Flow
To cite our work, please use:

```bibtex
@inproceedings{
  domingo-enrich2023learning,
  title={Learning with Stochastic Orders},
  author={Carles Domingo-Enrich and Yair Schiff and Youssef Mroueh},
  booktitle={International Conference on Learning Representations},
  year={2023},
  url={https://openreview.net/forum?id=P3PJokAqGW}
}
```