Backlog
Set Naming Convention
XXX [YYY] [WithMask]
- XXX is the layer name, such as Inv1x1Conv
- YYY is the data dimension, such as 2D; the default is 3D
- WithMask indicates that the layer supports a mask
Examples (see the small helper sketch below):
- Inv1x1Conv -> Inv1x1Conv for a 3D tensor
- Inv1x1Conv2D -> Inv1x1Conv for a 2D tensor
- Inv1x1Conv2DWithMask -> Inv1x1Conv for a 2D tensor with mask support
- A mask is a heavy memory burden. Flow-based models already need a lot of memory during training, so we want to avoid masks as much as possible.
- Masks often compromise the invertibility properties.
- Multi-dimensional layers often have complex implementations, which makes proofs and verification difficult.
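As a concrete illustration of the naming rule above, here is a minimal helper sketch. It is not part of TFGENZOO; it only encodes the XXX [YYY] [WithMask] convention so the examples can be checked programmatically.

```python
def flow_layer_name(base: str, dims: int = 3, with_mask: bool = False) -> str:
    """Build a layer class name following the XXX [YYY] [WithMask] convention.

    base: the layer name (XXX), e.g. "Inv1x1Conv"
    dims: the data dimension (YYY); 3D is the default and gets no suffix
    with_mask: append the WithMask suffix when the layer supports a mask
    (Illustration only; this helper is not part of TFGENZOO.)
    """
    name = base
    if dims != 3:
        name += f"{dims}D"
    if with_mask:
        name += "WithMask"
    return name


assert flow_layer_name("Inv1x1Conv") == "Inv1x1Conv"
assert flow_layer_name("Inv1x1Conv", dims=2) == "Inv1x1Conv2D"
assert flow_layer_name("Inv1x1Conv", dims=2, with_mask=True) == "Inv1x1Conv2DWithMask"
```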
New training results: Oxford-flower102 in only 8 hours! (Quadro P6000 x 1)
data | NLL(test) | epoch | pretrained |
---|---|---|---|
Oxford-flower102 | 4.590211391448975 | 1024 | — |
For more details, see my internship report (Japanese only; if you need a translated version, please contact me).
- Implement SPADE Layer
- Implement Invertible Flatten Layer
- Update the documentation with some examples
New training results: Oxford-flower102 in only 4 hours! (Quadro P6000 x 1)
data | NLL(test) | epoch | pretrained |
---|---|---|---|
Oxford-flower102 | 4.640194892883301 | 512 | — |
Updated the loss value in Glow-MNIST:
data | NLL(val) | epoch | pretrained |
---|---|---|---|
MNIST | 1.33 | 64 | — |
Moved the example code to TFGENZOO_EXAMPLE.
Published an installable alpha version!
Updated the loss value:
data | NLL(val) | epoch | pretrained |
---|---|---|---|
MNIST | 1.56 | about 450 | — |
```sh
# on the host: build the image and start the container
docker-compose build
sh run_script.sh
# inside the container: run the Glow-MNIST training example
[docker]$ cd workspace/Github
[docker]$ python
python 3.6 > from TFGENZOO.examples.glow_mnist import trainer
python 3.6 > trainer.main()
```
Requirements
- Nvidia-Docker
- a GPU better than an NVIDIA 1080
- about 4 hours of training time
I will try to fix the loss function (the training is correct, but the NLL value is somehow wrong…)
I may implement a normalizing flow.
You can try it with these commands in your shell.
You can also check the training process via TensorBoard using the logs in TFGENZOO/glow_log (a sample command is shown after the shell commands below).
```sh
sh run_script.sh
[docker]$ cd workspace/Github
[docker]$ python
python 3.6 > from TFGENZOO.examples.glow_mnist import trainer
python 3.6 > trainer.main()
```
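For the TensorBoard check mentioned above, a command along these lines should work; the port is an assumption (TensorBoard's default), while TFGENZOO/glow_log is the log directory named above.

```sh
# point TensorBoard at the Glow training logs, then open http://localhost:6006
tensorboard --logdir TFGENZOO/glow_log --port 6006
```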