
Backlog


News [2020/6/29]

Set Naming Convention

XXX [YYY] [WithMask]

  • XXX is the layer name, such as Inv1x1Conv
  • YYY is the data dimension, such as 2D (the default is 3D)
  • WithMask indicates that the layer supports a Mask

Inv1x1Conv -> Inv1x1Conv for a 3D Tensor
Inv1x1Conv2D -> Inv1x1Conv for a 2D Tensor
Inv1x1Conv2DWithMask -> Inv1x1Conv for a 2D Tensor with Mask support
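
The mask support is the interesting part in practice: a masked variant has to ignore padded timesteps when it accumulates the log-determinant. Below is a minimal, hypothetical sketch of what an `Inv1x1Conv2DWithMask`-style layer does for a `[batch, time, channels]` tensor; the class name, the `inverse` method, and the return convention are illustrative assumptions, not TFGENZOO's actual API.

```python
import tensorflow as tf

class ToyInv1x1Conv2DWithMask(tf.keras.layers.Layer):
    """Toy invertible 1x1 convolution over a [B, T, C] tensor with a mask."""

    def build(self, input_shape):
        c = int(input_shape[-1])
        # A random orthogonal init keeps |det W| = 1 at the start of training.
        w_init, _ = tf.linalg.qr(tf.random.normal([c, c]))
        self.W = tf.Variable(w_init, name="W")

    def call(self, x, mask=None):
        # Forward: multiply every timestep by the learned C x C matrix W.
        z = tf.einsum("btc,cd->btd", x, self.W)
        log_det_w = tf.math.log(tf.abs(tf.linalg.det(self.W)))
        if mask is None:
            timesteps = tf.cast(tf.shape(x)[1], x.dtype)           # scalar T
        else:
            timesteps = tf.reduce_sum(tf.cast(mask, x.dtype), -1)  # [B]
        # Each unmasked timestep contributes log|det W| to the log-determinant.
        return z, log_det_w * timesteps

    def inverse(self, z):
        return tf.einsum("btc,cd->btd", z, tf.linalg.inv(self.W))

# Usage: 8 sequences of length 32 with 4 channels, 20 valid timesteps each.
layer = ToyInv1x1Conv2DWithMask()
z, log_det = layer(tf.random.normal([8, 32, 4]),
                   mask=tf.sequence_mask([20] * 8, 32))
```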

Why implement these layers?

  1. Masks are a heavy memory burden. Flow-based models already need a lot of memory during training, so we want to reduce mask usage as much as possible.
  2. Masks often pollute the invertible characteristics of a layer.
  3. Multi-dimensional layers often have complex implementations, which makes proofs and verification difficult.

News [2020/6/16]

New training results for Oxford-flower102 after only 8 hours of training! (Quadro P6000 x 1)

| data | NLL (test) | epoch | pretrained |
| --- | --- | --- | --- |
| Oxford-flower102 | 4.590211391448975 | 1024 | |

For more detail, see my internship report (Japanese only; if you need a translated version, please contact me).

News [2020/6/12]

  • Implemented the SPADE Layer
  • Implemented the Invertible Flatten Layer (a sketch of the idea follows below)
  • Updated the documentation with some examples
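
As a rough idea of what an invertible flatten does (a hedged sketch under my own assumptions, not TFGENZOO's implementation): it reshapes `[B, H, W, C]` into `[B, H*W*C]`, remembers the per-sample shape so the inverse can undo the reshape, and contributes zero to the log-determinant because reshaping is volume preserving. The class name and return convention below are illustrative.

```python
import tensorflow as tf

class ToyInvertibleFlatten(tf.keras.layers.Layer):
    """Toy invertible flatten: [B, H, W, C] <-> [B, H*W*C]."""

    def build(self, input_shape):
        # Remember the per-sample (event) shape so the inverse can restore it.
        self.event_shape = [int(d) for d in input_shape[1:]]

    def call(self, x):
        z = tf.reshape(x, [tf.shape(x)[0], -1])
        # Reshaping is volume preserving, so the log-det contribution is zero.
        log_det = tf.zeros([tf.shape(x)[0]], dtype=x.dtype)
        return z, log_det

    def inverse(self, z):
        return tf.reshape(z, [-1, *self.event_shape])

# Usage: flatten a batch of 28x28x1 images and recover them exactly.
flatten = ToyInvertibleFlatten()
x = tf.random.normal([16, 28, 28, 1])
z, _ = flatten(x)            # z has shape [16, 784]
x_back = flatten.inverse(z)  # identical to x
```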

News [2020/5/29]

New training results for Oxford-flower102 after only 4 hours of training! (Quadro P6000 x 1)

| data | NLL (test) | epoch | pretrained |
| --- | --- | --- | --- |
| Oxford-flower102 | 4.640194892883301 | 512 | |

News [2020/5/25]

Updated the loss value for Glow-MNIST.

| data | NLL (val) | epoch | pretrained |
| --- | --- | --- | --- |
| MNIST | 1.33 | 64 | |

News [2020/5/1]

Moved the example code to TFGENZOO_EXAMPLE.

News [2020/4/24]

Published an installable alpha version!

News [2020/3/17]

Updated the loss value.

| data | NLL (val) | epoch | pretrained |
| --- | --- | --- | --- |
| MNIST | 1.56 | about 450 | |
```sh
docker-compose build
sh run_script.sh
[docker]$ cd workspace/Github
[docker]$ python
# in the Python 3.6 REPL:
>>> from TFGENZOO.examples.glow_mnist import trainer
>>> trainer.main()
```

Requirements

  • Nvidia-Docker
  • GPU: NVIDIA 1080 or better
  • about 4 hours of training time

News [2020/3/15]

I am trying to update to the correct loss function (training is correct, but the NLL is somewhat off…).

News [2020/2/28]

I have implemented a normalizing flow. You can try it with the commands below, and you can monitor the training process via TensorBoard in TFGENZOO/glow_log.

```sh
sh run_script.sh
[docker]$ cd workspace/Github
[docker]$ python
# in the Python 3.6 REPL:
>>> from TFGENZOO.examples.glow_mnist import trainer
>>> trainer.main()
```