TODOList.txt
QUESTIONS:
- [ELBO] Why is only the log-likelihood summed over the batch in Maurizio's ELBO implementation, and not the KL divergence? (A sketch of both reductions follows this list.)
- [PRIOR, KL DIV] Why do variational autoencoders use a standard normal prior? Can we train it and generalize more?
- Our plan is to compare the ELBO of both models. How can we compare the KL divergences? Perhaps by averaging them?
- Does it even make sense to compare KL divergences under different priors, or should we just compare the test log-likelihood?
- How do we change the KL divergence if we want to account for the Bayesian weights as well as the latent distribution parameters (e.g. Gaussian)?
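
A minimal sketch of how the two ELBO terms can each be reduced over the batch, to make the first question concrete. Assumptions (not taken from Maurizio's code): PyTorch, a Bernoulli decoder and a diagonal-Gaussian encoder; recon_logits, mu and log_var are placeholder names.

import torch
import torch.nn.functional as F

def elbo_terms(recon_logits, x, mu, log_var):
    # Reconstruction log-likelihood: summed over pixels and over the batch.
    log_lik = -F.binary_cross_entropy_with_logits(recon_logits, x, reduction="sum")
    # Analytic KL(q(z|x) || N(0, I)), here also summed over the batch.
    # If only the log-likelihood were summed while the KL stayed per-sample,
    # the two terms would sit on different scales, which is the first question.
    kl = -0.5 * torch.sum(1 + log_var - mu.pow(2) - log_var.exp())
    return log_lik, kl

# ELBO to maximize: log_lik - kl (the training loss is its negative).
# For a Bayesian-weight model, a second, data-independent KL over the weights
# would be added to kl (last question above).
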
TODO:
- Compare the following architectures on test log-likelihood + 20 reconstructions (subjective); a sketch for parameter count and model size follows this TODO list
-- Normal Autoencoder (incl. input and output neurons):
--- [784, 32, 784]
--- [784, 256, 128, 2, 128, 256, 784]
--- [784, 256, 128, 3, 128, 256, 784]
--- [784, 256, 128, 32, 128, 256, 784]
--- [784, 128, 64, 2, 64, 128, 784]
--- [784, 128, 64, 3, 64, 128, 784]
--- [784, 128, 64, 32, 64, 128, 784]
Things to compare:
- MEASURE TRAINING TIME!
- COUNT PARAMETERS!
- MODEL SIZE ON DISK (export variables, numpy matrix)
- TEST LOG-LIKELIHOOD
- LATENT SPACE VISUALIZATION
- VISUAL RECONSTRUCTION QUALITY (W/ W/O DENOISING CAPABILITIES)
-- Convolutional Autoencoder (take the architecture from the BNN-Autoencoder-Convolutions notebook)
--- Here we use a Gaussian likelihood, which needs to be compared
- Jonas: Implement the Variational Autoencoder as a Bayesian DNN
- Perhaps also compare to a sampling approach (fit mean and variance)
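
A sketch (assumption: PyTorch; the layer lists are the ones above) that builds the plain fully connected autoencoders from a layer-size list and reports two of the quantities to compare: parameter count and model size on disk. Training time would be measured around the actual training loop, which is not shown here.

import os
import tempfile
import torch
import torch.nn as nn

def build_autoencoder(sizes):
    # sizes includes input and output neurons, e.g. [784, 256, 128, 32, 128, 256, 784]
    layers = []
    for i in range(len(sizes) - 1):
        layers.append(nn.Linear(sizes[i], sizes[i + 1]))
        if i < len(sizes) - 2:  # ReLU between hidden layers only
            layers.append(nn.ReLU())
    layers.append(nn.Sigmoid())  # pixel outputs in [0, 1]
    return nn.Sequential(*layers)

for sizes in [[784, 32, 784],
              [784, 256, 128, 32, 128, 256, 784],
              [784, 128, 64, 2, 64, 128, 784]]:
    model = build_autoencoder(sizes)
    n_params = sum(p.numel() for p in model.parameters())
    path = os.path.join(tempfile.gettempdir(), "ae_tmp.pt")
    torch.save(model.state_dict(), path)  # "export variables" to a file on disk
    print(f"{sizes}: {n_params} parameters, {os.path.getsize(path) / 1024:.0f} KB on disk")
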
For the future:
- How do we make both models comparable?
- Ideas:
-- Compute the sample mean and sample variance of the Bayesian autoencoder's latent output, sample from the resulting Gaussian, and feed that into the decoder (sketched below)
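
A sketch of that idea under stated assumptions: PyTorch, and bayesian_encoder / decoder are hypothetical callables where every call to the encoder samples a fresh set of Bayesian weights.

import torch

def generate_like_vae(bayesian_encoder, decoder, x, n_weight_samples=50):
    # Each forward pass draws different Bayesian weights, so repeated
    # encodings of the same x give a distribution over latent codes.
    with torch.no_grad():
        zs = torch.stack([bayesian_encoder(x) for _ in range(n_weight_samples)])
        mu = zs.mean(dim=0)
        std = zs.std(dim=0)
        # Fit a diagonal Gaussian to the latent samples, draw from it,
        # and decode, mimicking how a VAE decoder is used generatively.
        z = mu + std * torch.randn_like(std)
        return decoder(z)
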
MEETING MAURIZIO:
- Compare I: test log-likelihood, using the same likelihood for both models (we thus ignore the KL divergence)
- Compare II: visualize the latent space
- Compare III: compare reconstructions visually, turning it into a generative model
- The N/M-scaled sum used in their Deep Gaussian Processes is correct for the KL divergence -> data independence
- We can try to make the prior trainable in VAEs (right now it is N(0,1)); both points are sketched below
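
A sketch (assumption: PyTorch; class and variable names are made up) of the two meeting points: a trainable diagonal-Gaussian prior replacing the fixed N(0,1), and the N/M scaling of the minibatch log-likelihood.

import torch
import torch.nn as nn

class TrainablePrior(nn.Module):
    def __init__(self, latent_dim):
        super().__init__()
        # Prior parameters are learned jointly with the encoder/decoder.
        self.mu_p = nn.Parameter(torch.zeros(latent_dim))
        self.log_var_p = nn.Parameter(torch.zeros(latent_dim))

    def kl(self, mu_q, log_var_q):
        # Analytic KL( N(mu_q, var_q) || N(mu_p, var_p) ), summed over
        # latent dimensions and over the batch.
        var_q, var_p = log_var_q.exp(), self.log_var_p.exp()
        return 0.5 * torch.sum(
            self.log_var_p - log_var_q
            + (var_q + (mu_q - self.mu_p) ** 2) / var_p
            - 1.0)

# Minibatch ELBO estimate with N training points and batch size M:
#   elbo ~ (N / M) * sum_batch log p(x | z) - kl
# The N/M factor puts the likelihood on the full-dataset scale, so a KL over
# global parameters (e.g. Bayesian weights, as in their Deep GPs) is added
# once and stays independent of the batch.
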
EXTRAS:
- We could have a look at hypernetworks for BNNs.
- Is there any advantage in changing the latent space of VAEs?
FORMAT: Single Python Notebook.