TensorFlow implementation of a Generative Adversarial Network (GAN) for approximating a 1D Gaussian distribution.
The desired result of GAN training is a discriminator decision boundary (db) that settles at one-half, with the probability density function (pdf) of the generated data matching the pdf of the original data.
The images below show that the results are well shaped for various 1D Gaussian distributions.
In contrast, the code in the two references below does not give stable results when the mean, sigma, or random seed is changed. Please check it out for yourself.
|             | mean = -1 | mean = +1 |
|-------------|-----------|-----------|
| stdev = 0.7 | ![]() | ![]() |
| stdev = 1.0 | ![]() | ![]() |
| stdev = 2.0 | ![]() | ![]() |
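For reference, the optimal discriminator for a fixed generator is D*(x) = p_data(x) / (p_data(x) + p_g(x)), which equals one-half everywhere once the generated pdf matches the original pdf. A minimal NumPy check of this fact (names and grid are illustrative, not taken from the repository):

```python
import numpy as np

def gaussian_pdf(x, mean, stdev):
    # Density of N(mean, stdev^2) evaluated at x.
    return np.exp(-0.5 * ((x - mean) / stdev) ** 2) / (stdev * np.sqrt(2 * np.pi))

x = np.linspace(-4.0, 4.0, 9)
p_data = gaussian_pdf(x, mean=-1.0, stdev=0.7)  # pdf of the original data
p_g = p_data.copy()                             # a perfectly trained generator
d_star = p_data / (p_data + p_g)                # optimal discriminator output
print(d_star)  # every entry is 0.5
```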
The implementation is based on the projects:
[1] Project by Eric Jang: BLOG, CODE
[2] Project by John Glover: BLOG, CODE
Both the generator and the discriminator are fully-connected neural networks with one hidden layer.
|              | Generator       | Discriminator    |
|--------------|-----------------|------------------|
| Input layer  | 1 node          | 1 node           |
| Hidden layer | 32 nodes + ReLU | 32 nodes + ReLU  |
| Output layer | 1 node          | 1 node + sigmoid |
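The forward passes implied by this table can be sketched in NumPy as follows. This is an illustration of the 1 → 32 → 1 architecture only, not the repository's TensorFlow code; the initializer and helper names are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_params(rng):
    # Hypothetical 1 -> 32 -> 1 weights; the repository's initializer may differ.
    return (rng.normal(scale=0.1, size=(1, 32)), np.zeros(32),
            rng.normal(scale=0.1, size=(32, 1)), np.zeros(1))

def mlp(x, w1, b1, w2, b2, sigmoid_out=False):
    # One hidden layer of 32 ReLU units, matching the table above.
    h = np.maximum(0.0, x @ w1 + b1)
    y = h @ w2 + b2
    if sigmoid_out:
        y = 1.0 / (1.0 + np.exp(-y))  # discriminator emits a probability
    return y

g_params = init_params(rng)
d_params = init_params(rng)

z = rng.uniform(-1.0, 1.0, size=(8, 1))        # noise input to the generator
fake = mlp(z, *g_params)                        # generated 1D samples
score = mlp(fake, *d_params, sigmoid_out=True)  # discriminator probability
print(fake.shape, score.shape)
```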
In both references, the discriminator is pre-trained with the pdf of the original data.
In a real situation, we do not know the pdf of the original data; in fact, that is exactly what we want to learn.
In this implementation, an estimated pdf of the original data is used to pre-train the discriminator instead.
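One simple way to obtain such an estimate from samples alone is a normalized histogram; the sketch below is one possible estimator, not necessarily the one used in this repository:

```python
import numpy as np

rng = np.random.default_rng(0)
# In practice only samples are observed, never the true pdf.
samples = rng.normal(loc=-1.0, scale=0.7, size=10000)

# Histogram-based density estimate over a fixed grid.
counts, edges = np.histogram(samples, bins=100, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

# (centers, counts) pairs can then serve as (input, target) data
# when pre-training the discriminator toward the estimated density.
print(centers.shape, counts.shape)
```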
This implementation has been tested with TensorFlow r0.12 on Windows 10 and Ubuntu 14.04.