university seminar material
- Adam: https://arxiv.org/pdf/1412.6980.pdf
- Batchnorm: http://arxiv.org/pdf/1502.03167
- Dropout: http://jmlr.org/papers/volume15/srivastava14a/srivastava14a.pdf
- FCN: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Long_Fully_Convolutional_Networks_2015_CVPR_paper.pdf
- Feature Dropout: http://arxiv.org/pdf/1207.0580.pdf
- Highway networks: http://papers.nips.cc/paper/5850-training-very-deep-networks.pdf
- Random search for hyperparameter optimization: http://www.jmlr.org/papers/volume13/bergstra12a/bergstra12a
- Inception papers: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Szegedy_Going_Deeper_With_2015_CVPR_paper.pdf, http://www.cv-foundation.org/openaccess/content_cvpr_2016/papers/Szegedy_Rethinking_the_Inception_CVPR_2016_paper.pdf, http://arxiv.org/pdf/1602.07261
- ResNet: http://arxiv.org/pdf/1512.03385, https://arxiv.org/pdf/1603.05027v2.pdf
- Network in network: http://arxiv.org/pdf/1312.4400
- ReLU: http://papers.nips.cc/paper/4824-imagenet-classification-with-deep-convolutional-neural-networks.pdf
- PReLU: http://www.cv-foundation.org/openaccess/content_iccv_2015/papers/He_Delving_Deep_into_ICCV_2015_paper.pdf
- VAE: http://arxiv.org/pdf/1312.6114
- VGG: http://arxiv.org/pdf/1409.1556
https://arxiv.org/pdf/1809.02942.pdf
They learn the Game of Life (GOL) with a single conv kernel and a large, deep MLP on top; we did it with two conv kernels and 3 MLP nodes.
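For reference, the exact GOL rule can itself be expressed as one neighbour-count convolution followed by a tiny boolean nonlinearity, which is the baseline both network sizes are approximating. A minimal numpy sketch (hand-coded rule with a toroidal boundary, not the learned network from the paper):

```python
import numpy as np

def neighbour_count(grid):
    """Equivalent to convolving with the 3x3 all-ones kernel (zero centre),
    implemented as a sum of shifted copies; np.roll gives toroidal wrap."""
    n = np.zeros_like(grid)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            n += np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
    return n

def gol_step(grid):
    """One GOL update: a cell is alive next step iff it has exactly 3 live
    neighbours, or it is alive now and has exactly 2."""
    n = neighbour_count(grid)
    return ((n == 3) | ((grid == 1) & (n == 2))).astype(grid.dtype)

# Sanity check: a blinker oscillates with period 2.
blinker = np.zeros((5, 5), dtype=int)
blinker[2, 1:4] = 1          # horizontal bar
after = gol_step(blinker)    # becomes a vertical bar
```

The decision step after the convolution is a piecewise-constant function of (cell state, neighbour count), which is why a very small MLP on top of the right kernel suffices.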
Ideas:
- find a mini-network that is more efficient than your stochastic CA
- reinforcement-learn a CA surrogate based on polarized-light micrographs