The presentation draws on the work proposed in the 2017 paper *Interpretable Explanations of Black Boxes by Meaningful Perturbation* by Ruth Fong & Andrea Vedaldi.
Repo for Paper: https://github.com/ruthcfong/perturb_explanations
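Since the reading list below assumes some familiarity with the method, here is a minimal PyTorch sketch of the paper's deletion-game idea: optimize a low-resolution mask that, once upsampled and used to blend the image with a blurred copy, drives down the target class score while deleting as little of the image as possible. Everything here is illustrative, not a faithful reproduction: the function name `explain_with_mask`, the hyperparameter values, and the average-pooling blur (the paper uses a Gaussian baseline, among others) are assumptions; see the paper repo above or @jacobgil's port for a full implementation.

```python
import torch
import torch.nn.functional as F

def explain_with_mask(model, img, target_class, steps=300, lr=0.1,
                      l1_coeff=0.01, tv_coeff=0.2):
    """Sketch of meaningful-perturbation mask optimization.

    Assumes `model` is a pretrained classifier returning logits and
    `img` is a preprocessed tensor of shape (1, 3, H, W).
    """
    model.eval()
    # Perturbation baseline: a crude blurred copy of the image
    # (average pooling stands in for the paper's Gaussian blur).
    blurred = F.avg_pool2d(img, kernel_size=11, stride=1, padding=5)
    # Low-resolution mask, upsampled each step to discourage adversarial
    # high-frequency artifacts; 1 = keep pixel, 0 = replace with blur.
    mask = torch.full((1, 1, 28, 28), 0.5, requires_grad=True)
    optimizer = torch.optim.Adam([mask], lr=lr)

    for _ in range(steps):
        up_mask = F.interpolate(mask, size=img.shape[-2:], mode='bilinear',
                                align_corners=False).clamp(0, 1)
        # Blend: keep the image where the mask is 1, blur where it is 0.
        perturbed = img * up_mask + blurred * (1 - up_mask)
        prob = F.softmax(model(perturbed), dim=1)[0, target_class]
        # Total-variation term encourages smooth, contiguous masks.
        tv = ((mask[..., 1:, :] - mask[..., :-1, :]).abs().mean()
              + (mask[..., :, 1:] - mask[..., :, :-1]).abs().mean())
        # Minimize the class probability while penalizing how much of
        # the image is deleted (how far the mask falls below 1).
        loss = prob + l1_coeff * (1 - mask).abs().mean() + tv_coeff * tv
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        mask.data.clamp_(0, 1)

    return mask.detach()
```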
- 📖 Simonyan, K., Vedaldi, A., & Zisserman, A. (2014). Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps. arXiv:1312.6034 [cs]. http://arxiv.org/abs/1312.6034
- 📺 How Deep Neural Networks Work
- 📝 Activation Functions in Neural Networks
- 📝 What is Meta-Learning in Machine Learning
- 📝 Understanding Neural Networks: From Activation Function To Back Propagation
- 📝 Understanding Neural Networks
- 📺 Introduction to Optimization: Gradient Based Algorithms
- 📘 Molnar, Christoph. *Interpretable Machine Learning: A Guide for Making Black Box Models Explainable*, 2019, Chapter 10. https://christophm.github.io/interpretable-ml-book/
- 📝 CNN Heat Maps: Gradients vs. DeconvNets vs. Guided Backpropagation
Acknowledgment: thanks to @jacobgil for the PyTorch implementation of this paper's framework.