This project implements Grad-CAM (Gradient-weighted Class Activation Mapping) for visualizing and understanding convolutional neural networks.
Grad-CAM is a technique that provides visual explanations for predictions made by deep learning models. It highlights the regions of an input image that most influence the model's prediction.
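Concretely, Grad-CAM weights the feature maps of a chosen convolutional layer by the spatial average of the class score's gradients, applies a ReLU, and upsamples the result to the input size. The sketch below shows one way this can be done with PyTorch forward/backward hooks on a torchvision ResNet-18; the framework, model, and target layer are assumptions for illustration and may differ from what the notebook uses.

```python
# Minimal Grad-CAM sketch (assumes PyTorch + torchvision; the notebook may
# use a different framework, model, or layer).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

activations, gradients = {}, {}

def forward_hook(module, inputs, output):
    # Save the feature maps of the target layer during the forward pass.
    activations["value"] = output.detach()

def backward_hook(module, grad_input, grad_output):
    # Save the gradient of the class score w.r.t. those feature maps.
    gradients["value"] = grad_output[0].detach()

# Hook the last convolutional block; other layers can be visualized the same way.
target_layer = model.layer4
target_layer.register_forward_hook(forward_hook)
target_layer.register_full_backward_hook(backward_hook)

def grad_cam(image, class_idx=None):
    """Return a heatmap of shape (H, W) in [0, 1] for `image` of shape (1, 3, H, W)."""
    logits = model(image)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()

    # Weight each feature map by the mean of its gradients, then combine.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)   # (1, C, 1, 1)
    cam = (weights * activations["value"]).sum(dim=1)             # (1, h, w)
    cam = F.relu(cam)                                             # keep positive influence only
    cam = F.interpolate(cam.unsqueeze(1), size=image.shape[2:],
                        mode="bilinear", align_corners=False)[0, 0]
    cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)      # normalize to [0, 1]
    return cam
```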
- `COPY_OF_GRAD_CAM_TRAINING_TUTORIAL.ipynb`: Jupyter Notebook containing the implementation and training tutorial.
- `data.csv`: Dataset of dog images.
- Python 3.x
- Necessary libraries as specified in the notebook (e.g., TensorFlow or PyTorch, NumPy, Matplotlib)
- Clone the repository: `git clone https://github.com/stuyai/GradCam.git`
- Navigate to the project directory: `cd GradCam`
- Install the required dependencies: `pip install -r requirements.txt`
- Open the Jupyter Notebook: `jupyter notebook COPY_OF_GRAD_CAM_TRAINING_TUTORIAL.ipynb`
- Follow the instructions in the notebook to train the model and generate Grad-CAM visualizations (a standalone visualization sketch follows these steps).
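As an illustration of the final step, the hypothetical snippet below preprocesses an image, calls the `grad_cam` helper sketched earlier, and overlays the resulting heatmap on the input with Matplotlib. The file name, input size, and normalization constants are placeholders and may not match the notebook.

```python
# Hypothetical usage: overlay a Grad-CAM heatmap on an input image.
import matplotlib.pyplot as plt
from PIL import Image
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # ImageNet stats (assumed)
                         std=[0.229, 0.224, 0.225]),
])

img = Image.open("dog.jpg").convert("RGB")   # placeholder image path
tensor = preprocess(img).unsqueeze(0)        # (1, 3, 224, 224)
heatmap = grad_cam(tensor)                   # from the sketch above

plt.imshow(img.resize((224, 224)))
plt.imshow(heatmap.numpy(), cmap="jet", alpha=0.5)  # semi-transparent overlay
plt.axis("off")
plt.show()
```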
This project is licensed under the MIT License.
- Grad-CAM: Visual Explanations from Deep Networks via Gradient-based Localization (Selvaraju et al., 2017)
- The blog post referenced in the slides