Advanced AI Explainability for computer vision. Support for CNNs, Vision Transformers, Classification, Object detection, Segmentation, Image similarity and more.
Model interpretability and understanding for PyTorch
Zennit is a high-level framework in Python using PyTorch for explaining/exploring neural networks using attribution methods like LRP.
Code for the NeurIPS 2021 paper "Scalable Rule-Based Representation Learning for Interpretable Classification" and the TPAMI paper "Learning Interpretable Rules for Scalable Data Representation and Classification".
Material related to my book Intuitive Machine Learning. Some of this material is also featured in my new book Synthetic Data and Generative AI.
Modular Python Toolbox for Fairness, Accountability and Transparency Forensics
Python library to explain tree ensemble (TE) models such as XGBoost using a rule list.
SIDU: SImilarity Difference and Uniqueness method for explainable AI
Code for the NeurIPS 2019 paper "Self-Critical Reasoning for Robust Visual Question Answering".
Interpretable Pre-Trained Transformers for Heart Time-Series Data
Implementation of "Beyond Neural Scaling: Beating Power Laws" for deep models and prototype-based models.
A Multimodal Transformer: Fusing Clinical Notes With Structured EHR Data for Interpretable In-Hospital Mortality Prediction
A PyTorch implementation of constrained optimization and modeling techniques
Explainability of Deep Learning Models
Find the test samples on which your (generative) model makes mistakes.
Code for the AAAI 2020 paper "Transparent Classification with Multilayer Logical Perceptrons and Random Binarization".
[ICCV 2023] Learning Support and Trivial Prototypes for Interpretable Image Classification
ProtoTorch is a PyTorch-based Python toolbox for bleeding-edge research in prototype-based machine learning algorithms.
NAISR: A 3D Neural Additive Model for Interpretable Shape Representation