interpret-community is developed and maintained by a community of people interested in exploring interpretability algorithms and how best to deploy them in industry settings. The goal is to accelerate the workflow of any individual or organization working on interpretable algorithms. Everyone is encouraged to contribute at any level, adding to and improving the implemented algorithms, notebooks, and visualizations.
Maintainers actively support the project and have made substantial contributions to the repository.
They have admin access to the repo and provide support by reviewing issues and pull requests.
- Beth Zeranski
  - DevOps pipelines for building the repository and running tests
  - Example notebooks
  - Documentation
- Brandon Horn
  - Explanation visualization dashboard
- Eduardo de Leon
  - Extension system for the interpret package
  - Feature planning
- Himanshu Chandola
  - Raw and engineered feature explanations and transformations
- Ilya Matiach
  - TabularExplainer, MimicExplainer, PFI and SHAP explainers
- Janhavi Mahajan
  - Explanation visualization dashboard: Cohort Editor
- Mark Soper
  - Explanation serialization
- Mehrnoosh Sameki
  - Feature planning
  - Customer engagement
  - Documentation
- Tong Wen
  - Initial project development and demos
- Vincent Xu
  - Managing project deliverables
  - Coordinating features
  - Customer engagement
  - Documentation
- Walter Martin
  - Repository structure
  - Explainer and explanation hierarchy
  - Explanation serialization
To contributors: please add your name to the list when you submit a patch to the project.
- Roger He
  - Tests for time-series functionality with pandas indexes