Unlock the "why" behind your AI models' decisions with easy-explain
, a Python package designed to democratize access to advanced XAI algorithms. By integrating state-of-the-art explanation techniques with minimal code, we make AI transparency accessible to developers and researchers alike.
> [!IMPORTANT]
> Versions of `easy-explain` after 0.4.3 contain breaking changes: the import logic was reworked to support more models, such as YOLOv8. Have a look at the provided examples.
- Primary: Python 3.11
- Also supported: Python 3.9, 3.10

Ensure one of these Python versions is installed on your system to use `easy-explain`.
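If you want to verify this programmatically, a quick interpreter check (a minimal sketch, not part of the package) might look like:

```python
import sys

# Sanity check that the running interpreter is one of the supported versions.
assert (3, 9) <= sys.version_info[:2] <= (3, 11), (
    f"easy-explain supports Python 3.9-3.11, found {sys.version.split()[0]}"
)
```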
`easy-explain` can be seamlessly integrated into your projects with a straightforward installation process. To incorporate it into your project as a dependency, execute the following command in your terminal:

```bash
pip install easy-explain
```
Under the hood, `easy-explain` uses different packages depending on the model being explained. Captum, which builds on the PyTorch library, is used for classification models: it helps you understand how input features affect predictions and neuron activations, offering insight into how the model performs. Other algorithms are also supported, such as Grad-CAM variants and custom-made algorithms for other models, like the LRP implementation for YOLOv8.
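As a point of reference for what `easy-explain` wraps, here is a minimal sketch of Captum's raw `Occlusion` API applied to a torchvision classifier; the model choice, target class, and window sizes are illustrative assumptions, not the package's internals:

```python
import torch
from captum.attr import Occlusion
from torchvision import models

# Illustrative model; any PyTorch image classifier would do.
model = models.resnet50(weights=models.ResNet50_Weights.DEFAULT).eval()

# Placeholder for a preprocessed image batch of shape (N, C, H, W).
image = torch.rand(1, 3, 224, 224)

occlusion = Occlusion(model)

# Slide a 15x15 occluding patch across the image and measure how the
# logit of an arbitrary target class (285 here) changes.
attributions = occlusion.attribute(
    image,
    target=285,
    sliding_window_shapes=(3, 15, 15),
    strides=(3, 8, 8),
)
```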
Currently, `easy-explain` specializes in specific cutting-edge XAI methodologies for images:

- Occlusion: For deep insight into classification model decisions.
- CAM: SmoothGradCAMpp and LayerCAM for explainability on image classification models.
- Layer-wise Relevance Propagation (LRP): Specifically tailored to YOLOv8 models, unveiling the decision-making process in object detection tasks.
To begin unraveling the intricacies of your model's decisions, import and utilize the corresponding classes as follows:
```python
from easy_explain import OcclusionExplain

model = 'your-model'
occlusion_explain = OcclusionExplain(model=model)

# Each inner list pairs visualization types with the attribution signs to show.
vis_types = [["blended_heat_map", "original_image"]]
vis_signs = [["positive", "all"]]

occlusion_explain.generate_explanation(
    image_url="your-image",
    total_preds=5,
    vis_types=vis_types,
    vis_signs=vis_signs,
    labels_path="your-labels-path",
)
```
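Since Captum powers the classification explainers, the values accepted for `vis_types` and `vis_signs` appear to mirror Captum's image-attribution visualization options (methods such as `"heat_map"`, `"blended_heat_map"`, `"masked_image"`; signs such as `"all"`, `"positive"`, `"negative"`, `"absolute_value"`), though this mapping is an assumption worth confirming against the example notebooks.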
```python
from easy_explain import YOLOv8LRP

model = 'your-model'
image = 'your-image'

# power and eps are LRP hyperparameters; see the example notebooks for details.
lrp = YOLOv8LRP(model, power=2, eps=1, device='cpu')

# Compute relevance for a target class, then plot it over the frame.
explanation_lrp = lrp.explain(image, cls='your-class', contrastive=False).cpu()
lrp.plot_explanation(
    frame=image,
    explanation=explanation_lrp,
    contrastive=True,
    cmap='seismic',
    title='Explanation for your class',
)
```
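For concreteness, a plausible setup for the placeholders above loads a YOLOv8 checkpoint with the `ultralytics` package and an image with PIL; the checkpoint name and image path are assumptions, and the exact input types `YOLOv8LRP` expects are shown in the example notebooks:

```python
from PIL import Image
from ultralytics import YOLO

# Illustrative placeholders: the checkpoint name and image path are assumptions.
model = YOLO("yolov8n.pt")
image = Image.open("path/to/image.jpg")
```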
```python
from easy_explain import CAMExplain

model = 'your-model'
img = 'your-image'

# Standard ImageNet preprocessing parameters.
trans_params = {
    "ImageNet_transformation": {
        "Resize": {"h": 224, "w": 224},
        "Normalize": {"mean": [0.485, 0.456, 0.406], "std": [0.229, 0.224, 0.225]},
    }
}

explainer = CAMExplain(model)
input_tensor = explainer.transform_image(img, trans_params["ImageNet_transformation"])

# Pass the names of the layers to visualize; these are placeholders.
explainer.generate_explanation(
    img,
    input_tensor,
    multiple_layers=["a_layer", "another_layer", "a_third_layer"],
)
```
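The `ImageNet_transformation` parameters above correspond to standard ImageNet preprocessing; as a rough torchvision equivalent (an illustration, not the package's internal code):

```python
from torchvision import transforms

# Standard ImageNet preprocessing, roughly matching trans_params above.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
```

The layer names passed to `multiple_layers` can be looked up on any PyTorch model with `[name for name, _ in model.named_modules()]`.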
For more information about how to begin, have a look at the example notebooks.
Explore how `easy-explain` can be applied in various scenarios in the example notebooks.
`easy-explain` thrives on community contributions, from feature requests and bug reports to code submissions. We encourage you to share your insights, improvements, and use cases to foster a collaborative environment for advancing XAI.

- Submit Issues: Encountered a bug or have a feature idea? Let us know through our issues page.
- Code Contributions: Interested in contributing code? Please refer to our CONTRIBUTING guidelines for more information on how to get started.
Join us in making AI models more interpretable, transparent, and trustworthy with `easy-explain`.