Feature/dick/anomaly score normalization #35

Merged: 44 commits merged into development from feature/dick/anomaly-score-normalization, Dec 21, 2021

Conversation

@djdameln (Contributor) commented on Dec 17, 2021

Description

  • This PR adds anomaly score and anomaly map normalization. A substantial refactor was needed to make this functionality compatible with the TorchMetrics evaluation design.

Changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to not work as expected)
  • This change requires a documentation update

  • Adds NormalizationCallback, which normalizes the predicted scores and error maps based on the distribution of normal data in the training set (a hedged sketch of the idea follows this list).
  • A test case was added for the normalization callback, which asserts that the image-level performance with and without normalization is the same.
  • Normalization is repeated with NumPy operations in the post-process method of the OpenVINO inferencer, so that inference does not depend on torch.
  • The adaptive threshold and training-set statistics are now implemented as TorchMetrics classes, which de-clutters the base anomaly module and allows the values to be saved and loaded automatically with the model's state dict (see the metric sketch after the checklist).
  • The F1 score is now computed with the TorchMetrics F1 class configured with the computed threshold, instead of the custom OptimalF1 metric.
  • Our custom FeatureExtractor was replaced by torchvision's create_feature_extractor, which required bumping the torch and torchvision versions.

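A minimal sketch of the normalization idea in Python. The function name, the log transform, and the exact statistics are illustrative assumptions rather than the PR's code; the general approach is to standardize raw anomaly scores with statistics collected from normal training data and map them to [0, 1] with the standard normal CDF, so that a score equal to the adaptive threshold lands at 0.5.

```python
import torch
from torch.distributions import Normal

def normalize_scores(scores: torch.Tensor, mean: torch.Tensor, std: torch.Tensor,
                     threshold: torch.Tensor) -> torch.Tensor:
    """Map raw anomaly scores to [0, 1] using training-set statistics (illustrative sketch)."""
    standard_normal = Normal(torch.tensor(0.0), torch.tensor(1.0))
    # Standardize the scores and the adaptive threshold with the training-set mean/std
    # (assuming log-scores are roughly normally distributed).
    standardized = (torch.log(scores) - mean) / std
    standardized_threshold = (torch.log(threshold) - mean) / std
    # Centering on the threshold makes a score equal to the threshold map to 0.5.
    return standard_normal.cdf(standardized - standardized_threshold)
```

In the OpenVINO inferencer the same arithmetic would be repeated with NumPy/SciPy (e.g. scipy.stats.norm.cdf), so the normalized outputs match the PyTorch path.
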
Checklist:

  • My code follows the pre-commit style and check guidelines of this project.
  • I have performed a self-review of my code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • My changes generate no new warnings
  • I have added tests that prove my fix is effective or that my feature works
  • New and existing tests pass locally with my changes

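For the TorchMetrics refactor mentioned above, here is a minimal sketch of what an adaptive-threshold metric could look like. The class, attribute, and argument names are assumptions rather than the PR's code; the point is that a value stored as metric state travels with the model's state dict, and that the stock torchmetrics F1 can then be configured with the computed threshold instead of a custom OptimalF1.

```python
import torch
from torchmetrics import F1, Metric, PrecisionRecallCurve

class AdaptiveThreshold(Metric):
    """Pick the threshold that maximizes F1 on the validation scores (illustrative sketch)."""

    def __init__(self) -> None:
        super().__init__()
        self.precision_recall_curve = PrecisionRecallCurve(num_classes=1)
        # persistent=True makes the value part of the state_dict, so it is saved and
        # loaded together with the model weights.
        self.add_state("value", default=torch.tensor(0.0), persistent=True)

    def update(self, preds: torch.Tensor, target: torch.Tensor) -> None:
        self.precision_recall_curve.update(preds, target)

    def compute(self) -> torch.Tensor:
        precision, recall, thresholds = self.precision_recall_curve.compute()
        f1_scores = (2 * precision * recall) / (precision + recall + 1e-10)
        # precision/recall have one more entry than thresholds; drop the trailing point.
        self.value = thresholds[torch.argmax(f1_scores[:-1])]
        return self.value

# Image-level F1 with the computed threshold, using the stock torchmetrics class:
# image_f1 = F1(threshold=adaptive_threshold.value.item())
```
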
@samet-akcay (Contributor) left a comment:

Thanks a lot @djdameln! Looks really good, only minor stuff.

@@ -9,6 +9,7 @@

from .compress import CompressModelCallback
from .model_loader import LoadModelCallback
from .normalization import NormalizationCallback
Reviewer (Contributor):
I think NormalizationCallback is not sufficient to understand what the callback does. I think it should be something like ScoreNormalizationCallback or something similar.

@djdameln (Author) replied:
Yeah, I tried AnomalyScoreNormalizationCallback but found it too verbose. ScoreNormalizationCallback would be better, though maybe a bit vague. How about OutputNormalizationCallback, because it normalizes the outputs of the model?

@djdameln (Author):
I changed it to OutputNormalizationCallback. Let me know what you think

Additional review threads were resolved on: anomalib/core/metrics/training_stats.py, anomalib/data/tiler.py, requirements/requirements.txt, and anomalib/models/padim/model.py.
@djdameln (Author) commented:
Thanks for the review @samet-akcay. I addressed your comments and updated the PR

@@ -1,8 +1,9 @@
pillow==8.3.2
nncf>=2.0.0
pytorch-lightning==1.3.6
torch==1.8.1
torchvision==0.9.1
-f https://download.pytorch.org/whl/torch_stable.html
Reviewer (Contributor):
I think we need to remove this line too.
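For context on this requirements change: the description notes that the custom FeatureExtractor was replaced by torchvision's create_feature_extractor, which is what forces the torch/torchvision bump (create_feature_extractor ships with torchvision 0.11+). A minimal usage sketch follows; the backbone and layer names are illustrative assumptions, not taken from this PR.

```python
import torch
from torchvision.models import resnet18
from torchvision.models.feature_extraction import create_feature_extractor

# Build a feature extractor that returns intermediate feature maps as a dict.
backbone = resnet18(pretrained=True)
feature_extractor = create_feature_extractor(
    backbone, return_nodes=["layer1", "layer2", "layer3"]
)

features = feature_extractor(torch.rand(1, 3, 224, 224))
# features maps each requested node name to its output tensor,
# e.g. features["layer1"].shape == torch.Size([1, 64, 56, 56])
```
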

from torch.distributions import LogNormal, Normal


class OutputNormalizationCallback(Callback):
Reviewer (Contributor):

How about the name from your PR title? :) Like AnomalyScoreNormalizationCallback or ScoreNormalizationCallback. It doesn't matter if it's verbose; readability matters more.

@samet-akcay (Contributor) left a comment:
Done with the second round. This will be super cool once merged.

@samet-akcay (Contributor) left a comment:
Thanks a lot for the effort @djdameln! This is a crucial addition!

@samet-akcay (Contributor) commented:
Maybe we could use version 0.2.3 for this one?



def test_normalizer():
config = get_configurable_parameters(model_config_path="anomalib/models/padim/config.yaml")
A collaborator commented:
Can you add get_dataset_path() to config.dataset.path? Without this, the CI is downloading the entire dataset again in the current directory. Due to the permissions of the original files, the CI cannot remove the dataset once the test runs.
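
A minimal sketch of the requested change; the import paths and the exact signature of get_dataset_path are assumptions based on this repository's test helpers, not the PR's final code.

```python
# Hypothetical import paths for illustration.
from anomalib.config import get_configurable_parameters
from tests.helpers.dataset import get_dataset_path

def test_normalizer():
    config = get_configurable_parameters(model_config_path="anomalib/models/padim/config.yaml")
    # Point the test at the shared dataset location instead of re-downloading
    # the dataset into the current working directory on every CI run.
    config.dataset.path = get_dataset_path()
    ...
```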

@samet-akcay merged commit f14d195 into development on Dec 21, 2021
@samet-akcay deleted the feature/dick/anomaly-score-normalization branch on December 21, 2021 at 13:28