Transform behaviour+documentation (openvinotoolkit#1953)
* only export eval_transform

* add transform documentation

* update data configs

* update changelog

* minor fix

* Update docs/source/snippets/data/transforms/inference_cli.sh

Co-authored-by: Ashwin Vaidya <ashwinnitinvaidya@gmail.com>

---------

Co-authored-by: Ashwin Vaidya <ashwinnitinvaidya@gmail.com>
djdameln and ashwinvaidya17 authored May 2, 2024
1 parent c71cf7e commit 8f5fa93
Showing 25 changed files with 399 additions and 21 deletions.
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -12,6 +12,7 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/).

### Changed

- Use default model-specific eval transform when only train_transform specified, by @djdameln in https://github.com/openvinotoolkit/anomalib/pull/1953
- 🔨Rename OptimalF1 to F1Max for consistency with the literature, by @samet-akcay in https://github.com/openvinotoolkit/anomalib/pull/1980
- 🐞Update OptimalF1 score to use BinaryPrecisionRecallCurve and remove num_classes by @ashwinvaidya17 in https://github.com/openvinotoolkit/anomalib/pull/1972

3 changes: 1 addition & 2 deletions configs/data/avenue.yaml
@@ -5,11 +5,10 @@ init_args:
clip_length_in_frames: 1
frames_between_clips: 1
target_frame: last
image_size: [256, 256]
transform: null
train_batch_size: 32
eval_batch_size: 32
num_workers: 8
transform: null
train_transform: null
eval_transform: null
val_split_mode: from_test
3 changes: 1 addition & 2 deletions configs/data/btech.yaml
@@ -2,11 +2,10 @@ class_path: anomalib.data.BTech
init_args:
root: "./dtasets/BTech"
category: "01"
image_size: [256, 256]
transform: null
train_batch_size: 32
eval_batch_size: 32
num_workers: 8
transform: null
train_transform: null
eval_transform: null
test_split_mode: from_dir
3 changes: 1 addition & 2 deletions configs/data/folder.yaml
@@ -8,12 +8,11 @@ init_args:
mask_dir: "ground_truth/broken_large"
normal_split_ratio: 0
extensions: [".png"]
image_size: [256, 256]
transform: null
train_batch_size: 32
eval_batch_size: 32
num_workers: 8
task: segmentation
transform: null
train_transform: null
eval_transform: null
test_split_mode: from_dir
3 changes: 1 addition & 2 deletions configs/data/kolektor.yaml
@@ -1,11 +1,10 @@
class_path: anomalib.data.Kolektor
init_args:
root: "./datasets/kolektor"
image_size: [256, 256]
transform: null
train_batch_size: 32
eval_batch_size: 32
num_workers: 8
transform: null
train_transform: null
eval_transform: null
test_split_mode: from_dir
3 changes: 1 addition & 2 deletions configs/data/mvtec.yaml
@@ -2,12 +2,11 @@ class_path: anomalib.data.MVTec
init_args:
root: ./datasets/MVTec
category: bottle
image_size: [256, 256]
transform: null
train_batch_size: 32
eval_batch_size: 32
num_workers: 8
task: segmentation
transform: null
train_transform: null
eval_transform: null
test_split_mode: from_dir
3 changes: 1 addition & 2 deletions configs/data/mvtec_3d.yaml
@@ -2,11 +2,10 @@ class_path: anomalib.data.MVTec3D
init_args:
root: ./datasets/MVTec3D
category: "bagel"
image_size: [256, 256]
transform: null
train_batch_size: 32
eval_batch_size: 32
num_workers: 8
transform: null
train_transform: null
eval_transform: null
test_split_mode: from_dir
3 changes: 1 addition & 2 deletions configs/data/shanghaitech.yaml
@@ -5,11 +5,10 @@ init_args:
clip_length_in_frames: 1
frames_between_clips: 1
target_frame: LAST
image_size: [256, 256]
transform: null
train_batch_size: 32
eval_batch_size: 32
num_workers: 8
transform: null
train_transform: null
eval_transform: null
val_split_mode: FROM_TEST
3 changes: 1 addition & 2 deletions configs/data/ucsd_ped.yaml
@@ -5,11 +5,10 @@ init_args:
clip_length_in_frames: 2
frames_between_clips: 10
target_frame: LAST
image_size: [256, 256]
transform: null
train_batch_size: 8
eval_batch_size: 1
num_workers: 8
transform: null
train_transform: null
eval_transform: null
val_split_mode: FROM_TEST
3 changes: 1 addition & 2 deletions configs/data/visa.yaml
@@ -2,11 +2,10 @@ class_path: anomalib.data.Visa
init_args:
root: "./datasets/visa"
category: "capsules"
image_size: [256, 256]
transform: null
train_batch_size: 32
eval_batch_size: 32
num_workers: 8
transform: null
train_transform: null
eval_transform: null
test_split_mode: from_dir
8 changes: 8 additions & 0 deletions docs/source/markdown/guides/how_to/data/index.md
@@ -13,6 +13,13 @@ This section contains tutorials on how to fully utilize the data components of a
Learn more about how to use `Folder` dataset to train anomalib models on your custom data.
:::

:::{grid-item-card} {octicon}`versions` Using data transforms
:link: ./transforms
:link-type: doc

Learn how to apply custom data transforms to the input images.
:::

:::{grid-item-card} {octicon}`table` Input tiling
:link: ./input_tiling
:link-type: doc
@@ -27,5 +34,6 @@ Learn more about how to use the tiler for input tiling.
:hidden:
./custom_data
./transforms
./input_tiling
```
131 changes: 131 additions & 0 deletions docs/source/markdown/guides/how_to/data/transforms.md
@@ -0,0 +1,131 @@
# Data Transforms

This tutorial will show how Anomalib applies transforms to the input images, and how these transforms can be configured. Anomalib uses the [Torchvision Transforms v2 API](https://pytorch.org/vision/main/auto_examples/transforms/plot_transforms_getting_started.html) to apply transforms to the input images.

Common transforms include `Resize`, which resizes the input images to a fixed width and height, and `Normalize`, which normalizes the pixel values of the input images to a pre-determined range. The normalization statistics are usually chosen to match the pre-training characteristics of the model's backbone. For example, when the backbone was pre-trained on the ImageNet dataset, it is usually recommended to normalize the input images with the mean and standard deviation of the ImageNet pixel values. In addition, many other transforms can be useful for pre-processing the input images or for applying data augmentations during training.
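
For illustration, here is a minimal sketch of such a pre-processing pipeline (the choice of transforms and the ImageNet statistics are an example, not a requirement):

```python
from torchvision.transforms.v2 import Compose, Normalize, Resize

# Resize to a fixed input size, then normalize with the ImageNet mean and
# standard deviation, matching backbones that were pre-trained on ImageNet.
transform = Compose(
    [
        Resize((256, 256), antialias=True),
        Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ],
)
```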

## Using custom transforms for training and evaluation

When we create a new datamodule, it will not be equipped with any transforms by default. When we load an image from the datamodule, it will have the same shape and pixel values as the original image from the file system.

```{literalinclude} ../../../../snippets/data/transforms/datamodule_default.txt
:language: python
```

Now let's create another datamodule, this time passing a simple resize transform to the datamodule using the `transform` argument.

::::{tab-set}
:::{tab-item} API
:sync: label-1

```{literalinclude} ../../../../snippets/data/transforms/datamodule_custom.txt
:language: python
```

:::

:::{tab-item} CLI
:sync: label-2

In the CLI, we can specify custom transforms by providing the class path and init args of the Torchvision transform class:

```{literalinclude} ../../../../snippets/data/transforms/datamodule_custom_cli.yaml
:language: yaml
```

:::

::::

As we can see, the datamodule now applies the custom transform when loading the images, resizing both training and test data to the specified shape.

In the example above, we used the `transform` argument to assign a single set of transforms to be used in both the training and the evaluation subsets. In some cases, we might want to apply distinct sets of transforms to training and evaluation. This can be useful, for example, when we want to apply random data augmentations during training to improve the generalization of our model. Different transforms for training and evaluation can be specified through the `train_transform` and `eval_transform` arguments. The train transforms will be applied to the images in the training subset, while the eval transforms will be applied to the images in the validation, testing and prediction subsets.

::::{tab-set}
:::{tab-item} API
:sync: label-1

```{literalinclude} ../../../../snippets/data/transforms/datamodule_train_eval.txt
:language: python
```

:::

:::{tab-item} CLI
:sync: label-2

`train_transform` and `eval_transform` can also be set separately from the CLI. Note that the CLI also supports stacking multiple transforms using a `Compose` object.

```{literalinclude} ../../../../snippets/data/transforms/datamodule_train_eval_cli.yaml
:language: yaml
```

:::

::::

```{note}
Please note that it is not recommended to pass only one of `train_transform` and `eval_transform` while keeping the other parameter empty. Doing so may lead to unexpected behaviour, such as a mismatch between the training and testing subsets in terms of image shape and normalization characteristics.
```

## Model-specific transforms

Each Anomalib model defines a default set of transforms that is applied to the input data when the user does not specify any custom transforms. The default transforms of a model can be inspected using its `configure_transforms` method, for example:

```{literalinclude} ../../../../snippets/data/transforms/model_configure.txt
:language: python
```

As shown in the example, the default transforms for PatchCore consist of resizing the image to 256x256 pixels, followed by center cropping to 224x224. Finally, the pixel values are normalized with the mean and standard deviation of the ImageNet dataset. These transforms correspond to the recommended pre-processing steps described in the original PatchCore paper.
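
A minimal sketch of such an inspection (assuming the `Patchcore` class from `anomalib.models`; the comment reflects the defaults described above):

```python
from anomalib.models import Patchcore

model = Patchcore()
print(model.configure_transforms())
# Per the PatchCore defaults described above: Resize to 256x256,
# CenterCrop to 224x224, then Normalize with ImageNet statistics.
```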

These model-specific defaults ensure that Anomalib automatically applies suitable transforms when the user does not pass any custom transforms to the datamodule. In that case, Anomalib's engine assigns the model's default transform to the `train_transform` and `eval_transform` of the datamodule at the start of the fit/val/test sequence:

::::{tab-set}
:::{tab-item} API
:sync: label-1

```{literalinclude} ../../../../snippets/data/transforms/model_fit.txt
:language: python
```

:::

:::{tab-item} CLI
:sync: label-2

Since the CLI uses the Anomalib engine under the hood, the same principles concerning model-specific transforms apply when running a model from the CLI. Hence, the following command ensures that PatchCore's model-specific default transform is used when fitting the model.

```{literalinclude} ../../../../snippets/data/transforms/model_fit_cli.sh
:language: bash
```

:::

::::
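
Since the snippets above are included from separate files, here is a minimal API sketch of the same flow (assuming anomalib's `Engine` class; a sketch under those assumptions, not the snippet itself):

```python
from anomalib.data import MVTec
from anomalib.engine import Engine
from anomalib.models import Patchcore

# No transforms are passed to the datamodule...
datamodule = MVTec()
model = Patchcore()

# ...so at the start of fit, the engine assigns the model's default
# transform to the datamodule's train_transform and eval_transform.
engine = Engine()
engine.fit(model=model, datamodule=datamodule)

print(datamodule.train_transform)  # PatchCore's default transform
print(datamodule.eval_transform)  # the same default transform
```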

## Transforms during inference

To ensure consistent transforms between training and inference, Anomalib includes the eval transform in the exported model. During inference, the transforms are applied within the model's forward pass, which guarantees that they are always applied. The following example illustrates how Anomalib's torch inferencer automatically applies the transforms stored in the model. The same principles apply to both Lightning inference and OpenVINO inference.

::::{tab-set}
:::{tab-item} API
:sync: label-1

```{literalinclude} ../../../../snippets/data/transforms/inference.txt
:language: python
```

:::

:::{tab-item} CLI
:sync: label-2

The CLI behaviour is equivalent to that of the API. When a model is trained with a custom `eval_transform`, as in the example below, the `eval_transform` is included both in the saved Lightning model and in the exported torch model.

```{literalinclude} ../../../../snippets/data/transforms/inference_cli.yaml
:language: yaml
```

```{literalinclude} ../../../../snippets/data/transforms/inference_cli.sh
:language: bash
```

:::

::::
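
For reference, a minimal sketch of torch inference with the embedded transform (assuming anomalib's `TorchInferencer`; the model and image paths are hypothetical):

```python
from anomalib.deploy import TorchInferencer

# The exported model carries its eval transform, so the raw image can be
# passed directly; no manual resizing or normalization is needed.
inferencer = TorchInferencer(path="weights/torch/model.pt")  # hypothetical path
predictions = inferencer.predict(image="path/to/image.png")  # hypothetical path
print(predictions.pred_score, predictions.pred_label)
```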
17 changes: 17 additions & 0 deletions docs/source/snippets/data/transforms/datamodule_custom.txt
@@ -0,0 +1,17 @@
from anomalib.data import MVTec
from torchvision.transforms.v2 import Resize

transform = Resize((256, 256))
datamodule = MVTec(transform=transform)

datamodule.prepare_data()
datamodule.setup()

datamodule.train_transform
# Resize(size=[256, 256], interpolation=InterpolationMode.BILINEAR, antialias=warn)
datamodule.eval_transform
# Resize(size=[256, 256], interpolation=InterpolationMode.BILINEAR, antialias=warn)

next(iter(datamodule.train_data))["image"].shape
# torch.Size([3, 256, 256])
next(iter(datamodule.test_data))["image"].shape
# torch.Size([3, 256, 256])
18 changes: 18 additions & 0 deletions docs/source/snippets/data/transforms/datamodule_custom_cli.yaml
@@ -0,0 +1,18 @@
class_path: anomalib.data.MVTec
init_args:
root: ./datasets/MVTec
category: bottle
image_size: [256, 256]
train_batch_size: 32
eval_batch_size: 32
num_workers: 8
task: segmentation
test_split_mode: from_dir
test_split_ratio: 0.2
val_split_mode: same_as_test
val_split_ratio: 0.5
seed: null
transform:
- class_path: torchvision.transforms.v2.Resize
init_args:
size: [256, 256]
10 changes: 10 additions & 0 deletions docs/source/snippets/data/transforms/datamodule_default.txt
@@ -0,0 +1,10 @@
from anomalib.data import MVTec

datamodule = MVTec()
datamodule.prepare_data()
datamodule.setup()

next(iter(datamodule.train_data))["image"].shape
# torch.Size([3, 900, 900])
next(iter(datamodule.test_data))["image"].shape
# torch.Size([3, 900, 900])
33 changes: 33 additions & 0 deletions docs/source/snippets/data/transforms/datamodule_train_eval.txt
@@ -0,0 +1,33 @@
from anomalib.data import MVTec
from torchvision.transforms.v2 import Compose, Normalize, RandomAdjustSharpness, RandomHorizontalFlip, Resize

train_transform = Compose(
[
RandomAdjustSharpness(sharpness_factor=0.7, p=0.5),
RandomHorizontalFlip(p=0.5),
Resize((256, 256), antialias=True),
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
],
)
eval_transform = Compose(
[
Resize((256, 256), antialias=True),
Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
],
)

datamodule = MVTec(train_transform=train_transform, eval_transform=eval_transform)
datamodule.prepare_data()
datamodule.setup()

datamodule.train_transform
# Compose(
# RandomAdjustSharpness(p=0.5, sharpness_factor=0.7)
# RandomHorizontalFlip(p=0.5)
# Resize(size=[256, 256], interpolation=InterpolationMode.BILINEAR, antialias=True)
# Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], inplace=False)
# )
datamodule.eval_transform
# Compose(
# Resize(size=[256, 256], interpolation=InterpolationMode.BILINEAR, antialias=True)
# Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225], inplace=False)
# )