[Bug]: OpenVINO vs Torch inferencer on CPU gives significantly different results #2448

Open

FedericoDeBona opened this issue Dec 3, 2024 · 0 comments
Describe the bug

Following up on #2447: after training a model and exporting it to both OpenVINO and Torch, the visual results and pred_score differ significantly between the two inferencers. It looks like a normalization problem, but the metadata of both models contains the same values.

OpenVINO inference - pred_score: 0.7632662161599639
[screenshot: OpenVINO visualization]
Torch inference - pred_score: 0.6897885799407959
[screenshot: Torch visualization]
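
For reference, the gap appears to be present in the raw scores as well if the normalization is inverted. A minimal sketch, assuming anomalib's standard min-max formula (normalized = (raw - image_threshold) / (pred_scores.max - pred_scores.min) + 0.5, clipped to [0, 1]) and using the metadata values printed by the full code below:

# Sketch: invert min-max normalization to recover the raw anomaly scores.
# The formula is an assumption based on anomalib's min-max normalization;
# the constants come from the exported models' metadata (see output below).
threshold = 30.5954647064209    # image_threshold
score_min = 25.861536026000977  # pred_scores.min
score_max = 52.29983139038086   # pred_scores.max

def denormalize(normalized: float) -> float:
    """Map a normalized pred_score back to the raw anomaly score."""
    return (normalized - 0.5) * (score_max - score_min) + threshold

print(denormalize(0.7632662161599639))  # OpenVINO -> ~37.56
print(denormalize(0.6897885799407959))  # Torch    -> ~35.61

If that formula matches what the inferencers apply, the raw scores differ by roughly 1.9, which points at the model output or preprocessing rather than at the normalization metadata.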

Full code

from anomalib.data import MVTec
from anomalib.models import Patchcore
from anomalib.engine import Engine
from anomalib.deploy import ExportType, OpenVINOInferencer, TorchInferencer
from PIL import Image
from glob import glob
from anomalib.utils.visualization import ImageVisualizer
from anomalib.utils.visualization.image import VisualizationMode
from anomalib import TaskType
from IPython.display import display  # display() below assumes a notebook environment
import os
import cv2

CATEGORY="screw"

datamodule = MVTec(category=CATEGORY)
model = Patchcore()
engine = Engine()

engine.fit(datamodule=datamodule, model=model)
engine.export(model=model, export_type=ExportType.TORCH)
engine.export(model=model, export_type=ExportType.OPENVINO)

vino_inferencer = OpenVINOInferencer(
    path=f"/home/trainer/trainer_engine/results/Patchcore/MVTec/{CATEGORY}/latest/weights/openvino/model.bin",
    metadata=f"/home/trainer/trainer_engine/results/Patchcore/MVTec/{CATEGORY}/latest/weights/openvino/metadata.json",
    device="CPU")
torch_inferencer = TorchInferencer(
    path=f"/home/trainer/trainer_engine/results/Patchcore/MVTec/{CATEGORY}/latest/weights/torch/model.pt",
    device="cpu")
print(vino_inferencer.metadata)
print(torch_inferencer.metadata)
# Output
"""
{'task': 'segmentation', 'image_threshold': 30.5954647064209, 'pixel_threshold': 34.377254486083984, 'anomaly_maps.min': 9.125215530395508, 'anomaly_maps.max': 49.421302795410156, 'pred_scores.min': 25.861536026000977, 'pred_scores.max': 52.29983139038086}
{'task': <TaskType.SEGMENTATION: 'segmentation'>, 'image_threshold': 30.5954647064209, 'pixel_threshold': 34.377254486083984, 'anomaly_maps.min': 9.125215530395508, 'anomaly_maps.max': 49.421302795410156, 'pred_scores.min': 25.861536026000977, 'pred_scores.max': 52.29983139038086}

"""


visualizer = ImageVisualizer(mode=VisualizationMode.FULL, task=TaskType.SEGMENTATION)

for defect_path in sorted(glob(f"/home/trainer/trainer_engine/datasets/{CATEGORY}/test/*")):
    defect = os.path.basename(defect_path)
    print(f"===== {defect} =====")
    for img_path in sorted(glob(f"/home/trainer/trainer_engine/datasets/{CATEGORY}/test/{defect}/*")):
        img_name = os.path.basename(img_path)
        vino_res = vino_inferencer(cv2.imread(img_path))
        torch_res = torch_inferencer(img_path)
        print("VINO")
        print(vino_res.pred_score)
        display(Image.fromarray(cv2.resize(visualizer.visualize_image(vino_res), (500 * 3, 125 * 3))))
        print("TORCH")
        print(torch_res.pred_score)
        display(Image.fromarray(cv2.resize(visualizer.visualize_image(torch_res), (500 * 3, 125 * 3))))
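
One possible confound in the snippet above: cv2.imread returns a BGR array, while TorchInferencer loads the image from the path itself (RGB), so the two inferencers may not be seeing identical inputs. A minimal variation to rule that out, assuming OpenVINOInferencer expects an RGB array like the one anomalib's own read_image utility produces:

# Sketch: feed the OpenVINO inferencer an RGB array to rule out a
# BGR/RGB channel-order mismatch (cv2.imread returns BGR).
img_rgb = cv2.cvtColor(cv2.imread(img_path), cv2.COLOR_BGR2RGB)
vino_res = vino_inferencer(img_rgb)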

Dataset

MVTec

Model

PatchCore

Steps to reproduce the behavior

See above

OS information

  • OS: Ubuntu 24.04
  • Python version: 3.10.14
  • Anomalib version: 1.2.0.dev0
  • PyTorch version: 2.4.0+cu118
  • CUDA/cuDNN version: 11.8
  • GPU models and configuration: GeForce RTX 3090 Ti

Expected behavior

Not sure, but I would expect the exported models to produce nearly identical results with both OpenVINO and Torch.

Screenshots

No response

Pip/GitHub

GitHub

What version/branch did you use?

1.2.0.dev0

Configuration YAML

-

Logs

-

Code of Conduct

  • I agree to follow this project's Code of Conduct