Merge develop 2 (#2936)
* Update base.txt

updated dependency version of datumaro

* Update __init__.py

update version string

* Update requirements.txt

* Temporarily skip visual prompting openvino integration test (#2323)

* Fix import dm.DatasetSubset (#2324)

Signed-off-by: Kim, Vinnam <vinnam.kim@intel.com>

* Fix semantic segmentation soft prediction dtype (#2322)

* Fix semantic segmentation soft prediction dtype

* relax ref sal vals check

---------

Co-authored-by: Songki Choi <songki.choi@intel.com>

* Constrain yapf version to less than 0.40.0 (#2328)

constrain yapf version

* Fix detection e2e tests (#2327)

Fix for detection

* Mergeback: Label addition/deletion 1.2.4 --> 1.4.0 (#2326)

* Make black happy

* Fix conflicts

* Merge-back: add test datasets and edit the test code

* Make black happy

* Fix mis-merge

* Make black happy

* Fix typo

* Fix typo

---------

Co-authored-by: Songki Choi <songki.choi@intel.com>

* Bump datumaro up to 1.4.0rc2 (#2332)

bump datumaro up to 1.4.0rc2

* Tiling Doc for releases 1.4.0 (#2333)

* Add tiling documentation

* Bump otx version to 1.4.0rc2 (#2341)

* OTX deploy for visual prompting task  (#2311)

* Enable `otx deploy`

* (WIP) integration test

* Docstring

* Update args for create_model

* Manually set image embedding layout

* Enable to use model api for preprocessing
- `fit_to_window` doesn't work as expected, so newly implemented `VisualPromptingOpenvinoAdapter` to use the new resize function

* Remove skipped test

* Updated

* Update unit tests on model wrappers

* Update

* Update configuration

* Fix not to patch pretrained path

* pylint & update model api version in docstring

---------

Co-authored-by: Wonju Lee <wonju.lee@intel.com>

* Bump albumentations version in anomaly requirements (#2350)

increment albumentations version

* Update action detection (#2346)

* Remove skip mark for PTQ test of action detection

* Update action detection documentation

* Fix e2e (#2348)

* Change classification dataset from dummy to toy

* Revert test changes

* Change label name for multilabel dataset

* Revert e2e test changes

* Change ov test cases' threshold

* Add parent's label

* Update ModelAPI in 1.4 release (#2347)

* Upgrade model API

* Update otx in exportable code

* Fix unit tests

* Fix black

* Fix detection inference

* Fix det tiling

* Fix mypy

* Fix demo

* Fix visualizer in demo

* Fix black

* Add OTX optimize for visual prompting task (#2318)

* Initial commit

* Update block

* (WIP) otx optimize

* Fix

* WIP

* Update configs & exported outputs

* Remove unused modules for torch

* Add unit tests

* pre-commit

* Update CHANGELOG

* Update detection docs (#2335)

* Update detection docs

* Revert template id changes

* Fix wrong template id

* Update docs/source/guide/explanation/algorithms/object_detection/object_detection.rst

Co-authored-by: Eunwoo Shin <eunwoo.shin@intel.com>

* Update docs/source/guide/explanation/algorithms/object_detection/object_detection.rst

Co-authored-by: Eunwoo Shin <eunwoo.shin@intel.com>

---------

Co-authored-by: Eunwoo Shin <eunwoo.shin@intel.com>

* Add visual prompting documentation (#2354)

* (WIP) write docs

* Add visual prompting documentation

* Update CHANGELOG

---------

Co-authored-by: sungchul.kim <sungchul@ikvensx010>

* Remove custom modelapi patch in visual prompting (#2359)

* Remove custom modelapi patch

* Update test

* Fix graph metric order and label issues (#2356)

* Fix graph metric going backward issue
* Add license notice
* Fix pre-commit issue
* Add rename items & logic for metric
---------
Signed-off-by: Songki Choi <songki.choi@intel.com>

* Update multi-label document and conversion script (#2358)

Update docs, label convert script

* Update third party programs (#2365)

* Make anomaly task compatible with older albumentations versions (#2363)

* fix transforms export in metadata

* wrap transform dict

* add todo for updating to_dict call

* Fixing detection saliency map for one class case (#2368)

* fix softmax

* fix validity tests

* Add e2e test for visual prompting (#2360)

* (WIP) otx optimize

* pre-commit

* (WIP) set e2e

* Remove nncf config

* Add visual prompting requirement

* Add visual prompting in tox

* Add visual prompting in setup.py

* Fix typo

* Delete unused configuration.yaml

* Edit test_name

* Add to limit activation range

* Update from `vp` to `visprompt`

* Fix not returning the first label

* pre-commit

* (WIP) otx optimize

* pre-commit

* (WIP) set e2e

* Remove nncf config

* Add visual prompting requirement

* Add visual prompting in tox

* Add visual prompting in setup.py

* Fix typo

* pre-commit

* Add actions

* Update tests/e2e/cli/visual_prompting/test_visual_prompting.py

Co-authored-by: Jaeguk Hyun <jaeguk.hyun@intel.com>

* Skip PTQ e2e test

* Change task name

* Remove skipped tc

---------

Co-authored-by: Jaeguk Hyun <jaeguk.hyun@intel.com>

* Fix e2e (#2366)

* Change e2e reference name

* Update openvino eval threshold for multiclass classification

* Change comment message

* Fix tiling e2e tests

---------

Co-authored-by: GalyaZalesskaya <galina.zalesskaya@intel.com>

* Add Dino head unit tests (#2344)

Recover DINO head unit tests

* Update for release 1.4.0rc2 (#2370)

* update for release 1.4.0rc2

* Add skip mark for unstable unit tests

---------

Co-authored-by: jaegukhyun <jaeguk.hyun@intel.com>

* Fix NNCF training on CPU (#2373)

* Align label order between Geti and OTX (#2369)

* align label order

* align with pre-commit

* update CHANGELOG.md

* deal with edge case

* update type hint

* Remove CenterCrop from Classification test pipeline and editing missing docs link (#2375)

* Fix missing link for docs and removing centercrop for classification data pipeline

* Revert the test threshold

* Fix H-label classification (#2377)

* Fix h-label issue

* Update unit tests

* Make black happy

* Fix unittests

* Make black happy

* Fix update headers information func

* Update the logic: consider the loss per batch

* Update for release 1.4 (#2380)

* updated for 1.4.0rc3

* update changelog & release note

* bump datumaro version up

---------

Co-authored-by: Songki Choi <songki.choi@intel.com>

* Switch to PTQ for sseg (#2374)

* Switch to PTQ for sseg

* Update log messages

* Fix invalid import structures in otx.api (#2383)

Update tiler.py

* Update for 1.4.0rc4 (#2385)

update for release 1.4.0rc4

* [release 1.4.0] XAI: Return saliency maps for Mask RCNN IR async infer (#2395)

* Return saliency maps for openvino async infer

* add workaround to fix yapf importing error

---------

Co-authored-by: eunwoosh <eunwoo.shin@intel.com>

* Update for release 1.4.0 (#2399)

update version string

Co-authored-by: Sungman Cho <sungman.cho@intel.com>

* Fix broken links in documentation (#2405)

* fix docs links to datumaro's docs
* fix docs links to otx's docs
* bump version to 1.4.1

* Update exportable code README (#2411)

* Updated for release 1.4.1 (#2412)

updated for release 1.4.1

* Add workaround for the incorrect meta info M-RCNN (used for XAI) (#2437)

Add workaround for the incorrect meta info

* Add model category attributes to model template (#2439)

Add model category attributes to model template

* Add model category & status fields in model template

* Add is_default_for_task attr to model template

* Update model templates with category attrs

* Add integration tests for model templates consistency

* Fix license & doc string

* Fix typo

* Refactor test cases

* Refactor common tests by generator

---------
Signed-off-by: Songki Choi <songki.choi@intel.com>

* Update for 1.4.2rc1 (#2441)

update for release 1.4.2rc1

* Fix label list order for h-label classification (#2440)

* Fix label list for h-label cls
* Fix unit tests

* Modified fq numbers for lite HRNET (#2445)

modified fq numbers for lite HRNET

* Update PTQ ignored scope for hrnet 18 mod2 (#2449)

Update PTQ ignored scope for hrnet 18 mod2

* Fix OpenVINO inference for legacy models (#2450)

* bug fix for legacy openvino models

* Add tests

* Specific exceptions

---------

* Update for 1.4.2rc2 (#2455)

update for release 1.4.2rc2

* Prevent zero-sized saliency map in tiling if tile size is too big (#2452)

* Prevent zero-sized saliency map in tiling if tile size is too big

* Prevent zero-sized saliency in tiling (PyTorch)

* Add unit tests for Tiler merge features methods

---------

Co-authored-by: Galina <galina.zalesskaya@intel.com>
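
The zero-sized saliency-map guard above can be illustrated with a minimal sketch. The names and the grid-based merge are hypothetical (not the actual OTX Tiler code); the point is clamping the tile grid to at least 1x1 when the tile is larger than the image:

```python
import numpy as np

def merge_tile_saliency_maps(image_size, tile_size, tile_values):
    """Merge per-tile saliency scores into a grid-shaped map.

    Illustrative sketch: when the tile is larger than the image,
    integer division would yield a 0x0 grid and a zero-sized saliency
    map, so the grid dimensions are clamped to at least 1x1.
    """
    img_h, img_w = image_size
    tile_h, tile_w = tile_size
    grid_h = max(1, img_h // tile_h)  # guard against a zero-sized grid
    grid_w = max(1, img_w // tile_w)
    merged = np.zeros((grid_h, grid_w), dtype=np.float32)
    for (row, col), value in tile_values.items():
        merged[min(row, grid_h - 1), min(col, grid_w - 1)] += value
    return merged
```

With a 100x100 image and a 512x512 tile the merged map is 1x1 instead of 0x0.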

* Update pot fq reference number (#2456)

update pot fq reference number to 15

* Bump datumaro version to 1.5.0rc0 (#2470)

bump datumaro version to 1.5.0rc0

* Set tox version constraint (#2472)

set tox version constraint - tox-dev/tox#3110

* Bug fix for albumentations (#2467)

* bug fix for legacy openvino models

* Address albumentation issue

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* update for release 1.4.2rc3

* Add a dummy hierarchical config required by MAPI (#2483)

* bump version to 1.4.2rc4

* Bump datumaro version (#2502)

* bump datumaro version

* remove deprecated/removed attribute usage of datumaro

* Upgrade nncf version for 1.4 release (#2459)

* Upgrade nncf version

* Fix nncf interface warning

* Set the exact nncf version

* Update FQ refs after NNCF upgrade

* Use NNCF from pypi

* Update version for release 1.4.2rc5 (#2507)

update version for release 1.4.2rc5

* Update for 1.4.2 (#2514)

update for release 1.4.2

* create branch release/1.5.0

* Delete mem cache handler after training is done (#2535)

release mem cache handler after training is done

* Fix bug that auto batch size doesn't consider distributed training (#2533)

* consider distributed training while searching batch size

* update unit test

* revert gpu memory upper bound

* fix typo

* change allocated to reserved

* add unit test for distributed training

* align with pre-commit

* Apply fix progress hook to release 1.5.0 (#2539)

* Fix hook's ordering issue. AdaptiveRepeatHook changes the runner.max_iters before the ProgressHook

* Change the expression

* Fix typo

* Fix multi-label, h-label issue

* Fix auto_bs issue

* Apply suggestions from code review

Co-authored-by: Eunwoo Shin <eunwoo.shin@intel.com>

* Reflecting reviews

* Refactor the name of get_data_cfg

* Revert adaptive hook sampler init

* Refactor the function name: get_data_cfg -> get_subset_data_cfg

* Fix unit test errors

* Remove adding AdaptiveRepeatDataHook for autobs

* Remove unused import

* Fix detection and segmentation case in Geti scenario

---------

Co-authored-by: Eunwoo Shin <eunwoo.shin@intel.com>

* Re-introduce adaptive scheduling for training (#2541)

* Re-introduce adaptive patience for training

* Revert unit tests

* Update for release 1.4.3rc1 (#2542)

* Mirror Anomaly ModelAPI changes (#2531)

* Migrate anomaly exportable code to modelAPI (#2432)

* Fix license in PR template

* Migrate to modelAPI

* Remove color conversion in streamer

* Remove reverse_input_channels

* Add float

* Remove test as metadata is no longer used

* Remove metadata from load method

* remove anomalib openvino inferencer

* fix signature

* Support legacy OpenVINO model

* Transform image

* add configs

* Re-introduce adaptive training (#2543)

* Re-introduce adaptive patience for training

* Revert unit tests

* Fix auto input size mismatch in eval & export (#2530)

* Fix auto input size mismatch in eval & export

* Re-enable E2E tests for Issue#2518

* Add input size check in export testing

* Format float numbers in log

* Fix NNCF export shape mismatch

* Fix saliency map issue

* Disable auto input size if tiling enabled

---------

Signed-off-by: Songki Choi <songki.choi@intel.com>

* Update ref. fq number for anomaly e2e2 (#2547)

* Skip e2e det tests by issue2548 (#2550)

* Add skip to chained TC for issue #2548 (#2552)

* Update for release 1.4.3 (#2551)

* Update MAPI for 1.5 release (#2555)

Upgrade MAPI to v0.1.6 (#2529)

* Upgrade MAPI

* Update exp code demo commit

* Fix MAPI imports

* Update ModelAPI configuration (#2564)

* Update MAPI rt info for detection

* Update export info for cls, det and seg

* Update unit tests

* Disable QAT for SegNexts (#2565)

* Disable NNCF QAT for SegNext

* Del obsolete pot configs

* Move NNCF skip marks to test commands to avoid duplication

* Add Anomaly modelAPI changes to releases/1.4.0 (#2563)

* bug fix for legacy openvino models

* Apply otx anomaly 1.5 changes

* Fix tests

* Fix compression config

* fix modelAPI imports

* update integration tests

* Edit config types

* Update keys in deployed model

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>
Co-authored-by: Kim, Sungchul <sungchul.kim@intel.com>

* Fix the CustomNonLinearClsHead when the batch_size is set to 1 (#2571)

Fix bn1d issue

Co-authored-by: sungmanc <sungmanc@intel.com>

* Update ModelAPI configuration (#2564 from 1.4) (#2568)

Update ModelAPI configuration (#2564)

* Update MAPI rt info for detection

* Update export info for cls, det and seg

* Update unit tests

* Update for 1.4.4rc1 (#2572)

* Hotfix DatasetEntity.get_combined_subset function loop (#2577)

Fix get_combined_subset function

* Revert default input size to `Default` due to YOLOX perf regression (#2580)

Signed-off-by: Songki Choi <songki.choi@intel.com>

* Fix for the degradation issue of the classification task (#2585)

* Revert to sync with 1.4.0

* Remove repeat data

* Convert to the RGB value

* Fix color conversion logic

* Fix precommit

* Bump datumaro version to 1.5.1rc3 (#2587)

* Add label ids to anomaly OpenVINO model xml (#2590)

* Add label ids to model xml

---------

* Fix DeiT-Tiny model regression during class incremental training (#2594)

* enable IBloss for DeiT-Tiny

* update changelog

* add docstring

* Add label ids to model xml in release 1.5 (#2591)

Add label ids to model xml

* Fix DeiT-Tiny regression test for release/1.4.0 (#2595)

* Fix DeiT regression test

* update changelog

* temp

* Fix mmcls bug not wrapping model in DataParallel on CPUs (#2601)

Wrap multi-label and h-label classification models by MMDataParallel in case of CPU training.
---------
Signed-off-by: Songki Choi <songki.choi@intel.com>

* Fix h-label loss normalization issue w/ exclusive label group of single label (#2604)

* Fix h-label loss normalization issue w/ exclusive label group with single label

* Fix non-linear version

---------
Signed-off-by: Songki Choi <songki.choi@intel.com>

* Boost up Image numpy accessing speed through PIL (#2586)

* boost up numpy accessing speed through PIL

* update CHANGELOG

* resolve precommit error

* resolve precommit error

* add fallback logic with PIL open

* use convert instead of draft

* Add missing import pathlib for cls e2e testing (#2610)

* Fix division by zero in class incremental learning for classification (#2606)

* Add empty label to reproduce zero-division error

Signed-off-by: Songki Choi <songki.choi@intel.com>

* Fix minor typo

Signed-off-by: Songki Choi <songki.choi@intel.com>

* Fix empty label 4 -> 3

Signed-off-by: Songki Choi <songki.choi@intel.com>

* Prevent division by zero

Signed-off-by: Songki Choi <songki.choi@intel.com>

* Update license

Signed-off-by: Songki Choi <songki.choi@intel.com>

* Update CHANGELOG.md

Signed-off-by: Songki Choi <songki.choi@intel.com>

* Fix inefficient sampling

Signed-off-by: Songki Choi <songki.choi@intel.com>

* Revert indexing

Signed-off-by: Songki Choi <songki.choi@intel.com>

* Fix minor typo

Signed-off-by: Songki Choi <songki.choi@intel.com>

---------

Signed-off-by: Songki Choi <songki.choi@intel.com>
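
The division-by-zero fix above comes down to guarding empty label groups during class-incremental sampling. A hedged sketch — the function name and weighting scheme are illustrative, not the actual OTX sampler code:

```python
def class_balanced_weights(samples_per_class):
    # Illustrative guard: a class with zero samples must not contribute
    # a 1/0 weight to the sampler, so it gets weight 0 instead.
    return [1.0 / n if n > 0 else 0.0 for n in samples_per_class]
```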

* Unify logger usage (#2612)

* unify logger

* align with pre-commit

* unify anomaly logger to otx

* change logger file path

* align with pre-commit

* change logger file path in missing file

* configure logger after ConfigManager is initialized

* configure logger when ConfigManager instance is initialized

* update unit test code

* move config_logger to each cli file

* align with pre-commit

* change part still using mmcv logger

* Fix XAI algorithm for Detection (#2609)

* Improve saliency maps algorithm for Detection

* Remove extra changes

* Update unit tests

* Changes for 1 class

* Fix pre-commit

* Update CHANGELOG

* Tighten dependency constraint only adapting latest patches (#2607)

* tighten dependency constraint only adapting latest patches

* adjust scikit-image version w.r.t python version

* adjust tensorboard version w.r.t python version

* remove version specifier for scikit-image
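
"Only adapting latest patches" refers to compatible-release pinning (PEP 440's `~=` operator): `~=1.4.2` accepts `1.4.x` for `x >= 2` but rejects `1.5.0`. A minimal pure-Python illustration of that semantics — real resolution should use `packaging.specifiers`; this sketch handles plain `X.Y.Z` versions only:

```python
def satisfies_compatible_release(version, spec):
    # Compatible-release check for plain X.Y.Z versions: all but the
    # last spec component must match exactly, and the last component of
    # the version must be at least the spec's.
    base = [int(p) for p in spec.lstrip("~=").split(".")]
    ver = [int(p) for p in version.split(".")]
    return ver[: len(base) - 1] == base[:-1] and ver[len(base) - 1] >= base[-1]
```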

* Add metadata to optimized model (#2618)

* bug fix for legacy openvino models

* Add metadata to optimized model

* Revert formatting changes

---------

Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>

* modify omegaconf version constraint

* [release 1.5.0] Fix XAI algorithm for Detection (#2617)

Update detection XAI algorithm

* Update dependency constraint (#2622)

* Update tpp (#2621)

* Fix h-label bug of missing parent labels in output (#2626)

* Fix h-label bug of missing parent labels in output

* Fix h-label test data label schema

* Update CHANGELOG.md

---------
Signed-off-by: Songki Choi <songki.choi@intel.com>

* Update publish workflow (#2625)

update publish workflow to push whl to internal pypi

* bump datumaro version to ~=1.5.0

* fixed mistake while merging back 1.4.4

* modify readme

* remove openvino model wrapper class

* remove openvino model wrapper tests

* [release 1.5.0] DeiT: enable tests + add ViTFeatureVectorHook (#2630)

Add ViT feature vector hook

* Fix docs broken link to datatumaro_h-label

Signed-off-by: Songki Choi <songki.choi@intel.com>

* Fix wrong label settings for non-anomaly task ModelAPIs

Signed-off-by: Songki Choi <songki.choi@intel.com>

* Update publish workflow for tag checking (#2632)

* Update e2e tests for XAI Detection (#2634)

Fix e2e XAI ref value

* Disable QAT for newly added models (#2636)

* Update release note and readme (#2637)

* update release note and readme

* remove package upload step on internal publish wf

* update release note, changelog, and readme

* update version string to 1.6.0dev

* fix datumaro version to 1.6.0rc0

* Mergeback 1.5.0 to develop (#2642)

* Update publish workflow for tag checking (#2632)

* Update e2e tests for XAI Detection (#2634)

* Disable QAT for newly added models (#2636)

* Update release note and readme (#2637)

* remove package upload step on internal publish wf

* update release note, changelog, and readme

* update version string to 1.6.0dev

---------

Co-authored-by: Galina Zalesskaya <galina.zalesskaya@intel.com>
Co-authored-by: Jaeguk Hyun <jaeguk.hyun@intel.com>

* Revert "Mergeback 1.5.0 to develop" (#2645)

Revert "Mergeback 1.5.0 to develop (#2642)"

This reverts commit 2f67686.

* Add a tool to help conduct experiments (#2651)

* implement run and experiment

* implement experiment result aggregator

* refactor experiment.py

* refactor run.py

* get export model speed

* add var column

* refactor experiment.py

* refine a way to update argument in cmd

* refine resource tracker

* support anomaly on research framework

* refine code aggregating exp result

* bugfix

* make other task available

* eval task saves avg_time_per_image as result

* Add new argument to track CPU&GPU utilization and memory usage (#2500)

* add argument to track resource usage

* fix bug

* fix a bug in a multi gpu case

* use total cpu usage

* add unit test

* add mark to unit test

* cover edge case

* add pynvml in requirement

* align with pre-commit

* add license comment

* update changelog

* refine argument help

* align with pre-commit

* add version to requirement and raise an error if not supported values are given

* apply new resource tracker format

* refactor run.py

* support optimize in research framework

* cover edge case

* Handle a case where fail cases exist

* make argparse raise error rather than exit if problem exist

* revert tensorboard aggregator

* bugfix

* save failed cases as yaml file

* deal with integer in variables

* add epoch to metric

* use latest log.json file

* align with otx logging method

* move experiment.py from cli to tools

* refactor experiment.py

* merge otx run feature into experiment.py

* move set_arguments_to_cmd definition into experiment.py

* refactor experiment.py

* bugfix

* minor bugfix

* use otx.cli instead of each otx entry

* add feature to parse single workspace

* add comments

* fix bugs

* align with pre-commit

* revert parser argument

* align with pre-commit

* Make `max_num_detections` configurable (#2647)

* Make max_num_detections configurable

* Fix RCNN case with integration test

* Apply max_num_detections to train_cfg, too

---------
Signed-off-by: Songki Choi <songki.choi@intel.com>

* Revert inference batch size to 1 for instance segmentation (#2648)

Signed-off-by: Songki Choi <songki.choi@intel.com>

* Fix CPU training issue on non-CUDA system (#2655)

Fix bug that auto adaptive batch size raises an error if CUDA isn't available (#2410)

---------
Co-authored-by: Sungman Cho <sungman.cho@intel.com>
Co-authored-by: Eunwoo Shin <eunwoo.shin@intel.com>

* Remove unnecessary log while building a model (#2658)

* revert logger in otx/algorithms/detection/adapters/mmdet/utils/builder.py

* revert logger in otx/algorithms/classification/adapters/mmcls/utils/builder.py

* make change more readable

* Fix a minor bug of experiment.py (#2662)

fix bug

* Not check avg_time_per_image during test (#2665)

* ignore avg_time_per_image during test

* do not call stdev when length of array is less than 2

* ignore avg_time_per_image during regression test
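
The "do not call stdev when length of array is less than 2" fix maps onto a standard guard: `statistics.stdev` raises `StatisticsError` for fewer than two data points. A sketch of the guard (the function name and the 0.0 fallback are illustrative, not the actual OTX code):

```python
import statistics

def safe_stdev(values):
    # statistics.stdev requires at least two data points; skip the
    # computation (returning 0.0) when there are fewer.
    return statistics.stdev(values) if len(values) >= 2 else 0.0
```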

* Update docs for enabling sphinx.ext.autosummary (#2654)

*  fix some errors/warnings on docs source

* enable sphinx-autosummary for API reference documentation

* Update Makefile

* update sphinx configuration

* Update PTQ docs (#2672)

* Replace POT -> PTQ

* Fixes from comments

* Update regression tests for develop (#2652)

* Update regression tests (#2556)

* update reg tests

* update test suit

* update regression criteria

---------

Co-authored-by: Eunwoo Shin <eunwoo.shin@intel.com>

* Exclude py37 target config for cibuildwheel (#2673)

* Add `--dryrun` option to tools/experiment.py (#2674)

* Fix variable override bug

* Add --dryrun option to see experiment list

---------
Signed-off-by: Songki Choi <songki.choi@intel.com>

* Update OTX explain CLI arguments (#2671)

* Change int8 to uint8 to XAI tests

* Add probabilities for CLI demo

* Rename arguments for explain

* Fix pre-commit

* Remove extra changes

* Fix integration tests

* Fix integration "explain_all_classes" test for OV

* Fix e2e tests for explain (#2681)

* Add README.md for experiment.py (#2688)

* write draft readme

* refine readme

* align with pre-commit

* Fix typo in reg test cmd (#2691)

* Select more proper model weight file according to commands run just before (#2696)

* consider more complex case when prepare eval and optimize

* update readme

* align with pre-commit

* add comment

* Add visual prompting zero-shot learning (`learn` & `infer`) (#2616)

* Add algobackend & temp configs

* Update config

* WIP

* Fix to enable `algo_backend`

* (WIP) Update dataset

* (WIP) Update configs

* (WIP) Update tasks

* (WIP) Update models

* Enable `learn` task through otx.train

* (WIP) enable `infer` (TODO : normalize points)

* Fix when `state_dict` is None

* Enable `ZeroShotInferenceCallback`

* Enable otx infer

* Enable to independently use processor

* Revert max_steps

* Change `postprocess_masks` to `staticmethod`

* Add `PromptGetter` & Enable `learn` and `infer`

* precommit

* Fix args

* Fix typo

* Change `id` to `id_`

* Fix import

* Fix args

* precommit

* (WIP) Add unit tests

* Fix

* Add unit tests

* Fix

* Add integration tests

* precommit

* Update CHANGELOG.md

* Update docstring and type annotations

* Fix

* precommit

* Fix unused args

* precommit

* Fix

* Fix unsupported dtype in ov graph constant converter (#2676)

* Fix unsupported dtype in ov graph constant converter

* Fix more ov-graph related unit tests

* Skip failure TC with adding issue number ref. (#2717)

* Fix visual prompting e2e test (#2719)

Skip zero-shot e2e

* Remove duplicated variable combination in experiment.py (#2713)

* Enhance detection & instance segmentation experiment (#2710)

* Compute precision and recall along with f-measure

* Log performance

* Accept ellipse annotation from datumaro format

* Fix dataset adapter condition for det/iset

* Insert garbage collection between experiments

* Upgrade NNCF & OpenVINO (#2656)

* Upgrade OV MAPI and NNCF version

* Update demo requirements

* Update changelog

* Update datumaro

* Add rust installation

* Update NNCF configs for IS models

* Update more fqs

* Exclude nncf from upgrade

* Revert "Update NNCF configs for IS models"

This reverts commit 7c8db8c.

* Revert "Update more fqs"

This reverts commit 5b91c32.

* Revert "Exclude nncf from upgrade"

This reverts commit 8926c51.

* Update FQs

* Revert "Revert "Update NNCF configs for IS models""

This reverts commit f904c0c.

* Disable explain for NNCF detection task

* Update FQs for anomaly

* Update cls FQs

* Update datumaro

* Update exportable code requirements

* Add unit test to cover the changes

* Fix multilabel classification class index (#2736)

Fix multilabel cls

* Refine parsing final score of training in experiment.py (#2738)

refine val parser

* Make mean teacher algorithm consider distributed training (#2729)

* make mean_teacher consider distributed training

* align with pre-commit

* re-enable test case

* move tensor not to cuda but current device

* apply comment

* Add visual prompting zero-shot learning (`export`, IR inference) (#2706)

* Add algobackend & temp configs

* Update config

* WIP

* Fix to enable `algo_backend`

* (WIP) Update dataset

* (WIP) Update configs

* (WIP) Update tasks

* (WIP) Update models

* Enable `learn` task through otx.train

* (WIP) enable `infer` (TODO : normalize points)

* Fix when `state_dict` is None

* Enable `ZeroShotInferenceCallback`

* Enable otx infer

* Enable to independently use processor

* Revert max_steps

* Change `postprocess_masks` to `staticmethod`

* Add `PromptGetter` & Enable `learn` and `infer`

* precommit

* Fix args

* Fix typo

* Change `id` to `id_`

* Fix import

* Fix args

* precommit

* (WIP) Add unit tests

* Fix

* Add unit tests

* Fix

* Add integration tests

* precommit

* Update CHANGELOG.md

* Update docstring and type annotations

* Fix

* precommit

* Reuse SAM modules for `export` & Add dataset

* Fix

* Enable `export`

* Convert fp32

* Update logic & tests

* Fix & Add prompt getter in `model_adapter_keys`

* Initial `Inferencer`, `Task`, and `Model`

* Fix to use original mask decoder during inference

* Remove internal loop in `PromptGetter`

* Update IO

* (WIP) Add unit tests for export

* Update `PromptGetter` to use only tensor ops

* Fix issue about `original_size` disappear in onnx graph

* (WIP) Add export unit test

* Update

* Fix typo

* Update

* Fix unexpected IF & update inputs to avoid issues where OV on CPU doesn't support dynamic operations

* Enable `PromptGetter` to handle #labels itself

* Add ov inferencer

* Fix overflow during casting dtype & duplicated cast

* Fix

* Add unit&integration tests

* pre-commit

* Fix original vpms

* Fix intg & e2e tests

* Change mo CLI to API

* precommit

* Remove blocks

* Update CHANGELOG.md

* Avoid repeatedly assigning constant tensors/arrays

* Fix typo

* Automate performance benchmark (#2742)

* Add parameterized perf test template

* Split accuracy / perf tests

* Automate speed test setting

* Add benchmark summary fixture

* Add multi/h-label tests

* Add detection tests

* Add instance segmentation tests

* Add tiling tests

* Add semantic segmentation tests

* Add anomaly test

* Update tools/experiment.py (#2751)

* have constant exp directory name

* support to parse dynamic eval output

* align with pre-commit

* fix minor unit test bug

* Add performance benchmark github action workflow (#2762)

* Split accuracy & speed benchmark github workflows (#2763)

* Fix a bug that an error is raised when train set size is greater than the minimum batch size in HPO by exactly 1 (#2760)

deal with HPO edge case
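
The HPO edge case above is an off-by-one in constructing the batch-size search space. A pure-Python sketch of the kind of guard involved — the function, bounds, and doubling scheme are illustrative assumptions, not OTX's actual HPO code:

```python
def batch_size_search_space(train_set_size, min_batch_size=2):
    # Clamp the upper bound to the train set size so that, e.g.,
    # train_set_size == min_batch_size + 1 still yields a valid,
    # non-empty candidate list instead of raising.
    upper = max(min_batch_size, min(train_set_size, 512))
    candidates = []
    bs = min_batch_size
    while bs <= upper:
        candidates.append(bs)
        bs *= 2
    if candidates[-1] != upper:
        candidates.append(upper)  # always include the clamped upper bound
    return candidates
```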

* Fix a bug that a process tracking resource usage doesn't exit when main process raises an error (#2765)

* terminate a process tracking resource usage if main process raises an error

* call stop() only if ExitStack is None

* Skip large datasets for iSeg perf benchmark (#2766)

* Support multiple experiments in single recipe for tools/experiment.py (#2757)

* implement draft version

* update logging failed cases

* align with pre-commit

* add doc string

* Update README file

* fix bugs: single command, failed case output

* exclude first epoch from calculating iter time

* fix weird name used when there are no variables

* align with pre-commit

* initialize iter_time and data_time at first

* Enable perf benchmark result logging to mlflow server (#2768)

* Bump datumaro version to 1.6.0rc1 (#2784)

* bump datumaro version to 1.6.0rc1

* remove rust toolchain installation step from workflows

* Update perf logging (#2785)

* Update perf logging workflow to get branch+sha from gh context (#2791)

* update perf logging workflow to get branch+sha from gh context
* skip logging when tracking server uri is not configured

* Add visual prompting zero-shot learning (optimize, documentation, bug fixes) (#2753)

* Fix to resize bbox

* (WIP) Add post-checking between masks with different labels

* Fix to use the first mask in the first loop

* Add post-checking between masks with different labels

* pre-commit

* Add optimize task

* pre-commit

* Add e2e

* Update documentation

* Update CHANGELOG

* Check performance benchmark result with reference (#2821)

* Average 'Small' (/1 /2 /3) dataset benchmark results

* Load perf result with indexing

* Add speed ref check for all tasks

* Add accuracy ref check for all tasks

* Mergeback releases/1.5.0 to develop (#2830)

* Update MAPI version (#2730)

* Update dependency for exportable code (#2732)

* Filter invalid polygon shapes (#2795)

---------

Co-authored-by: Vladislav Sovrasov <sovrasov.vlad@gmail.com>
Co-authored-by: Eugene Liu <eugene.liu@intel.com>

* Create OSSF scorecard workflow (#2831)

* Fix ossf/scorecard-action version (#2832)

* Update scorecard.yml

* Update perf benchmark reference (#2843)

* Set default wf permission to read-all (#2882)

* Remedy token permission issue (#2888)

* remedy token-permission issues - part2

* removed dispatch event from scorecard wf

* Add progress callback interface to HPO (#2889)

* add progress callback as HPO argument

* deal with edge case

* Restrict configurable parameters to avoid unreasonable cost for SaaS trial (#2891)

* Reduce max value of POT samples to 1k

* Reduce max value of num_iters to 1k

* Fix pre-commit

* Fix more token-permission issues - part3 (#2893)

* Resolve pinned-dependency issues on publish_internal workflow (#2907)

* Forward unittest workloads to AWS (#2887)

* Resolve pinned dependency issues on workflows (#2909)

* Fix pinned-dependency issues - part2 (#2911)

* Add pinning dependencies (#2916)

* Update pip install cmd to use hashes (#2919)

* Fix HPO progress callback bug (#2908)

fix minor bug

* Fix pinned-dependencies issues (#2929)

* Remove unused test files (#2930)

* Update weekly workflow to run perf tests (#2920)

* update weekly workflow to run perf tests

* Fix missing fixture in perf test

* update input to perf tests for weekly

---------

Co-authored-by: Songki Choi <songki.choi@intel.com>

* Adjust permission of documentation workflows from pages to contents for writing (#2933)

* remove unused import

---------

Signed-off-by: Kim, Vinnam <vinnam.kim@intel.com>
Signed-off-by: Songki Choi <songki.choi@intel.com>
Co-authored-by: Yunchu Lee <yunchu.lee@intel.com>
Co-authored-by: Kim, Sungchul <sungchul.kim@intel.com>
Co-authored-by: Vinnam Kim <vinnam.kim@intel.com>
Co-authored-by: Evgeny Tsykunov <evgeny.tsykunov@intel.com>
Co-authored-by: Songki Choi <songki.choi@intel.com>
Co-authored-by: Eunwoo Shin <eunwoo.shin@intel.com>
Co-authored-by: Jaeguk Hyun <jaeguk.hyun@intel.com>
Co-authored-by: Sungman Cho <sungman.cho@intel.com>
Co-authored-by: Eugene Liu <eugene.liu@intel.com>
Co-authored-by: Wonju Lee <wonju.lee@intel.com>
Co-authored-by: Dick Ameln <dick.ameln@intel.com>
Co-authored-by: Vladislav Sovrasov <sovrasov.vlad@gmail.com>
Co-authored-by: sungchul.kim <sungchul@ikvensx010>
Co-authored-by: GalyaZalesskaya <galina.zalesskaya@intel.com>
Co-authored-by: Harim Kang <harim.kang@intel.com>
Co-authored-by: Ashwin Vaidya <ashwin.vaidya@intel.com>
Co-authored-by: Ashwin Vaidya <ashwinitinvaidya@gmail.com>
Co-authored-by: sungmanc <sungmanc@intel.com>
19 people authored Feb 20, 2024
1 parent 54751ca commit 6914551
Showing 17 changed files with 157 additions and 176 deletions.
11 changes: 9 additions & 2 deletions .github/workflows/code_scan.yml
@@ -20,7 +20,10 @@ jobs:
with:
python-version: "3.10"
- name: Install dependencies
run: python -m pip install tox==4.21.1
run: |
pip install --require-hashes --no-deps -r requirements/gh-actions.txt
pip-compile --generate-hashes -o /tmp/otx-dev-requirements.txt requirements/dev.txt
pip install --require-hashes --no-deps -r /tmp/otx-dev-requirements.txt
- name: Trivy Scanning
env:
TRIVY_DOWNLOAD_URL: ${{ vars.TRIVY_DOWNLOAD_URL }}
@@ -43,7 +46,11 @@ jobs:
with:
python-version: "3.10"
- name: Install dependencies
run: python -m pip install tox==4.21.1
run: |
pip install --require-hashes --no-deps -r requirements/gh-actions.txt
pip-compile --generate-hashes -o /tmp/otx-dev-requirements.txt requirements/dev.txt
pip install --require-hashes --no-deps -r /tmp/otx-dev-requirements.txt
rm /tmp/otx-dev-requirements.txt
- name: Bandit Scanning
run: tox -e bandit-scan
- name: Upload Bandit artifact
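The same dependency-pinning pattern recurs across the workflows in this commit: a hash-verified bootstrap of pip-tools from `requirements/gh-actions.txt`, a `pip-compile --generate-hashes` pass over the unpinned dev requirements, and a hash-checked install of the resulting lock file. A minimal standalone sketch of such a step, using the file paths from this diff:

```yaml
- name: Install dependencies
  run: |
    # Bootstrap pip-tools itself from a hash-pinned requirements file.
    pip install --require-hashes --no-deps -r requirements/gh-actions.txt
    # Resolve dev requirements into a fully hash-pinned lock file.
    pip-compile --generate-hashes -o /tmp/otx-dev-requirements.txt requirements/dev.txt
    # Install only from that lock file; any digest mismatch aborts the install.
    pip install --require-hashes --no-deps -r /tmp/otx-dev-requirements.txt
    rm /tmp/otx-dev-requirements.txt
```

With `--require-hashes`, pip refuses any requirement that lacks a pinned digest, which is why `--no-deps` is paired with a fully resolved lock file rather than letting pip pull in unpinned transitive dependencies.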
6 changes: 5 additions & 1 deletion .github/workflows/docs.yml
@@ -21,7 +21,11 @@ jobs:
with:
python-version: "3.10"
- name: Install dependencies
run: python -m pip install -r requirements/dev.txt
run: |
pip install --require-hashes --no-deps -r requirements/gh-actions.txt
pip-compile --generate-hashes -o /tmp/otx-dev-requirements.txt requirements/dev.txt
pip install --require-hashes --no-deps -r /tmp/otx-dev-requirements.txt
rm /tmp/otx-dev-requirements.txt
- name: Build-Docs
run: tox -e build-doc
- name: Create gh-pages branch
6 changes: 5 additions & 1 deletion .github/workflows/docs_stable.yml
@@ -22,7 +22,11 @@ jobs:
with:
python-version: "3.10"
- name: Install dependencies
run: python -m pip install -r requirements/dev.txt
run: |
pip install --require-hashes --no-deps -r requirements/gh-actions.txt
pip-compile --generate-hashes -o /tmp/otx-dev-requirements.txt requirements/dev.txt
pip install --require-hashes --no-deps -r /tmp/otx-dev-requirements.txt
rm /tmp/otx-dev-requirements.txt
- name: Build-Docs
run: tox -e build-doc
- name: Create gh-pages branch
30 changes: 29 additions & 1 deletion .github/workflows/perf-accuracy.yml
@@ -33,6 +33,34 @@ on:
- export
- optimize
default: optimize
artifact-prefix:
type: string
default: perf-accuracy-benchmark
workflow_call:
inputs:
model-type:
type: string
description: Model type to run benchmark [default, all]
default: default
data-size:
type: string
description: Dataset size to run benchmark [small, medium, large, all]
default: all
num-repeat:
type: number
description: Overrides default per-data-size number of repeat setting
default: 0
num-epoch:
type: number
description: Overrides default per-model number of epoch setting
default: 0
eval-upto:
type: string
description: The last operation to evaluate. 'optimize' means all. [train, export, optimize]
default: optimize
artifact-prefix:
type: string
default: perf-accuracy-benchmark

# Declare default permissions as read only.
permissions: read-all
@@ -73,4 +101,4 @@ jobs:
task: ${{ matrix.task }}
timeout-minutes: 8640
upload-artifact: true
artifact-prefix: perf-accuracy-benchmark
artifact-prefix: ${{ inputs.artifact-prefix }}
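Adding a `workflow_call` trigger alongside the existing `workflow_dispatch` lets other workflows in the repository reuse this benchmark as a job. A minimal caller sketch (input values here are illustrative, mirroring the defaults declared above):

```yaml
jobs:
  Performance-Accuracy-Tests:
    uses: ./.github/workflows/perf-accuracy.yml
    with:
      model-type: default
      data-size: all
      eval-upto: optimize
      artifact-prefix: weekly-perf-accuracy-benchmark
```

Inputs omitted by the caller fall back to the `default:` values declared under `workflow_call.inputs`.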
30 changes: 29 additions & 1 deletion .github/workflows/perf-speed.yml
@@ -33,6 +33,34 @@ on:
- export
- optimize
default: optimize
artifact-prefix:
type: string
default: perf-speed-benchmark
workflow_call:
inputs:
model-type:
type: string
description: Model type to run benchmark [default, all]
default: default
data-size:
type: string
description: Dataset size to run benchmark [small, medium, large, all]
default: medium
num-repeat:
type: number
description: Overrides default per-data-size number of repeat setting
default: 1
num-epoch:
type: number
description: Overrides default per-model number of epoch setting
default: 3
eval-upto:
type: string
description: The last operation to evaluate. 'optimize' means all [train, export, optimize]
default: optimize
artifact-prefix:
type: string
default: perf-speed-benchmark

# Declare default permissions as read only.
permissions: read-all
@@ -59,4 +87,4 @@ jobs:
task: all
timeout-minutes: 8640
upload-artifact: true
artifact-prefix: perf-speed-benchmark
artifact-prefix: ${{ inputs.artifact-prefix }}
6 changes: 4 additions & 2 deletions .github/workflows/pre_merge.yml
@@ -31,9 +31,10 @@ jobs:
python-version: "3.10"
- name: Install dependencies
run: |
pip install pip-tools==7.3.0
pip install --require-hashes --no-deps -r requirements/gh-actions.txt
pip-compile --generate-hashes -o /tmp/otx-dev-requirements.txt requirements/dev.txt
pip install --require-hashes --no-deps -r /tmp/otx-dev-requirements.txt
rm /tmp/otx-dev-requirements.txt
- name: Code quality checks
run: tox -vv -e pre-commit-all-py310-pt1
Unit-Test:
@@ -79,9 +80,10 @@ jobs:
python-version: "3.8"
- name: Install dependencies
run: |
pip install pip-tools==7.3.0
pip install --require-hashes --no-deps -r requirements/gh-actions.txt
pip-compile --generate-hashes -o /tmp/otx-dev-requirements.txt requirements/dev.txt
pip install --require-hashes --no-deps -r /tmp/otx-dev-requirements.txt
rm /tmp/otx-dev-requirements.txt
- name: Run unit test
run: tox -vv -e unittest-all-py38-pt1
- name: Upload coverage artifact
3 changes: 2 additions & 1 deletion .github/workflows/publish.yml
@@ -33,9 +33,10 @@ jobs:
python-version: "3.10"
- name: Install pypa/build
run: |
pip install pip-tools==7.3.0
pip install --require-hashes --no-deps -r requirements/gh-actions.txt
pip-compile --generate-hashes -o /tmp/otx-publish-requirements.txt requirements/publish.txt
pip install --require-hashes --no-deps -r /tmp/otx-publish-requirements.txt
rm /tmp/otx-publish-requirements.txt
- name: Build sdist
run: python -m build --sdist
- uses: actions/upload-artifact@a8a3f3ad30e3422c9c7b888a15615d19a852ae32 # v3.1.3
6 changes: 4 additions & 2 deletions .github/workflows/publish_internal.yml
@@ -31,9 +31,10 @@ jobs:
python-version: "3.10"
- name: Install pypa/build
run: |
pip install pip-tools==7.3.0
pip install --require-hashes --no-deps -r requirements/gh-actions.txt
pip-compile --generate-hashes -o /tmp/otx-publish-requirements.txt requirements/publish.txt
pip install --require-hashes --no-deps -r /tmp/otx-publish-requirements.txt
rm /tmp/otx-publish-requirements.txt
- name: Build sdist
run: python -m build --sdist
- uses: actions/upload-artifact@a8a3f3ad30e3422c9c7b888a15615d19a852ae32 # v3.1.3
@@ -56,9 +57,10 @@ jobs:
python-version: "3.10"
- name: Install dependencies
run: |
pip install pip-tools==7.3.0
pip install --require-hashes --no-deps -r requirements/gh-actions.txt
pip-compile --generate-hashes -o /tmp/otx-publish-requirements.txt requirements/publish.txt
pip install --require-hashes --no-deps -r /tmp/otx-publish-requirements.txt
rm /tmp/otx-publish-requirements.txt
- name: Download artifacts
uses: actions/download-artifact@9bc31d5ccc31df68ecc42ccf4149144866c47d8a # v3.0.2
with:
3 changes: 2 additions & 1 deletion .github/workflows/run_tests_in_tox.yml
@@ -52,9 +52,10 @@ jobs:
python-version: ${{ inputs.python-version }}
- name: Install dependencies
run: |
pip install pip-tools==7.3.0
pip install --require-hashes --no-deps -r requirements/gh-actions.txt
pip-compile --generate-hashes -o /tmp/otx-dev-requirements.txt requirements/dev.txt
pip install --require-hashes --no-deps -r /tmp/otx-dev-requirements.txt
rm /tmp/otx-dev-requirements.txt
- name: Run Tests
env:
MLFLOW_TRACKING_SERVER_URI: ${{ vars.MLFLOW_TRACKING_SERVER_URI }}
3 changes: 2 additions & 1 deletion .github/workflows/run_tests_in_tox_custom.yml
@@ -58,9 +58,10 @@ jobs:
python-version: ${{ inputs.python-version }}
- name: Install dependencies
run: |
pip install pip-tools==7.3.0
pip install --require-hashes --no-deps -r requirements/gh-actions.txt
pip-compile --generate-hashes -o /tmp/otx-dev-requirements.txt requirements/dev.txt
pip install --require-hashes --no-deps -r /tmp/otx-dev-requirements.txt
rm /tmp/otx-dev-requirements.txt
- name: Run Tests
env:
MLFLOW_TRACKING_SERVER_URI: ${{ vars.MLFLOW_TRACKING_SERVER_URI }}
56 changes: 19 additions & 37 deletions .github/workflows/weekly.yml
@@ -10,41 +10,23 @@ on:
permissions: read-all

jobs:
Regression-Tests:
strategy:
fail-fast: false
matrix:
include:
- toxenv_task: "iseg"
test_dir: "tests/regression/instance_segmentation/test_instance_segmentation.py"
task: "instance_segmentation"
- toxenv_task: "iseg_t"
test_dir: "tests/regression/instance_segmentation/test_tiling_instance_segmentation.py"
task: "instance_segmentation"
- toxenv_task: "seg"
test_dir: "tests/regression/semantic_segmentation"
task: "segmentation"
- toxenv_task: "det"
test_dir: "tests/regression/detection"
task: "detection"
- toxenv_task: "ano"
test_dir: "tests/regression/anomaly"
task: "anomaly"
- toxenv_task: "act"
test_dir: "tests/regression/action"
task: "action"
- toxenv_task: "cls"
test_dir: "tests/regression/classification"
task: "classification"
name: Regression-Test-py310-${{ matrix.toxenv_task }}
uses: ./.github/workflows/run_tests_in_tox.yml
Performance-Speed-Tests:
name: Performance-Speed-py310
uses: ./.github/workflows/perf-speed.yml
with:
python-version: "3.10"
toxenv-pyver: "py310"
toxenv-task: ${{ matrix.toxenv_task }}
tests-dir: ${{ matrix.test_dir }}
runs-on: "['self-hosted', 'Linux', 'X64', 'dmount']"
task: ${{ matrix.task }}
timeout-minutes: 8640
upload-artifact: true
artifact-prefix: "weekly-test-results"
model-type: default
data-size: medium
num-repeat: 1
num-epoch: 3
eval-upto: optimize
artifact-prefix: weekly-perf-speed-benchmark
Performance-Accuracy-Tests:
name: Performance-Accuracy-py310
uses: ./.github/workflows/perf-accuracy.yml
with:
model-type: default
data-size: all
num-repeat: 0
num-epoch: 0
eval-upto: optimize
artifact-prefix: weekly-perf-accuracy-benchmark
45 changes: 45 additions & 0 deletions requirements/gh-actions.txt
@@ -0,0 +1,45 @@
#
# This file is autogenerated by pip-compile with Python 3.10
# by the following command:
#
# pip-compile --generate-hashes --output-file=requirements.txt requirements/gh-actions.txt
#
build==1.0.3 \
--hash=sha256:538aab1b64f9828977f84bc63ae570b060a8ed1be419e7870b8b4fc5e6ea553b \
--hash=sha256:589bf99a67df7c9cf07ec0ac0e5e2ea5d4b37ac63301c4986d1acb126aa83f8f
# via pip-tools
click==8.1.7 \
--hash=sha256:ae74fb96c20a0277a1d615f1e4d73c8414f5a98db8b799a7931d1582f3390c28 \
--hash=sha256:ca9853ad459e787e2192211578cc907e7594e294c7ccc834310722b41b9ca6de
# via pip-tools
packaging==23.2 \
--hash=sha256:048fb0e9405036518eaaf48a55953c750c11e1a1b68e0dd1a9d62ed0c092cfc5 \
--hash=sha256:8c491190033a9af7e1d931d0b5dacc2ef47509b34dd0de67ed209b5203fc88c7
# via build
pip-tools==7.4.0 \
--hash=sha256:a92a6ddfa86ff389fe6ace381d463bc436e2c705bd71d52117c25af5ce867bb7 \
--hash=sha256:b67432fd0759ed834c5367f9e0ce8c95441acecfec9c8e24b41aca166757adf0
# via -r requirements/gh-actions.txt
pyproject-hooks==1.0.0 \
--hash=sha256:283c11acd6b928d2f6a7c73fa0d01cb2bdc5f07c57a2eeb6e83d5e56b97976f8 \
--hash=sha256:f271b298b97f5955d53fb12b72c1fb1948c22c1a6b70b315c54cedaca0264ef5
# via
# build
# pip-tools
tomli==2.0.1 \
--hash=sha256:939de3e7a6161af0c887ef91b7d41a53e7c5a1ca976325f429cb46ea9bc30ecc \
--hash=sha256:de526c12914f0c550d15924c62d72abc48d6fe7364aa87328337a31007fe8a4f
# via
# build
# pip-tools
# pyproject-hooks
wheel==0.42.0 \
--hash=sha256:177f9c9b0d45c47873b619f5b650346d632cdc35fb5e4d25058e09c9e581433d \
--hash=sha256:c45be39f7882c9d34243236f2d63cbd58039e360f85d0913425fbd7ceea617a8
# via pip-tools

# WARNING: The following packages were not pinned, but pip requires them to be
# pinned when the requirements file includes hashes and the requirement is not
# satisfied by a package already installed. Consider using the --allow-unsafe flag.
# pip
# setuptools
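Each pinned entry above pairs an exact version with one or more `--hash=sha256:` digests; in `--require-hashes` mode, pip recomputes the digest of every downloaded artifact and aborts on any mismatch. A minimal sketch of that verification idea (file contents and helper names here are made up for illustration, not pip internals):

```python
import hashlib
import os
import tempfile


def sha256_of(path: str, chunk_size: int = 8192) -> str:
    """Compute the sha256 hex digest of a file, streaming in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        while chunk := f.read(chunk_size):
            digest.update(chunk)
    return digest.hexdigest()


def verify_pin(path: str, expected_hashes: set[str]) -> bool:
    """Return True only if the file matches one of the pinned digests."""
    return sha256_of(path) in expected_hashes


if __name__ == "__main__":
    # Write a known payload, then verify it against its precomputed digest.
    with tempfile.NamedTemporaryFile(delete=False) as tmp:
        tmp.write(b"demo wheel contents")
        path = tmp.name
    expected = hashlib.sha256(b"demo wheel contents").hexdigest()
    print(verify_pin(path, {expected}))  # True
    print(verify_pin(path, {"0" * 64}))  # False
    os.unlink(path)
```

Listing several digests per package, as the generated file above does, covers the case where a release publishes both an sdist and one or more wheels: any one matching digest is accepted.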
@@ -6,8 +6,6 @@
from mmcls.models.builder import HEADS
from mmcls.models.heads import VisionTransformerClsHead

from otx.algorithms.common.utils import cast_bf16_to_fp32


@HEADS.register_module()
class CustomVisionTransformerClsHead(VisionTransformerClsHead):
@@ -34,15 +32,6 @@ def loss(self, cls_score, gt_label, feature=None):
losses["loss"] = loss
return losses

def post_process(self, pred):
"""Post processing."""
pred = cast_bf16_to_fp32(pred)
return super().post_process(pred)

def forward(self, x):
"""Forward fuction of CustomVisionTransformerClsHead class."""
return self.simple_test(x)

def forward_train(self, x, gt_label, **kwargs):
"""Forward_train fuction of CustomVisionTransformerClsHead class."""
x = self.pre_logits(x)
4 changes: 2 additions & 2 deletions tests/perf/test_classification.py
Original file line number Diff line number Diff line change
Expand Up @@ -52,7 +52,7 @@ class TestPerfSingleLabelClassification:

@pytest.mark.parametrize("fxt_model_id", MODEL_TEMPLATES, ids=MODEL_IDS, indirect=True)
@pytest.mark.parametrize("fxt_benchmark", BENCHMARK_CONFIGS.items(), ids=BENCHMARK_CONFIGS.keys(), indirect=True)
def test_accuracy(self, fxt_model_id: str, fxt_benchmark: OTXBenchmark):
def test_accuracy(self, fxt_model_id: str, fxt_benchmark: OTXBenchmark, fxt_check_benchmark_result: Callable):
"""Benchmark accruacy metrics."""
result = fxt_benchmark.run(
model_id=fxt_model_id,
@@ -301,7 +301,7 @@ def test_accuracy(self, fxt_model_id: str, fxt_benchmark: OTXBenchmark, fxt_chec

@pytest.mark.parametrize("fxt_model_id", MODEL_TEMPLATES, ids=MODEL_IDS, indirect=True)
@pytest.mark.parametrize("fxt_benchmark", BENCHMARK_CONFIGS.items(), ids=BENCHMARK_CONFIGS.keys(), indirect=True)
def test_speed(self, fxt_model_id: str, fxt_benchmark: OTXBenchmark, fxt_check_benchmark_results: Callable):
def test_speed(self, fxt_model_id: str, fxt_benchmark: OTXBenchmark, fxt_check_benchmark_result: Callable):
"""Benchmark train time per iter / infer time per image."""
fxt_benchmark.track_resources = True
result = fxt_benchmark.run(
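The diff above fixes the fixture name so every perf test receives the same `fxt_check_benchmark_result` callable, which compares a benchmark run against stored reference numbers. A minimal sketch of what such a checker might look like (the tolerance logic, field names, and factory function here are assumptions for illustration, not the actual fixture):

```python
def make_benchmark_checker(reference: dict, rel_tol: float = 0.1):
    """Build a checker that flags metrics deviating from their reference
    value by more than rel_tol (relative). The checker returns a list of
    (metric, actual, expected) tuples for every out-of-tolerance metric."""
    def check(result: dict) -> list:
        failures = []
        for metric, expected in reference.items():
            actual = result.get(metric)
            if actual is None:
                failures.append((metric, None, expected))
            elif abs(actual - expected) > rel_tol * abs(expected):
                failures.append((metric, actual, expected))
        return failures
    return check


checker = make_benchmark_checker({"accuracy": 0.90, "train_e2e_time": 120.0})
print(checker({"accuracy": 0.89, "train_e2e_time": 118.0}))  # [] -> within tolerance
print(checker({"accuracy": 0.70, "train_e2e_time": 118.0}))  # accuracy flagged
```

In a pytest suite, a factory like this would typically be wrapped in a fixture that loads the reference numbers from disk and returns the `check` callable, which is why the tests above accept it as a `Callable` parameter.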
22 changes: 0 additions & 22 deletions tests/run_code_checks.sh

This file was deleted.
