diff --git a/benchmark_results.md b/benchmark_results.md
index 325b3445..ae930e38 100644
--- a/benchmark_results.md
+++ b/benchmark_results.md
@@ -114,5 +114,5 @@
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
diff --git a/configs/README.md b/configs/README.md
index 1949ff20..0e9a17ef 100644
--- a/configs/README.md
+++ b/configs/README.md
@@ -31,17 +31,17 @@ Please follow the outline structure and **table format** shown in [densenet/READ
#### Table Format
-
+
| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
| ----------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- |
| densenet121 | 8.06 | 8 | 32 | 224x224 | O2 | 300s | 47,34 | 5446.81 | 75.67 | 92.77 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_121_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/densenet/densenet121-bf4ab27f-910v2.ckpt) |
-
+
Illustration:
- Model: model name in lower case with _ seperator.
-- Top-1 and Top-5: Accuracy reported on the validatoin set of ImageNet-1K. Keep 2 digits after the decimal point.
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K. Keep 2 digits after the decimal point.
- Params (M): # of model parameters in millions (10^6). Keep **2 digits** after the decimal point
- Batch Size: Training batch size
- Cards: # of cards
diff --git a/configs/bit/README.md b/configs/bit/README.md
index 4364f065..70d53463 100644
--- a/configs/bit/README.md
+++ b/configs/bit/README.md
@@ -2,10 +2,6 @@
> [Big Transfer (BiT): General Visual Representation Learning](https://arxiv.org/abs/1912.11370)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
## Introduction
@@ -17,30 +13,10 @@ is required. 3) Long pre-training time: Pretraining on a larger dataset requires
BiT use GroupNorm combined with Weight Standardisation instead of BatchNorm. Since BatchNorm performs worse when the number of images on each accelerator is
too low. 5) With BiT fine-tuning, good performance can be achieved even if there are only a few examples of each type on natural images.[[1, 2](#References)]
-
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-*coming soon*
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ------------ | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
-| bit_resnet50 | 25.55 | 8 | 32 | 224x224 | O2 | 146s | 74.52 | 3413.33 | 76.81 | 93.17 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/bit/bit_resnet50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/bit/BiT_resnet50-1e4795a4.ckpt) |
-
-
-
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
## Quick Start
@@ -87,6 +63,26 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/bit/bit_resnet50_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are run on Ascend 910* with MindSpore 2.3.1 in graph mode.
+
+*coming soon*
+
+Experiments are run on Ascend 910 with MindSpore 2.3.1 in graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ------------ | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
+| bit_resnet50 | 25.55 | 8 | 32 | 224x224 | O2 | 146s | 74.52 | 3413.33 | 76.81 | 93.17 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/bit/bit_resnet50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/bit/BiT_resnet50-1e4795a4.ckpt) |
+
+
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
diff --git a/configs/cmt/README.md b/configs/cmt/README.md
index c3edf53f..41fd3978 100644
--- a/configs/cmt/README.md
+++ b/configs/cmt/README.md
@@ -2,10 +2,6 @@
> [CMT: Convolutional Neural Networks Meet Vision Transformers](https://arxiv.org/abs/2107.06263)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
## Introduction
@@ -14,29 +10,11 @@ dependencies and extract local information. In addition, to reduce computation c
and depthwise convolution and pointwise convolution like MobileNet. By combing these parts, CMT could get a SOTA performance
on ImageNet-1K dataset.
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-*coming soon*
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------ |
-| cmt_small | 26.09 | 8 | 128 | 224x224 | O2 | 1268s | 500.64 | 2048.01 | 83.24 | 96.41 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/cmt/cmt_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/cmt/cmt_small-6858ee22.ckpt) |
-
-
-
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -83,6 +61,23 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/cmt/cmt_small_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are run on Ascend 910* with MindSpore 2.3.1 in graph mode.
+
+*coming soon*
+
+Experiments are run on Ascend 910 with MindSpore 2.3.1 in graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------ |
+| cmt_small | 26.09 | 8 | 128 | 224x224 | O2 | 1268s | 500.64 | 2048.01 | 83.24 | 96.41 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/cmt/cmt_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/cmt/cmt_small-6858ee22.ckpt) |
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
diff --git a/configs/coat/README.md b/configs/coat/README.md
index 59bedefc..6899d69d 100644
--- a/configs/coat/README.md
+++ b/configs/coat/README.md
@@ -2,37 +2,37 @@
> [Co-Scale Conv-Attentional Image Transformers](https://arxiv.org/abs/2104.06399v2)
+## Introduction
+
+Co-Scale Conv-Attentional Image Transformer (CoaT) is a Transformer-based image classifier equipped with co-scale and conv-attentional mechanisms. First, the co-scale mechanism maintains the integrity of Transformers' encoder branches at individual scales, while allowing representations learned at different scales to effectively communicate with each other. Second, the conv-attentional mechanism is designed by realizing a relative position embedding formulation in the factorized attention module with an efficient convolution-like implementation. CoaT empowers image Transformers with enriched multi-scale and contextual modeling capabilities.
+
## Requirements
| mindspore | ascend driver | firmware | cann toolkit/kernel |
| :-------: | :-----------: | :---------: | :-----------------: |
| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-## Introduction
-
-Co-Scale Conv-Attentional Image Transformer (CoaT) is a Transformer-based image classifier equipped with co-scale and conv-attentional mechanisms. First, the co-scale mechanism maintains the integrity of Transformers' encoder branches at individual scales, while allowing representations learned at different scales to effectively communicate with each other. Second, the conv-attentional mechanism is designed by realizing a relative position embedding formulation in the factorized attention module with an efficient convolution-like implementation. CoaT empowers image Transformers with enriched multi-scale and contextual modeling capabilities.
-
## Performance
Our reproduced model performance on ImageNet-1K is reported as follows.
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
+Experiments are run on Ascend 910* with MindSpore 2.3.1 in graph mode.
*coming soon*
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
+Experiments are run on Ascend 910 with MindSpore 2.3.1 in graph mode.
+
-
| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | -------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------- |
| coat_tiny | 5.50 | 8 | 32 | 224x224 | O2 | 543s | 254.95 | 1003.92 | 79.67 | 94.88 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/coat/coat_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/coat/coat_tiny-071cb792.ckpt) |
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
diff --git a/configs/convit/README.md b/configs/convit/README.md
index 8b6225cc..5475c9fc 100644
--- a/configs/convit/README.md
+++ b/configs/convit/README.md
@@ -1,10 +1,6 @@
# ConViT
> [ConViT: Improving Vision Transformers with Soft Convolutional Inductive Biases](https://arxiv.org/abs/2103.10697)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
## Introduction
@@ -24,36 +20,12 @@ while offering a much improved sample efficiency.[[1](#references)]
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ----------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------- |
-| convit_tiny | 5.71 | 8 | 256 | 224x224 | O2 | 153s | 226.51 | 9022.03 | 73.79 | 91.70 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convit/convit_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/convit/convit_tiny-1961717e-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ----------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------- |
-| convit_tiny | 5.71 | 8 | 256 | 224x224 | O2 | 133s | 231.62 | 8827.59 | 73.66 | 91.72 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convit/convit_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convit/convit_tiny-e31023f2.ckpt) |
-
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -98,6 +70,26 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/convit/convit_tiny_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are run on Ascend 910* with MindSpore 2.3.1 in graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ----------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------- |
+| convit_tiny | 5.71 | 8 | 256 | 224x224 | O2 | 153s | 226.51 | 9022.03 | 73.79 | 91.70 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convit/convit_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/convit/convit_tiny-1961717e-910v2.ckpt) |
+
+Experiments are run on Ascend 910 with MindSpore 2.3.1 in graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ----------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------- |
+| convit_tiny | 5.71 | 8 | 256 | 224x224 | O2 | 133s | 231.62 | 8827.59 | 73.66 | 91.72 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convit/convit_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convit/convit_tiny-e31023f2.ckpt) |
+
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
diff --git a/configs/convnext/README.md b/configs/convnext/README.md
index 13c6fb4d..db6b075c 100644
--- a/configs/convnext/README.md
+++ b/configs/convnext/README.md
@@ -1,11 +1,6 @@
# ConvNeXt
> [A ConvNet for the 2020s](https://arxiv.org/abs/2201.03545)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
## Introduction
In this work, the authors reexamine the design spaces and test the limits of what a pure ConvNet can achieve.
@@ -22,36 +17,12 @@ simplicity and efficiency of standard ConvNets.[[1](#references)]
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------- |
-| convnext_tiny | 28.59 | 8 | 16 | 224x224 | O2 | 137s | 48.7 | 2612.24 | 81.28 | 95.61 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/convnext/convnext_tiny-db11dc82-910v2.ckpt) |
-
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------- |
-| convnext_tiny | 28.59 | 8 | 16 | 224x224 | O2 | 127s | 66.79 | 1910.45 | 81.91 | 95.79 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_tiny-ae5ff8d7.ckpt) |
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -97,6 +68,25 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/convnext/convnext_tiny_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are run on Ascend 910* with MindSpore 2.3.1 in graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------- |
+| convnext_tiny | 28.59 | 8 | 16 | 224x224 | O2 | 137s | 48.7 | 2612.24 | 81.28 | 95.61 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/convnext/convnext_tiny-db11dc82-910v2.ckpt) |
+
+Experiments are run on Ascend 910 with MindSpore 2.3.1 in graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------- |
+| convnext_tiny | 28.59 | 8 | 16 | 224x224 | O2 | 127s | 66.79 | 1910.45 | 81.91 | 95.79 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnext/convnext_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnext/convnext_tiny-ae5ff8d7.ckpt) |
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
[1] Liu Z, Mao H, Wu C Y, et al. A convnet for the 2020s[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 11976-11986.
diff --git a/configs/convnextv2/README.md b/configs/convnextv2/README.md
index 267a4455..c3579555 100644
--- a/configs/convnextv2/README.md
+++ b/configs/convnextv2/README.md
@@ -1,10 +1,6 @@
# ConvNeXt V2
> [ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders](https://arxiv.org/abs/2301.00808)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
## Introduction
@@ -21,33 +17,15 @@ benchmarks, including ImageNet classification, COCO detection, and ADE20K segmen
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| --------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | -------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- |
-| convnextv2_tiny | 28.64 | 8 | 128 | 224x224 | O2 | 268s | 257.2 | 3984.44 | 82.39 | 95.95 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnextv2/convnextv2_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/convnextv2/convnextv2_tiny-a35b79ce-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| --------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | -------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------- |
-| convnextv2_tiny | 28.64 | 8 | 128 | 224x224 | O2 | 237s | 400.20 | 2560.00 | 82.43 | 95.98 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnextv2/convnextv2_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnextv2/convnextv2_tiny-d441ba2c.ckpt) |
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -93,6 +71,23 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/convnextv2/convnextv2_tiny_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are run on Ascend 910* with MindSpore 2.3.1 in graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| --------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | -------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- |
+| convnextv2_tiny | 28.64 | 8 | 128 | 224x224 | O2 | 268s | 257.2 | 3984.44 | 82.39 | 95.95 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnextv2/convnextv2_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/convnextv2/convnextv2_tiny-a35b79ce-910v2.ckpt) |
+
+Experiments are run on Ascend 910 with MindSpore 2.3.1 in graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| --------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | -------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------- |
+| convnextv2_tiny | 28.64 | 8 | 128 | 224x224 | O2 | 237s | 400.20 | 2560.00 | 82.43 | 95.98 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/convnextv2/convnextv2_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/convnextv2/convnextv2_tiny-d441ba2c.ckpt) |
+
+
## References
[1] Woo S, Debnath S, Hu R, et al. ConvNeXt V2: Co-designing and Scaling ConvNets with Masked Autoencoders[J]. arXiv preprint arXiv:2301.00808, 2023.
diff --git a/configs/crossvit/README.md b/configs/crossvit/README.md
index c54348f8..144e9bc6 100644
--- a/configs/crossvit/README.md
+++ b/configs/crossvit/README.md
@@ -1,11 +1,6 @@
# CrossViT
> [CrossViT: Cross-Attention Multi-Scale Vision Transformer for Image Classification](https://arxiv.org/abs/2103.14899)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
## Introduction
CrossViT is a type of vision transformer that uses a dual-branch architecture to extract multi-scale feature representations for image classification. The architecture combines image patches (i.e. tokens in a transformer) of different sizes to produce stronger visual features for image classification. It processes small and large patch tokens with two separate branches of different computational complexities and these tokens are fused together multiple times to complement each other.
@@ -19,35 +14,11 @@ Fusion is achieved by an efficient cross-attention module, in which each transfo
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- |
-| crossvit_9 | 8.55 | 8 | 256 | 240x240 | O2 | 221s | 514.36 | 3984.44 | 73.38 | 91.51 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/crossvit/crossvit_9_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/crossvit/crossvit_9-32c69c96-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------ |
-| crossvit_9 | 8.55 | 8 | 256 | 240x240 | O2 | 206s | 550.79 | 3719.30 | 73.56 | 91.79 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/crossvit/crossvit_9_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/crossvit/crossvit_9-e74c8e18.ckpt) |
-
-
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -92,6 +63,25 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/crossvit/crossvit_15_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are run on Ascend 910* with MindSpore 2.3.1 in graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- |
+| crossvit_9 | 8.55 | 8 | 256 | 240x240 | O2 | 221s | 514.36 | 3984.44 | 73.38 | 91.51 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/crossvit/crossvit_9_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/crossvit/crossvit_9-32c69c96-910v2.ckpt) |
+
+Experiments are run on Ascend 910 with MindSpore 2.3.1 in graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------ |
+| crossvit_9 | 8.55 | 8 | 256 | 240x240 | O2 | 206s | 550.79 | 3719.30 | 73.56 | 91.79 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/crossvit/crossvit_9_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/crossvit/crossvit_9-e74c8e18.ckpt) |
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
diff --git a/configs/densenet/README.md b/configs/densenet/README.md
index f0dfd896..a22fa93f 100644
--- a/configs/densenet/README.md
+++ b/configs/densenet/README.md
@@ -2,11 +2,6 @@
> [Densely Connected Convolutional Networks](https://arxiv.org/abs/1608.06993)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
## Introduction
@@ -27,43 +22,10 @@ propagation, encourage feature reuse, and substantially reduce the number of par
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ----------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- |
-| densenet121 | 8.06 | 8 | 32 | 224x224 | O2 | 300s | 47,34 | 5446.81 | 75.67 | 92.77 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_121_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/densenet/densenet121-bf4ab27f-910v2.ckpt) |
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ----------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------- |
-| densenet121 | 8.06 | 8 | 32 | 224x224 | O2 | 191s | 43.28 | 5914.97 | 75.64 | 92.84 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_121_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet121-120_5004_Ascend.ckpt) |
-
-
-
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
## Quick Start
@@ -109,6 +71,26 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/densenet/densenet_121_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are run on Ascend 910* with MindSpore 2.3.1 in graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ----------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- |
+| densenet121 | 8.06      | 8     | 32         | 224x224    | O2        | 300s          | 47.34   | 5446.81 | 75.67    | 92.77    | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_121_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/densenet/densenet121-bf4ab27f-910v2.ckpt) |
+
+Experiments are run on Ascend 910 with MindSpore 2.3.1 in graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ----------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------- |
+| densenet121 | 8.06 | 8 | 32 | 224x224 | O2 | 191s | 43.28 | 5914.97 | 75.64 | 92.84 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/densenet/densenet_121_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/densenet/densenet121-120_5004_Ascend.ckpt) |
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
diff --git a/configs/dpn/README.md b/configs/dpn/README.md
index 33307d76..d29c13eb 100644
--- a/configs/dpn/README.md
+++ b/configs/dpn/README.md
@@ -2,11 +2,6 @@
> [Dual Path Networks](https://arxiv.org/abs/1707.01629v2)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
## Introduction
@@ -22,36 +17,12 @@ fewer computation cost compared with ResNet and DenseNet on ImageNet-1K dataset.
Figure 1. Architecture of DPN [1]
-## Performance
-
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-*coming soon*
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------- |
-| dpn92 | 37.79 | 8 | 32 | 224x224 | O2 | 293s | 78.22 | 3272.82 | 79.46 | 94.49 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn92_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn92-e3e0fca.ckpt) |
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -98,6 +69,24 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/dpn/dpn92_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are run on Ascend 910* with MindSpore 2.3.1 in graph mode.
+
+*coming soon*
+
+Experiments are run on Ascend 910 with MindSpore 2.3.1 in graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------- |
+| dpn92 | 37.79 | 8 | 32 | 224x224 | O2 | 293s | 78.22 | 3272.82 | 79.46 | 94.49 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/dpn/dpn92_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/dpn/dpn92-e3e0fca.ckpt) |
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
diff --git a/configs/edgenext/README.md b/configs/edgenext/README.md
index aed70e88..a52a9dbe 100644
--- a/configs/edgenext/README.md
+++ b/configs/edgenext/README.md
@@ -2,10 +2,6 @@
> [EdgeNeXt: Efficiently Amalgamated CNN-Transformer Architecture for Mobile Vision Applications](https://arxiv.org/abs/2206.10589)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
## Introduction
@@ -22,36 +18,10 @@ to implicitly increase the receptive field and encode multi-scale features.[[1](
Figure 1. Architecture of EdgeNeXt [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ----------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | -------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- |
-| edgenext_xx_small | 1.33 | 8 | 256 | 256x256 | O2 | 389s | 239.38 | 8555.43 | 70.64 | 89.75 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/edgenext/edgenext_xx_small_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/edgenext/edgenext_xx_small-cad13d2c-910v2.ckpt) |
-
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ----------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | -------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------- |
-| edgenext_xx_small | 1.33 | 8 | 256 | 256x256 | O2 | 311s | 191.24 | 10709.06 | 71.02 | 89.99 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/edgenext/edgenext_xx_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/edgenext/edgenext_xx_small-afc971fb.ckpt) |
-
-
-
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
## Quick Start
@@ -99,6 +69,25 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/edgenext/edgenext_small_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are run on Ascend 910* with MindSpore 2.3.1 in graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ----------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | -------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- |
+| edgenext_xx_small | 1.33 | 8 | 256 | 256x256 | O2 | 389s | 239.38 | 8555.43 | 70.64 | 89.75 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/edgenext/edgenext_xx_small_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/edgenext/edgenext_xx_small-cad13d2c-910v2.ckpt) |
+
+Experiments are run on Ascend 910 with MindSpore 2.3.1 in graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ----------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | -------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------- |
+| edgenext_xx_small | 1.33 | 8 | 256 | 256x256 | O2 | 311s | 191.24 | 10709.06 | 71.02 | 89.99 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/edgenext/edgenext_xx_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/edgenext/edgenext_xx_small-afc971fb.ckpt) |
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
diff --git a/configs/efficientnet/README.md b/configs/efficientnet/README.md
index e28d8981..b432f6ca 100644
--- a/configs/efficientnet/README.md
+++ b/configs/efficientnet/README.md
@@ -2,11 +2,6 @@
> [EfficientNet: Rethinking Model Scaling for Convolutional Neural Networks](https://arxiv.org/abs/1905.11946)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
## Introduction
@@ -23,45 +18,11 @@ and resolution scaling could be found. EfficientNet could achieve better model p
Figure 1. Architecture of Efficientent [1]
-## Performance
-
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| --------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- |
-| efficientnet_b0 | 5.33 | 8 | 128 | 224x224 | O2 | 353s | 172.64 | 5931.42 | 76.88 | 93.28 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/efficientnet/efficientnet_b0_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/efficientnet/efficientnet_b0-f8d7aa2a-910v2.ckpt) |
-
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| --------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------- |
-| efficientnet_b0 | 5.33 | 8 | 128 | 224x224 | O2 | 203s | 172.78 | 5926.61 | 76.89 | 93.16 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/efficientnet/efficientnet_b0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/efficientnet/efficientnet_b0-103ec70c.ckpt) |
-
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -108,6 +69,26 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/efficientnet/efficientnet_b0_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are run on Ascend 910* with MindSpore 2.3.1 in graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| --------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- |
+| efficientnet_b0 | 5.33 | 8 | 128 | 224x224 | O2 | 353s | 172.64 | 5931.42 | 76.88 | 93.28 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/efficientnet/efficientnet_b0_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/efficientnet/efficientnet_b0-f8d7aa2a-910v2.ckpt) |
+
+Experiments are run on Ascend 910 with MindSpore 2.3.1 in graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| --------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------- |
+| efficientnet_b0 | 5.33 | 8 | 128 | 224x224 | O2 | 203s | 172.78 | 5926.61 | 76.89 | 93.16 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/efficientnet/efficientnet_b0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/efficientnet/efficientnet_b0-103ec70c.ckpt) |
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
+
## References
diff --git a/configs/ghostnet/README.md b/configs/ghostnet/README.md
index c6e94ac7..b96c56ab 100644
--- a/configs/ghostnet/README.md
+++ b/configs/ghostnet/README.md
@@ -1,11 +1,6 @@
# GhostNet
> [GhostNet: More Features from Cheap Operations](https://arxiv.org/abs/1911.11907)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
## Introduction
The redundancy in feature maps is an important characteristic of those successful CNNs, but has rarely been
@@ -26,29 +21,6 @@ dataset.[[1](#references)]
Figure 1. Architecture of GhostNet [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-*coming soon*
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ------------ | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------- |
-| ghostnet_050 | 2.60 | 8 | 128 | 224x224 | O2 | 383s | 211.13 | 4850.09 | 66.03 | 86.64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_050_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_050-85b91860.ckpt) |
-
-
-
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
-
## Quick Start
### Preparation
@@ -94,6 +66,28 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/ghostnet/ghostnet_100_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are run on Ascend 910* with MindSpore 2.3.1 in graph mode.
+
+*coming soon*
+
+Experiments are run on Ascend 910 with MindSpore 2.3.1 in graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ------------ | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------- |
+| ghostnet_050 | 2.60 | 8 | 128 | 224x224 | O2 | 383s | 211.13 | 4850.09 | 66.03 | 86.64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/ghostnet/ghostnet_050_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/ghostnet/ghostnet_050-85b91860.ckpt) |
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
[1] Han K, Wang Y, Tian Q, et al. Ghostnet: More features from cheap operations[C]//Proceedings of the IEEE/CVF conference on computer vision and pattern recognition. 2020: 1580-1589.
diff --git a/configs/googlenet/README.md b/configs/googlenet/README.md
index 1edd59dd..13ba0310 100644
--- a/configs/googlenet/README.md
+++ b/configs/googlenet/README.md
@@ -1,11 +1,6 @@
# GoogLeNet
> [GoogLeNet: Going Deeper with Convolutions](https://arxiv.org/abs/1409.4842)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
## Introduction
GoogLeNet is a new deep learning structure proposed by Christian Szegedy in 2014. Prior to this, AlexNet, VGG and other
@@ -22,35 +17,10 @@ training results.[[1](#references)]
Figure 1. Architecture of GoogLeNet [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- |
-| googlenet | 6.99 | 8 | 32 | 224x224 | O2 | 113s | 23.5 | 10893.62 | 72.89 | 90.89 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/googlenet/googlenet_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/googlenet/googlenet-de74c31d-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------ |
-| googlenet | 6.99 | 8 | 32 | 224x224 | O2 | 72s | 21.40 | 11962.62 | 72.68 | 90.89 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/googlenet/googlenet_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/googlenet/googlenet-5552fcd3.ckpt) |
-
-
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
## Quick Start
@@ -97,6 +67,25 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/googlenet/googlenet_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are run on Ascend 910* with MindSpore 2.3.1 in graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- |
+| googlenet | 6.99 | 8 | 32 | 224x224 | O2 | 113s | 23.5 | 10893.62 | 72.89 | 90.89 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/googlenet/googlenet_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/googlenet/googlenet-de74c31d-910v2.ckpt) |
+
+Experiments are run on Ascend 910 with MindSpore 2.3.1 in graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------ |
+| googlenet | 6.99 | 8 | 32 | 224x224 | O2 | 72s | 21.40 | 11962.62 | 72.68 | 90.89 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/googlenet/googlenet_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/googlenet/googlenet-5552fcd3.ckpt) |
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
[1] Szegedy C, Liu W, Jia Y, et al. Going deeper with convolutions[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2015: 1-9.
diff --git a/configs/halonet/README.md b/configs/halonet/README.md
index 4b878548..9fac726e 100644
--- a/configs/halonet/README.md
+++ b/configs/halonet/README.md
@@ -2,10 +2,6 @@
> [Scaling Local Self-Attention for Parameter Efficient Visual Backbones](https://arxiv.org/abs/2103.12731)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
## Introduction
@@ -29,28 +25,14 @@ Down Sampling:In order to reduce the amount of computation, each block is samp
Figure 2. Architecture of Down Sampling [1]
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-*coming soon*
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ----------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------ |
-| halonet_50t | 22.79 | 8 | 64 | 256x256 | O2 | 261s | 421.66 | 6437.82 | 79.53 | 94.79 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/halonet/halonet_50t_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/halonet/halonet_50t-533da6be.ckpt) |
-
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## Quick Start
@@ -97,6 +79,18 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/halonet/halonet_50t_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+*coming soon*
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+*coming soon*
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
[1] Vaswani A, Ramachandran P, Srinivas A, et al. Scaling local self-attention for parameter efficient visual backbones[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2021: 12894-12904.
diff --git a/configs/hrnet/README.md b/configs/hrnet/README.md
index 15a8abe2..6e2540fa 100644
--- a/configs/hrnet/README.md
+++ b/configs/hrnet/README.md
@@ -3,10 +3,6 @@
> [Deep High-Resolution Representation Learning for Visual Recognition](https://arxiv.org/abs/1908.07919)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
## Introduction
@@ -22,47 +18,10 @@ High-resolution representations are essential for position-sensitive vision prob
Figure 1. Architecture of HRNet [1]
-## Performance
-
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
-| hrnet_w32 | 41.30 | 8 | 128 | 224x224 | O2 | 1069s | 238.03 | 4301.98 | 80.66 | 95.30 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/hrnet/hrnet_w32_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/hrnet/hrnet_w32-e616cdcb-910v2.ckpt) |
-
-
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------- |
-| hrnet_w32 | 41.30 | 128 | 8 | 224x224 | O2 | 1312s | 279.10 | 3668.94 | 80.64 | 95.44 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/hrnet/hrnet_w32_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/hrnet/hrnet_w32-cc4fbd91.ckpt) |
-
-
-
-
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
## Quick Start
### Preparation
@@ -108,6 +67,25 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/hrnet/hrnet_w32_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
+| hrnet_w32 | 41.30 | 8 | 128 | 224x224 | O2 | 1069s | 238.03 | 4301.98 | 80.66 | 95.30 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/hrnet/hrnet_w32_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/hrnet/hrnet_w32-e616cdcb-910v2.ckpt) |
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------- |
+| hrnet_w32 | 41.30 | 8 | 128 | 224x224 | O2 | 1312s | 279.10 | 3668.94 | 80.64 | 95.44 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/hrnet/hrnet_w32_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/hrnet/hrnet_w32-cc4fbd91.ckpt) |
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
diff --git a/configs/inceptionv3/README.md b/configs/inceptionv3/README.md
index a5f55eb8..b41814de 100644
--- a/configs/inceptionv3/README.md
+++ b/configs/inceptionv3/README.md
@@ -1,11 +1,6 @@
# InceptionV3
> [InceptionV3: Rethinking the Inception Architecture for Computer Vision](https://arxiv.org/pdf/1512.00567.pdf)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
## Introduction
InceptionV3 is an upgraded version of GoogLeNet. One of the most important improvements of V3 is Factorization, which
@@ -23,35 +18,12 @@ regularization and effectively reduces overfitting.[[1](#references)]
Figure 1. Architecture of InceptionV3 [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ------------ | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------- |
-| inception_v3 | 27.20 | 8 | 32 | 299x299 | O2 | 172s | 70.83 | 3614.29 | 79.25 | 94.47 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv3/inception_v3_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/inception_v3/inception_v3-61a8e9ed-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ------------ | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------ |
-| inception_v3 | 27.20 | 8 | 32 | 299x299 | O2 | 120s | 76.42 | 3349.91 | 79.11 | 94.40 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv3/inception_v3_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/inception_v3/inception_v3-38f67890.ckpt) |
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -98,6 +70,25 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/inceptionv3/inception_v3_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ------------ | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------- |
+| inception_v3 | 27.20 | 8 | 32 | 299x299 | O2 | 172s | 70.83 | 3614.29 | 79.25 | 94.47 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv3/inception_v3_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/inception_v3/inception_v3-61a8e9ed-910v2.ckpt) |
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ------------ | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------ |
+| inception_v3 | 27.20 | 8 | 32 | 299x299 | O2 | 120s | 76.42 | 3349.91 | 79.11 | 94.40 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv3/inception_v3_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/inception_v3/inception_v3-38f67890.ckpt) |
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
[1] Szegedy C, Vanhoucke V, Ioffe S, et al. Rethinking the inception architecture for computer vision[C]//Proceedings of the IEEE conference on computer vision and pattern recognition. 2016: 2818-2826.
diff --git a/configs/inceptionv4/README.md b/configs/inceptionv4/README.md
index a68b0396..6eaa4718 100644
--- a/configs/inceptionv4/README.md
+++ b/configs/inceptionv4/README.md
@@ -1,11 +1,6 @@
# InceptionV4
> [InceptionV4: Inception-v4, Inception-ResNet and the Impact of Residual Connections on Learning](https://arxiv.org/pdf/1602.07261.pdf)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
## Introduction
InceptionV4 studies whether the Inception module combined with Residual Connection can be improved. It is found that the
@@ -20,34 +15,11 @@ performance with Inception-ResNet v2.[[1](#references)]
Figure 1. Architecture of InceptionV4 [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ------------ | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------- |
-| inception_v4 | 42.74 | 8 | 32 | 299x299 | O2 | 263s | 80.97 | 3161.66 | 80.98 | 95.25 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv4/inception_v4_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/inception_v4/inception_v4-56e798fc-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ------------ | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------ |
-| inception_v4 | 42.74 | 8 | 32 | 299x299 | O2 | 177s | 76.19 | 3360.02 | 80.88 | 95.34 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv4/inception_v4_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/inception_v4/inception_v4-db9c45b3.ckpt) |
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -94,6 +66,25 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/inceptionv4/inception_v4_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ------------ | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------- |
+| inception_v4 | 42.74 | 8 | 32 | 299x299 | O2 | 263s | 80.97 | 3161.66 | 80.98 | 95.25 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv4/inception_v4_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/inception_v4/inception_v4-56e798fc-910v2.ckpt) |
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ------------ | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------ |
+| inception_v4 | 42.74 | 8 | 32 | 299x299 | O2 | 177s | 76.19 | 3360.02 | 80.88 | 95.34 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/inceptionv4/inception_v4_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/inception_v4/inception_v4-db9c45b3.ckpt) |
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/mixnet/README.md b/configs/mixnet/README.md
index 81070b62..6cb91e41 100644
--- a/configs/mixnet/README.md
+++ b/configs/mixnet/README.md
@@ -1,10 +1,7 @@
# MixNet
> [MixConv: Mixed Depthwise Convolutional Kernels](https://arxiv.org/abs/1907.09595)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -22,36 +19,10 @@ and efficiency for existing MobileNets on both ImageNet classification and COCO
Figure 1. Architecture of MixNet [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
-| mixnet_s | 4.17 | 8 | 128 | 224x224 | O2 | 706s | 228.03 | 4490.64 | 75.58 | 95.54 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mixnet/mixnet_s_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mixnet/mixnet_s-fe4fcc63-910v2.ckpt) |
-
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------- |
-| mixnet_s | 4.17 | 8 | 128 | 224x224 | O2 | 556s | 252.49 | 4055.61 | 75.52 | 92.52 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mixnet/mixnet_s_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mixnet/mixnet_s-2a5ef3a3.ckpt) |
-
-
-
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
## Quick Start
@@ -98,6 +69,27 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/mixnet/mixnet_s_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
+| mixnet_s | 4.17 | 8 | 128 | 224x224 | O2 | 706s | 228.03 | 4490.64 | 75.58 | 95.54 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mixnet/mixnet_s_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mixnet/mixnet_s-fe4fcc63-910v2.ckpt) |
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------- |
+| mixnet_s | 4.17 | 8 | 128 | 224x224 | O2 | 556s | 252.49 | 4055.61 | 75.52 | 92.52 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mixnet/mixnet_s_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mixnet/mixnet_s-2a5ef3a3.ckpt) |
+
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/mnasnet/README.md b/configs/mnasnet/README.md
index c1515f6b..fee586e0 100644
--- a/configs/mnasnet/README.md
+++ b/configs/mnasnet/README.md
@@ -1,10 +1,7 @@
# MnasNet
> [MnasNet: Platform-Aware Neural Architecture Search for Mobile](https://arxiv.org/abs/1807.11626)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -17,36 +14,12 @@ Designing convolutional neural networks (CNN) for mobile devices is challenging
Figure 1. Architecture of MnasNet [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ----------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | -------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- |
-| mnasnet_075 | 3.20 | 8 | 256 | 224x224 | O2 | 144s | 175.85 | 11646.29 | 71.77 | 90.52 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mnasnet/mnasnet_0.75_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mnasnet/mnasnet_075-083b2bc4-910v2.ckpt) |
-
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ----------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | -------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------ |
-| mnasnet_075 | 3.20 | 8 | 256 | 224x224 | O2 | 140s | 165.43 | 12379.86 | 71.81 | 90.53 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mnasnet/mnasnet_0.75_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mnasnet/mnasnet_075-465d366d.ckpt) |
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -93,6 +66,27 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/mnasnet/mnasnet_0.75_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ----------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | -------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------- |
+| mnasnet_075 | 3.20 | 8 | 256 | 224x224 | O2 | 144s | 175.85 | 11646.29 | 71.77 | 90.52 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mnasnet/mnasnet_0.75_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mnasnet/mnasnet_075-083b2bc4-910v2.ckpt) |
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ----------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | -------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------ |
+| mnasnet_075 | 3.20 | 8 | 256 | 224x224 | O2 | 140s | 165.43 | 12379.86 | 71.81 | 90.53 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mnasnet/mnasnet_0.75_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mnasnet/mnasnet_075-465d366d.ckpt) |
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/mobilenetv1/README.md b/configs/mobilenetv1/README.md
index ea370c24..62cc2788 100644
--- a/configs/mobilenetv1/README.md
+++ b/configs/mobilenetv1/README.md
@@ -1,11 +1,6 @@
# MobileNetV1
> [MobileNets: Efficient Convolutional Neural Networks for Mobile Vision Applications](https://arxiv.org/abs/1704.04861)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
## Introduction
Compared with the traditional convolutional neural network, MobileNetV1's parameters and the amount of computation are greatly reduced on the premise that the accuracy rate is slightly reduced. (Compared to VGG16, the accuracy rate is reduced by 0.9%, but the model parameters are only 1/32 of VGG). The model is based on a streamlined architecture that uses depthwise separable convolutions to build lightweight deep neural networks. At the same time, two simple global hyperparameters are introduced, which can effectively trade off latency and accuracy.[[1](#references)]
@@ -17,36 +12,16 @@ Compared with the traditional convolutional neural network, MobileNetV1's parame
Figure 1. Architecture of MobileNetV1 [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | -------- | -------- | -------- | ----------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------- |
-| mobilenet_v1_025 | 0.47 | 8 | 64 | 224x224 | O2 | 195s | 47.47 | 10785.76 | 54.05 | 77.74 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv1/mobilenet_v1_0.25_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv1/mobilenet_v1_025-cbe3d3b3-910v2.ckpt) |
-
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ----------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- |
-| mobilenet_v1_025 | 0.47 | 8 | 64 | 224x224 | O2 | 89s | 42.43 | 12066.93 | 53.87 | 77.66 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv1/mobilenet_v1_0.25_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv1/mobilenet_v1_025-d3377fba.ckpt) |
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## Quick Start
@@ -93,6 +68,22 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/mobilenetv1/mobilenet_v1_0.25_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | -------- | -------- | -------- | ----------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------- |
+| mobilenet_v1_025 | 0.47 | 8 | 64 | 224x224 | O2 | 195s | 47.47 | 10785.76 | 54.05 | 77.74 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv1/mobilenet_v1_0.25_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv1/mobilenet_v1_025-cbe3d3b3-910v2.ckpt) |
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ----------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- |
+| mobilenet_v1_025 | 0.47 | 8 | 64 | 224x224 | O2 | 89s | 42.43 | 12066.93 | 53.87 | 77.66 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv1/mobilenet_v1_0.25_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv1/mobilenet_v1_025-d3377fba.ckpt) |
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
diff --git a/configs/mobilenetv2/README.md b/configs/mobilenetv2/README.md
index 1e588bcb..932de95c 100644
--- a/configs/mobilenetv2/README.md
+++ b/configs/mobilenetv2/README.md
@@ -1,11 +1,6 @@
# MobileNetV2
> [MobileNetV2: Inverted Residuals and Linear Bottlenecks](https://arxiv.org/abs/1801.04381)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
## Introduction
The model is a new neural network architecture that is specifically tailored for mobile and resource-constrained environments. This network pushes the state of the art for mobile custom computer vision models, significantly reducing the amount of operations and memory required while maintaining the same accuracy.
@@ -19,36 +14,12 @@ The main innovation of the model is the proposal of a new layer module: The Inve
Figure 1. Architecture of MobileNetV2 [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | -------- | -------- | -------- | ----------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------- |
-| mobilenet_v2_075 | 2.66 | 8 | 256 | 224x224 | O2 | 233s | 174.65 | 11726.31 | 69.73 | 89.35 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv2/mobilenet_v2_0.75_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv2/mobilenet_v2_075-755932c4-910v2.ckpt) |
-
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ----------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- |
-| mobilenet_v2_075 | 2.66 | 8 | 256 | 224x224 | O2 | 164s | 155.94 | 13133.26 | 69.98 | 89.32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv2/mobilenet_v2_0.75_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv2/mobilenet_v2_075-bd7bd4c4.ckpt) |
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -95,6 +66,26 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/mobilenetv2/mobilenet_v2_0.75_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | -------- | -------- | -------- | ----------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------------------- |
+| mobilenet_v2_075 | 2.66 | 8 | 256 | 224x224 | O2 | 233s | 174.65 | 11726.31 | 69.73 | 89.35 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv2/mobilenet_v2_0.75_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv2/mobilenet_v2_075-755932c4-910v2.ckpt) |
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ----------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- |
+| mobilenet_v2_075 | 2.66 | 8 | 256 | 224x224 | O2 | 164s | 155.94 | 13133.26 | 69.98 | 89.32 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv2/mobilenet_v2_0.75_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv2/mobilenet_v2_075-bd7bd4c4.ckpt) |
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/mobilenetv3/README.md b/configs/mobilenetv3/README.md
index 376a6f32..62c119af 100644
--- a/configs/mobilenetv3/README.md
+++ b/configs/mobilenetv3/README.md
@@ -1,10 +1,7 @@
# MobileNetV3
> [Searching for MobileNetV3](https://arxiv.org/abs/1905.02244)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -19,36 +16,12 @@ mobilenet-v3 offers two versions, mobilenet-v3 large and mobilenet-v3 small, for
Figure 1. Architecture of MobileNetV3 [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------- |
-| mobilenet_v3_small_100 | 2.55 | 8 | 75 | 224x224 | O2 | 184s | 52.38 | 11454.75 | 68.07 | 87.77 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_small_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_small_100-6fa3c17d-910v2.ckpt) |
-| mobilenet_v3_large_100 | 5.51 | 8 | 75 | 224x224 | O2 | 354s | 55.89 | 10735.37 | 75.59 | 92.57 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_large_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_large_100-bd4e7bdc-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------- |
-| mobilenet_v3_small_100 | 2.55 | 8 | 75 | 224x224 | O2 | 145s | 48.14 | 12463.65 | 68.10 | 87.86 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_small_100-509c6047.ckpt) |
-| mobilenet_v3_large_100 | 5.51 | 8 | 75 | 224x224 | O2 | 271s | 47.49 | 12634.24 | 75.23 | 92.31 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_large_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_large_100-1279ad5f.ckpt) |
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -95,6 +68,28 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/mobilenetv3/mobilenet_v3_small_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ------------------------------------------------------------------------------------------------------------ | --------------------------------------------------------------------------------------------------------------------------------- |
+| mobilenet_v3_small_100 | 2.55 | 8 | 75 | 224x224 | O2 | 184s | 52.38 | 11454.75 | 68.07 | 87.77 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_small_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_small_100-6fa3c17d-910v2.ckpt) |
+| mobilenet_v3_large_100 | 5.51 | 8 | 75 | 224x224 | O2 | 354s | 55.89 | 10735.37 | 75.59 | 92.57 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_large_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_large_100-bd4e7bdc-910v2.ckpt) |
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------- |
+| mobilenet_v3_small_100 | 2.55 | 8 | 75 | 224x224 | O2 | 145s | 48.14 | 12463.65 | 68.10 | 87.86 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_small_100-509c6047.ckpt) |
+| mobilenet_v3_large_100 | 5.51 | 8 | 75 | 224x224 | O2 | 271s | 47.49 | 12634.24 | 75.23 | 92.31 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilenetv3/mobilenet_v3_large_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_large_100-1279ad5f.ckpt) |
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/mobilevit/README.md b/configs/mobilevit/README.md
index 3a68dc6c..53104cff 100644
--- a/configs/mobilevit/README.md
+++ b/configs/mobilevit/README.md
@@ -1,10 +1,6 @@
# MobileViT
> [MobileViT:Light-weight, General-purpose, and Mobile-friendly Vision Transformer](https://arxiv.org/pdf/2110.02178.pdf)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
## Introduction
@@ -17,36 +13,12 @@ MobileViT, a light-weight and general-purpose vision transformer for mobile devi
Figure 1. Architecture of MobileViT [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ------------------ | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- |
-| mobilevit_xx_small | 1.27 | 8 | 64 | 256x256 | O2 | 437s | 67.24 | 7614.52 | 67.11 | 87.85 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilevit/mobilevit_xx_small_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilevit/mobilevit_xx_small-6f2745c3-910v2.ckpt) |
-
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ------------------ | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------- |
-| mobilevit_xx_small | 1.27 | 64 | 8 | 256x256 | O2 | 301s | 53.52 | 9566.52 | 68.91 | 88.91 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilevit/mobilevit_xx_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilevit/mobilevit_xx_small-af9da8a0.ckpt) |
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -91,3 +63,25 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
```
python validate.py -c configs/mobilevit/mobilevit_xx_small_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ------------------ | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------- |
+| mobilevit_xx_small | 1.27 | 8 | 64 | 256x256 | O2 | 437s | 67.24 | 7614.52 | 67.11 | 87.85 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilevit/mobilevit_xx_small_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/mobilevit/mobilevit_xx_small-6f2745c3-910v2.ckpt) |
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ------------------ | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------- |
+| mobilevit_xx_small | 1.27 | 8 | 64 | 256x256 | O2 | 301s | 53.52 | 9566.52 | 68.91 | 88.91 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/mobilevit/mobilevit_xx_small_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/mobilevit/mobilevit_xx_small-af9da8a0.ckpt) |
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
diff --git a/configs/nasnet/README.md b/configs/nasnet/README.md
index 26bde068..6a63eb22 100644
--- a/configs/nasnet/README.md
+++ b/configs/nasnet/README.md
@@ -2,10 +2,6 @@
> [Learning Transferable Architectures for Scalable Image Recognition](https://arxiv.org/abs/1707.07012)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
## Introduction
@@ -23,42 +19,12 @@ compared with previous state-of-the-art methods on ImageNet-1K dataset.[[1](#ref
Figure 1. Architecture of Nasnet [1]
-## Performance
-
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| --------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ |
-| nasnet_a_4x1056 | 5.33 | 8 | 256 | 224x224 | O2 | 800s | 364.35 | 5620.97 | 74.12 | 91.36 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/nasnet/nasnet_a_4x1056_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/nasnet/nasnet_a_4x1056-015ba575c-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| --------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------- |
-| nasnet_a_4x1056 | 5.33 | 8 | 256 | 224x224 | O2 | 656s | 330.89 | 6189.37 | 73.65 | 91.25 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/nasnet/nasnet_a_4x1056_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/nasnet/nasnet_a_4x1056-0fbb5cdd.ckpt) |
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -105,6 +71,28 @@ To validate the accuracy of the trained model, you can use `validate.py` and par
python validate.py -c configs/nasnet/nasnet_a_4x1056_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| --------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ |
+| nasnet_a_4x1056 | 5.33 | 8 | 256 | 224x224 | O2 | 800s | 364.35 | 5620.97 | 74.12 | 91.36 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/nasnet/nasnet_a_4x1056_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/nasnet/nasnet_a_4x1056-015ba575c-910v2.ckpt) |
+
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| --------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------- |
+| nasnet_a_4x1056 | 5.33 | 8 | 256 | 224x224 | O2 | 656s | 330.89 | 6189.37 | 73.65 | 91.25 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/nasnet/nasnet_a_4x1056_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/nasnet/nasnet_a_4x1056-0fbb5cdd.ckpt) |
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
diff --git a/configs/pit/README.md b/configs/pit/README.md
index 9a54c585..afbdd9ca 100644
--- a/configs/pit/README.md
+++ b/configs/pit/README.md
@@ -23,9 +23,9 @@ PiT (Pooling-based Vision Transformer) is an improvement of Vision Transformer (
Our reproduced model performance on ImageNet-1K is reported as follows.
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
-
| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
@@ -33,11 +33,11 @@ Our reproduced model performance on ImageNet-1K is reported as follows.
| pit_ti | 4.85 | 8 | 128 | 224x224 | O2 | 212s | 266.47 | 3842.83 | 73.26 | 91.57 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pit/pit_ti_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/pit/pit_ti-33466a0d-910v2.ckpt) |
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
@@ -45,10 +45,10 @@ Our reproduced model performance on ImageNet-1K is reported as follows.
| pit_ti | 4.85 | 8 | 128 | 224x224 | O2 | 192s | 271.50 | 3771.64 | 72.96 | 91.33 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pit/pit_ti_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pit/pit_ti-e647a593.ckpt) |
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
diff --git a/configs/poolformer/README.md b/configs/poolformer/README.md
index 2d2647dc..c37e5a63 100644
--- a/configs/poolformer/README.md
+++ b/configs/poolformer/README.md
@@ -2,10 +2,7 @@
> [MetaFormer Is Actually What You Need for Vision](https://arxiv.org/abs/2111.11418)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -17,35 +14,12 @@ Figure 1. MetaFormer and performance of MetaFormer-based models on ImageNet-1K v

Figure 2. (a) The overall framework of PoolFormer. (b) The architecture of PoolFormer block. Compared with Transformer block, it replaces attention with an extremely simple non-parametric operator, pooling, to conduct only basic token mixing.[[1](#References)]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| -------------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------- |
-| poolformer_s12 | 11.92 | 8 | 128 | 224x224 | O2 | 177s | 211.81 | 4834.52 | 77.49 | 93.55 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/poolformer/poolformer_s12_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/poolformer/poolformer_s12-c7e14eea-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| -------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------ |
-| poolformer_s12 | 11.92 | 8 | 128 | 224x224 | O2 | 118s | 220.13 | 4651.80 | 77.33 | 93.34 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/poolformer/poolformer_s12_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/poolformer/poolformer_s12-5be5c4e4.ckpt) |
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -91,6 +65,27 @@ python train.py --config configs/poolformer/poolformer_s12_ascend.yaml --data_di
validation of poolformer has to be done in amp O3 mode, which is not yet supported; coming soon...
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| -------------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------- |
+| poolformer_s12 | 11.92 | 8 | 128 | 224x224 | O2 | 177s | 211.81 | 4834.52 | 77.49 | 93.55 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/poolformer/poolformer_s12_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/poolformer/poolformer_s12-c7e14eea-910v2.ckpt) |
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| -------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------ |
+| poolformer_s12 | 11.92 | 8 | 128 | 224x224 | O2 | 118s | 220.13 | 4651.80 | 77.33 | 93.34 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/poolformer/poolformer_s12_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/poolformer/poolformer_s12-5be5c4e4.ckpt) |
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
[1]. Yu W, Luo M, Zhou P, et al. Metaformer is actually what you need for vision[C]//Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition. 2022: 10819-10829.
diff --git a/configs/pvt/README.md b/configs/pvt/README.md
index 9d9c6d7f..f724d9e7 100644
--- a/configs/pvt/README.md
+++ b/configs/pvt/README.md
@@ -2,10 +2,7 @@
> [Pyramid Vision Transformer: A Versatile Backbone for Dense Prediction without Convolutions](https://arxiv.org/abs/2102.12122)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -17,35 +14,11 @@ overhead.[[1](#References)]

-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------- |
-| pvt_tiny | 13.23 | 8 | 128 | 224x224 | O2 | 212s | 237.5 | 4311.58 | 74.88 | 92.12 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvt/pvt_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/pvt/pvt_tiny-6676051f-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------- |
-| pvt_tiny | 13.23 | 8 | 128 | 224x224 | O2 | 192s | 229.63 | 4459.35 | 74.81 | 92.18 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvt/pvt_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt/pvt_tiny-6abb953d.ckpt) |
-
-
-
-#### Notes
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -101,6 +74,28 @@ with `--ckpt_path`.
python validate.py --model=pvt_tiny --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------- |
+| pvt_tiny | 13.23 | 8 | 128 | 224x224 | O2 | 212s | 237.5 | 4311.58 | 74.88 | 92.12 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvt/pvt_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/pvt/pvt_tiny-6676051f-910v2.ckpt) |
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------- |
+| pvt_tiny | 13.23 | 8 | 128 | 224x224 | O2 | 192s | 229.63 | 4459.35 | 74.81 | 92.18 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvt/pvt_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt/pvt_tiny-6abb953d.ckpt) |
+
+
+### Notes
+
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/pvtv2/README.md b/configs/pvtv2/README.md
index 78e258d8..872dac75 100644
--- a/configs/pvtv2/README.md
+++ b/configs/pvtv2/README.md
@@ -2,11 +2,6 @@
> [PVT v2: Improved Baselines with Pyramid Vision Transformer](https://arxiv.org/abs/2106.13797)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
## Introduction
In this work, the authors present new baselines by improving the original Pyramid Vision Transformer (PVT v1) by adding
@@ -22,35 +17,10 @@ segmentation.[[1](#references)]
Figure 1. Architecture of PVTV2 [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- |
-| pvt_v2_b0 | 3.67 | 8 | 128 | 224x224 | O2 | 323s | 255.76 | 4003.75 | 71.25 | 90.50 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvtv2/pvt_v2_b0_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/pvt_v2/pvt_v2_b0-d9cd9d6a-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
-| pvt_v2_b0 | 3.67 | 8 | 128 | 224x224 | O2 | 269s | 269.38 | 3801.32 | 71.50 | 90.60 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvtv2/pvt_v2_b0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt_v2/pvt_v2_b0-1c4f6683.ckpt) |
-
-
-
-#### Notes
-
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
## Quick Start
@@ -104,6 +74,29 @@ with `--ckpt_path`.
python validate.py -c configs/pvtv2/pvt_v2_b0_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- |
+| pvt_v2_b0 | 3.67 | 8 | 128 | 224x224 | O2 | 323s | 255.76 | 4003.75 | 71.25 | 90.50 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvtv2/pvt_v2_b0_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/pvt_v2/pvt_v2_b0-d9cd9d6a-910v2.ckpt) |
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
+| pvt_v2_b0 | 3.67 | 8 | 128 | 224x224 | O2 | 269s | 269.38 | 3801.32 | 71.50 | 90.60 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/pvtv2/pvt_v2_b0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/pvt_v2/pvt_v2_b0-1c4f6683.ckpt) |
+
+
+### Notes
+
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/regnet/README.md b/configs/regnet/README.md
index faffe3e8..6bcc738b 100644
--- a/configs/regnet/README.md
+++ b/configs/regnet/README.md
@@ -2,11 +2,6 @@
> [Designing Network Design Spaces](https://arxiv.org/abs/2003.13678)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
## Introduction
In this work, the authors present a new network design paradigm that combines the advantages of manual design and NAS.
@@ -26,35 +21,10 @@ has a higher concentration of good models.[[1](#References)]

-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| -------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | --------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
-| regnet_x_800mf | 7.26 | 8 | 64 | 224x224 | O2 | 228s | 50.74 | 10090.66 | 76.11 | 93.00 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_x_800mf_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/regnet/regnet_x_800mf-68fe1cca-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| -------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | --------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------- |
-| regnet_x_800mf | 7.26 | 8 | 64 | 224x224 | O2 | 99s | 42.49 | 12049.89 | 76.04 | 92.97 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_x_800mf_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/regnet/regnet_x_800mf-617227f4.ckpt) |
-
-
-
-#### Notes
-
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
## Quick Start
@@ -106,7 +76,28 @@ with `--ckpt_path`.
```shell
python validate.py --model=regnet_x_800mf --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| -------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | --------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------------- |
+| regnet_x_800mf | 7.26 | 8 | 64 | 224x224 | O2 | 228s | 50.74 | 10090.66 | 76.11 | 93.00 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_x_800mf_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/regnet/regnet_x_800mf-68fe1cca-910v2.ckpt) |
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| -------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | --------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------- |
+| regnet_x_800mf | 7.26 | 8 | 64 | 224x224 | O2 | 99s | 42.49 | 12049.89 | 76.04 | 92.97 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/regnet/regnet_x_800mf_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/regnet/regnet_x_800mf-617227f4.ckpt) |
+
+
+### Notes
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/repmlp/README.md b/configs/repmlp/README.md
index 69ede006..048b1d90 100644
--- a/configs/repmlp/README.md
+++ b/configs/repmlp/README.md
@@ -2,11 +2,6 @@
> [RepMLPNet: Hierarchical Vision MLP with Re-parameterized Locality](https://arxiv.org/abs/2112.11081)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-
## Introduction
Compared to convolutional layers, fully-connected (FC) layers are better at modeling the long-range dependencies
@@ -29,28 +24,11 @@ segmentation.

Figure 1. RepMLP Block.[[1](#References)]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-*coming soon*
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ----------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------- |
-| repmlp_t224 | 38.30 | 8 | 128 | 224x224 | O2 | 289s | 578.23 | 1770.92 | 76.71 | 93.30 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repmlp/repmlp_t224_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repmlp/repmlp_t224-8dbedd00.ckpt) |
-
-
-
-#### Notes
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -103,6 +81,24 @@ with `--ckpt_path`.
python validate.py --model=repmlp_t224 --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+*coming soon*
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ----------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------- |
+| repmlp_t224 | 38.30 | 8 | 128 | 224x224 | O2 | 289s | 578.23 | 1770.92 | 76.71 | 93.30 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repmlp/repmlp_t224_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repmlp/repmlp_t224-8dbedd00.ckpt) |
+
+### Notes
+
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/repvgg/README.md b/configs/repvgg/README.md
index a108040c..b1b5ac7b 100644
--- a/configs/repvgg/README.md
+++ b/configs/repvgg/README.md
@@ -3,10 +3,7 @@
> [RepVGG: Making VGG-style ConvNets Great Again](https://arxiv.org/abs/2101.03697)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -27,46 +24,12 @@ previous methods.[[1](#references)]
Figure 1. Architecture of Repvgg [1]
-## Performance
-
-
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | -------- | -------- | -------- | ---------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- |
-| repvgg_a0 | 9.13 | 8 | 32 | 224x224 | O2 | 76s | 24.12 | 10613.60 | 72.29 | 90.78 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_a0_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/repvgg/repvgg_a0-b67a9f15-910v2.ckpt) |
-| repvgg_a1 | 14.12 | 8 | 32 | 224x224 | O2 | 81s | 28.29 | 9096.13 | 73.68 | 91.51 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_a1_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/repvgg/repvgg_a1-a40aa623-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ---------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
-| repvgg_a0  | 9.13      | 8     | 32         | 224x224    | O2        | 50s           | 20.58   | 12439.26 | 72.19    | 90.75    | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_a0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repvgg/repvgg_a0-6e71139d.ckpt) |
-| repvgg_a1 | 14.12 | 8 | 32 | 224x224 | O2 | 29s | 20.70 | 12367.15 | 74.19 | 91.89 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_a1_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repvgg/repvgg_a1-539513ac.ckpt) |
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -122,6 +85,31 @@ python validate.py -c configs/repvgg/repvgg_a1_ascend.yaml --data_dir /path/to/i
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | -------- | -------- | -------- | ---------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- |
+| repvgg_a0 | 9.13 | 8 | 32 | 224x224 | O2 | 76s | 24.12 | 10613.60 | 72.29 | 90.78 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_a0_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/repvgg/repvgg_a0-b67a9f15-910v2.ckpt) |
+| repvgg_a1 | 14.12 | 8 | 32 | 224x224 | O2 | 81s | 28.29 | 9096.13 | 73.68 | 91.51 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_a1_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/repvgg/repvgg_a1-a40aa623-910v2.ckpt) |
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ---------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
+| repvgg_a0  | 9.13      | 8     | 32         | 224x224    | O2        | 50s           | 20.58   | 12439.26 | 72.19    | 90.75    | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_a0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repvgg/repvgg_a0-6e71139d.ckpt) |
+| repvgg_a1 | 14.12 | 8 | 32 | 224x224 | O2 | 29s | 20.70 | 12367.15 | 74.19 | 91.89 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/repvgg/repvgg_a1_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/repvgg/repvgg_a1-539513ac.ckpt) |
+
+### Notes
+
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
diff --git a/configs/res2net/README.md b/configs/res2net/README.md
index b074c83b..db15dc4c 100644
--- a/configs/res2net/README.md
+++ b/configs/res2net/README.md
@@ -2,10 +2,7 @@
> [Res2Net: A New Multi-scale Backbone Architecture](https://arxiv.org/abs/1904.01169)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -23,35 +20,12 @@ state-of-the-art baseline methods such as ResNet-50, DLA-60 and etc.
Figure 1. Architecture of Res2Net [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------ |
-| res2net50 | 25.76 | 8 | 32 | 224x224 | O2 | 174s | 39.6 | 6464.65 | 79.33 | 94.64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/res2net/res2net_50_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/res2net/res2net50-aa758355-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------- |
-| res2net50 | 25.76 | 8 | 32 | 224x224 | O2 | 119s | 39.68 | 6451.61 | 79.35 | 94.64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/res2net/res2net_50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/res2net/res2net50-f42cf71b.ckpt) |
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -105,6 +79,35 @@ with `--ckpt_path`.
python validate.py -c configs/res2net/res2net_50_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------ |
+| res2net50 | 25.76 | 8 | 32 | 224x224 | O2 | 174s | 39.6 | 6464.65 | 79.33 | 94.64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/res2net/res2net_50_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/res2net/res2net50-aa758355-910v2.ckpt) |
+
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------ | ---------------------------------------------------------------------------------------- |
+| res2net50 | 25.76 | 8 | 32 | 224x224 | O2 | 119s | 39.68 | 6451.61 | 79.35 | 94.64 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/res2net/res2net_50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/res2net/res2net50-f42cf71b.ckpt) |
+
+
+
+### Notes
+
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/resnest/README.md b/configs/resnest/README.md
index 8072ee3a..d05a4df0 100644
--- a/configs/resnest/README.md
+++ b/configs/resnest/README.md
@@ -2,10 +2,7 @@
> [ResNeSt: Split-Attention Networks](https://arxiv.org/abs/2004.08955)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -22,28 +19,12 @@ classification.[[1](#references)]
Figure 1. Architecture of ResNeSt [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-*coming soon*
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ----------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------- |
-| resnest50 | 27.55 | 8 | 128 | 224x224 | O2 | 83s | 244.92 | 4552.73 | 80.81 | 95.16 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnest/resnest50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnest/resnest50-f2e7fc9c.ckpt) |
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -97,6 +78,28 @@ with `--ckpt_path`.
python validate.py -c configs/resnest/resnest50_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+*coming soon*
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ----------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------- |
+| resnest50 | 27.55 | 8 | 128 | 224x224 | O2 | 83s | 244.92 | 4552.73 | 80.81 | 95.16 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnest/resnest50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnest/resnest50-f2e7fc9c.ckpt) |
+
+
+
+### Notes
+
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/resnet/README.md b/configs/resnet/README.md
index 258c4568..dc168334 100644
--- a/configs/resnet/README.md
+++ b/configs/resnet/README.md
@@ -2,10 +2,7 @@
> [Deep Residual Learning for Image Recognition](https://arxiv.org/abs/1512.03385)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -21,35 +18,16 @@ networks are easier to optimize, and can gain accuracy from considerably increas
Figure 1. Architecture of ResNet [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
-| resnet50 | 25.61 | 8 | 32 | 224x224 | O2 | 77s | 31.9 | 8025.08 | 76.76 | 93.31 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnet/resnet_50_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/resnet/resnet50-f369a08d-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------- |
-| resnet50 | 25.61 | 8 | 32 | 224x224 | O2 | 43s | 31.41 | 8150.27 | 76.69 | 93.50 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnet/resnet_50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnet/resnet50-e0733ab8.ckpt) |
-
+### Notes
-#### Notes
-
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -103,6 +81,31 @@ with `--ckpt_path`.
python validate.py -c configs/resnet/resnet_18_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------------- |
+| resnet50 | 25.61 | 8 | 32 | 224x224 | O2 | 77s | 31.9 | 8025.08 | 76.76 | 93.31 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnet/resnet_50_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/resnet/resnet50-f369a08d-910v2.ckpt) |
+
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ---------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------- |
+| resnet50 | 25.61 | 8 | 32 | 224x224 | O2 | 43s | 31.41 | 8150.27 | 76.69 | 93.50 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnet/resnet_50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnet/resnet50-e0733ab8.ckpt) |
+
+
## References
diff --git a/configs/resnetv2/README.md b/configs/resnetv2/README.md
index 11ae0f9c..9714980e 100644
--- a/configs/resnetv2/README.md
+++ b/configs/resnetv2/README.md
@@ -2,10 +2,7 @@
> [Identity Mappings in Deep Residual Networks](https://arxiv.org/abs/1603.05027)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -20,35 +17,12 @@ to any other block, when using identity mappings as the skip connections and aft
Figure 1. Architecture of ResNetV2 [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ----------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | ------- | -------- | -------- | -------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- |
-| resnetv2_50 | 25.60 | 8 | 32 | 224x224 | O2 | 120s | 32.19 | 7781.16 | 77.03 | 93.29 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnetv2/resnetv2_50_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/resnetv2/resnetv2_50-a0b9f7f8-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ----------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | -------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- |
-| resnetv2_50 | 25.60 | 8 | 32 | 224x224 | O2 | 52s | 32.66 | 7838.33 | 76.90 | 93.37 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnetv2/resnetv2_50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnetv2/resnetv2_50-3c2f143b.ckpt) |
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -102,6 +76,35 @@ with `--ckpt_path`.
python validate.py -c configs/resnetv2/resnetv2_50_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ----------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | ------- | -------- | -------- | -------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------- |
+| resnetv2_50 | 25.60 | 8 | 32 | 224x224 | O2 | 120s | 32.19 | 7781.16 | 77.03 | 93.29 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnetv2/resnetv2_50_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/resnetv2/resnetv2_50-a0b9f7f8-910v2.ckpt) |
+
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ----------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | -------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------- |
+| resnetv2_50 | 25.60 | 8 | 32 | 224x224 | O2 | 52s | 32.66 | 7838.33 | 76.90 | 93.37 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnetv2/resnetv2_50_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnetv2/resnetv2_50-3c2f143b.ckpt) |
+
+
+
+### Notes
+
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/resnext/README.md b/configs/resnext/README.md
index 07702f26..3596d96a 100644
--- a/configs/resnext/README.md
+++ b/configs/resnext/README.md
@@ -2,10 +2,7 @@
> [Aggregated Residual Transformations for Deep Neural Networks](https://arxiv.org/abs/1611.05431)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -24,35 +21,12 @@ accuracy.[[1](#references)]
Figure 1. Architecture of ResNeXt [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| --------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ----------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ |
-| resnext50_32x4d | 25.10 | 8 | 32 | 224x224 | O2 | 156s | 44.61 | 5738.62 | 78.64 | 94.18 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnext/resnext50_32x4d_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/resnext/resnext50_32x4d-988f75bc-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| --------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ----------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------- |
-| resnext50_32x4d | 25.10 | 8 | 32 | 224x224 | O2 | 49s | 37.22 | 6878.02 | 78.53 | 94.10 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnext/resnext50_32x4d_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnext/resnext50_32x4d-af8aba16.ckpt) |
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -106,6 +80,35 @@ with `--ckpt_path`.
python validate.py -c configs/resnext/resnext50_32x4d_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| --------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ----------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------ |
+| resnext50_32x4d | 25.10 | 8 | 32 | 224x224 | O2 | 156s | 44.61 | 5738.62 | 78.64 | 94.18 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnext/resnext50_32x4d_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/resnext/resnext50_32x4d-988f75bc-910v2.ckpt) |
+
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| --------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ----------------------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------- |
+| resnext50_32x4d | 25.10 | 8 | 32 | 224x224 | O2 | 49s | 37.22 | 6878.02 | 78.53 | 94.10 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/resnext/resnext50_32x4d_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/resnext/resnext50_32x4d-af8aba16.ckpt) |
+
+
+
+### Notes
+
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/rexnet/README.md b/configs/rexnet/README.md
index c6e289ad..ca628036 100644
--- a/configs/rexnet/README.md
+++ b/configs/rexnet/README.md
@@ -2,10 +2,7 @@
> [ReXNet: Rethinking Channel Dimensions for Efficient Model Design](https://arxiv.org/abs/2007.00992)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -15,35 +12,16 @@ configuration that can be parameterized by a linear function of the block index
lightweight models including NAS-based models and further showed remarkable fine-tuning performances on COCO object
detection, instance segmentation, and fine-grained classifications.
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ----------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- |
-| rexnet_09 | 4.13 | 8 | 64 | 224x224 | O2 | 515s | 115.61 | 3290.28 | 76.14 | 92.96 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/rexnet/rexnet_x09_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/rexnet/rexnet_09-00223eb4-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ----------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
-| rexnet_09 | 4.13 | 8 | 64 | 224x224 | O2 | 462s | 130.10 | 3935.43 | 77.06 | 93.41 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/rexnet/rexnet_x09_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/rexnet/rexnet_09-da498331.ckpt) |
-
-#### Notes
+### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -97,6 +75,17 @@ with `--ckpt_path`.
python validate.py -c configs/rexnet/rexnet_x09_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+*coming soon*
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+*coming soon*
## References
diff --git a/configs/senet/README.md b/configs/senet/README.md
index 738cc1c1..e4e1b795 100644
--- a/configs/senet/README.md
+++ b/configs/senet/README.md
@@ -2,10 +2,7 @@
> [Squeeze-and-Excitation Networks](https://arxiv.org/abs/1709.01507)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -23,35 +20,12 @@ additional computational cost.[[1](#references)]
Figure 1. Architecture of SENet [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | -------- | -------- | -------- | ---------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- |
-| seresnet18 | 11.80 | 8 | 64 | 224x224 | O2 | 90s | 51.09 | 10021.53 | 72.05 | 90.59 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/senet/seresnet18_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/senet/seresnet18-7b971c78-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | -------- | -------- | -------- | ---------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
-| seresnet18 | 11.80 | 8 | 64 | 224x224 | O2 | 43s | 44.40 | 11531.53 | 71.81 | 90.49 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/senet/seresnet18_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/senet/seresnet18-7880643b.ckpt) |
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -105,6 +79,35 @@ with `--ckpt_path`.
python validate.py -c configs/senet/seresnet50_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | -------- | -------- | -------- | ---------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- |
+| seresnet18 | 11.80 | 8 | 64 | 224x224 | O2 | 90s | 51.09 | 10021.53 | 72.05 | 90.59 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/senet/seresnet18_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/senet/seresnet18-7b971c78-910v2.ckpt) |
+
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | -------- | -------- | -------- | ---------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
+| seresnet18 | 11.80 | 8 | 64 | 224x224 | O2 | 43s | 44.40 | 11531.53 | 71.81 | 90.49 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/senet/seresnet18_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/senet/seresnet18-7880643b.ckpt) |
+
+
+
+### Notes
+
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/shufflenetv1/README.md b/configs/shufflenetv1/README.md
index f965e5d9..32e282e4 100644
--- a/configs/shufflenetv1/README.md
+++ b/configs/shufflenetv1/README.md
@@ -2,10 +2,7 @@
> [ShuffleNet: An Extremely Efficient Convolutional Neural Network for Mobile Devices](https://arxiv.org/abs/1707.01083)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -22,35 +19,12 @@ migrating a large trained model.
Figure 1. Architecture of ShuffleNetV1 [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ------------------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | -------- | -------- | -------- | ------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------- |
-| shufflenet_v1_g3_05 | 0.73 | 8 | 64 | 224x224 | O2 | 191s | 47.77 | 10718.02 | 57.08 | 79.89 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv1/shufflenet_v1_0.5_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/shufflenet/shufflenetv1/shufflenet_v1_g3_05-56209ef3-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ------------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------ |
-| shufflenet_v1_g3_05 | 0.73 | 8 | 64 | 224x224 | O2 | 169s | 40.62 | 12604.63 | 57.05 | 79.73 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv1/shufflenet_v1_0.5_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/shufflenet/shufflenetv1/shufflenet_v1_g3_05-42cfe109.ckpt) |
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -104,6 +78,35 @@ with `--ckpt_path`.
python validate.py -c configs/shufflenetv1/shufflenet_v1_0.5_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ------------------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | -------- | -------- | -------- | ------------------------------------------------------------------------------------------------------------ | -------------------------------------------------------------------------------------------------------------------------------- |
+| shufflenet_v1_g3_05 | 0.73 | 8 | 64 | 224x224 | O2 | 191s | 47.77 | 10718.02 | 57.08 | 79.89 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv1/shufflenet_v1_0.5_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/shufflenet/shufflenetv1/shufflenet_v1_g3_05-56209ef3-910v2.ckpt) |
+
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ------------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------ |
+| shufflenet_v1_g3_05 | 0.73 | 8 | 64 | 224x224 | O2 | 169s | 40.62 | 12604.63 | 57.05 | 79.73 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv1/shufflenet_v1_0.5_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/shufflenet/shufflenetv1/shufflenet_v1_g3_05-42cfe109.ckpt) |
+
+
+
+### Notes
+
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/shufflenetv2/README.md b/configs/shufflenetv2/README.md
index 57402132..d493067e 100644
--- a/configs/shufflenetv2/README.md
+++ b/configs/shufflenetv2/README.md
@@ -2,10 +2,7 @@
> [ShuffleNet V2: Practical Guidelines for Efficient CNN Architecture Design](https://arxiv.org/abs/1807.11164)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -29,39 +26,12 @@ Therefore, based on these two principles, ShuffleNetV2 proposes four effective n
Figure 1. Architecture Design in ShuffleNetV2 [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ------------------ | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------- |
-| shufflenet_v2_x0_5 | 1.37 | 8 | 64 | 224x224 | O2 | 100s | 47.32 | 10819.95 | 60.65 | 82.26 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv2/shufflenet_v2_0.5_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/shufflenet/shufflenetv2/shufflenet_v2_x0_5-39d05bb6-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ------------------ | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------- |
-| shufflenet_v2_x0_5 | 1.37 | 8 | 64 | 224x224 | O2 | 62s | 41.87 | 12228.33 | 60.53 | 82.11 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv2/shufflenet_v2_0.5_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/shufflenet/shufflenetv2/shufflenet_v2_x0_5-8c841061.ckpt) |
-
-
-
-#### Notes
-
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-#### Notes
-- All models are trained on ImageNet-1K training set and the top-1 accuracy is reported on the validatoin set.
## Quick Start
@@ -115,6 +85,36 @@ with `--ckpt_path`.
python validate.py -c configs/shufflenetv2/shufflenet_v2_0.5_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ------------------ | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ------------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------- |
+| shufflenet_v2_x0_5 | 1.37 | 8 | 64 | 224x224 | O2 | 100s | 47.32 | 10819.95 | 60.65 | 82.26 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv2/shufflenet_v2_0.5_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/shufflenet/shufflenetv2/shufflenet_v2_x0_5-39d05bb6-910v2.ckpt) |
+
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ------------------ | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ------------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------------------------- |
+| shufflenet_v2_x0_5 | 1.37 | 8 | 64 | 224x224 | O2 | 62s | 41.87 | 12228.33 | 60.53 | 82.11 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/shufflenetv2/shufflenet_v2_0.5_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/shufflenet/shufflenetv2/shufflenet_v2_x0_5-8c841061.ckpt) |
+
+
+
+### Notes
+
+- All models are trained on the ImageNet-1K training set and the top-1 accuracy is reported on the validation set.
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/sknet/README.md b/configs/sknet/README.md
index c90f9d88..389049f1 100644
--- a/configs/sknet/README.md
+++ b/configs/sknet/README.md
@@ -2,10 +2,7 @@
> [Selective Kernel Networks](https://arxiv.org/pdf/1903.06586)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -27,35 +24,12 @@ multi-scale information from, e.g., 3×3, 5×5, 7×7 convolutional kernels insid
Figure 1. Selective Kernel Convolution.
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | -------- | -------- | -------- | ---------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- |
-| skresnet18 | 11.97 | 8 | 64 | 224x224 | O2 | 134s | 49.83 | 10274.93 | 72.85 | 90.83 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/sknet/skresnet18_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/sknet/skresnet18-9d8b1afc-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | -------- | -------- | -------- | ---------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
-| skresnet18 | 11.97 | 8 | 64 | 224x224 | O2 | 60s | 45.84 | 11169.28 | 73.09 | 91.20 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/sknet/skresnet18_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/sknet/skresnet18-868228e5.ckpt) |
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -109,6 +83,36 @@ with `--ckpt_path`.
python validate.py -c configs/sknet/skresnext50_32x4d_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | -------- | -------- | -------- | ---------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------- |
+| skresnet18 | 11.97 | 8 | 64 | 224x224 | O2 | 134s | 49.83 | 10274.93 | 72.85 | 90.83 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/sknet/skresnet18_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/sknet/skresnet18-9d8b1afc-910v2.ckpt) |
+
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | -------- | -------- | -------- | ---------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------- |
+| skresnet18 | 11.97 | 8 | 64 | 224x224 | O2 | 60s | 45.84 | 11169.28 | 73.09 | 91.20 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/sknet/skresnet18_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/sknet/skresnet18-868228e5.ckpt) |
+
+
+
+### Notes
+
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
diff --git a/configs/squeezenet/README.md b/configs/squeezenet/README.md
index 6995d411..3ab4ffb4 100644
--- a/configs/squeezenet/README.md
+++ b/configs/squeezenet/README.md
@@ -2,10 +2,7 @@
> [SqueezeNet: AlexNet-level accuracy with 50x fewer parameters and <0.5MB model size](https://arxiv.org/abs/1602.07360)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -24,35 +21,12 @@ Middle: SqueezeNet with simple bypass; Right: SqueezeNet with complex bypass.
Figure 1. Architecture of SqueezeNet [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ------------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | -------- | -------- | -------- | ------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- |
-| squeezenet1_0 | 1.25 | 8 | 32 | 224x224 | O2 | 64s | 23.48 | 10902.90 | 58.75 | 80.76 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/squeezenet/squeezenet_1.0_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/squeezenet/squeezenet1_0-24010b28-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
-| squeezenet1_0 | 1.25 | 8 | 32 | 224x224 | O2 | 45s | 22.36 | 11449.02 | 58.67 | 80.61 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/squeezenet/squeezenet_1.0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/squeezenet/squeezenet1_0-eb911778.ckpt) |
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -106,6 +80,35 @@ with `--ckpt_path`.
python validate.py -c configs/squeezenet/squeezenet_1.0_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ------------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | -------- | -------- | -------- | ------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------- |
+| squeezenet1_0 | 1.25 | 8 | 32 | 224x224 | O2 | 64s | 23.48 | 10902.90 | 58.75 | 80.76 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/squeezenet/squeezenet_1.0_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/squeezenet/squeezenet1_0-24010b28-910v2.ckpt) |
+
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | -------- | -------- | -------- | ------------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------- |
+| squeezenet1_0 | 1.25 | 8 | 32 | 224x224 | O2 | 45s | 22.36 | 11449.02 | 58.67 | 80.61 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/squeezenet/squeezenet_1.0_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/squeezenet/squeezenet1_0-eb911778.ckpt) |
+
+
+
+### Notes
+
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/swintransformer/README.md b/configs/swintransformer/README.md
index 85362aa9..910d4d21 100644
--- a/configs/swintransformer/README.md
+++ b/configs/swintransformer/README.md
@@ -3,10 +3,7 @@
> [Swin Transformer: Hierarchical Vision Transformer using Shifted Windows](https://arxiv.org/abs/2103.14030)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -30,44 +27,12 @@ on ImageNet-1K dataset compared with ViT and ResNet.[[1](#references)]
Figure 1. Architecture of Swin Transformer [1]
-## Performance
-
-
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------- |
-| swin_tiny | 33.38 | 8 | 256 | 224x224 | O2 | 266s | 466.6 | 4389.20 | 80.90 | 94.90 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/swintransformer/swin_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/swin/swin_tiny-72b3c5e6-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------- |
-| swin_tiny | 33.38 | 8 | 256 | 224x224 | O2 | 226s | 454.49 | 4506.15 | 80.82 | 94.80 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/swintransformer/swin_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/swin/swin_tiny-0ff2f96d.ckpt) |
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -122,6 +87,36 @@ with `--ckpt_path`.
python validate.py -c configs/swintransformer/swin_tiny_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------- |
+| swin_tiny | 33.38 | 8 | 256 | 224x224 | O2 | 266s | 466.6 | 4389.20 | 80.90 | 94.90 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/swintransformer/swin_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/swin/swin_tiny-72b3c5e6-910v2.ckpt) |
+
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------- |
+| swin_tiny | 33.38 | 8 | 256 | 224x224 | O2 | 226s | 454.49 | 4506.15 | 80.82 | 94.80 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/swintransformer/swin_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/swin/swin_tiny-0ff2f96d.ckpt) |
+
+
+
+### Notes
+
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/swintransformerv2/README.md b/configs/swintransformerv2/README.md
index c92f448c..672911eb 100644
--- a/configs/swintransformerv2/README.md
+++ b/configs/swintransformerv2/README.md
@@ -2,10 +2,7 @@
> [Swin Transformer V2: Scaling Up Capacity and Resolution](https://arxiv.org/abs/2111.09883)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -25,35 +22,12 @@ semantic segmentation, and Kinetics-400 video action classification.[[1](#refere
Figure 1. Architecture of Swin Transformer V2 [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ------------------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- |
-| swinv2_tiny_window8 | 28.78 | 8 | 128 | 256x256 | O2 | 385s | 335.18 | 3055.07 | 81.38 | 95.46 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/swintransformerv2/swinv2_tiny_window8_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/swinv2/swinv2_tiny_window8-70c5e903-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ------------------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------- |
-| swinv2_tiny_window8 | 28.78 | 8 | 128 | 256x256 | O2 | 273s | 317.19 | 3228.35 | 81.42 | 95.43 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/swintransformerv2/swinv2_tiny_window8_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/swinv2/swinv2_tiny_window8-3ef8b787.ckpt) |
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -107,6 +81,35 @@ with `--ckpt_path`.
python validate.py -c configs/swintransformerv2/swinv2_tiny_window8_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ------------------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- |
+| swinv2_tiny_window8 | 28.78 | 8 | 128 | 256x256 | O2 | 385s | 335.18 | 3055.07 | 81.38 | 95.46 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/swintransformerv2/swinv2_tiny_window8_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/swinv2/swinv2_tiny_window8-70c5e903-910v2.ckpt) |
+
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ------------------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------- |
+| swinv2_tiny_window8 | 28.78 | 8 | 128 | 256x256 | O2 | 273s | 317.19 | 3228.35 | 81.42 | 95.43 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/swintransformerv2/swinv2_tiny_window8_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/swinv2/swinv2_tiny_window8-3ef8b787.ckpt) |
+
+
+
+### Notes
+
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/vgg/README.md b/configs/vgg/README.md
index fe7c155a..190e010e 100644
--- a/configs/vgg/README.md
+++ b/configs/vgg/README.md
@@ -3,10 +3,7 @@
> [Very Deep Convolutional Networks for Large-Scale Image Recognition](https://arxiv.org/abs/1409.1556)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -26,46 +23,12 @@ methods such as GoogleLeNet and AlexNet on ImageNet-1K dataset.[[1](#references)
Figure 1. Architecture of VGG [1]
-## Performance
-
-
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------- |
-| vgg13 | 133.04 | 8 | 32 | 224x224 | O2 | 41s | 30.52 | 8387.94 | 72.81 | 91.02 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vgg/vgg13_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/vgg/vgg13-7756f33c-910v2.ckpt) |
-| vgg19 | 143.66 | 8 | 32 | 224x224 | O2 | 53s | 39.17 | 6535.61 | 75.24 | 92.55 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vgg/vgg19_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/vgg/vgg19-5104d1ea-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------- |
-| vgg13 | 133.04 | 8 | 32 | 224x224 | O2 | 23s | 55.20 | 4637.68 | 72.87 | 91.02 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vgg/vgg13_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/vgg/vgg13-da805e6e.ckpt) |
-| vgg19 | 143.66 | 8 | 32 | 224x224 | O2 | 22s | 67.42 | 3797.09 | 75.21 | 92.56 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vgg/vgg19_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/vgg/vgg19-bedee7b6.ckpt) |
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -119,6 +82,37 @@ with `--ckpt_path`.
python validate.py -c configs/vgg/vgg16_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------- | ---------------------------------------------------------------------------------------------- |
+| vgg13 | 133.04 | 8 | 32 | 224x224 | O2 | 41s | 30.52 | 8387.94 | 72.81 | 91.02 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vgg/vgg13_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/vgg/vgg13-7756f33c-910v2.ckpt) |
+| vgg19 | 143.66 | 8 | 32 | 224x224 | O2 | 53s | 39.17 | 6535.61 | 75.24 | 92.55 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vgg/vgg19_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/vgg/vgg19-5104d1ea-910v2.ckpt) |
+
+
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| ---------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------- |
+| vgg13 | 133.04 | 8 | 32 | 224x224 | O2 | 23s | 55.20 | 4637.68 | 72.87 | 91.02 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vgg/vgg13_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/vgg/vgg13-da805e6e.ckpt) |
+| vgg19 | 143.66 | 8 | 32 | 224x224 | O2 | 22s | 67.42 | 3797.09 | 75.21 | 92.56 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/vgg/vgg19_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/vgg/vgg19-bedee7b6.ckpt) |
+
+
+
+### Notes
+
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/visformer/README.md b/configs/visformer/README.md
index 532556be..241b3df1 100644
--- a/configs/visformer/README.md
+++ b/configs/visformer/README.md
@@ -2,10 +2,7 @@
> [Visformer: The Vision-friendly Transformer](https://arxiv.org/abs/2104.12533)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -23,37 +20,12 @@ BatchNorm to patch embedding modules as in CNNs. [[2](#references)]
Figure 1. Network Configuration of Visformer [1]
-## Performance
-
-## ImageNet-1k
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| -------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------- |
-| visformer_tiny | 10.33 | 8 | 128 | 224x224 | O2 | 169s | 201.14 | 5090.98 | 78.40 | 94.30 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/visformer/visformer_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/visformer/visformer_tiny-df995ba4-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| -------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------- |
-| visformer_tiny | 10.33 | 8 | 128 | 224x224 | O2 | 137s | 217.92 | 4698.97 | 78.28 | 94.15 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/visformer/visformer_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/visformer/visformer_tiny-daee0322.ckpt) |
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -105,6 +77,27 @@ with `--ckpt_path`.
python validate.py -c configs/visformer/visformer_tiny_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+*coming soon*
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+
+| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
+| -------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------ | ----------------------------------------------------------------------------------------------- |
+| visformer_tiny | 10.33 | 8 | 128 | 224x224 | O2 | 137s | 217.92 | 4698.97 | 78.28 | 94.15 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/visformer/visformer_tiny_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/visformer/visformer_tiny-daee0322.ckpt) |
+
+
+
+### Notes
+
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
diff --git a/configs/vit/README.md b/configs/vit/README.md
index b87ce777..dd1002cb 100644
--- a/configs/vit/README.md
+++ b/configs/vit/README.md
@@ -4,10 +4,7 @@
> [ An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale](https://arxiv.org/abs/2010.11929)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -33,30 +30,12 @@ fewer computational resources. [[2](#references)]
Figure 1. Architecture of ViT [1]
-## Performance
-
-
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-*coming soon*
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-*coming soon*
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -117,6 +96,21 @@ with `--ckpt_path`.
python validate.py -c configs/vit/vit_b32_224_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+*coming soon*
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+*coming soon*
+
+### Notes
+
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/configs/volo/README.md b/configs/volo/README.md
index 435761a6..b0799b88 100644
--- a/configs/volo/README.md
+++ b/configs/volo/README.md
@@ -2,10 +2,7 @@
> [VOLO: Vision Outlooker for Visual Recognition ](https://arxiv.org/abs/2106.13112)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -24,27 +21,12 @@ without using any extra training data.
Figure 1. Illustration of outlook attention. [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-performance tested on ascend 910*(8p) with graph mode
-
-*coming soon*
-
-performance tested on ascend 910(8p) with graph mode
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- |---------------| ------- | ------- | -------- | -------- | ------------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------------------------- |
-| volo_d1 | 27 | 8 | 128 | 224x224 | O2 | 275s | 270.79 | 3781.53 | 82.59 | 95.99 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/visformer/visformer_tiny_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/visformer/visformer_tiny-df995ba4-910v2.ckpt) |
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -98,6 +80,22 @@ with `--ckpt_path`.
python validate.py -c configs/volo/volo_d1_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* (8p) with mindspore 2.3.1 graph mode.
+
+*coming soon*
+
+Experiments are tested on ascend 910 (8p) with mindspore 2.3.1 graph mode.
+
+*coming soon*
+
+### Notes
+
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
diff --git a/configs/xception/README.md b/configs/xception/README.md
index 2e5a2ded..3fa097b4 100644
--- a/configs/xception/README.md
+++ b/configs/xception/README.md
@@ -2,10 +2,7 @@
> [Xception: Deep Learning with Depthwise Separable Convolutions](https://arxiv.org/pdf/1610.02357.pdf)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -25,28 +22,12 @@ module.[[1](#references)]
Figure 1. Architecture of Xception [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-*coming soon*
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| ---------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | ----------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------- |
-| xception | 22.91 | 8 | 32 | 299x299 | O2 | 161s | 96.78 | 2645.17 | 79.01 | 94.25 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/xception/xception_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/xception/xception-2c1e711df.ckpt) |
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -100,6 +81,23 @@ with `--ckpt_path`.
python validate.py -c configs/xception/xception_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+*coming soon*
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+*coming soon*
+
+
+### Notes
+
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
+
## References
diff --git a/configs/xcit/README.md b/configs/xcit/README.md
index a42647e8..12e154b3 100644
--- a/configs/xcit/README.md
+++ b/configs/xcit/README.md
@@ -2,10 +2,7 @@
> [XCiT: Cross-Covariance Image Transformers](https://arxiv.org/abs/2106.09681)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Introduction
@@ -22,35 +19,12 @@ transformers with the scalability of convolutional architectures.
Figure 1. Architecture of XCiT [1]
-## Performance
-
-Our reproduced model performance on ImageNet-1K is reported as follows.
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| -------------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------- |
-| xcit_tiny_12_p16_224 | 7.00 | 8 | 128 | 224x224 | O2 | 330s | 229.25 | 4466.74 | 77.27 | 93.56 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/xcit/xcit_tiny_12_p16_ascend.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/xcit/xcit_tiny_12_p16_224-bd90776e-910v2.ckpt) |
-
-
-
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
-
-
-
-
-| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | acc@top1 | acc@top5 | recipe | weight |
-| -------------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | -------- | -------- | --------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------ |
-| xcit_tiny_12_p16_224 | 7.00 | 8 | 128 | 224x224 | O2 | 382s | 252.98 | 4047.75 | 77.67 | 93.79 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/configs/xcit/xcit_tiny_12_p16_ascend.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/xcit/xcit_tiny_12_p16_224-1b1c9301.ckpt) |
-
-
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
-#### Notes
-- Top-1 and Top-5: Accuracy reported on the validation set of ImageNet-1K.
## Quick Start
@@ -101,6 +75,21 @@ with `--ckpt_path`.
```
python validate.py -c configs/xcit/xcit_tiny_12_p16_224_ascend.yaml --data_dir /path/to/imagenet --ckpt_path /path/to/ckpt
```
+## Performance
+
+Our reproduced model performance on ImageNet-1K is reported as follows.
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
+
+*coming soon*
+
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
+*coming soon*
+
+### Notes
+
+- top-1 and top-5: Accuracy reported on the validation set of ImageNet-1K.
## References
diff --git a/examples/det/ssd/README.md b/examples/det/ssd/README.md
index fa98492f..c0dac5df 100644
--- a/examples/det/ssd/README.md
+++ b/examples/det/ssd/README.md
@@ -2,10 +2,6 @@
> [SSD: Single Shot MultiBox Detector](https://arxiv.org/abs/1512.02325)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
## Introduction
@@ -20,6 +16,11 @@ SSD is an single-staged object detector. It discretizes the output space of boun
In this example, by leveraging [the multi-scale feature extraction of MindCV](https://github.com/mindspore-lab/mindcv/blob/main/docs/en/how_to_guides/feature_extraction.md), we demonstrate that using backbones from MindCV greatly simplifies the implementation of SSD.
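To make this concrete, the following is a minimal sketch of pulling multi-scale feature maps out of a MindCV backbone, assuming the `features_only` flag described in the linked feature-extraction guide and using the `mobilenet_v2_100` backbone name referenced by `ssd_mobilenetv2`:

```
import numpy as np
import mindspore as ms
from mindcv.models import create_model

# Create the backbone in feature-extraction mode so that it returns intermediate
# feature maps instead of classification logits (assumes `features_only` is
# supported as described in the feature-extraction guide).
backbone = create_model("mobilenet_v2_100", features_only=True)
backbone.set_train(False)

# Dummy 300x300 input, matching the ssd_mobilenetv2 input resolution.
x = ms.Tensor(np.random.rand(1, 3, 300, 300), ms.float32)

# Each returned feature map can feed one of the SSD detection heads.
for i, feat in enumerate(backbone(x)):
    print(i, feat.shape)
```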
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Configurations
Here, we provide three configurations of SSD.
@@ -68,7 +69,7 @@ Specify the path of the preprocessed dataset at keyword `data_dir` in the config
|:----------------:|:----------------:|:----------------:|
| [backbone weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv2/mobilenet_v2_100-d5532038.ckpt) | [backbone weights](https://download.mindspore.cn/toolkits/mindcv/resnet/resnet50-e0733ab8.ckpt) | [backbone weights](https://download.mindspore.cn/toolkits/mindcv/mobilenet/mobilenetv3/mobilenet_v3_large_100-1279ad5f.ckpt) |
-
+
### Train
@@ -130,28 +131,22 @@ cd mindcv # change directory to the root of MindCV repository
python examples/det/ssd/eval.py --config examples/det/ssd/ssd_mobilenetv2.yaml
```
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
## Performance
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
*coming soon*
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
-
| model name | params(M) | cards | batch size | resolution | jit level | graph compile | ms/step | img/s | mAP | recipe | weight |
-| ---------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | ---- | ------------------------------------------------------------------------------------------------ | ------------------------------------------------------------------------------------------- |
+| ---------------- | --------- | ----- | ---------- | ---------- | --------- | ------------- | ------- | ------- | ---- | ------------------------------------------------------------------------------------------------ |---------------------------------------------------------------------------------------------|
| ssd_mobilenetv2 | 4.45 | 8 | 32 | 300x300 | O2 | 202s | 60.14 | 4256.73 | 23.2 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/examples/det/ssd/ssd_mobilenetv2.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/ssd/ssd_mobilenetv2-5bbd7411.ckpt) |
| ssd_resnet50_fpn | 33.37 | 8 | 32 | 640x640 | O2 | 130s | 269.82 | 948.78 | 38.3 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/examples/det/ssd/ssd_resnet50_fpn.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/ssd/ssd_resnet50_fpn-ac87ddac.ckpt) |
-| ssd_mobilenetv3 | 4.88 | 8 | 32 | 300x300 | O2 | 245s | 59.91 | 4273.08 | 23.8 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/examples/det/ssd/ssd_mobilenetv3.yaml) | [weights](https://download.mindspore.cn/toolkits/mindcv/ssd/ssd_mobilenetv3-53d9f6e9.ckpt) |
-
## References
diff --git a/examples/seg/deeplabv3/README.md b/examples/seg/deeplabv3/README.md
index 723880a0..437a1352 100644
--- a/examples/seg/deeplabv3/README.md
+++ b/examples/seg/deeplabv3/README.md
@@ -4,10 +4,6 @@
>
> DeeplabV3+:[Encoder-Decoder with Atrous Separable Convolution for Semantic Image Segmentation](https://arxiv.org/abs/1802.02611)
-## Requirements
-| mindspore | ascend driver | firmware | cann toolkit/kernel |
-| :-------: | :-----------: | :---------: | :-----------------: |
-| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
## Introduction
@@ -34,6 +30,11 @@
This example provides implementations of DeepLabV3 and DeepLabV3+ using backbones from MindCV. More details about the feature extraction of MindCV are in [this tutorial](https://github.com/mindspore-lab/mindcv/blob/main/docs/en/how_to_guides/feature_extraction.md). Note that the ResNet in DeepLab contains atrous convolutions with different rates; `dilated_resnet.py` is provided as a modification of the ResNet from MindCV, with atrous convolutions in blocks 3-4.
+## Requirements
+| mindspore | ascend driver | firmware | cann toolkit/kernel |
+| :-------: | :-----------: | :---------: | :-----------------: |
+| 2.3.1 | 24.1.RC2 | 7.3.0.1.231 | 8.0.RC2.beta1 |
+
## Quick Start
### Preparation
@@ -148,9 +149,9 @@ python examples/seg/deeplabv3/eval.py --config examples/seg/deeplabv3/config/dee
```
## Performance
-- Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode
+Experiments are tested on ascend 910 with mindspore 2.3.1 graph mode.
+
-
| model name | params(M) | cards | batch size | jit level | graph compile | ms/step | img/s | mIoU | recipe | weight |
| ----------------- | --------- | ----- | ---------- | --------- | ------------- | ------- | ------ | ------------------- | -------------------------------------------------------------------------------------------------------------------------------- | ----------- |
@@ -159,9 +160,9 @@ python examples/seg/deeplabv3/eval.py --config examples/seg/deeplabv3/config/dee
| deeplabv3plus_s16 | 59.45 | 8 | 32 | O2 | 207s | 312.15 | 820.12 | 78.99 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/examples/seg/deeplabv3/config/deeplabv3plus_s16_dilated_resnet101.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/deeplabv3/deeplabv3plus-s16-best.ckpt) |
| deeplabv3plus_s8 | 59.45 | 8 | 16 | O2 | 170s | 403.43 | 217.28 | 80.31\|80.99\|81.10 | [yaml](https://github.com/mindspore-lab/mindcv/blob/main/examples/seg/deeplabv3/config/deeplabv3plus_s8_dilated_resnet101.yaml) | [weights](https://download-mindspore.osinfra.cn/toolkits/mindcv/deeplabv3/deeplabv3plus-s8-best.ckpt) |
-
-- Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode
+
+Experiments are tested on ascend 910* with mindspore 2.3.1 graph mode.
*coming soon*