Add output_names argument for ONNX export with dynamic axes #3456

Merged
merged 3 commits into ultralytics:develop on Jun 4, 2021

Conversation

@SamSamhuns (Contributor) commented Jun 4, 2021

Pull request addressing bug #3444 to fix the dynamic output shape issue in ONNX export.

🛠️ PR Summary

Made with ❤️ by Ultralytics Actions

🌟 Summary

Enhanced ONNX export functionality in YOLOv5.

📊 Key Changes

  • 🛠 Refactored ONNX export arguments, separating input_names and output_names.
  • 📝 Updated dynamic_axes definitions for inputs and outputs during ONNX export.

🎯 Purpose & Impact

  • 🎨 Provides clearer code structure around names of inputs and outputs, easing understanding and maintenance.
  • 🌉 Ensures better compatibility and flexibility with ONNX by explicitly naming inputs and outputs and adjusting dynamic axes, facilitating easier integration with ONNX-compatible tools.
  • 💼 This may benefit users requiring custom input/output handling when exporting their models to ONNX, improving the model's interoperability and deployment potential.

Samridha Shrestha added 2 commits June 4, 2021 14:48
Add output_names and dynamic_axes names for all outputs in torch.onnx.export. The first four outputs of the model will have names output0, output1, output2, output3
@github-actions bot left a comment

👋 Hello @SamSamhuns, thank you for submitting a 🚀 PR! To allow your work to be integrated as seamlessly as possible, we advise you to:

  • ✅ Verify your PR is up-to-date with origin/master. If your PR is behind origin/master, an automatic GitHub Actions rebase may be attempted by including the /rebase command in a comment body, or by running the following code, replacing 'feature' with the name of your local branch:
git remote add upstream https://github.com/ultralytics/yolov5.git
git fetch upstream
git checkout feature  # <----- replace 'feature' with local branch name
git rebase upstream/develop
git push -u origin -f
  • ✅ Verify all Continuous Integration (CI) checks are passing.
  • ✅ Reduce changes to the absolute minimum required for your bug fix or feature addition. "It is not daily increase but daily decrease, hack away the unessential. The closer to the source, the less wastage there is." -Bruce Lee

@glenn-jocher (Member)

@SamSamhuns thanks for the PR! It looks like L102 serves no purpose?

@SamSamhuns (Contributor, Author)

If you remove L102, the outputs will no longer have dynamic shapes. You can try commenting out L102 and exporting with --dynamic. The output_names parameter matches the names given in the dynamic_axes parameter.

Output of python3 -c "import onnx; m = onnx.load('yolov5s.onnx'); print(m.graph.input); print(); print(m.graph.output)" after commenting out L102

[name: "images"
type {
  tensor_type {
    elem_type: 1
    shape {
      dim {
        dim_param: "batch"
      }
      dim {
        dim_value: 3
      }
      dim {
        dim_param: "height"
      }
      dim {
        dim_param: "width"
      }
    }
  }
}
]

[name: "672"
type {
  tensor_type {
    elem_type: 1
    shape {
      dim {
        dim_value: 1
      }
      dim {
        dim_value: 25200
      }
      dim {
        dim_value: 85
      }
    }
  }
} ... and so on for the other outputs
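For a quick check of which axes ended up dynamic, a minimal sketch along these lines can be used (assuming a yolov5s.onnx export in the working directory); dynamic axes show up as named dim_param entries and fixed axes as integer dim_value entries:

# Minimal sketch: list each ONNX graph output and its dims.
# Assumes 'yolov5s.onnx' was produced by models/export.py in the current directory.
import onnx

model = onnx.load('yolov5s.onnx')
for output in model.graph.output:
    # dim_param is set (a name like 'batch') for dynamic axes,
    # dim_value (an integer like 25200) for fixed axes.
    dims = [d.dim_param if d.dim_param else d.dim_value
            for d in output.type.tensor_type.shape.dim]
    print(output.name, dims)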

@glenn-jocher (Member)

@SamSamhuns ah perfect, now I understand! I didn't see they were arguments since I was only looking at the diff before.

@glenn-jocher (Member)

@SamSamhuns if we pass the y and x keys in the output dictionaries, does that break things?

{0: 'batch', 2: 'y', 3: 'x'}

@SamSamhuns (Contributor, Author)

Unfortunately, if you pass in {0: 'batch', 2: 'y', 3: 'x'} for, say, the first output, you'll get a weird output shape of ["batch", 25200, "y"].

As I see it, the non-batch axes of an output (all axes after the first one) need not be stated in the dynamic_axes parameter of torch.onnx.export. You could use {0: 'batch', 1: 'y', 2: 'x'} for the first output, which would give an output shape of ["batch", "y", "x"], but this seems unnecessary, and it would again not match the pattern for the other outputs, which have 5 axes (versus three for the first).

I'd expect the shapes should be the following for the respective export commands:

  1. python models/export.py
    INPUT = [1,3,640,640]
    OUTPUT = [[1, 25200, 85], [1,3,80,80,85], [1,3,40,40,85], [1,3,20,20,85]]
  2. python models/export.py --dynamic
    INPUT = [batch,3,height,width]
    OUTPUT = [[batch, 25200, 85], [batch,3,80,80,85], [batch,3,40,40,85], [batch,3,20,20,85]]

So if we add {0: 'batch', 2: 'y', 3: 'x'}, the shape would be OUTPUT = [[batch, 25200, y], [batch,3,80,80,85], [batch,3,40,40,85], [batch,3,20,20,85]]

And if we add {0: 'batch', 1: 'y', 2: 'x'}, the shape would be OUTPUT = [[batch, y, x], [batch,3,80,80,85], [batch,3,40,40,85], [batch,3,20,20,85]]

If you really need the x and y names for the axes of the output shape somewhere later on, you can still use this:

torch.onnx.export(model, img, f, verbose=False, opset_version=opt.opset_version, input_names=['images'],
                              training=torch.onnx.TrainingMode.TRAINING if opt.train else torch.onnx.TrainingMode.EVAL,
                              do_constant_folding=not opt.train,
                              output_names=['output0', 'output1', 'output2', 'output3'],
                              dynamic_axes={'images': {0: 'batch', 2: 'height', 3: 'width'},  # size(1,3,640,640)
                                            'output0': {0: 'batch', 1: 'y', 2: 'x'},
                                            'output1': {0: 'batch'},
                                            'output2': {0: 'batch'},
                                            'output3': {0: 'batch'}} if opt.dynamic else None)

@glenn-jocher (Member) commented Jun 4, 2021

@SamSamhuns OK, I see. I think the reason the height, width, y, x axes are labelled as dynamic is that some workflows involve running inference at different input image sizes (in addition to different batch sizes). I think the reason the first output is the only one labelled as such is that it is the only one that should be used: it is the three concatenated outputs with the grid applied. Except now that I look at it, y and x don't make sense to apply to the first output; the only axes that should be dynamic on input shape change are dims 0 and 1, i.e. batch and anchors.

For example:

    INPUT = [1,3,640,640]
    OUTPUT = [[1, 25200, 85], [1,3,80,80,85], [1,3,40,40,85], [1,3,20,20,85]]

    INPUT = [1,3,320,480]
    OUTPUT = [[1, 9450, 85], [1,3,40,60,85], [1,3,20,30,85], [1,3,10,15,85]]
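For reference, a minimal sketch of where these anchor counts come from, assuming the standard YOLOv5 strides of 8, 16 and 32 with 3 anchors per grid cell:

# Minimal sketch (assumption: strides 8/16/32 and 3 anchors per grid cell, as in YOLOv5s).
def anchor_count(height, width, strides=(8, 16, 32), anchors_per_cell=3):
    # Each detection head predicts anchors_per_cell boxes per grid cell,
    # and each head's grid is the input downsampled by its stride.
    return sum((height // s) * (width // s) * anchors_per_cell for s in strides)

print(anchor_count(640, 640))  # 25200 -> matches OUTPUT[0] shape [1, 25200, 85]
print(anchor_count(320, 480))  # 9450  -> matches OUTPUT[0] shape [1, 9450, 85]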

@glenn-jocher (Member) commented Jun 4, 2021

@SamSamhuns could we do this to allow for dynamic batch and image height/width?

torch.onnx.export(model, img, f, verbose=False, opset_version=opt.opset_version, input_names=['images'],
                              training=torch.onnx.TrainingMode.TRAINING if opt.train else torch.onnx.TrainingMode.EVAL,
                              do_constant_folding=not opt.train,
                              output_names=['output0', 'output1', 'output2', 'output3'],
                              dynamic_axes={'images': {0: 'batch', 2: 'height', 3: 'width'},  # shape(1,3,640,640)
                                            'output0': {0: 'batch', 1: 'anchors'},  # shape(1,25200,85)
                                            'output1': {0: 'batch', 2: 'y', 3: 'x'},  # shape(1,3,80,80,85)
                                            'output2': {0: 'batch', 2: 'y', 3: 'x'},  # shape(1,3,40,40,85) ... etc.
                                            'output3': {0: 'batch', 2: 'y', 3: 'x'},} if opt.dynamic else None)

@SamSamhuns (Contributor, Author)

Hmm, I guess if you want users to only use the first output, then the export only needs to contain dynamic_axes for the first output:

torch.onnx.export(model, img, f, verbose=False, opset_version=opt.opset_version, input_names=['images'],
                              training=torch.onnx.TrainingMode.TRAINING if opt.train else torch.onnx.TrainingMode.EVAL,
                              do_constant_folding=not opt.train,
                              output_names=['output'],
                              dynamic_axes={'images': {0: 'batch', 2: 'height', 3: 'width'},  # size(1,3,640,640)
                                            'output': {0: 'batch', 1: 'x'}} if opt.dynamic else None)

The x should be there though, since the 2nd dim of the first output changes with the input shape as you show. The shapes for the dynamic ONNX model will then be as follows (the three other outputs do not have dynamic output shapes):

# for img-size 640 640

 INPUT = [batch,3,height,width]
 OUTPUT = [[batch, x, 85], [1,3,80,80,85], [1,3,40,40,85], [1,3,20,20,85]]

@glenn-jocher glenn-jocher changed the title Add output_names and dynamic_axes names for onnx export for dynamic output shape Add output_names argument for ONNX export with dynamic axes Jun 4, 2021
@glenn-jocher (Member) commented Jun 4, 2021

@SamSamhuns hey, perfect! I'll go ahead and update your PR with this then. Typically we call this 1D vector of output points 'anchors', as in 'the model outputs 25k anchors'.

            torch.onnx.export(model, img, f, verbose=False, opset_version=opt.opset_version,
                              training=torch.onnx.TrainingMode.TRAINING if opt.train else torch.onnx.TrainingMode.EVAL,
                              do_constant_folding=not opt.train,
                              input_names=['images'],
                              output_names=['output'],
                              dynamic_axes={'images': {0: 'batch', 2: 'height', 3: 'width'},  # shape(1,3,640,640)
                                            'output': {0: 'batch', 1: 'anchors'}  # shape(1,25200,85)
                                            } if opt.dynamic else None)
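As a usage sketch (an illustration, not part of the PR): a --dynamic export produced by the call above should then accept different batch sizes and image sizes at inference time, for example with onnxruntime (the package and the dummy input shapes here are assumptions):

# Minimal sketch: run a --dynamic ONNX export at two different input shapes.
# Assumes onnxruntime is installed and 'yolov5s.onnx' was exported with --dynamic.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession('yolov5s.onnx')
for shape in [(1, 3, 640, 640), (2, 3, 320, 480)]:   # batch, channels, height, width
    img = np.zeros(shape, dtype=np.float32)           # dummy input in place of real images
    (pred,) = session.run(None, {'images': img})      # single 'output' head after this PR
    print(shape, '->', pred.shape)                     # e.g. (1, 25200, 85) and (2, 9450, 85)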

@glenn-jocher (Member)

@SamSamhuns all CI tests show ONNX export working correctly, and locally on macOS I was able to use --dynamic with no problems as well. The result looks like this:
[Screenshot: 2021-06-04 at 22:01:01]

@glenn-jocher glenn-jocher merged commit 044daaf into ultralytics:develop Jun 4, 2021
@glenn-jocher (Member)

@SamSamhuns PR is merged! Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

glenn-jocher added a commit that referenced this pull request Jun 8, 2021
* update ci-testing.yml (#3322)

* update ci-testing.yml

* update greetings.yml

* bring back os matrix

* update ci-testing.yml (#3322)

* update ci-testing.yml

* update greetings.yml

* bring back os matrix

* Enable direct `--weights URL` definition (#3373)

* Enable direct `--weights URL` definition

@kalenmike this PR will enable direct --weights URL definition. Example use case:
```
python train.py --weights https://storage.googleapis.com/bucket/dir/model.pt
```

* cleanup

* bug fixes

* weights = attempt_download(weights)

* Update experimental.py

* Update hubconf.py

* return bug fix

* comment mirror

* min_bytes

* Update tutorial.ipynb (#3368)

add Open in Kaggle badge

* `cv2.imread(img, -1)` for IMREAD_UNCHANGED (#3379)

* Update datasets.py

* comment

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>

* COCO evolution fix (#3388)

* COCO evolution fix

* cleanup

* update print

* print fix

* Create `is_pip()` function (#3391)

Returns `True` if file is part of pip package. Useful for contextual behavior modification.

```python
def is_pip():
    # Is file in a pip package?
    return 'site-packages' in Path(__file__).absolute().parts
```

* Revert "`cv2.imread(img, -1)` for IMREAD_UNCHANGED (#3379)" (#3395)

This reverts commit 21a9607.

* Update FLOPs description (#3422)

* Update README.md

* Changing FLOPS to FLOPs.

Co-authored-by: BuildTools <unconfigured@null.spigotmc.org>

* Parse URL authentication (#3424)

* Parse URL authentication

* urllib.parse.unquote()

* improved error handling

* improved error handling

* remove %3F

* update check_file()

* Add FLOPs title to table (#3453)

* Suppress jit trace warning + graph once (#3454)

* Suppress jit trace warning + graph once

Suppress harmless jit trace warning on TensorBoard add_graph call. Also fix multiple add_graph() calls bug, now only on batch 0.

* Update train.py

* Update MixUp augmentation `alpha=beta=32.0` (#3455)

Per VOC empirical results #3380 (comment) by @developer0hye

* Add `timeout()` class (#3460)

* Add `timeout()` class

* rearrange order

* Faster HSV augmentation (#3462)

remove datatype conversion process that can be skipped

* Add `check_git_status()` 5 second timeout (#3464)

* Add check_git_status() 5 second timeout

This should prevent the SSH Git bug that we were discussing @kalenmike

* cleanup

* replace timeout with check_output built-in timeout

* Improved `check_requirements()` offline-handling (#3466)

Improve robustness of `check_requirements()` function to offline environments (do not attempt pip installs when offline).

* Add `output_names` argument for ONNX export with dynamic axes (#3456)

* Add output names & dynamic axes for onnx export

Add output_names and dynamic_axes names for all outputs in torch.onnx.export. The first four outputs of the model will have names output0, output1, output2, output3

* use first output only + cleanup

Co-authored-by: Samridha Shrestha <samridha.shrestha@g42.ai>
Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>

* Revert FP16 `test.py` and `detect.py` inference to FP32 default (#3423)

* fixed inference bug ,while use half precision

* replace --use-half with --half

* replace space and PEP8 in detect.py

* PEP8 detect.py

* update --half help comment

* Update test.py

* revert space

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>

* Add additional links/resources to stale.yml message (#3467)

* Update stale.yml

* cleanup

* Update stale.yml

* reformat

* Update stale.yml HUB URL (#3468)

* Stale `github.actor` bug fix (#3483)

* Explicit `model.eval()` call `if opt.train=False` (#3475)

* call model.eval() when opt.train is False

call model.eval() when opt.train is False

* single-line if statement

* cleanup

Co-authored-by: Glenn Jocher <glenn.jocher@ultralytics.com>

* check_requirements() exclude `opencv-python` (#3495)

Fix for 3rd party or contrib versions of installed OpenCV as in #3494.

* Earlier `assert` for cpu and half option (#3508)

* early assert for cpu and half option

early assert for cpu and half option

* Modified comment

Modified comment

* Update tutorial.ipynb (#3510)

* Reduce test.py results spacing (#3511)

* Update README.md (#3512)

* Update README.md

Minor modifications

* 850 width

* Update greetings.yml

revert greeting change as PRs will now merge to master.

Co-authored-by: Piotr Skalski <SkalskiP@users.noreply.github.com>
Co-authored-by: SkalskiP <piotr.skalski92@gmail.com>
Co-authored-by: Peretz Cohen <pizzaz93@users.noreply.github.com>
Co-authored-by: tudoulei <34886368+tudoulei@users.noreply.github.com>
Co-authored-by: chocosaj <chocosaj@users.noreply.github.com>
Co-authored-by: BuildTools <unconfigured@null.spigotmc.org>
Co-authored-by: Yonghye Kwon <developer.0hye@gmail.com>
Co-authored-by: Sam_S <SamSamhuns@users.noreply.github.com>
Co-authored-by: Samridha Shrestha <samridha.shrestha@g42.ai>
Co-authored-by: edificewang <609552430@qq.com>
Lechtr pushed a commit to Lechtr/yolov5 that referenced this pull request Jul 20, 2021
(cherry picked from commit f3c3d2c)
BjarneKuehl pushed a commit to fhkiel-mlaip/yolov5 that referenced this pull request Aug 26, 2022
Add `output_names` argument for ONNX export with dynamic axes (ultralytics#3456)
BjarneKuehl pushed a commit to fhkiel-mlaip/yolov5 that referenced this pull request Aug 26, 2022

Successfully merging this pull request may close these issues:

  • Onnx export with python models/export.py --dynamic does not produce dynamic outputs