
Fix the CustomNonLinearClsHead when the batch_size is set to 1 #2569

Closed · sungmanc wants to merge 1 commit into develop from fix-bs1-bn1d

Conversation

sungmanc (Contributor) commented:

Summary

This PR introduces a workaround for an error that occurs in the bs=1 case in the BatchNorm1d layer of CustomNonLinearClsHead. For now, it is applied only when the loss is not IBLoss; further cases still need to be considered.
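
For context, BatchNorm1d cannot compute batch statistics from a single sample in training mode, which is what triggers the error. Below is a minimal sketch of the failure and of the duplicate-and-concatenate workaround described above (illustrative PyTorch only, not the actual OTX code):

import torch
from torch import nn

bn = nn.BatchNorm1d(960)      # same layer type used in CustomNonLinearClsHead
feats = torch.randn(1, 960)   # a bs=1 feature batch

# In training mode this raises:
# ValueError: Expected more than 1 value per channel when training
try:
    bn(feats)
except ValueError as err:
    print(err)

# Workaround sketched in this PR: concatenate the sample with itself so
# BN sees a batch of 2, then keep only the first row of the output.
out = bn(torch.cat([feats, feats], dim=0))[:1]
print(out.shape)  # torch.Size([1, 960])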

How to test

otx train src/otx/algorithms/classification/configs/mobilenet_v3_large_1_cls_incr/template.yaml --train-data-roots tests/assets/classification_dataset --val-data-roots tests/assets/classification_dataset params --learning_parameters.batch_size 1

Checklist

  • I have added unit tests to cover my changes.
  • I have added integration tests to cover my changes.
  • I have added e2e tests for validation.
  • I have added the description of my changes into CHANGELOG in my target branch (e.g., CHANGELOG in develop).
  • I have updated the documentation in my target branch accordingly (e.g., documentation in develop).
  • I have linked related issues.

License

  • I submit my code changes under the same Apache License that covers the project.
    Feel free to contact the maintainers if that's a concern.
  • I have updated the license header for each file (see an example below).
# Copyright (C) 2023 Intel Corporation
# SPDX-License-Identifier: Apache-2.0

sungmanc added the labels ALGO (Any changes in OTX Algo Tasks implementation) and FIX (defect fix) on Oct 20, 2023
sungmanc added this to the 1.6.0 milestone on Oct 20, 2023
sungmanc requested a review from a team as a code owner on Oct 20, 2023
codecov bot commented on Oct 20, 2023:

Codecov Report

Attention: 2 lines in your changes are missing coverage. Please review.

Comparison is base (6d44fc0) 81.51% compared to head (a8dedb9) 81.50%.

Additional details and impacted files
@@             Coverage Diff             @@
##           develop    #2569      +/-   ##
===========================================
- Coverage    81.51%   81.50%   -0.01%     
===========================================
  Files          516      516              
  Lines        38255    38259       +4     
===========================================
  Hits         31184    31184              
- Misses        7071     7075       +4     
Flag Coverage Δ
py310 81.50% <50.00%> (-0.01%) ⬇️
py38 81.48% <50.00%> (-0.01%) ⬇️
py39 81.48% <50.00%> (-0.01%) ⬇️

Flags with carried forward coverage won't be shown.

Files Coverage Δ
...ion/adapters/mmcls/models/heads/custom_cls_head.py 90.62% <50.00%> (-2.71%) ⬇️

... and 1 file with indirect coverage changes

☔ View full report in Codecov by Sentry.

eunwoosh (Contributor) commented:

Just a question: it seems the error is raised due to a dimension problem, is that right? If so, how about changing the dimension of the tensor instead of concatenating the same tensor?

sungmanc (Contributor, Author) replied:

> Just a question: it seems the error is raised due to a dimension problem, is that right? If so, how about changing the dimension of the tensor instead of concatenating the same tensor?

Right. However, if you mean just changing the shape of the tensor (i.e., [1, 960] --> [2, 480]), it could affect the output, since we use BatchNorm1d.
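
To illustrate why (an illustrative PyTorch snippet, not code from the PR): BatchNorm1d keeps per-channel running statistics and affine parameters, so reshaping [1, 960] to [2, 480] would both require a differently sized layer and normalize half of the features against the wrong channels:

import torch
from torch import nn

feats = torch.randn(1, 960)

# The head's BatchNorm1d(960) rejects a [2, 480] input outright.
# A BatchNorm1d(480) would accept it, but features 480..959 would then be
# normalized with the running stats and affine parameters of channels
# 0..479, changing the output relative to the original [1, 960] layout.
bn480 = nn.BatchNorm1d(480)
mixed = bn480(feats.reshape(2, 480))
print(mixed.shape)  # torch.Size([2, 480])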

sungmanc (Contributor, Author) commented:

Discarding this PR; I will make a PR against the release branch instead.

sungmanc closed this on Oct 24, 2023
eunwoosh (Contributor) replied:

> Right. However, if you mean just changing the shape of the tensor (i.e., [1, 960] --> [2, 480]), it could affect the output, since we use BatchNorm1d.

Ah, I thought [960] needed to be changed to [1, 960], but it's already [1, 960]. I think I just misunderstood.

yunchu deleted the fix-bs1-bn1d branch on May 7, 2024