Upcast rotated box transforms #9175
base: main
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/vision/9175

Note: Links to docs will display an error until the docs builds have been completed.

❌ 3 New Failures as of commit 0ff4716 with merge base b818d32.

This comment was automatically generated by Dr. CI and updates every 15 minutes.
Thanks a lot for noticing the issue and for the fix @AntoineSimoulin. I left a comment below, but I'll approve to unblock.
```diff
@@ -451,6 +451,12 @@ def _parallelogram_to_bounding_boxes(parallelogram: torch.Tensor) -> torch.Tensor:
         torch.Tensor: Tensor of same shape as input containing the rectangle coordinates.
         The output maintains the same dtype as the input.
     """
+    dtype = parallelogram.dtype
+    acceptable_dtypes = [torch.float32]
```
I think we should consider float64 to also be an acceptable dtype? Otherwise we risk converting a float64 into a float32, which may be unnecessary.
```diff
-    acceptable_dtypes = [torch.float32]
+    acceptable_dtypes = [torch.float32, torch.float64]
```
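The effect of the suggestion can be illustrated with a short sketch (`maybe_upcast` is a hypothetical helper mirroring the pattern under review, not the actual torchvision code):

```python
import torch

def maybe_upcast(t: torch.Tensor, acceptable_dtypes: list) -> torch.Tensor:
    # Hypothetical helper: upcast to float32 unless the dtype
    # is already in the acceptable list.
    if t.dtype not in acceptable_dtypes:
        return t.to(torch.float32)
    return t

x = torch.zeros(3, dtype=torch.float64)

# With only float32 accepted, a float64 input is silently downcast.
print(maybe_upcast(x, [torch.float32]).dtype)  # torch.float32

# Accepting float64 as well preserves the input precision.
print(maybe_upcast(x, [torch.float32, torch.float64]).dtype)  # torch.float64
```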
```diff
@@ -278,9 +251,11 @@ def _xyxyxyxy_to_xywhr(xyxyxyxy: torch.Tensor, inplace: bool) -> torch.Tensor:
         xyxyxyxy = xyxyxyxy.clone()

     dtype = xyxyxyxy.dtype
     need_cast = not xyxyxyxy.is_floating_point()
+    acceptable_dtypes = [torch.float32]  # Ensure consistency between CPU and GPU.
```
```diff
-    acceptable_dtypes = [torch.float32]  # Ensure consistency between CPU and GPU.
+    acceptable_dtypes = [torch.float32, torch.float64]  # Ensure consistency between CPU and GPU.
```
Summary
This pull request addresses an issue where performing certain operations on rotated bounding boxes in float16 format could lead to NaN values due to overflow.
Issue Details
The functions `_xyxyxyxy_to_xywhr` and `_parallelogram_to_bounding_boxes` involve squaring operations on bounding box coordinates. When using float16 precision, boxes with absolute coordinate values above 256 can overflow, since the square of 257 ($257^2 = 66{,}049$) exceeds the maximum representable float16 value ($65{,}504 = (2 - 2^{-10}) \times 2^{15}$). This leads to unreliable results and NaN values in subsequent computations.
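The overflow is easy to reproduce directly (a minimal demonstration, not code from this PR):

```python
import torch

# float16 can represent values only up to 65504.
coords = torch.tensor([300.0], dtype=torch.float16)

# 300**2 = 90000 exceeds the float16 maximum and becomes inf.
squared = coords * coords
print(squared)  # tensor([inf], dtype=torch.float16)

# A subsequent operation such as inf - inf then produces NaN.
print(squared - squared)  # tensor([nan], dtype=torch.float16)
```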
Solution
To address this issue, this PR ensures that the floating point data is upcast to at least float32 for these operations. This increases the maximum representable value, allowing support for box coordinates up to $\sqrt{2^{31} - 1} \approx 46{,}341$, which should be sufficient for all common computer vision applications.
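The fix follows the usual upcast-and-restore pattern; a simplified sketch (function name and body are illustrative, not the exact torchvision implementation):

```python
import torch

def square_sum_stable(boxes: torch.Tensor) -> torch.Tensor:
    # Upcast to at least float32 before squaring, then restore the
    # original dtype, as this PR does for the rotated-box helpers.
    dtype = boxes.dtype
    acceptable_dtypes = [torch.float32, torch.float64]
    need_cast = dtype not in acceptable_dtypes
    if need_cast:
        boxes = boxes.to(torch.float32)
    out = (boxes * boxes).sum(dim=-1).sqrt()  # squaring is safe in float32
    return out.to(dtype) if need_cast else out

# In float16, 300**2 + 400**2 would overflow to inf; the upcast avoids it.
x = torch.tensor([[300.0, 400.0]], dtype=torch.float16)
print(square_sum_stable(x))  # tensor([500.], dtype=torch.float16)
```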
Additional Improvements
Simplified the rotated bounding box conversion code by removing unnecessary integer-to-float conversions. Rotated bounding boxes should not be constructed with an integer dtype, so these conversions were redundant.