This repository was archived by the owner on Oct 31, 2023. It is now read-only.

Overflow due to IntTensor in all_gather #796

Closed
caiqi opened this issue May 18, 2019 · 3 comments

Comments

caiqi commented May 18, 2019

🐛 Bug

https://github.com/facebookresearch/maskrcnn-benchmark/blob/master/maskrcnn_benchmark/utils/comm.py#L66-L67 uses an IntTensor to store the tensor size, which can overflow. For example, a tensor of shape [50000, 2000, 6] (50k images with 2k bounding boxes each) makes torch.IntTensor([tensor.numel()]).to("cuda") equal -1894966895. It seems that LongTensor would be a better choice for storing the tensor size?
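For context, a minimal plain-Python sketch (not the actual comm.py code) of the wraparound: in all_gather the tensor is pickled into a byte tensor first, so the numel() being stored is presumably the serialized byte count, which exceeds the signed 32-bit range. The 401 bytes of pickle overhead below is a hypothetical value chosen only because it reproduces the number in the report:

```python
def to_int32(n):
    """Reinterpret an integer as a signed 32-bit value (two's complement),
    mimicking storage in a 32-bit IntTensor."""
    n &= 0xFFFFFFFF
    return n - 0x100000000 if n >= 0x80000000 else n

# 50k images x 2k boxes x 6 values, serialized as 4-byte float32 elements.
n_elements = 50000 * 2000 * 6        # 600,000,000 elements
n_bytes = n_elements * 4 + 401       # +401 bytes of pickle overhead
                                     # (hypothetical, matches the report)

print(to_int32(n_bytes))             # -1894966895, the reported value
```

Note that 600,000,000 elements alone would still fit in int32; it is the ~2.4 GB byte count that wraps around.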

fmassa commented May 19, 2019

Sure, changing Int to Long sounds good to me. Can you send a PR?

caiqi commented May 20, 2019

Sure, I have created a pull request: #799

fmassa commented May 20, 2019

Fixed via #799

@fmassa fmassa closed this as completed May 20, 2019