make the tensor contiguous when passing a numpy object to to_tensor #2483
Conversation
Thanks for the PR @dddzg!
This makes sense.
PS: sorry for the delay.
@dddzg could you please update your branch to the current master? Thanks!
@vfdev-5 Hi, thanks for your reply. I have updated the branch.
Hi @dddzg, could you please update your branch so that we see only your commit? Probably merge instead of rebase.
Oops. Is there any chance I can make it right?
@dddzg thanks, it is OK now!
@vfdev-5 It seems to be OK now with the force-push?
@dddzg looks good!
Codecov Report
@@           Coverage Diff           @@
##           master    #2483   +/-   ##
=======================================
  Coverage   72.57%   72.57%
=======================================
  Files          95       95
  Lines        8249     8249
  Branches     1309     1309
=======================================
  Hits         5987     5987
  Misses       1855     1855
  Partials      407      407
Continue to review full report at Codecov.
Is there anything else I should do?
@dddzg thanks again for your help!
…2483) Co-authored-by: vfdev <vfdev.5@gmail.com>
I'm surprised that you decided to make contiguous the default. I would argue that it would be better to leave them non-contiguous in the general case, because …
FYI, our …
I think it was to keep it consistent with PIL input: https://github.com/dddzg/vision/blob/bf0ad73b9610fac8f117810a3e30c7389c727e4f/torchvision/transforms/functional.py#L100
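For reference, a minimal sketch of the two conversions being compared (this paraphrases the linked functional.py code paths; the exact implementation may differ):

```python
import numpy as np
import torch
from PIL import Image

# NumPy (HWC) input: the transpose produces a non-contiguous view and
# torch.from_numpy keeps those strides, so the tensor is non-contiguous.
np_img = np.random.rand(224, 224, 3).astype(np.float32)
t_np = torch.from_numpy(np_img.transpose(2, 0, 1))
print(t_np.is_contiguous())   # False

# PIL input: the permute is followed by .contiguous(), so the result
# is contiguous (roughly what the linked functional.py#L100 does).
pil_img = Image.fromarray((np_img * 255).astype(np.uint8))
t_pil = torch.from_numpy(np.array(pil_img)).permute(2, 0, 1).contiguous()
print(t_pil.is_contiguous())  # True
```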
Oh I see. Sounds good then. The …
If consistency is important, I would rather remove the contiguous call there instead. But if you think this is the correct behavior, I don't feel too strongly about this.
Ok, taking a step back... @fmassa @vfdev-5 A few thoughts: …
I would actually do it the other way around, and let … The …
@moi90 I agree with you, but I think it is still very necessary to call contiguous(). For example, you are working on model inference with a single-image batch (a very common usage).
If the tensor is not contiguous, …
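As an illustration of that single-image inference scenario (a sketch with made-up shapes, not code from this PR): without a collate/stack step, nothing ever copies the tensor into contiguous memory.

```python
import numpy as np
import torch

# A non-contiguous image tensor, as to_tensor produced it before this PR.
img = torch.from_numpy(np.random.rand(224, 224, 3).transpose(2, 0, 1))

# Single-image inference: unsqueeze only adds a batch dimension, it does
# not copy, so the model receives non-contiguous memory.
batch = img.unsqueeze(0)
print(batch.is_contiguous())  # False
```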
@dddzg when you say … Do you mean that the code will run much slower than if your input tensor were contiguous?
If the result is different from what you get with a contiguous tensor, that is very likely a bug.
This is exactly my point: the data is copied anyway when it is assembled into a (contiguous) batch. There is no need for an additional copy before that just to make the data contiguous.
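A small sketch of that batching argument, using torch.stack (which is essentially what the default collate function does):

```python
import numpy as np
import torch

# Two non-contiguous image tensors, as to_tensor would have produced them.
imgs = [torch.from_numpy(np.random.rand(64, 64, 3).transpose(2, 0, 1))
        for _ in range(2)]
print(imgs[0].is_contiguous())  # False

# Stacking copies everything into fresh, contiguous storage anyway,
# so the batch ends up contiguous regardless of the inputs.
batch = torch.stack(imgs)
print(batch.is_contiguous())    # True
```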
Hi @moi90, @fmassa. In my experience, the model is not a standard PyTorch model; it is a TRTModule, a PyTorch-style wrapper around TensorRT. So the result is totally different from the contiguous-tensor case, and this problem is very difficult to notice if the tensor is not contiguous.
I agree with you for batch training.
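Until the underlying wrapper handles this, a defensive guard before inference is one workaround. This is only a sketch; `trt_model` here stands for a hypothetical TensorRT-backed module and is not part of this PR:

```python
import torch

def infer(trt_model, img: torch.Tensor) -> torch.Tensor:
    # Some wrappers silently assume dense, row-major input memory, so make
    # the copy explicit instead of risking silently wrong results.
    if not img.is_contiguous():
        img = img.contiguous()
    return trt_model(img.unsqueeze(0))
```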
@fmassa As you know, I'm all up for simplifying …
@datumbox We decided to split … My proposal above was to use …
@dddzg this seems to indicate that TRT can't handle non-contiguous inputs. In general, I would open an issue in …
I had an offline discussion with @fmassa and also did a small prototype to check if calling …
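The comment is cut off here; assuming the prototype measured the overhead of forcing an extra contiguous copy (an assumption, not confirmed by the thread), a minimal timing sketch could look like this, with purely illustrative shapes:

```python
import timeit
import numpy as np
import torch

np_img = np.random.rand(224, 224, 3)

def without_copy():
    # Current numpy path: keeps the transposed, non-contiguous strides.
    return torch.from_numpy(np_img.transpose(2, 0, 1))

def with_copy():
    # Proposed behavior: force a contiguous copy on conversion.
    return torch.from_numpy(np_img.transpose(2, 0, 1)).contiguous()

# Compare the per-call cost of the extra copy.
print(timeit.timeit(without_copy, number=1000))
print(timeit.timeit(with_copy, number=1000))
```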
If you want to read more about some of the inconsistencies of the …
Not sure if this falls in the realm of 'unexpected results', but wanted to flag #4529, which runs into process hangs specifically on the …
to_tensor will return a non-contiguous tensor if we pass a NumPy image (e.g. an image read by cv.imread), which is a very common usage. If we pass a PIL object, however, the result is contiguous (https://github.com/pytorch/vision/blob/master/torchvision/transforms/functional.py#L93).
I think it is necessary to make them consistent.
Check the code:
torch.from_numpy(np.random.rand(299, 299, 3).transpose(2, 0, 1)).is_contiguous()
It will return False.
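For completeness, a sketch of what the proposed change amounts to in the NumPy branch. This is simplified; the real to_tensor handles more dtypes and input modes:

```python
import numpy as np
import torch

def to_tensor_numpy_branch(pic: np.ndarray) -> torch.Tensor:
    # HWC -> CHW transpose yields a non-contiguous view, so copy it into
    # contiguous memory to match the PIL code path.
    img = torch.from_numpy(pic.transpose((2, 0, 1))).contiguous()
    if isinstance(img, torch.ByteTensor):
        # uint8 images are scaled to [0, 1], as to_tensor does.
        return img.float().div(255)
    return img

print(to_tensor_numpy_branch(np.random.rand(299, 299, 3)).is_contiguous())  # True
```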