This repository has been archived by the owner on Nov 17, 2023. It is now read-only.
It is easy to crash MXNet when a tensor grows large #16560
Comments
@mxnet-label-bot Add [Bug, Large Tensor Support]
We should raise an error message on the C++ side when we are about to create a large NDArray.
Yes, it is being tracked here: #16570
Is this resolved now that #16570 is merged?
@lanking520 assign @ChaiBapchya
We can close this ticket, since the solution is to build with large tensor support, as the issue author pointed out. An error message is already raised as part of #16570 if a large array is created when large tensor support isn't enabled.
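For reference, the fix agreed on in this thread is a from-source build with the USE_INT64_TENSOR_SIZE flag enabled. A rough sketch of such a CMake build follows; the repository URL and build layout are illustrative assumptions, not taken from this thread:

```shell
# Illustrative sketch: build MXNet from source with 64-bit tensor indexing.
# The USE_INT64_TENSOR_SIZE flag is the one named in this issue; the rest is assumed.
git clone --recursive https://github.com/apache/mxnet.git
cd mxnet && mkdir build && cd build
cmake -DUSE_INT64_TENSOR_SIZE=ON ..
make -j"$(nproc)"
```

As noted in the issue, this build trades some performance for the wider index type, which is why it is not the default.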
Description
When I use a large tensor, it is easy to crash the MXNet kernel.
Use the following Python code to reproduce:
The error looks like an int32 overflow on shape.size.
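The original reproduction snippet was not captured in this page. As a hedged illustration only (not the author's code), the following pure-NumPy sketch shows the suspected failure mode: an element count above 2**31 - 1 silently wraps negative when stored in a 32-bit integer, as shape.size apparently does here:

```python
import numpy as np

# Hypothetical shape with more than 2**31 - 1 elements (the int32 limit).
shape = (50000, 50000)

# Total element count computed in 64 bits: 2_500_000_000.
n_elements = np.int64(shape[0]) * np.int64(shape[1])

# Simulate a 32-bit shape.size field: the cast wraps around and goes negative.
wrapped = n_elements.astype(np.int32)

print(int(n_elements))  # 2500000000
print(int(wrapped))     # -1794967296, the int32 wraparound of 2500000000
```

A negative "size" like this downstream of shape arithmetic is consistent with the crash described above.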
Is there an easy way to fix this? The only workaround I found is to compile MXNet with USE_INT64_TENSOR_SIZE = ON, which is slower than the default build.
Environment info (Required)
mxnet 1.5.1 (pip3 install)
Package used (Python/R/Scala/Julia):
Python
Error Message: