
sh pretrain.sh #13

Open
YePG opened this issue Dec 2, 2021 · 2 comments

Comments

YePG commented Dec 2, 2021

RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [90, 128, 1536]], which is output 0 of AddBackward0, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).

How can I solve this problem?
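The hint at the end of the error refers to PyTorch's anomaly detection mode. Below is a minimal sketch (not this repository's code; the shapes and the scaling factor are made up) of the class of failure being reported and how to turn the hint on:

```python
import torch

# Make backward() report which forward operation produced the tensor that
# was later modified in place, as the error hint suggests.
torch.autograd.set_detect_anomaly(True)

q = torch.randn(4, 8, requires_grad=True) + 0.0  # non-leaf output of an Add op
k = torch.randn(4, 8, requires_grad=True)

attn = q @ k.t()  # matmul saves q, because it is needed for k's gradient
q *= 0.125        # in-place scaling bumps q's version counter after it was saved

attn.sum().backward()  # RuntimeError: ... modified by an inplace operation ...
```

With anomaly detection enabled, the traceback additionally points at the forward-pass line (`q @ k.t()` here) whose backward failed, which is usually enough to locate the offending in-place update.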

YePG commented Dec 2, 2021

q *= self.scaling

Traceback (most recent call last):
  File "pretrain.py", line 277, in <module>
    main(args, 3)
  File "pretrain.py", line 234, in main
    loss.backward()
  File "/home/pgye/anaconda3/envs/ypg/lib/python3.6/site-packages/torch/tensor.py", line 221, in backward
    torch.autograd.backward(self, gradient, retain_graph, create_graph)
  File "/home/pgye/anaconda3/envs/ypg/lib/python3.6/site-packages/torch/autograd/__init__.py", line 132, in backward
    allow_unreachable=True)  # allow_unreachable flag
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.cuda.FloatTensor [90, 128, 1536]], which is output 0 of AddBackward0, is at version 2; expected version 1 instead. Hint: enable anomaly detection to find the operation that failed to compute its gradient, with torch.autograd.set_detect_anomaly(True).
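The `q *= self.scaling` line at the top of the trace is exactly the kind of in-place update the error message is about. A common workaround (a sketch under that assumption, not necessarily this repository's intended fix) is to make the scaling out-of-place so the tensor autograd saved earlier stays untouched:

```python
import torch

q = torch.randn(4, 8, requires_grad=True) + 0.0  # non-leaf output of an Add op
k = torch.randn(4, 8, requires_grad=True)

attn = q @ k.t()  # q is saved for k's gradient, as before
q = q * 0.125     # out-of-place: rebinds q to a new tensor, the saved q is untouched

attn.sum().backward()  # completes without a version-counter error
```

In the model code this would amount to writing `q = q * self.scaling` instead of `q *= self.scaling`.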

YePG commented Dec 2, 2021

OK! I solved it by upgrading torch to 1.8.
