Memory leak when optimizing a pytorch module using hess or hessp. #29

Open
tatsuhiko-inoue opened this issue Sep 25, 2023 · 1 comment

@tatsuhiko-inoue

Hello,

When I ran "examples/train_mnist_Minimizer.py", the following warning was emitted:

/home/user/.pyenv/versions/py38-pytorch/lib/python3.8/site-packages/torch/autograd/__init__.py:200: UserWarning: Using backward() with create_graph=True will create a reference cycle between the parameter and its gradient which can cause a memory leak. We recommend using autograd.grad when creating the graph to avoid this. If you have to use this function, make sure to reset the .grad fields of your parameters to None after use to break the cycle and avoid the leak. (Triggered internally at ../torch/csrc/autograd/engine.cpp:1151.)  Variable._execution_engine.run_backward(  # Calls into the C++ engine to run the backward pass
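For reference, the alternative that warning recommends looks like the pattern below: obtain the graph-carrying gradients from torch.autograd.grad instead of backward(create_graph=True), so they come back as plain tensors rather than being stored on each parameter's .grad. This is only a minimal sketch with a toy module, not how pytorch-minimize necessarily computes hess/hessp internally:

```python
import torch

model = torch.nn.Linear(10, 1)            # toy module standing in for the real model
x = torch.randn(32, 10)
loss = model(x).pow(2).mean()

params = list(model.parameters())

# torch.autograd.grad returns the gradients as plain tensors instead of
# writing them to each p.grad, so no parameter <-> gradient reference
# cycle is created even with create_graph=True.
grads = torch.autograd.grad(loss, params, create_graph=True)

# Example second-order use: a Hessian-vector product built from those gradients.
v = [torch.randn_like(p) for p in params]
hvp = torch.autograd.grad(grads, params, grad_outputs=v)
```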

When I ran a script that iteratively optimizes a PyTorch module, torch.cuda.OutOfMemoryError occurred.

I am using PyTorch 2.0.1 with CUDA.
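As a possible stopgap until this is addressed in the library, the other mitigation the warning mentions (resetting .grad to None after use) can be applied around the training loop. Below is a hedged sketch assuming a Minimizer setup roughly like the example script's; the method name, model, and batches are placeholders, not the script's exact code:

```python
import torch
import torch.nn.functional as F
from torchmin import Minimizer

device = 'cuda' if torch.cuda.is_available() else 'cpu'
model = torch.nn.Linear(784, 10).to(device)        # placeholder for the MNIST model

# Assumed constructor arguments; a Hessian-vector-product based method such
# as 'newton-cg' is what triggers the create_graph=True backward internally.
optimizer = Minimizer(model.parameters(), method='newton-cg')

def closure():
    # Assumed usage: the closure just returns the loss and the Minimizer
    # differentiates it itself, as in the repo's examples.
    x = torch.randn(64, 784, device=device)        # placeholder batch
    y = torch.randint(0, 10, (64,), device=device)
    return F.cross_entropy(model(x), y)

for _ in range(100):
    optimizer.step(closure)
    # Break the parameter <-> gradient reference cycle after each step so the
    # graph-carrying .grad tensors can be freed and GPU memory does not
    # accumulate across iterations.
    for p in model.parameters():
        p.grad = None
```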

@byfron

byfron commented Oct 19, 2023

I hit the same issue running a "dogleg" Minimizer on a non-linear least-squares problem, with PyTorch 1.13.1 and CUDA 11.6.
