Share convolution buffers to reduce memory usage #2016
base: master
Conversation
Although this is a simple and elegant change, it may be worth considering a factory of temporary Blobs that could be shared and reused across layers, or even across nets.
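A minimal sketch of what such a factory might look like (hypothetical, not Caffe API; `TempBlobFactory` and its keying scheme are invented for illustration):

```cpp
#include <map>
#include <string>

#include <boost/shared_ptr.hpp>

#include "caffe/blob.hpp"

// Hypothetical sketch, not Caffe API: a process-wide cache of scratch
// Blobs keyed by name, so layers (or nets) can share temporary memory.
template <typename Dtype>
class TempBlobFactory {
 public:
  // Returns the shared scratch blob for `key`, creating it on first use.
  // Callers must Reshape() it before use and must not use it concurrently.
  static boost::shared_ptr<caffe::Blob<Dtype> > Get(const std::string& key) {
    boost::shared_ptr<caffe::Blob<Dtype> >& blob = buffers()[key];
    if (!blob) {
      blob.reset(new caffe::Blob<Dtype>());
    }
    return blob;
  }

  // Explicit release point, so buffers are freed while CUDA is still up.
  static void Clear() { buffers().clear(); }

 private:
  typedef std::map<std::string,
                   boost::shared_ptr<caffe::Blob<Dtype> > > BufferMap;
  // Function-local static avoids a separate out-of-class definition.
  static BufferMap& buffers() {
    static BufferMap instance;
    return instance;
  }
};
```

An explicit `Clear()` would also give the application a deterministic point to release device memory, which sidesteps the static-destruction crash discussed below.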
For a more mannered take on sharing buffers see #2009.
As a reminder, the static buffer causes a crash on exit.
This is a known side effect.
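For the curious, here is a standalone illustration of the failure mode, assuming the shared buffer has static storage duration: its destructor issues a `cudaFree` during static destruction, after `main()` returns, when the CUDA runtime may already be unloading.

```cpp
#include <cstdio>
#include <cuda_runtime.h>

// A static object owning device memory: its destructor runs during
// static destruction, after main() returns, when the CUDA runtime may
// already be shutting down -- so the free can fail or crash the process.
struct StaticGpuBuffer {
  void* ptr;
  StaticGpuBuffer() : ptr(NULL) {}
  ~StaticGpuBuffer() {
    if (ptr) {
      cudaError_t err = cudaFree(ptr);
      if (err != cudaSuccess) {
        std::fprintf(stderr, "cudaFree at exit: %s\n",
                     cudaGetErrorString(err));
      }
    }
  }
};

static StaticGpuBuffer g_shared_buffer;  // stands in for the shared col buffer

int main() {
  cudaMalloc(&g_shared_buffer.ptr, 1 << 20);  // allocate 1 MiB on the GPU
  return 0;  // g_shared_buffer's destructor runs after this point
}
```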
I got this problem.
@shelhamer
@naibaf7 that sounds promising, but I won't have a chance to review it until after the ECCV deadline (03/14), and we'll have to check it with the NVIDIA/Flickr parallelism too. Feel free to open the PR when ready all the same. Thanks.
Force-pushed from 1dcfc3c to 839b050.
Rebased.
Merge shelhamer/share-col-buffer: share columnation buffers for convolution to save memory.
@shelhamer
This problem was recently solved for fast inference on the OpenCL branch. No layers are broken.
This bug prevents other atexit handlers from running. It's sloppy to have an error and say "hey, don't worry about it, it has no side effects!" Is there a function that can be called to safely shut down Caffe? That should be all that's required, right?
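One way an explicit shutdown could look; `ReleaseSharedBuffers()` is hypothetical and not a real Caffe function, but any explicit release point called before `main()` returns would avoid freeing device memory during static destruction:

```cpp
#include "caffe/caffe.hpp"

// Hypothetical shutdown hook -- ReleaseSharedBuffers() does NOT exist in
// Caffe. The point is that an explicit call site lets shared GPU buffers
// be freed while the CUDA runtime is still alive, instead of in static
// destructors after main() returns.
namespace caffe { void ReleaseSharedBuffers(); }  // assumed for the sketch

int main() {
  caffe::Caffe::set_mode(caffe::Caffe::GPU);
  {
    caffe::Net<float> net("deploy.prototxt", caffe::TEST);
    net.Forward();  // normal use; conv layers share the columnation buffer
  }  // the net and its layers are destroyed here, inside main()
  caffe::ReleaseSharedBuffers();  // hypothetical: free shared scratch now
  return 0;
}
```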
Share the columnation buffer for im2col / col2im transformations across all Caffe convolution layers. The memory usage is now equal to the maximum buffer size instead of the sum over all layers. In particular, this is useful for many-layered architectures like the VGG ILSVRC14 19-layer model.
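A condensed sketch of the idea, with member names modeled on `BaseConvolutionLayer` but not matching the patch line for line:

```cpp
// Condensed sketch (illustrative, not the literal patch): one static
// Blob is shared by every convolution layer, and each layer Reshape()s
// it before use. Blob allocations only grow, so the process ends up
// holding the maximum im2col buffer size across layers, not the sum.
template <typename Dtype>
class BaseConvolutionLayer : public Layer<Dtype> {
 protected:
  void forward_gpu_gemm(const Dtype* input, const Dtype* weights,
                        Dtype* output) {
    // Size the shared buffer for this layer's im2col expansion.
    shared_col_buffer_.Reshape(1, kernel_dim_, height_out_, width_out_);
    Dtype* col_buff = shared_col_buffer_.mutable_gpu_data();
    conv_im2col_gpu(input, col_buff);
    // ... caffe_gpu_gemm(...) consumes col_buff as usual ...
  }

  // Shared across all instances of the layer, hence across the net.
  static Blob<Dtype> shared_col_buffer_;
  int kernel_dim_, height_out_, width_out_;
};

// One buffer per instantiated precision, shared process-wide.
template <typename Dtype>
Blob<Dtype> BaseConvolutionLayer<Dtype>::shared_col_buffer_;
```

Since `Blob::Reshape` only reallocates when the requested count exceeds the current capacity, repeatedly reshaping the shared buffer between layers costs nothing extra once the largest layer has been seen.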
Advice and Cautions:
All credit to @longjon who reshaped our world in #594 and suggested this patch in #520 (comment).
master edition of #1291. Do not merge.