Amount of RAM required for deconvolution #89
We currently have a 40GB stack saved in HDF5 format, consisting of 7 angles from 0-90° in increments of 15°. We have successfully registered and fused the stack; however, when we go to deconvolve it, the estimated amount of RAM required is ~10TB! Even choosing a single angle to deconvolve yields a requirement of >4.8TB. Is this much RAM really required, or has something gone awry here? 10TB is far beyond what our cluster currently has (~1.5TB).
Comments
I feel your pain! But to let you know, Stephan is currently working on several improvements that will drastically reduce the amount of RAM required. Some of them might come at a small penalty in runtime, but overall it should be a huge improvement.
Thankfully someone else out there is experiencing the same problems!
Hey, yes, we've been "suffering" from this for the past year or so. There is not really a workaround: the deconvolution in the current implementation simply requires that you hold all input views in a box the size of your bounding box, in double precision. That alone is already a huge amount of RAM once your bounding box gets big. On top of that, there are a few more copies of the bounding box to hold results and intermediate data. With 4 views, I've seen memory usage of more than 10 times the size of the bounding box times 8 bytes (for double precision), i.e. more than 100GB for a 1024^3 bounding box.
Your best bet is to wait for Stephan to finish the "virtualimage" branch in Git. Thumbs up for his work!
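To make that arithmetic concrete, here is a back-of-the-envelope sketch in Java (the plugin's language). The buffer count of 12 is an assumption chosen to be consistent with the "more than 10 copies" described above, not a number taken from the plugin itself.

```java
// Rough RAM estimate for the current deconvolution: every buffer covers
// the full bounding box in double precision (8 bytes per voxel), and
// with 4 views more than 10 such buffers can be alive at once.
public class DeconvRamEstimate {
    public static void main(String[] args) {
        long dim = 1024;                 // bounding box edge length in voxels
        long voxels = dim * dim * dim;   // ~1.07e9 voxels
        long bytesPerVoxel = 8;          // double precision
        long buffers = 12;               // views + intermediates (assumed count)
        double gb = voxels * bytesPerVoxel * buffers / 1e9;
        System.out.printf("~%.0f GB for a %d^3 bounding box%n", gb, dim);
        // prints "~103 GB for a 1024^3 bounding box", matching the
        // ">100GB" observation above
    }
}
```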
Thanks for your help.
Hi @spimager, I am trying to fix these problems, but my time is limited at the moment as I just started my own group. You can try to compile the virtual image branch; it might just work for you right now. In combination with HDF5, input images will no longer be loaded entirely.
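For readers unfamiliar with the idea, here is a minimal sketch of what "not loading input images entirely" can look like: blocks are read on demand and a bounded LRU cache keeps only the most recently used ones in RAM. This is a hypothetical illustration, not code from the virtualimage branch; loadBlock() stands in for reading one chunk from an HDF5 file.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Hypothetical sketch of a virtual image: pixel blocks are materialized
// on demand and evicted least-recently-used, so RAM usage stays bounded
// no matter how large the image on disk is.
class VirtualBlockCache {
    private final int maxBlocks;
    private final Map<Long, double[]> cache;

    VirtualBlockCache(int maxBlocks) {
        this.maxBlocks = maxBlocks;
        // access-ordered LinkedHashMap that drops the least recently
        // used block once the cache grows past maxBlocks entries
        this.cache = new LinkedHashMap<Long, double[]>(16, 0.75f, true) {
            @Override
            protected boolean removeEldestEntry(Map.Entry<Long, double[]> eldest) {
                return size() > VirtualBlockCache.this.maxBlocks;
            }
        };
    }

    double[] getBlock(long blockIndex) {
        // load the block only if it is not already cached
        return cache.computeIfAbsent(blockIndex, VirtualBlockCache::loadBlock);
    }

    // stand-in for reading one chunk from an HDF5 file on demand
    private static double[] loadBlock(long blockIndex) {
        return new double[64 * 64 * 64]; // placeholder 64^3 block
    }
}
```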
Thanks, Stephan!
The log output was:
Hi, I am unfortunately on holiday at the moment, but I will look at it as soon as possible!
That cleared up the exception and everything seems to be working well again, except that the estimated amount of required memory seems to have increased significantly: it was around 10TB and now it's up at 33TB. No need to interrupt your holiday for this, though!
Hi, ignore the RAM estimate; that code has not been updated and is hence totally wrong.
I tried running the virtual image branch of the plugin, but it seems to be crashing.
Just as a follow-up, the deconvolution seems to hang on "Computing weight normalization for deconvolution" before crashing.
Any ideas?