622 write fails when we hit maximum disk space #625
Conversation
Now throws a GUI dialog warning the user that the acquisition would exceed the available disk space, and stops the acquisition.
By using the debugger in PyCharm, I was able to inspect the Zarr group information and confirm that blosc compression is applied automatically. This explains the discrepancy in our file sizes. Will leave as is.
Provides a default value if the delay matrix fails.
`self.wait_until_done` vs. `self.wait_until_done_delay`: the first is a boolean indicating whether we should delay; the second is a float specifying how long to delay.
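A minimal sketch of how the two attributes described above might interact; the surrounding class and its `move` method are hypothetical, only the two attribute names come from the comment:

```python
import time

class DeviceSketch:
    """Hypothetical device wrapper illustrating the two wait attributes."""

    def __init__(self, wait_until_done=False, wait_until_done_delay=0.25):
        self.wait_until_done = wait_until_done              # bool: should we delay?
        self.wait_until_done_delay = wait_until_done_delay  # float: seconds to delay

    def move(self):
        # ... issue the hardware command here ...
        if self.wait_until_done:
            time.sleep(self.wait_until_done_delay)
```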
I think we can proceed with this. It prevents deeper, harder-to-address problems from arising. If it is considered too aggressive, I could implement a message box that warns the user and asks if they want to continue at their own risk, but this would be more involved.
Let me know what you prefer @zacsimile, should we want the option to have the user proceed with the acquisition after being warned.
I think we only need the message box if this commit prevents us from acquiring data in cases where there is sufficient free space. If this PR works on two different microscopes, I think we're clear to merge. We also need to fix a failing test on the Sutter filter wheel.
I'll fix the test. We can test it out on CT-ASLM-V1, V2, and BT-MesoSPIM. |
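The warn-and-proceed option discussed above could be sketched as a small decision function; the name `should_abort` and the `confirm` callback (standing in for a GUI message box) are hypothetical:

```python
def should_abort(required_bytes, free_bytes, confirm=None):
    """Return True if the acquisition should be stopped.

    confirm: optional callable (e.g. a GUI message box) that returns
    True if the user accepts the risk and wants to proceed anyway.
    """
    if required_bytes <= free_bytes:
        return False  # enough space: no reason to abort
    if confirm is not None and confirm():
        return False  # user chose to proceed at their own risk
    return True       # hard stop (current behavior of this PR)
```

With no `confirm` callback this reproduces the hard stop the PR currently implements; passing one adds the opt-in override.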
Codecov Report
@@            Coverage Diff             @@
##           develop     #625      +/-   ##
===========================================
+ Coverage    48.06%   48.10%   +0.03%
===========================================
  Files          161      160       -1
  Lines        16721    16702      -19
===========================================
- Hits          8037     8034       -3
+ Misses        8684     8668      -16
Flags with carried forward coverage won't be shown.
Ran into different problems for each of our image-saving formats (TIFF, OME-TIFF, HDF5, N5).
To circumvent these errors, some of which did not percolate up in a fashion that allowed us to handle them with try/except statements, we now put a hard stop on the acquisition if the anticipated file size is larger than the available disk space. The message is passed from the model to the controller, and a GUI dialog pops up to inform the user of the problem.
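The pre-acquisition check described above can be sketched with the standard library's `shutil.disk_usage`; the function names, the `shape` parameter, and the 16-bit-pixel assumption are illustrative, not the PR's actual implementation:

```python
import shutil

def estimated_size_bytes(shape, bytes_per_pixel=2):
    """Rough size of an uncompressed acquisition (assumes no compression,
    which overestimates for compressed formats such as N5/Zarr)."""
    size = bytes_per_pixel
    for dim in shape:
        size *= dim
    return size

def enough_disk_space(save_path, shape, bytes_per_pixel=2):
    """True if the anticipated file fits in the free space at save_path."""
    free = shutil.disk_usage(save_path).free
    return estimated_size_bytes(shape, bytes_per_pixel) <= free
```

If the check fails, the model would signal the controller to stop the acquisition and raise the GUI dialog.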
One notable observation: the predicted N5 file size did not match the actual file size. After some investigation, it turned out that the N5/Zarr library automatically applies blosc compression.
Also added a number of numpydoc docstrings so that our Sphinx documentation continues to improve.