dataset.to_netcdf not compressing correctly(?) #9783
Comments
Please add the output of …

This might work (untested!) by keeping the current encoding and adding zlib on top:

`encoding = {name: {**var.encoding, **comp} for name, var in dataset.data_vars.items()}`
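The snippets around this suggestion were stripped from the page. A minimal, self-contained sketch of the idea (the dataset, variable name, and compression level are illustrative, not from the issue):

```python
# Sketch: keep each variable's existing encoding and layer zlib compression
# on top of it. `comp` comes last in the merge, so it overrides any
# conflicting keys already present in var.encoding.
import numpy as np
import xarray as xr

ds = xr.Dataset({"thetao": (("y", "x"), np.random.rand(60, 60))})

comp = {"zlib": True, "complevel": 5}
encoding = {
    name: {**var.encoding, **comp} for name, var in ds.data_vars.items()
}

# Writing compressed output requires a netCDF4-capable backend:
# ds.to_netcdf("compressed.nc", encoding=encoding)
```

For a freshly created dataset `var.encoding` is empty, so the merged encoding is just `comp`; for a dataset read from disk it also carries the read-time settings, which is exactly what causes the errors discussed below.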
With the dump I checked that yes, it contains both. About your proposal: it seems to work, but I got an error when moving the encoding; it looks like some of the encoding keys are the problem.
The encoding I was creating:
But now I will need to manage to make some of the new attributes available. Thank you so much for the quick reply, very much appreciated!
I finally added a little hardcoding to make it work; I hope it doesn't bother anyone. Right after creating the encoding, I pop the offending dict keys myself:
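The actual snippet was stripped from the page; a hedged sketch of a "pop the keys myself" workaround. The rejected key names below are assumptions for illustration, as are the dataset and variable names:

```python
# Sketch: datasets read from netCDF files can carry read-time encoding
# entries that the writer rejects; drop them before calling to_netcdf.
import numpy as np
import xarray as xr

ds = xr.Dataset({"thetao": (("y", "x"), np.random.rand(4, 4))})
# Simulate encoding inherited from a previous read (illustrative keys):
ds["thetao"].encoding = {"zlib": False, "szip": False, "zstd": False}

comp = {"zlib": True, "complevel": 5}
encoding = {
    name: {**var.encoding, **comp} for name, var in ds.data_vars.items()
}

# Remove keys the writing backend rejects (hypothetical list):
for settings in encoding.values():
    for key in ("szip", "zstd", "bzip2", "blosc"):
        settings.pop(key, None)

# ds.to_netcdf("out.nc", encoding=encoding)  # needs the netCDF4 backend
```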
It seems to work, thank you very much again, and I hope you have a nice weekend 🤗
@uriii3 Glad it helped and that you figured out a working solution.
If someone else finds this issue (not really a bug, more a usage question), keep in mind that in the end the only encoding keys that work are the ones already present on the variable. So it is better to build a dict from the keys the variable actually has than to start from a full dict and exclude the ones that don't work (you might miss some). This code might be more robust:
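The "more robust" snippet itself did not survive the page scrape. A sketch of the allow-list approach described above, where only keys known to be valid write-time settings are copied; the `VALID_KEYS` set is an assumption, not an authoritative list:

```python
# Sketch: build encoding only from an allow-list of write-time keys,
# instead of copying everything and removing the bad entries afterwards.
import numpy as np
import xarray as xr

VALID_KEYS = {
    "zlib", "complevel", "shuffle", "fletcher32",
    "contiguous", "chunksizes", "dtype", "_FillValue",
}

def build_encoding(ds, comp):
    """Per-variable encoding: allow-listed existing keys plus `comp`."""
    return {
        name: {
            **{k: v for k, v in var.encoding.items() if k in VALID_KEYS},
            **comp,
        }
        for name, var in ds.data_vars.items()
    }

ds = xr.Dataset({"thetao": (("y", "x"), np.random.rand(4, 4))})
# Illustrative leftover encoding from a previous read:
ds["thetao"].encoding = {"zlib": False, "szip": False, "source": "old.nc"}

encoding = build_encoding(ds, {"zlib": True, "complevel": 5})
# ds.to_netcdf("out.nc", encoding=encoding)
```

The advantage over popping known-bad keys is exactly the one the comment makes: an allow-list cannot be broken by a rejected key you forgot to exclude.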
Okay, one last check on my side (maybe nobody else will run into this, but):
Hope it helps someone if needed!
What happened?
I tried different compression levels, and sometimes writing without compression produces a smaller file than writing with compression activated.
What did you expect to happen?
That with compression enabled the file size would always be lower.
(You can obtain the dataset `complevel0.nc` through a library called copernicusmarine:

`copernicusmarine subset --dataset-id cmems_mod_glo_phy_my_0.083deg_P1D-m -v thetao -t 1993-01-01T00:00:00 -T 2020-12-31T23:59:59 -x -90 -X -85 -y -35 -Y -30 -z 0.49 -Z 1 -f complevel0.nc`

The zip file is too big to attach here.)

Minimal Complete Verifiable Example
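The reporter's MVCE was stripped from the page. As a rough stand-in, here is a sketch of the kind of comparison described, using a small synthetic dataset instead of the Copernicus one and assuming the netCDF4 backend may or may not be installed:

```python
# Sketch: write the same synthetic dataset at several compression levels
# and record the resulting file sizes (file names mimic "complevel0.nc").
import os
import numpy as np
import xarray as xr

ds = xr.Dataset({"thetao": (("y", "x"), np.random.rand(200, 200))})

sizes = {}
try:
    for level in (0, 1, 5, 9):
        path = f"complevel{level}.nc"
        encoding = {"thetao": {"zlib": level > 0, "complevel": level}}
        ds.to_netcdf(path, engine="netcdf4", encoding=encoding)
        sizes[level] = os.path.getsize(path)
        os.remove(path)
except (ImportError, ValueError):
    # netCDF4 backend not available; leave `sizes` empty.
    sizes = {}
print(sizes)
```

Note that random float data is close to incompressible, so on this synthetic input the size differences between levels are small; the surprising result in the report is that compressed output came out *larger* than uncompressed on real data.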
MVCE confirmation
Relevant log output
The file written without compression ends up half the size of the compressed ones.