
to_netcdf from subsetted Dataset with strings loaded from char array netCDF can sometimes fail #6352

Open
DocOtak opened this issue Mar 14, 2022 · 0 comments

DocOtak (Contributor) commented Mar 14, 2022

What happened?

Not quite sure what to actually title this, so feel free to edit it.

I have some netCDF files modeled after the Argo _prof file format (the CF discrete sampling geometry incomplete multidimensional array representation). While working on splitting these into individual profiles, I would occasionally get exceptions complaining about broadcasting. I eventually narrowed this down to some string variables we maintain for historical purposes. Depending on which row is split out, the string data in each cell can be shorter than in the full dataset, so the stringN dimension ends up with a different N (e.g. string4 = 3 in the CDL). If, while serializing, a different string variable that actually has length 4 is then encoded, it reuses the now-incorrect string4 dimension name.

The above situation seems to occur only when a netCDF file is read back into xarray and the char_dim_name encoding key is set.

What did you expect to happen?

Successful serialization to netCDF.

Minimal Complete Verifiable Example

# setup
import numpy as np
import xarray as xr

one_two = xr.DataArray(np.array(["a", "aa"], dtype="object"), dims=["dim0"])
two_two = xr.DataArray(np.array(["aa", "aa"], dtype="object"), dims=["dim0"])
ds = xr.Dataset({"var0": one_two, "var1": two_two})
ds.var0.encoding["dtype"] = "S1"
ds.var1.encoding["dtype"] = "S1"
# need to write out and read back in
ds.to_netcdf("test.nc")

# only selecting the shorter string will fail
ds1 = xr.load_dataset("test.nc")
ds1[{"dim0": 1}].to_netcdf("ok.nc")
ds1[{"dim0": 0}].to_netcdf("error.nc")

# works if the char dim name is removed from the encoding of the now-shorter array
ds1 = xr.load_dataset("test.nc")
del ds1.var0.encoding["char_dim_name"]
ds1[{"dim0": 0}].to_netcdf("will_work.nc")

Relevant log output

---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
/var/folders/y1/63dlf4614h5d2cgr5g1t_5lh0000gn/T/ipykernel_64155/447008818.py in <module>
      2 ds1 = xr.load_dataset("test.nc")
      3 ds1[{"dim0": 1}].to_netcdf("ok.nc")
----> 4 ds1[{"dim0": 0}].to_netcdf("error.nc")

~/.dotfiles/pyenv/versions/3.9.9/envs/jupyter/lib/python3.9/site-packages/xarray/core/dataset.py in to_netcdf(self, path, mode, format, group, engine, encoding, unlimited_dims, compute, invalid_netcdf)
   1899         from ..backends.api import to_netcdf
   1900 
-> 1901         return to_netcdf(
   1902             self,
   1903             path,

~/.dotfiles/pyenv/versions/3.9.9/envs/jupyter/lib/python3.9/site-packages/xarray/backends/api.py in to_netcdf(dataset, path_or_file, mode, format, group, engine, encoding, unlimited_dims, compute, multifile, invalid_netcdf)
   1070         # TODO: allow this work (setting up the file for writing array data)
   1071         # to be parallelized with dask
-> 1072         dump_to_store(
   1073             dataset, store, writer, encoding=encoding, unlimited_dims=unlimited_dims
   1074         )

~/.dotfiles/pyenv/versions/3.9.9/envs/jupyter/lib/python3.9/site-packages/xarray/backends/api.py in dump_to_store(dataset, store, writer, encoder, encoding, unlimited_dims)
   1117         variables, attrs = encoder(variables, attrs)
   1118 
-> 1119     store.store(variables, attrs, check_encoding, writer, unlimited_dims=unlimited_dims)
   1120 
   1121 

~/.dotfiles/pyenv/versions/3.9.9/envs/jupyter/lib/python3.9/site-packages/xarray/backends/common.py in store(self, variables, attributes, check_encoding_set, writer, unlimited_dims)
    263         self.set_attributes(attributes)
    264         self.set_dimensions(variables, unlimited_dims=unlimited_dims)
--> 265         self.set_variables(
    266             variables, check_encoding_set, writer, unlimited_dims=unlimited_dims
    267         )

~/.dotfiles/pyenv/versions/3.9.9/envs/jupyter/lib/python3.9/site-packages/xarray/backends/common.py in set_variables(self, variables, check_encoding_set, writer, unlimited_dims)
    305             )
    306 
--> 307             writer.add(source, target)
    308 
    309     def set_dimensions(self, variables, unlimited_dims=None):

~/.dotfiles/pyenv/versions/3.9.9/envs/jupyter/lib/python3.9/site-packages/xarray/backends/common.py in add(self, source, target, region)
    154                 target[region] = source
    155             else:
--> 156                 target[...] = source
    157 
    158     def sync(self, compute=True):

~/.dotfiles/pyenv/versions/3.9.9/envs/jupyter/lib/python3.9/site-packages/xarray/backends/netCDF4_.py in __setitem__(self, key, value)
     70         with self.datastore.lock:
     71             data = self.get_array(needs_lock=False)
---> 72             data[key] = value
     73             if self.datastore.autoclose:
     74                 self.datastore.close(needs_lock=False)

src/netCDF4/_netCDF4.pyx in netCDF4._netCDF4.Variable.__setitem__()

src/netCDF4/_netCDF4.pyx in netCDF4._netCDF4.Variable._put()

IndexError: size of data array does not conform to slice

Anything else we need to know?

I've been unable to recreate the specific error I'm getting in a minimal example. However, removing the char_dim_name encoding key does work around it.

While digging through the xarray issues, these looked possibly relevant: #2219 #2895
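In the meantime, the workaround I'm using is roughly the sketch below: drop the stale key from every variable's encoding on the subset before writing (drop_char_dim_names is just a name made up for this example):

# Sketch of a workaround: remove the stale char_dim_name from every variable's
# encoding so a fresh stringN dimension sized to the subset gets created.
# Note this mutates the encoding dicts in place.
def drop_char_dim_names(ds):
    for var in ds.variables.values():
        var.encoding.pop("char_dim_name", None)
    return ds

profile = ds1[{"dim0": 0}]
drop_char_dim_names(profile).to_netcdf("workaround.nc")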

Actual traceback I get with my data
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
/var/folders/y1/63dlf4614h5d2cgr5g1t_5lh0000gn/T/ipykernel_64155/3328648456.py in <module>
----> 1 ds[{"N_PROF": 0}].to_netcdf("test.nc")

~/.dotfiles/pyenv/versions/3.9.9/envs/jupyter/lib/python3.9/site-packages/xarray/core/dataset.py in to_netcdf(self, path, mode, format, group, engine, encoding, unlimited_dims, compute, invalid_netcdf)
   1899         from ..backends.api import to_netcdf
   1900 
-> 1901         return to_netcdf(
   1902             self,
   1903             path,

~/.dotfiles/pyenv/versions/3.9.9/envs/jupyter/lib/python3.9/site-packages/xarray/backends/api.py in to_netcdf(dataset, path_or_file, mode, format, group, engine, encoding, unlimited_dims, compute, multifile, invalid_netcdf)
   1070         # TODO: allow this work (setting up the file for writing array data)
   1071         # to be parallelized with dask
-> 1072         dump_to_store(
   1073             dataset, store, writer, encoding=encoding, unlimited_dims=unlimited_dims
   1074         )

~/.dotfiles/pyenv/versions/3.9.9/envs/jupyter/lib/python3.9/site-packages/xarray/backends/api.py in dump_to_store(dataset, store, writer, encoder, encoding, unlimited_dims)
   1117         variables, attrs = encoder(variables, attrs)
   1118 
-> 1119     store.store(variables, attrs, check_encoding, writer, unlimited_dims=unlimited_dims)
   1120 
   1121 

~/.dotfiles/pyenv/versions/3.9.9/envs/jupyter/lib/python3.9/site-packages/xarray/backends/common.py in store(self, variables, attributes, check_encoding_set, writer, unlimited_dims)
    263         self.set_attributes(attributes)
    264         self.set_dimensions(variables, unlimited_dims=unlimited_dims)
--> 265         self.set_variables(
    266             variables, check_encoding_set, writer, unlimited_dims=unlimited_dims
    267         )

~/.dotfiles/pyenv/versions/3.9.9/envs/jupyter/lib/python3.9/site-packages/xarray/backends/common.py in set_variables(self, variables, check_encoding_set, writer, unlimited_dims)
    305             )
    306 
--> 307             writer.add(source, target)
    308 
    309     def set_dimensions(self, variables, unlimited_dims=None):

~/.dotfiles/pyenv/versions/3.9.9/envs/jupyter/lib/python3.9/site-packages/xarray/backends/common.py in add(self, source, target, region)
    154                 target[region] = source
    155             else:
--> 156                 target[...] = source
    157 
    158     def sync(self, compute=True):

~/.dotfiles/pyenv/versions/3.9.9/envs/jupyter/lib/python3.9/site-packages/xarray/backends/netCDF4_.py in __setitem__(self, key, value)
     70         with self.datastore.lock:
     71             data = self.get_array(needs_lock=False)
---> 72             data[key] = value
     73             if self.datastore.autoclose:
     74                 self.datastore.close(needs_lock=False)

src/netCDF4/_netCDF4.pyx in netCDF4._netCDF4.Variable.__setitem__()

~/.dotfiles/pyenv/versions/3.9.9/envs/jupyter/lib/python3.9/site-packages/netCDF4/utils.py in _StartCountStride(elem, shape, dimensions, grp, datashape, put, use_get_vars)
    354         fullslice = False
    355     if fullslice and datashape and put and not hasunlim:
--> 356         datashape = broadcasted_shape(shape, datashape)
    357 
    358     # pad datashape with zeros for dimensions not being sliced (issue #906)

~/.dotfiles/pyenv/versions/3.9.9/envs/jupyter/lib/python3.9/site-packages/netCDF4/utils.py in broadcasted_shape(shp1, shp2)
    962     a = as_strided(x, shape=shp1, strides=[0] * len(shp1))
    963     b = as_strided(x, shape=shp2, strides=[0] * len(shp2))
--> 964     return np.broadcast(a, b).shape

ValueError: shape mismatch: objects cannot be broadcast to a single shape.  Mismatch is between arg 0 with shape (5,) and arg 1 with shape (6,).

Environment

INSTALLED VERSIONS

commit: None
python: 3.9.9 (main, Jan 5 2022, 11:21:18)
[Clang 13.0.0 (clang-1300.0.29.30)]
python-bits: 64
OS: Darwin
OS-release: 21.3.0
machine: arm64
processor: arm
byteorder: little
LC_ALL: en_US.UTF-8
LANG: en_US.UTF-8
LOCALE: ('en_US', 'UTF-8')
libhdf5: 1.13.0
libnetcdf: 4.8.1

xarray: 2022.3.0
pandas: 1.3.5
numpy: 1.22.0
scipy: None
netCDF4: 1.5.8
pydap: None
h5netcdf: None
h5py: None
Nio: None
zarr: None
cftime: 1.5.2
nc_time_axis: None
PseudoNetCDF: None
rasterio: None
cfgrib: None
iris: None
bottleneck: None
dask: None
distributed: None
matplotlib: None
cartopy: None
seaborn: None
numbagg: None
fsspec: None
cupy: None
pint: 0.18
sparse: None
setuptools: 58.1.0
pip: 21.2.4
conda: None
pytest: 6.2.5
IPython: 7.31.0
sphinx: 4.4.0

@DocOtak added the bug and needs triage (Issue that has not been reviewed by xarray team member) labels Mar 14, 2022
@dcherian added the topic-backends label and removed the needs triage label Apr 9, 2022
macovskym added a commit to CAnBioNet/TkNA that referenced this issue Dec 9, 2023
The failure is a result of an xarray bug that can occur after subsetting
data that was itself loaded from netcdf.
See pydata/xarray#6352 for the issue and
pydata/xarray#7689 for the fix used to create
the workaround.