
Multi-scale datasets and custom indexes #5376

Open
benbovy opened this issue May 26, 2021 · 6 comments

Comments

benbovy (Member) commented May 26, 2021

I've been wondering if:

  • multi-scale datasets are generic enough to implement some related functionality in Xarray, e.g., as new Dataset and/or DataArray method(s)
  • we could leverage custom indexes for that (see the design notes)

I'm thinking of an API that would look like this:

# lazily load a big n-d image (full resolution) as a xarray.Dataset
xyz_dataset = ...

# set a new index for the x/y/z coordinates
# (`reduction` and `pre_compute_scales` are optional and passed
# as arguments to `ImagePyramidIndex`)
xyz_dataset = xyz_dataset.set_index(
    ('x', 'y', 'z'),
    ImagePyramidIndex,
    reduction=np.mean,
    pre_compute_scales=(2, 2),
)

# get a slice (ImagePyramidIndex will be used to dynamically scale the data
# or load the right pre-computed dataset)
xyz_slice = xyz_dataset.sel_and_rescale(x=slice(...), y=slice(...), z=slice(...))

where ImagePyramidIndex is not a "common" index, i.e., it cannot be used directly with Xarray's .sel() nor for data alignment. Using an index here might still make sense for such data extraction and resampling operations IMHO. We could extend the xarray.Index API to handle multi-scale datasets, so that ImagePyramidIndex could either do the scaling dynamically (maybe using a cache) or just lazily load pre-computed data, e.g., from an NGFF / OME-Zarr dataset. Both the implementation and the functionality can be pretty flexible: custom options may be passed through the Xarray API either when creating the index or when extracting a data slice.
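To make the dynamic-scaling idea a bit more concrete, here is a minimal, purely illustrative sketch of the level-selection logic such an ImagePyramidIndex might perform internally when asked for a slice at some target resolution (the function name and all parameters are hypothetical, not a proposed API):

```python
def choose_level(requested_extent, target_size, factor=2, max_level=8):
    """Pick the coarsest pyramid level that still yields at least
    `target_size` samples across a slice spanning `requested_extent`
    full-resolution samples, assuming a downsampling factor `factor`
    per level (level 0 = full resolution)."""
    level = 0
    while level < max_level and requested_extent / factor ** (level + 1) >= target_size:
        level += 1
    return level

# a 1024-sample slice rendered onto ~256 pixels can use level 2 (4x coarser)
level = choose_level(1024, 256)  # -> 2
```

The index would then either compute that level on the fly from the full-resolution data or fetch the matching pre-computed level from storage.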

A hierarchical structure of xarray.Dataset objects is already discussed in #4118 for multi-scale datasets, but I'm wondering if using indexes could be an alternative approach (it could also be complementary, i.e., ImagePyramidIndex could rely on such hierarchical structure under the hood).

I'd see some advantages of the index approach, although this is the perspective of a naive user who does not work with multi-scale datasets:

  • it is flexible: the scaling may be done dynamically without having to store the results in a hierarchical collection with some predefined discrete levels
  • we don't need to expose anything other than a simple xarray.Dataset + a "black-box" index in which we abstract away all the implementation details. The API example shown above seems more intuitive to me than having to deal directly with Dataset groups.
  • Xarray will provide a plugin system for 3rd party indexes, allowing for more ImagePyramidIndex variants. Xarray already provides an extension mechanism (accessors) for methods like sel_and_rescale in the example above...
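As a rough illustration of the accessor route mentioned in the last point, here is a sketch using Xarray's existing register_dataset_accessor mechanism. The accessor name, the sel_and_rescale signature, and the crude coarsen-based rescaling are all assumptions for demonstration, not an actual implementation:

```python
import numpy as np
import xarray as xr

@xr.register_dataset_accessor("pyramid")  # hypothetical accessor name
class PyramidAccessor:
    """Illustrative third-party accessor exposing a `sel_and_rescale`
    method on any Dataset."""

    def __init__(self, ds):
        self._ds = ds

    def sel_and_rescale(self, target=64, **slices):
        # label-based selection first, like a plain .sel()
        sub = self._ds.sel(**slices)
        # crude dynamic rescaling: coarsen each selected dimension by 2
        # until no dimension exceeds `target` samples
        while any(sub.sizes[d] > target for d in slices):
            factors = {d: 2 for d in slices if sub.sizes[d] > target}
            sub = sub.coarsen(factors, boundary="trim").mean()
        return sub

ds = xr.Dataset(
    {"img": (("y", "x"), np.arange(256 * 256.0).reshape(256, 256))},
    coords={"y": np.arange(256), "x": np.arange(256)},
)
small = ds.pyramid.sel_and_rescale(x=slice(0, 255), y=slice(0, 255), target=64)
```

A real ImagePyramidIndex could hide the same logic behind the index machinery instead of an accessor, but the user-facing call would look much the same.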

That said, I'd also see the benefits of exposing Dataset groups more transparently to users (in case those are loaded from a store that supports it).

cc @thewtex @joshmoore @d-v-b

@joshmoore

I don't think I am familiar enough to really judge between the suggestions, @benbovy, but I'm intrigued. I think there's certainly something to be won just by having a data structure which says these arrays/datasets represent a multiscale series. One real benefit though will be when access of that structure can simplify the client code needed to interactively load that data, e.g. with prefetching.

benbovy (Member, Author) commented May 28, 2021

I think there's certainly something to be won just by having a data structure which says these arrays/datasets represent a multiscale series.

I agree, but I'm wondering whether the multiscale series couldn't also be viewed as something that can be abstracted away, i.e., the original dataset (level 0) is the "real" dataset while all other levels are derived datasets that are convenient for some specific applications (e.g., visualization) but not very useful for general use.

Having a single xarray.Dataset with a custom index (+ custom Dataset extension) taking care of all the multiscale stuff may have benefits too. For example, it would be pretty straightforward to reuse a tool like https://github.com/xarray-contrib/xpublish to interactively (pre)fetch data to web-based clients (via some custom API endpoints). More generally, I guess it's easier to integrate with existing tools built on top of Xarray vs. adding support for a new data structure.

Some related questions (out of curiosity):

  • Are there cases in practice where on-demand downsampling computation would be preferred over pre-computing and storing all pyramid levels for the full dataset? I admit it's probably a very naive question since most workflows on the client side would likely start by loading the top level (lowest resolution) dataset at full extent, which would require pre-computing the whole thing?
  • Are there cases where it makes sense to pre-compute all the pyramid levels in-memory (could be, e.g., chunked dask arrays persisted on a distributed cluster) without the need to store them?
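For the second question, a minimal sketch of what pre-computing pyramid levels in memory could look like with Xarray's coarsen (with dask-backed variables, each level could be `.persist()`-ed on a cluster instead of written to storage; `build_pyramid` and its parameters are hypothetical names for illustration):

```python
import numpy as np
import xarray as xr

def build_pyramid(ds, dims=("y", "x"), factor=2, levels=3):
    """Return a list of datasets, level 0 being full resolution and
    each subsequent level downsampled by `factor` along `dims` via
    block averaging. Nothing is written to disk."""
    pyramid = [ds]
    for _ in range(levels):
        ds = ds.coarsen({d: factor for d in dims}, boundary="trim").mean()
        pyramid.append(ds)
    return pyramid

base = xr.Dataset(
    {"img": (("y", "x"), np.ones((64, 64)))},
    coords={"y": np.arange(64), "x": np.arange(64)},
)
pyr = build_pyramid(base, levels=3)  # sizes 64 -> 32 -> 16 -> 8 along each dim
```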

d-v-b (Contributor) commented May 28, 2021

Are there cases in practice where on-demand downsampling computation would be preferred over pre-computing and storing all pyramid levels for the full dataset? I admit it's probably a very naive question since most workflows on the client side would likely start by loading the top level (lowest resolution) dataset at full extent, which would require pre-computing the whole thing?

I'm not sure when dynamic downsampling would be preferred over loading previously downsampled images from disk. In my usage, the application consuming the multiresolution images is an interactive data visualization tool, and the goal is to minimize latency / maximize responsiveness of the visualization. This would be difficult if the multiresolution images were generated dynamically from the full image: under a dynamic scheme the lowest resolution image, i.e. the one that should be fastest to load, would instead require the most I/O and compute to generate.

Are there cases where it makes sense to pre-compute all the pyramid levels in-memory (could be, e.g., chunked dask arrays persisted on a distributed cluster) without the need to store them?

Although I do not do this today, I can think of a lot of uses for this functionality -- a data processing pipeline could expose intermediate data over HTTP via xpublish, but this would require a good caching layer to prevent re-computing the same region of the data repeatedly.
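A minimal sketch of such a caching layer, assuming the requested region can be reduced to a hashable key and using the standard-library functools.lru_cache (all names and the in-memory DATA array are illustrative):

```python
from functools import lru_cache

import numpy as np

# stand-in for an expensive-to-access full-resolution array
DATA = np.arange(1024.0)

@lru_cache(maxsize=128)
def downsampled_region(start, stop, level):
    """Downsample DATA[start:stop] by 2**level via block averaging.
    Repeated requests for the same (start, stop, level) are served
    from the cache instead of being recomputed."""
    step = 2 ** level
    block = DATA[start:stop]
    n = (len(block) // step) * step
    return tuple(block[:n].reshape(-1, step).mean(axis=1))

first = downsampled_region(0, 512, 2)
again = downsampled_region(0, 512, 2)  # served from the cache
```

A production cache would also need an invalidation story when the underlying data changes, which lru_cache alone does not provide.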

thewtex (Contributor) commented Jun 1, 2021

@benbovy I also agree that it would be nice to have a data structure that encapsulates scale in a clean API: you set the currently desired scale, the same Xarray Dataset/DataArray API remains available, and a given scale can optionally be loaded lazily. Maybe an Index as proposed could be a good API, but I do not have a good enough understanding of how the interface is used in general. What would be other examples like ImagePyramidIndex, outside of the multi-scale context? Should something like Scale be used instead?

Regarding dynamic multi-scale, etc., one use case of interest is where you are interactively processing a larger-than-memory dataset and want to visualize the result over a limited domain at an intermediate scale.

shoyer (Member) commented Jun 2, 2021

I do think multi-scale datasets are common enough across different scientific fields (remote sensing, bio-imaging, simulation output, etc) that this could be worth considering.

benbovy (Member, Author) commented Jun 2, 2021

What would be other examples like ImagePyramidIndex, outside of the multi-scale context?

There can be many examples, like spatial indexes, complex grid indexes (to select cell centers/faces of a staggered grid), distributed indexes, etc. Some of them are illustrated in a presentation I gave a couple of weeks ago (slides here), although all of those examples do perform data indexing.

In the multi-scale context, I admit that the name "index" may sound confusing since an ImagePyramidIndex would not really perform any data indexing based on some coordinate labels. Perhaps ImageRescaler would be a better name?

Such an ImageRescaler might still fit well within the broad purpose of Xarray indexes IMHO, since it would enable efficient data visualization through extraction and resampling.

The goal with Xarray custom indexes is to allow (many) kinds of objects with a scope possibly much narrower than, e.g., pandas.Index, and that could be reused in a broader range of operations like data selection, resampling, alignment, etc. Xarray indexes will be explicitly part of Xarray's Dataset/DataArray data model alongside data variables, coordinates and attributes, but unlike the latter they're not intended to wrap any (meta)data. Instead, they could wrap any structure or object that is built from the (meta)data and that enables efficient operations on the data (a priori based on coordinate labels, although in some contexts like multi-scale this might be more accessory?).
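To illustrate that last point, here is a toy stand-in for an index (not the actual xarray.Index API, which was still being designed at the time): it wraps a structure derived from a coordinate, here just its sorted values, and uses it to translate label ranges into positional selections. All names are illustrative:

```python
import numpy as np

class IntervalLookup:
    """Toy 'index': wraps sorted coordinate labels (a structure built
    from the metadata, not the data itself) and translates a label
    range into an integer slice via binary search."""

    def __init__(self, coord_values):
        self._values = np.asarray(coord_values)

    def sel(self, lo, hi):
        # inclusive label range -> positional slice
        i = np.searchsorted(self._values, lo, side="left")
        j = np.searchsorted(self._values, hi, side="right")
        return slice(int(i), int(j))

x = np.arange(101)          # coordinate labels 0..100
idx = IntervalLookup(x)
pos = idx.sel(20, 40)       # -> slice(20, 41)
```

An ImagePyramidIndex would play the same role, except that the structure it wraps (the pyramid) serves rescaling rather than pure label lookup.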
