Hypothesis tests for roundtrip to & from pandas #3285

Merged: 17 commits, Oct 30, 2019 (changes shown from 7 commits)
5 changes: 5 additions & 0 deletions properties/conftest.py
@@ -0,0 +1,5 @@
from hypothesis import settings

# Run for a while - arrays are a bigger search space than usual
settings.register_profile("ci", deadline=None)
settings.load_profile("ci")
7 changes: 1 addition & 6 deletions properties/test_encode_decode.py
@@ -6,15 +6,10 @@
"""
import hypothesis.extra.numpy as npst
Contributor: These may need to be guarded too using pytest.importorskip perhaps? @max-sixty what do you think?
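
(For reference only, a minimal sketch of what such a module-level guard might look like, assuming the usual pytest.importorskip pattern; this exact code is not part of the diff.)

import pytest

# Skip the whole test module when an optional dependency is missing,
# rather than erroring at import time.
pytest.importorskip("hypothesis")
pytest.importorskip("pandas")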

Member Author: Aha, I was being distracted by the other errors around the real one. Let's see if the latest commit helps.

import hypothesis.strategies as st
from hypothesis import given, settings
from hypothesis import given

import xarray as xr

# Run for a while - arrays are a bigger search space than usual
settings.register_profile("ci", deadline=None)
settings.load_profile("ci")


an_array = npst.arrays(
dtype=st.one_of(
npst.unsigned_integer_dtypes(), npst.integer_dtypes(), npst.floating_dtypes()
91 changes: 91 additions & 0 deletions properties/test_pandas_roundtrip.py
@@ -0,0 +1,91 @@
"""
Property-based tests for roundtripping between xarray and pandas objects.
"""
import hypothesis.extra.numpy as npst
import hypothesis.extra.pandas as pdst
import hypothesis.strategies as st
from hypothesis import given

import numpy as np
import pandas as pd
import xarray as xr

numeric_dtypes = st.one_of(
npst.unsigned_integer_dtypes(), npst.integer_dtypes(), npst.floating_dtypes()
)

numeric_series = numeric_dtypes.flatmap(lambda dt: pdst.series(dtype=dt))

an_array = npst.arrays(
dtype=numeric_dtypes,
shape=npst.array_shapes(max_dims=2), # can only convert 1D/2D to pandas
)


@st.composite
def datasets_1d_vars(draw):
"""Generate datasets with only 1D variables

Suitable for converting to pandas dataframes.
"""
n_vars = draw(st.integers(min_value=1, max_value=3))
n_entries = draw(st.integers(min_value=0, max_value=100))
dims = ("rows",)
vars = {}
for _ in range(n_vars):
Contributor: This pattern - draw a number, then draw that many elements - is tempting but tends to be inefficient when Hypothesis tries to minimise any failures.

The alternative, which we recommend, is to generate collections using the st.lists() strategy - that way Hypothesis will be able to operate in terms of elements of the list.

In this case it's probably only worth doing so for either the vars or entries dimension, keeping the other as-is. If you're keen to do both, it's complicated enough that I'd just fall back on the Hypothesis pandas extension and .map(pd.DataFrame.to_xarray) over the result 😅
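
(An illustrative aside, not part of the review thread: a minimal sketch of the two patterns, using invented names. Letting st.lists() own the collection means Hypothesis can drop or simplify individual elements while shrinking, instead of re-drawing everything under a smaller count.)

import hypothesis.strategies as st

# Count-then-draw pattern (shrinks poorly):
#     n = draw(st.integers(0, 10))
#     values = [draw(st.floats(allow_nan=False)) for _ in range(n)]
# Collection-strategy pattern (Hypothesis can remove or simplify elements directly):
values_strategy = st.lists(st.floats(allow_nan=False), min_size=0, max_size=10)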

Member Author: I'm not sure how to do this, because in both dimensions I want to generate multiple things of the same length - the same number of names and arrays for the vars dimension, and the same number of entries in each array for the entries dimension. If I naively generate lists, they'll have different lengths.

Is it better to generate one such thing with the lists strategy, and then make the others match its length, rather than generating a number to use as the length for all of them? Or is there some overall cleverer way that I'm not seeing?

Contributor: You could draw indices before the loop, then draw a list of (name, array) tuples. Or in this case you could use st.dictionaries() to, well, generate a list of key-value tuples internally.

The other nice trick would be to draw your index first and use its length - deleting elements from that will be slightly more efficient than shrinking the n_elements parameter.

Putting it all together, I'd write

from functools import partial  # needed for partial() below

idx = draw(pdst.indexes(dtype="u8", min_size=0, max_size=100))
vars_strat = st.dictionaries(
    keys=st.text(),
    values=npst.arrays(dtype=numeric_dtypes, shape=len(idx)).map(partial(xr.Variable, ("rows",))),
    min_size=1,
    max_size=3,
)
return xr.Dataset(draw(vars_strat), coords={"rows": idx})

Member Author: Thanks, that does look neater!

name = draw(st.text(min_size=0))
dt = draw(numeric_dtypes)
arr = draw(npst.arrays(dtype=dt, shape=(n_entries,)))
vars[name] = xr.Variable(dims, arr)

coords = {
dims[0]: draw(pdst.indexes(dtype="u8", min_size=n_entries, max_size=n_entries))
}

return xr.Dataset(vars, coords=coords)


@given(st.data(), an_array)
def test_roundtrip_dataarray(data, arr):
names = data.draw(
st.lists(st.text(), min_size=arr.ndim, max_size=arr.ndim, unique=True).map(
tuple
)
)
coords = {name: np.arange(n) for (name, n) in zip(names, arr.shape)}
original = xr.DataArray(arr, dims=names, coords=coords)
roundtripped = xr.DataArray(original.to_pandas())
xr.testing.assert_identical(original, roundtripped)


@given(datasets_1d_vars())
def test_roundtrip_dataset(dataset):
df = dataset.to_dataframe()
assert isinstance(df, pd.DataFrame)
roundtripped = xr.Dataset(df)
xr.testing.assert_identical(dataset, roundtripped)


@given(numeric_series, st.text())
def test_roundtrip_pandas_series(ser, ix_name):
# Need to name the index, otherwise Xarray calls it 'dim_0'.
ser.index.name = ix_name
arr = xr.DataArray(ser)
roundtripped = arr.to_pandas()
pd.testing.assert_series_equal(ser, roundtripped)


# Dataframes with columns of all the same dtype - for roundtrip to DataArray
numeric_homogeneous_dataframe = numeric_dtypes.flatmap(
lambda dt: pdst.data_frames(columns=pdst.columns(["a", "b", "c"], dtype=dt))
)


@given(numeric_homogeneous_dataframe)
def test_roundtrip_pandas_dataframe(df):
# Need to name the indexes, otherwise Xarray names them 'dim_0', 'dim_1'.
df.index.name = "rows"
df.columns.name = "cols"
arr = xr.DataArray(df)
roundtripped = arr.to_pandas()
pd.testing.assert_frame_equal(df, roundtripped)
