pandas.testing.assert_series_equal precision defaults changed in pandas 1.1.0 #1018

Closed
kandersolar opened this issue Aug 4, 2020 · 7 comments · Fixed by #1021

@kandersolar (Member)

Pandas 1.1.0 changed the default precision tolerances in assert_series_equal and similar functions: pandas-dev/pandas#30562. This is causing test failures on the Azure checks that use the most recent pandas version, e.g.: https://dev.azure.com/solararbiter/pvlib%20python/_build/results?buildId=3877&view=logs&j=ee50eb0a-7467-5494-0f35-e3b863355bb0&t=2efe8f58-8c60-5d45-8189-b00aa6aac1e4&l=209
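
Roughly speaking, the old default compared values to about five decimal places, while the new defaults are rtol=1e-5 and atol=1e-8, which is stricter for values smaller than about 1. A minimal illustration (values chosen only to show the flip, not taken from our tests):

import pandas as pd

expected = pd.Series([0.1])
result = pd.Series([0.100002])  # off by 2e-6

# pandas < 1.1.0: compared to ~5 decimal places, so this passed
# pandas >= 1.1.0: tolerance is atol + rtol*|expected| ~= 1e-6, so this raises
pd.testing.assert_series_equal(result, expected)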

I think maybe the best way to fix this is to wrap the pandas functions in our own versions with some version-checking logic. Alternatively, we could rework all the failing tests to use more precise values, or bump the minimum pandas version to 1.1.0, but neither of those seems like a good option to me.

@cwhanse (Member) commented Aug 4, 2020

Oh boy. I think we should eventually rework the tests to use the new atol and rtol arguments, but that's no small effort. In the meantime we could override the defaults with less precise tolerances and prohibit pandas >= 1.1.0 until that work is done.
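
For reference, the new arguments would be used something like this (the tolerance value here is only illustrative):

import pandas as pd

result = pd.Series([0.123456])
expected = pd.Series([0.123460])

# pandas >= 1.1.0: pass an explicit tolerance instead of check_less_precise
pd.testing.assert_series_equal(result, expected, rtol=1e-4)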

@wholmgren (Member)

> I think maybe the best way to fix this is to wrap the pandas functions in our own versions with some version-checking logic.

I agree. It sounds doable with something like

# conftest.py
import pandas as pd
from pkg_resources import parse_version

def assert_series_equal(a, b, **kwargs):
    if parse_version(pd.__version__) >= parse_version('1.1.0'):
        # pandas >= 1.1.0 deprecates check_less_precise in favor of rtol/atol
        kwargs.pop('check_less_precise', None)
    else:
        # rtol/atol don't exist in older pandas
        kwargs.pop('rtol', None)
        kwargs.pop('atol', None)
    pd.testing.assert_series_equal(a, b, **kwargs)

and similarly for assert_frame_equal.
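
For completeness, the frame wrapper would presumably look the same (a sketch reusing the imports above):

def assert_frame_equal(a, b, **kwargs):
    if parse_version(pd.__version__) >= parse_version('1.1.0'):
        kwargs.pop('check_less_precise', None)
    else:
        kwargs.pop('rtol', None)
        kwargs.pop('atol', None)
    pd.testing.assert_frame_equal(a, b, **kwargs)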

It's definitely not a small effort but I'm guessing that doing it right the first time will be better in the long run. This is going to be with us for a couple of years.

I'm -1 on prohibiting the latest pandas for something that only affects the test suite.

@cwhanse (Member) commented Aug 4, 2020

> > I think maybe the best way to fix this is to wrap the pandas functions in our own versions with some version-checking logic.
>
> I agree. It sounds doable with something like

I am OK with this, thanks for explaining with the example.

@kandersolar (Member, Author)

This appears to get all but a handful of the tests passing with pandas 1.1.0:

import pandas as pd
from pkg_resources import parse_version

def _check_pandas_assert_kwargs(kwargs):
    if parse_version(pd.__version__) >= parse_version('1.1.0'):
        # 1e-3 and 1e-5 roughly approximate the behavior of check_less_precise
        if kwargs.pop('check_less_precise', False):
            kwargs['atol'] = 1e-3
            kwargs['rtol'] = 1e-3
        else:
            kwargs['atol'] = 1e-5
            kwargs['rtol'] = 1e-5
    else:
        kwargs.pop('rtol', None)
        kwargs.pop('atol', None)
    return kwargs

def assert_series_equal(left, right, **kwargs):
    kwargs = _check_pandas_assert_kwargs(kwargs)
    pd.testing.assert_series_equal(left, right, **kwargs)

# and similarly for the two other assert functions
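
Test modules would then import the wrapper instead of calling pandas directly; a hypothetical example (not from the actual suite):

# in a test module
import pandas as pd
from conftest import assert_series_equal

def test_example():
    result = pd.Series([1.000004])
    expected = pd.Series([1.0])
    # the wrapper maps check_less_precise onto rtol/atol on pandas >= 1.1.0
    assert_series_equal(result, expected, check_less_precise=False)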

> It's definitely not a small effort but I'm guessing that doing it right the first time will be better in the long run. This is going to be with us for a couple of years.

Could you expand on what "doing it right" means? I'm not sure whether this refers to reworking the tests to use more precise expected values, to specifying appropriate rtol/atol for each test, or to something else.

@wholmgren (Member)

@kanderso-nrel looks good!

I think there are a few tests that specify a different tolerance. We could change them or we could make your function a little more flexible.
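
One way to make it a little more flexible would be to fill in defaults only when a test hasn't passed its own tolerances, e.g. via setdefault (a sketch, not tested):

def _check_pandas_assert_kwargs(kwargs):
    if parse_version(pd.__version__) >= parse_version('1.1.0'):
        # these are only fallbacks; explicit rtol/atol from a test win
        default_tol = 1e-3 if kwargs.pop('check_less_precise', False) else 1e-5
        kwargs.setdefault('atol', default_tol)
        kwargs.setdefault('rtol', default_tol)
    else:
        kwargs.pop('rtol', None)
        kwargs.pop('atol', None)
    return kwargs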

@kandersolar (Member, Author)

I think there are only three tests* that don't pass with this function, all of which have fairly large errors between the actual and expected values. If I've bisected the git history correctly, the "disable perez enhancement" PR (#459) introduced the difference, but the test precisions were lowered rather than the expected values updated: https://github.com/pvlib/pvlib-python/pull/459/files#diff-1cade70498bbcfab4710eb31ae0c9e21R424-R481

I'd say let's just update the expected values for those three tests -- no need to preserve that difference, right? If so I'll open a PR with those values updated and the above function implemented and we'll see what Azure thinks about it.

*gremlins are preventing me from running the bifacial tests at the moment, so not sure about them

@wholmgren (Member)

Right. Fine with me if the tests just become `assert ac.iloc[0] > 0; assert ac.iloc[1] <= 0`; model chain tests don't need precision.
