pandas.testing.assert_series_equal precision defaults changed in pandas 1.1.0 #1018
Comments
Oh boy. I think that we should eventually rework the tests to use the new `rtol`/`atol` arguments.
I agree. It sounds doable with something like:

```python
# conftest.py
import pandas as pd
from pkg_resources import parse_version

def assert_series_equal(a, b, **kwargs):
    if parse_version(pd.__version__) >= parse_version('1.1.0'):
        # new pandas: drop the deprecated keyword
        kwargs.pop('check_less_precise', None)
    else:
        # old pandas: drop the keywords it doesn't understand
        kwargs.pop('rtol', None)
        kwargs.pop('atol', None)
    pd.testing.assert_series_equal(a, b, **kwargs)
```

and similar for `assert_frame_equal`. It's definitely not a small effort, but I'm guessing that doing it right the first time will be better in the long run; this is going to be with us for a couple of years. I'm -1 on prohibiting the latest pandas for something that only affects the test suite.
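Spelled out, the "similar for `assert_frame_equal`" part would just repeat the same version gate; a sketch under the same assumptions as the snippet above:

```python
def assert_frame_equal(a, b, **kwargs):
    # same version gate as above, applied to the DataFrame comparison
    if parse_version(pd.__version__) >= parse_version('1.1.0'):
        kwargs.pop('check_less_precise', None)
    else:
        kwargs.pop('rtol', None)
        kwargs.pop('atol', None)
    pd.testing.assert_frame_equal(a, b, **kwargs)
```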
I am OK with this, thanks for explaining with the example.
This appears to get all but a handful of the tests passing with pandas 1.1.0:

```python
import pandas as pd
from pkg_resources import parse_version

def _check_pandas_assert_kwargs(kwargs):
    if parse_version(pd.__version__) >= parse_version('1.1.0'):
        # 1e-3 and 1e-5 roughly approximate the behavior of check_less_precise
        if kwargs.pop('check_less_precise', False):
            kwargs['atol'] = 1e-3
            kwargs['rtol'] = 1e-3
        else:
            kwargs['atol'] = 1e-5
            kwargs['rtol'] = 1e-5
    else:
        kwargs.pop('rtol', None)
        kwargs.pop('atol', None)
    return kwargs

def assert_series_equal(left, right, **kwargs):
    kwargs = _check_pandas_assert_kwargs(kwargs)
    pd.testing.assert_series_equal(left, right, **kwargs)

# and similarly for the two other assert functions
```

Could you expand on what "doing it right" means? I'm not sure whether this refers to reworking the tests to use more precise expected values, or to specifying appropriate rtol/atol for each test, or perhaps something else.
@kanderso-nrel looks good! I think there are a few tests that specify a different tolerance. We could change them, or we could make your function a little more flexible.
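One way to make the wrapper more flexible along those lines (a sketch, not code from the thread: `dict.setdefault` only fills in a default tolerance when the test didn't pass its own `rtol`/`atol`):

```python
import pandas as pd
from pkg_resources import parse_version

def _check_pandas_assert_kwargs(kwargs):
    if parse_version(pd.__version__) >= parse_version('1.1.0'):
        if kwargs.pop('check_less_precise', False):
            # approximate the old check_less_precise=True behavior,
            # unless the test supplied its own tolerances
            kwargs.setdefault('atol', 1e-3)
            kwargs.setdefault('rtol', 1e-3)
        else:
            kwargs.setdefault('atol', 1e-5)
            kwargs.setdefault('rtol', 1e-5)
    else:
        kwargs.pop('rtol', None)
        kwargs.pop('atol', None)
    return kwargs
```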
I think there are only three tests* that don't pass with this function, all of which have fairly large error between the actual and expected values. If I've bisected the git history correctly, it was the "disable perez enhancement" PR that introduced the difference, but the test precisions were lowered rather than changing the expected values: https://github.com/pvlib/pvlib-python/pull/459/files#diff-1cade70498bbcfab4710eb31ae0c9e21R424-R481

I'd say let's just update the expected values for those three tests -- no need to preserve that difference, right? If so, I'll open a PR with those values updated and the above function implemented, and we'll see what Azure thinks about it.

*gremlins are preventing me from running the bifacial tests at the moment, so not sure about them
Right. Fine with me if the tests are updated.
Pandas 1.1.0 changed the default precision tolerances in `assert_series_equal` and similar functions: pandas-dev/pandas#30562. This is causing test failures on the Azure checks that use the most recent pandas version, e.g.: https://dev.azure.com/solararbiter/pvlib%20python/_build/results?buildId=3877&view=logs&j=ee50eb0a-7467-5494-0f35-e3b863355bb0&t=2efe8f58-8c60-5d45-8189-b00aa6aac1e4&l=209

I think maybe the best way to fix this is to wrap the pandas functions with our own, with some version-checking logic. Or we could rework all the failing tests to use more precise values, or bump the minimum pandas version to 1.1.0, neither of which seems like a good option to me.
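For reference, a minimal illustration of the API change being discussed (the data values here are made up): in pandas >= 1.1.0, the deprecated `check_less_precise` flag is replaced by explicit `rtol`/`atol` keywords.

```python
import pandas as pd

left = pd.Series([1.0, 2.0])
right = pd.Series([1.0, 2.0 + 1e-6])

# pandas < 1.1.0 style (deprecated in 1.1.0):
# pd.testing.assert_series_equal(left, right, check_less_precise=True)

# pandas >= 1.1.0: pass relative/absolute tolerances explicitly
pd.testing.assert_series_equal(left, right, rtol=1e-3, atol=1e-3)
```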