fix benchmarks compatibility with newer pytest-cases #14764
Merged
Description
Reverts changes from #14756. Updates cudf's tests to be compatible with the latest pytest-cases (version 3.8.2), and requires pytest-cases>=3.8.2 on that project to be sure older versions aren't used.

Checklist
Notes for Reviewers
The fix here was to stop using pytest-cases's automatic collection of cases, and to instead explicitly tell it where to look, per the pytest_cases.get_all_cases() docs (docs link). Since there are so few uses of pytest_cases in this project, I think explicitly passing module names is preferable to adjusting the repo to work with the new patterns introduced in smarie/python-pytest-cases#320; a rough sketch of what that looks like is included below. For clarity, I'm also proposing renaming files that only contain test cases such that they begin with cases_*, but this isn't strictly necessary... happy to revert that if you'd like.
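This is only a minimal, hypothetical sketch of the pattern, not cudf's actual benchmark code: the cases_dataframe module, the case function, and the bench_sum function are illustrative, and benchmark is assumed to be the pytest-benchmark fixture.

```python
# Hypothetical sketch: explicit case collection with pytest-cases.
# Instead of relying on automatic case discovery (which searches for
# sibling case modules by naming convention), the module that holds the
# `case_*` functions is passed explicitly via `cases=`.
import pytest_cases

import cases_dataframe  # assumed importable module containing `case_*` functions


@pytest_cases.parametrize_with_cases("df", cases=cases_dataframe)
def bench_sum(benchmark, df):
    # `benchmark` stands in for the pytest-benchmark fixture.
    benchmark(df.sum)
```

pytest_cases.get_all_cases() takes a cases= argument in the same spirit, so explicit module references work there as well.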
How I tested this

Following CONTRIBUTING.md:
Based on the advice in https://docs.rapids.ai/resources/reproducing-ci/, pointed build scripts at local output directories.
Then re-generated the dependency files.
Then created a conda environment to run tests in and built the library.
```shell
conda env create \
  --name cudf-dev \
  --file ./conda/environments/all_cuda-120_arch-x86_64.yaml

source activate cudf-dev

./ci/build.sh libcudf cudf
```
Then ran the benchmarks, following cudf/ci/test_python_cudf.sh (line 30 in 726a7f3).
Saw those tests succeed with my changes, and fail like #14712 (comment) without them.
I also tried adding a raise RuntimeError() to the test cases in e.g. cases_dataframe.py, just to convince myself that the benchmark cases were actually being successfully collected and run.
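For reference, that sanity check amounted to something like the following (hypothetical case name, not the real contents of cases_dataframe.py):

```python
# Temporary sanity check (hypothetical case): if pytest-cases collects this
# module, the benchmark run fails loudly instead of silently skipping it.
def case_dataframe_smoke():
    raise RuntimeError("case was collected and executed")
```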