Internal: Improve use case testing #685
Comments
It would be overwhelming to have a use case in the repository for every possible configuration of the MET tools, so we should consider creating tests that cover more cases beyond the use cases. For example, we could ensure that the wrappers are able to generate the MET unit tests: https://github.com/dtcenter/MET/blob/main_v9.1/test/xml/unit_python.xml
Update on this issue:
This issue is very general and testing could be improved endlessly. I think the remaining task from this issue that would be useful is to split up the pytests so that they can run in parallel. There may be a way to group tests through pytest so that it is easier to run them in separate jobs in GitHub Actions. Once that is completed to remove this bottleneck in the testing workflow, this issue could be closed.
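For context, here is a minimal sketch of the kind of grouping pytest supports through custom markers; the marker names, test path, and pytest.ini contents below are illustrative rather than the ones ultimately chosen.

```bash
# Custom markers are registered in pytest.ini (or setup.cfg) so pytest does
# not warn about unknown markers, e.g.:
#
#   [pytest]
#   markers =
#       util: utility function tests
#       wrapper: wrapper tests
#
# Individual tests are then tagged with @pytest.mark.util / @pytest.mark.wrapper.
# Each group can be run on its own, for example in separate GHA jobs:
pytest -m util path/to/pytests
pytest -m wrapper path/to/pytests
# or run everything that is NOT in a given group:
pytest -m "not util" path/to/pytests
```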
per #685, added custom pytest markers for many tests so we can run groups of tests in automation to speed things up. removed unused imports from tests and removed parentheses from assert statements when they are not needed
per #685, changed logic from checking if the test category is not equal to 'pytest' to checking if it does not start with 'pytest', to allow groups of pytests
Update on improving testing by making pytests run faster.
Another interesting note is that the pytests appear to run much slower when they are called within a large group of tests. For example, the grid_stat tests take about 10 seconds to run when only those tests are run, but they take more than twice as long when run with all of the tests.
Due to the overhead of setting up each job in the current implementation of the test suite, it appears that splitting the pytests into separate jobs does not provide the desired benefit. It also takes away from the number of jobs that can be started when the full suite of use case tests is also run, which would likely slow down the total time.

It does appear that splitting the pytests into groups via markers and running each group serially in a single job speeds up execution considerably. The 'pytests' job previously took ~28m to run. Splitting the tests into 3 groups takes <11m. By comparison, when running the pytests in 3 separate jobs, the longest job in my test took just under 10m. It appears that our effort is better spent splitting the pytests into smaller groups to run within a single job.

In case we do decide we need to split the pytests into different jobs, the following code snippets can be used to recreate this functionality. In .github/jobs/get_use_cases_to_run.sh, add the following if $run_unit_tests is true:
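(The snippet itself was not preserved in this text. The sketch below illustrates the kind of addition described; the group names and the matrix variable are assumptions rather than the actual contents of get_use_cases_to_run.sh.)

```bash
# If unit tests were requested, add one matrix entry per pytest marker group
# so that each group of pytests runs in its own GitHub Actions job.
if [ "$run_unit_tests" == "true" ]; then
  pytest_groups="util wrapper wrapper_a plotting"   # illustrative group names
  for group in $pytest_groups; do
    matrix="${matrix}\"pytests_${group}\","
  done
fi
```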
In .github/actions/run_tests/entrypoint.sh, instead of looping over the pytest_groups.txt file to add commands for each marker, parse out the marker info and call pytest once:
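(Likewise, the original snippet is missing here. Below is a hedged sketch of the single-invocation approach described above, assuming pytest_groups.txt lists one group per line and uses underscores in place of spaces for expressions like not_<marker>; the file path and group names are assumptions.)

```bash
# Combine the marker groups listed in pytest_groups.txt into a single pytest
# marker expression, e.g. lines "util", "wrapper", "not_plotting" become
# "util or wrapper or not plotting", then call pytest once.
marker_expr=""
while read -r group; do
  # translate underscore placeholders back into pytest expression syntax
  group=$(echo "$group" | sed -e 's/^not_/not /' -e 's/_or_/ or /g')
  if [ -z "$marker_expr" ]; then
    marker_expr="$group"
  else
    marker_expr="$marker_expr or $group"
  fi
done < pytest_groups.txt

pytest -m "$marker_expr"
```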
The checks using the 'status' variable are no longer needed in this case, so that can be removed as well.
removed code that is no longer needed (added comment in issue #685 if this logic is desired in the future)
* per #685, added custom pytest markers for many tests so we can run groups of tests in automation to speed things up. removed unused imports from tests and removed parentheses from assert statements when they are not needed
* per #685, changed logic from checking if the test category is not equal to 'pytest' to checking if it does not start with 'pytest', to allow groups of pytests
* per #685, run pytests with markers to subset tests into groups
* fixed check if string starts with pytests
* added missing pytest marker name
* added logic to support running all pytests that do not match a given marker with the 'not <marker>' syntax
* change pytest group to wrapper because the test expects another test to have run prior to running
* fix 'not' logic by adding quotation marks around value
* another approach to fixing 'not' functionality for tests
* added util marker to more tests
* fixed typo in 'not' logic
* added util marker to more tests again
* fixed logic to split string
* marked rest of util tests with util marker
* fixed another typo in string splitting logic
* tested change that should properly split strings
* moved wrapper tests into wrapper directory
* changed marker for plotting tests
* added plotting marker
* improved logic for removing underscore after 'not' and around 'or' to specify more complex marker strings
* test running group of 3 markers
* fixed path that broke when test file was moved into a lower directory
* changed StatAnalysis tests to use plotting marker because some of the tests involve plotting but the other StatAnalysis tests produce output that is used in the plotting tests
* changed some tests from marker 'wrapper' to 'wrapper_a' to split up some of these tests into separate runs
* test to see if running pytests in single job but split into groups by marker will improve the timing enough
* fixed typos in new logic
* removed code that is no longer needed (added comment in issue #685 if this logic is desired in the future)
* per #685, divided pytests into smaller groups
* added a test that will fail to confirm that the entire pytest job will fail as expected
* add error message if any pytests failed to help reviewer search for failed tests
* removed failing test after confirming that entire pytest job properly reports an error if any test fails
* turn on single use case group to make sure logic to build matrix of test jobs to run still works as expected
* turn off use case after confirming tests were created properly
* added documentation to contributor's guide to describe changes to unit test logic
* added note about adding new pytest markers
The pytests are now run in groups and PR #1992 includes updates to log formatting so that the GitHub Actions log output is easier to navigate. Closing this issue.
We have discussed a few ideas to improve the use case tests, which are very time consuming to run. We should generate a testing strategy document to outline what improvements can be made and what we want to have implemented. Once we have a solid plan, create sub-issues.
Describe the New Feature
Initial brainstorming ideas include:
Acceptance Testing
List input data types and sources.
Describe tests required for new functionality.
Time Estimate
Many hours - sub-issues are needed
Sub-Issues
Consider breaking the new feature down into sub-issues.
Relevant Deadlines
None
Funding Source
None
Define the Metadata
Assignee
Labels
Projects and Milestone
Define Related Issue(s)
A document for testing could be used as a guide to set up testing for other projects.
Consider the impact to the other METplus components.
New Feature Checklist
See the METplus Workflow for details.
Branch name:
feature_<Issue Number>_<Description>
Pull request:
feature <Issue Number> <Description>
Select: Reviewer(s), Project(s), Milestone, and Linked issues