
WIP: Fix handling of semicolon and backslash characters in CMake test discovery #2676

Merged
1 commit merged into catchorg:devel on Jun 14, 2023

Conversation

robinchrist
Contributor

Description

This PR fixes the handling of semicolon and backslash characters in test names in the CMake test discovery (catch_discover_tests).
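
For context, a minimal sketch of the kind of escaping this is about (hypothetical helper name, not the actual diff of this PR): CMake treats ; as a list separator and \ as an escape character, so both have to be escaped before a discovered test name is written into the generated CTest script.

function(escape_test_name out_var name)
  # Double backslashes first so the semicolon escaping below is not mangled.
  string(REPLACE [[\]] [[\\]] name "${name}")
  # Escape semicolons so the name survives CMake's list handling.
  string(REPLACE ";" [[\;]] name "${name}")
  set(${out_var} "${name}" PARENT_SCOPE)
endfunction()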

GitHub Issues

#2674

TODO:

How can we test this? I believe introducing a new "category" of tests that actually makes use of catch_discover_tests would be the best way?

@codecov

codecov bot commented Apr 19, 2023

Codecov Report

Merging #2676 (2c9efc5) into devel (c836314) will increase coverage by 0.05%.
The diff coverage is n/a.

❗ Current head 2c9efc5 differs from pull request most recent head dc076e3. Consider uploading reports for the commit dc076e3 to get more accurate results

@@            Coverage Diff             @@
##            devel    #2676      +/-   ##
==========================================
+ Coverage   91.19%   91.24%   +0.05%     
==========================================
  Files         192      192              
  Lines        7843     7808      -35     
==========================================
- Hits         7152     7124      -28     
+ Misses        691      684       -7     

@horenmar added the "Extras" label (Touches utility scripts outside of Catch2 proper, e.g. CMake integration) on Apr 19, 2023
@horenmar
Member

How can we test this? I believe introducing a new "category" of tests that actually makes use of catch_discover_tests would be the best way

Yeah, so far these CMake scripts are untested, because it was always more work than someone was willing to do to get them tested. Especially if the various options are to be covered as well. Even limiting ourselves to just the test parsing will be kinda annoying.

One of the issues is that it will need to be checked in a separately compiled CMake project (to avoid the new tests spamming the current CTest tests), and the check will have to be done by a separate (Python) script. We should check that the various problematic test names are both registered & invoked properly -- first thing I can think of is to have each TEST_CASE write out a specific string and then checking that the output from ctest --verbose contains all the outputs it should.
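
A minimal sketch of such a check, written here as a CMake script rather than the Python script suggested above, with made-up marker strings and variable names: run ctest --verbose in the separately built scaffolding project and require that every marker printed by the special-character TEST_CASEs appears in the output.

# Hypothetical check, e.g. run as: cmake -DTEST_DIR=<scaffolding build dir> -P check_discovery.cmake
execute_process(
  COMMAND ctest --verbose
  WORKING_DIRECTORY "${TEST_DIR}"
  OUTPUT_VARIABLE ctest_output
  ERROR_VARIABLE ctest_output   # merge stderr into the same variable
  RESULT_VARIABLE ctest_result
)
if(NOT ctest_result EQUAL 0)
  message(FATAL_ERROR "ctest exited with ${ctest_result}:\n${ctest_output}")
endif()

# Each TEST_CASE is assumed to print one of these markers.
foreach(marker "ran: name with semicolon" "ran: name with backslash")
  string(FIND "${ctest_output}" "${marker}" marker_pos)
  if(marker_pos EQUAL -1)
    message(FATAL_ERROR "Marker '${marker}' missing from ctest --verbose output")
  endif()
endforeach()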

@robinchrist
Contributor Author

robinchrist commented Apr 19, 2023

first thing I can think of is to have each TEST_CASE write out a specific string and then checking that the output from ctest --verbose contains all the outputs it should.

Hmm... Is that really necessary?
Given that test cases won't run to begin with if something about the name handling is broken, wouldn't it be easier to just write a couple of test cases with SUCCEED(), run ctest and check that none of them fail?

As I've mentioned in the issue, there were a lot of false positives (failing test cases) before I fixed the bug.
That happens because of the way CTest runs the tests: the binary under test is invoked once per test case with the <test name> as an argument, so if the test name isn't handled properly, the binary simply runs no tests ("No tests ran") and CTest counts that test as failed.
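
For reference, the generated entry per test looks roughly like this (simplified sketch with a made-up name and path; the real generated entries carry more options): the CTest test name doubles as the test spec passed to the binary, so a mangled name means the spec matches nothing and the run fails with "No tests ran".

add_test(
  "Parses config with ; separator"                     # name CTest reports
  /path/to/SelfTest "Parses config with ; separator"   # same name reused as the test spec
)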

@horenmar
Member

Hmm... Is that really necessary?

Technically no, but that is where my testing paranoia has gotten to over time 😃 See e.g. the various reporter tests, which fail if the test-specific reporter does not report that it was used, so that there is a loud error if the test reporter is not selected for whatever reason.

One example:

set_tests_properties(
    Reporters::CapturedStdOutInEvents
  PROPERTIES
    PASS_REGULAR_EXPRESSION "X27 - TestReporter constructed"
    FAIL_REGULAR_EXPRESSION "X27 ERROR"
)

My issue with plain SUCCEED & checking that no tests fail is simple: what if a test fails to register at all?

This issue can be solved by counting the passed tests and checking that number against an expected number of tests.

Can this fail? Yes: if the script breaks in a way that causes a test spec to match multiple tests, e.g. bad escaping of *, we would not know, because CTest would still report the right number of tests.
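
To make that concrete (made-up names, and assuming Catch2's test-spec wildcard rules, where a trailing * matches any suffix): with bad escaping of *, the first test below would run both TEST_CASEs, the second would run one, and CTest would still see exactly two passing tests.

add_test("Globbing handles *"       /path/to/SelfTest "Globbing handles *")
add_test("Globbing handles nothing" /path/to/SelfTest "Globbing handles nothing")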

...

I am honestly willing to merge the changes as they are right now, so I will obviously also merge the PR if it contains a weaker level of checking than I would like, but I would prefer stricter checks, unless they eventually prove too annoying to maintain.

@robinchrist
Contributor Author

Maybe we can split the work?
I'm not familiar enough with the project structure to create the infrastructure for tests as you described them, but I could create test cases once the infrastructure is in place?

@horenmar added the "Finish before break" label (Things to finish before I am taking a long break) on Jun 12, 2023
@horenmar
Member

Sorry for taking so long, but I finally had enough time to work on this. There is now a test scaffolding for the catch_discover_tests functionality, and tomorrow I'll rebase your PR on current devel and merge it.

@horenmar merged commit 42ee66b into catchorg:devel on Jun 14, 2023
@horenmar removed the "Finish before break" label on Jun 14, 2023