

Setup Codecoverage CI #993

Closed
seanpmorgan opened this issue Jan 31, 2020 · 3 comments
Labels
blocked Pending something elses completion discussion needed test-cases Related to Addons tests

Comments

@seanpmorgan
Member

Given the number of contributions and individual maintainers, IMO it would be a good idea to add a code coverage check so we can see whether new additions are decreasing coverage and need improved testing.

Thoughts / Suggestions for tooling?

@seanpmorgan seanpmorgan added discussion needed test-cases Related to Addons tests labels Jan 31, 2020
@gabrieldemarmiesse
Member

I'm not a fan of the idea. I've seen it used several times in different repositories. In theory the tool is useful; in practice, well...

The issue is that the percentage of code covered by the tests depends on the codebase. Say you make a change to improve an error message (make it longer): if that error is never hit in the tests, the CI will fail, even though adding the test wasn't in the scope of the PR (the PR is focused on a better error message).

Other situations where a code coverage CI will fail when it shouldn't:

  • Refactoring
  • Improving obvious checks of user input
  • Adding tooling

Code coverage is also not a good proxy for the quality of the tests. For example, it won't catch a contributor who adds a feature but never bothers testing anything other than the default inputs of the function.
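To make that point concrete, here is a toy sketch (not from this thread; the function and test names are made up): a single test achieves 100% line coverage of a function while never exercising a non-default argument.

```python
def scale(x, factor=2):
    """Multiply x by factor (default 2)."""
    return x * factor

def test_scale():
    # This single call executes every line of `scale`,
    # so line coverage reports 100%...
    assert scale(3) == 6

# ...yet the non-default `factor` path is never tested: a
# regression such as hardcoding `return x * 2` would keep this
# test green while silently breaking scale(3, factor=5).
test_scale()
```

A coverage gate would wave this PR through; only a reviewer (or a mutation-testing tool) would notice the missing case.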

Even if we add a threshold (for example, a 0.1% drop in coverage is OK), then people making small pull requests won't be flagged when they should be, and in the end it adds up...
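For reference, this kind of tolerance is what hosted tools like Codecov configure; a minimal sketch of a hypothetical `codecov.yml` allowing such a drop (exact keys per Codecov's documentation):

```yaml
coverage:
  status:
    project:
      default:
        # Fail the status check only if project coverage drops
        # by more than 0.1% relative to the base commit.
        threshold: 0.1%
```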

In all honesty, I've never seen a repo where code coverage was implemented and was actually helpful, except the ones where it's completely silent. I'd rather go toward other good ideas like automatic checks (#992, #989), flake8, and other tools we can trust to do a good job in any situation.
Let's give devs better tooling that makes it easy to write good tests with many configurations.

I'd be OK with it being completely silent, meaning coverage appears in the logs, file by file, for the people who care, but with no CI check depending on it and no bot posting useless notifications on the pull request.
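A silent setup like that might look roughly like the following, assuming the tests could be run with pytest and the pytest-cov plugin (an illustrative CLI fragment, not this repo's actual test command; the repo runs tests through Bazel):

```shell
# Print a file-by-file coverage report in the CI logs only.
# No --cov-fail-under flag, so coverage can never fail the build.
pytest tensorflow_addons/ \
    --cov=tensorflow_addons \
    --cov-report=term-missing
```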

Sorry about my rant here 😓 I've had too many failed CI runs, blocked pull requests, and useless notifications.

@gabrieldemarmiesse
Member

I looked into how to implement minimal code coverage for Python with Bazel, with just a small report we could get by running a command locally. Some findings:

bazelbuild/rules_python#43
bazelbuild/bazel#10660
https://groups.google.com/forum/#!msg/bazel-discuss/LC97zBrMG84/cfVNdus-AwAJ

Long story short: if we run the unit tests with Bazel, we can't get Python coverage, unless someone is motivated enough to make a pull request to the Bazel repository.

@gabrieldemarmiesse gabrieldemarmiesse added the blocked Pending something elses completion label Feb 5, 2020
@seanpmorgan
Member Author

Thanks for the info! I've subscribed to those issues and will keep an eye on them. I think you made a number of good points about its value anyway, so I'm closing this until further information is available.

2 participants