ci: fix a test and update CI to catch errors regularly #199
Conversation
Saransh-cpp commented on May 31, 2022 (edited):
- Fixed a numerical error in tests (no idea how it suddenly appeared; perhaps a change in an upstream repository?)
- Added a cron job to CI
- Added workflow_dispatch to CI
- CI now cancels or skips intermediate builds
- Added steps to generate and upload coverage report
Should I check the Python version and assert different values based on it? Or is there a better way to fix this? 😅
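A minimal sketch of the version-gated assertion being asked about, assuming the discrepancy really does track the Python version (the function and values here are hypothetical stand-ins, not from this PR):

```python
import sys

import pytest


def compute_value():
    # Stand-in for the real computation whose result drifted.
    return 1.0000001 if sys.version_info >= (3, 10) else 1.0


def test_numerical_value():
    # Gate the expected value on the interpreter version only if the
    # discrepancy genuinely tracks it.
    expected = 1.0000001 if sys.version_info >= (3, 10) else 1.0
    assert compute_value() == pytest.approx(expected)
```

If the difference is just floating-point noise rather than a genuine behavioral change, a single pytest.approx with a loosened rel= or abs= tolerance is usually the cleaner fix than per-version expected values.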
Force-pushed from 8b740da to d8a6d36
Codecov Report
@@            Coverage Diff            @@
##             main     #199     +/-  ##
========================================
  Coverage        ?   82.44%
========================================
  Files           ?       96
  Lines           ?    10386
  Branches        ?        0
========================================
  Hits            ?     8563
  Misses          ?     1823
  Partials        ?        0

Continue to review full report at Codecov.
The coverage is currently not visible on Codecov. The link gives a "Sorry, we couldn't find that page" error; the cause might be the absence of a reference (main branch) coverage report?
Checkmark from me; adding Codecov is a good idea. I'm not sure that the tests need to run every day, but I'm not against it.
.github/workflows/ci.yml (outdated)

schedule:
  - cron: '0 2 * * *' # Daily at 2 AM UTC
Does it really need to run every day? What, outside of this repo, changes in a way that we want to be alerted about?
Now that I think of it, maybe not. I added a cron job because this particular failure was detected by the pre-commit bot and would have gone unnoticed if there were no PRs, but such errors would not stop vector from working for users. Should I revert this?
If there's not a reason for it, yeah. No reason to use GitHub's compute time needlessly.
Should I (squash and) merge this PR? (A general query: should I merge PRs after they are approved, or is that done by the maintainers? Also, should I wait for both approvals, or does one suffice to make a PR ready for merging?)
I'm fine with one approval. I'd say "anyone can merge once we get one approval (as long as the approver is not the PR author)". Really small PRs like a tiny bug fix can be merged without review if you'd like.
(This is the sort of thing I like, so I'm reviewing it too now.)
setup.py (outdated)

@@ -10,6 +10,7 @@
     "awkward": ["awkward>=1.2.0"],
     "test": [
         "pytest>=4.6",
+        "pytest-cov==3.0.0",
This shouldn't be pinned this tightly. Maybe cap at <4; leaving it uncapped is fine too. Also, we should go ahead and require pytest>=6 while we are at it. (In fact, given line 11 in c826c63, [tool.pytest.ini_options], pytest 6+ is effectively required already: that configuration section is only read by pytest 6.0 and newer.)
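A sketch of what that suggestion could look like in the test extra; the exact spelling of the cap is my reading of the comment, not text from the PR:

```python
# setup.py extras, loosened per the review (illustrative):
extras_require = {
    "awkward": ["awkward>=1.2.0"],
    "test": [
        "pytest>=6",          # raise the minimum, as suggested
        "pytest-cov>=3,<4",   # cap at <4 instead of pinning ==3.0.0
    ],
}
```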
Thank you for linking the blog! Amazing read! Never thought about capping so much :)
tests/backends/test_awkward.py (outdated)

        }
    ],
]
assert out.tolist()[0][0] == pytest.approx(
Wondering if I knew approx worked on a dict. :) Anyway, this no longer tests completeness: it would pass even if there were one more item. Maybe (a, b, c), = out.to_list(), then check a, b, and c?
Sorry, I think I read that wrong; I think it's (a,), b, (c,) = out.to_list(). Or you could even do (a,), (), (c,) = out.to_list(), which will check the emptiness of the second list for you.
Oh yes, thanks!
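A minimal runnable sketch of the agreed pattern, with a stand-in literal for out.to_list() (the real test works on an awkward array, and these values are illustrative only):

```python
import pytest

# Stand-in for out.to_list(); values are illustrative only.
out_list = [[{"x": 1.1, "y": 2.2}], [], [{"x": 3.3, "y": 4.4}]]

# Unpacking enforces completeness: it raises ValueError if any sublist
# gains or loses elements, and the bare () asserts the middle list is empty.
(a,), (), (c,) = out_list

# pytest.approx compares dict values element-wise with tolerance.
assert a == pytest.approx({"x": 1.1, "y": 2.2})
assert c == pytest.approx({"x": 3.3, "y": 4.4})
```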
Force-pushed from a10afbd to 77b81df
Force-pushed from 77b81df to c721655