ci: print log on test failure and clean cache #262
Conversation
Current dependencies on/for this PR: this stack of pull requests is managed by Graphite.
Force-pushed from 8e8ed0e to d2ef598.
Does cleaning the cargo cache not cause a complete rebuild from scratch? Our CI times will go through the roof unless I misunderstand. Though it seems unavoidable without setting up something like sccache.
I understand the potential benefit of it in the case of normal GHA runners. There, each runner starts without any state and has to download and uncompress the cache; when it is done, it updates the cache, compresses it, and uploads it. Reducing the cache size and storing only the things that are slow to download or rebuild can indeed improve CI performance.

That doesn't apply to our situation: we do not rely on the GHA-provided machines, we use our own. As such, we have plenty of cache, available immediately.

What am I missing?
Yes, but sharing the cache seems to cause another problem: crates like ikura-shim are not recompiled between runs. Yesterday, while running CI on #251, I encountered the shim version of #260. It seems that a shared $CARGO_HOME may be preventing cargo from recognizing and using the new binary versions, sticking to the old ones instead. I need a way to instruct cargo to recompile only the binaries in our repository while still using the cached crates. Following this guide, I first tested cargo-cache, which offers a seamless way to clear the cache on CI via the `ci-autoclean` feature. Initially I was uncertain about its impact:
After running the test, I found that CI times remained consistent at 5 minutes, regardless of whether `cargo-cache` was used. I aim to understand where the local workspace stores incremental build data and how it influences binary recompilation (deleting items from git checkouts may not provide any benefit).
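For reference, this is roughly how the cleanup could be wired into a workflow. The step names and ordering below are my own sketch, not this repository's actual configuration:

```yaml
# Hypothetical GitHub Actions steps, run after the test job finishes.
- name: Install cargo-cache (minimal CI-only build)
  run: cargo install cargo-cache --no-default-features --features ci-autoclean

# With the ci-autoclean feature, the cargo-cache binary takes no
# subcommands: invoking it simply trims $CARGO_HOME.
- name: Trim $CARGO_HOME between runs
  run: cargo-cache
```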
Force-pushed from d2ef598 to adb8339.
OK, this is a bit clearer, however I am still not convinced that it's a good solution. Specifically, I don't understand the exact reason for the failure and how using `cargo-cache` fixes it. TBH, I have a suspicion that you are on the wrong path. My reasoning is based on the following:
So it doesn't look like this is the root cause. Maybe removing those caches somehow invalidates the build and triggers rebuilding, or something along those lines?
It shouldn't really matter whether it's the local workspace or anything else, or incremental or not: the compiled artifacts are stored under `target/`.
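If stale workspace binaries are the actual problem, a cheaper fix than wiping $CARGO_HOME might be to invalidate only the workspace packages and leave the dependency cache untouched. This is a sketch under that assumption; `ikura-shim` is the crate named earlier in the thread, and the flags are standard cargo:

```yaml
# Hypothetical workflow step: `cargo clean -p` removes only the named
# package's artifacts from target/, so dependencies stay cached while
# the workspace binary is rebuilt from the current checkout.
- name: Rebuild workspace binaries only
  run: |
    cargo clean -p ikura-shim
    cargo build --release -p ikura-shim
```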
I agree, initially I thought
Yes, probably that's what happens. Yesterday, it worked only by chance. On the second push, it stopped working. I will close this PR, fix the last things for #251, and then go back to this problem.
Cleaning the cargo cache between runs is necessary to avoid using binaries from different PRs. The main thing cargo-cache does is remove git repos that have been checked out:
```shell
cargo install cargo-cache --no-default-features --features ci-autoclean
```
This should only be done in the GHA Docker file. Currently, it is also executed in the GitHub workflow, because the runner needs to be manually updated to pick up these changes. The command will be removed from the workflow in a follow-up PR once the runner is updated.
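Once the runner image is updated, the install would live only in the image, so the workflow just invokes the already-present binary. A sketch of the relevant fragment (the actual Dockerfile layout is assumed):

```dockerfile
# Hypothetical fragment of the GHA runner image Dockerfile.
# Baking cargo-cache into the image means the workflow only needs to
# run `cargo-cache` itself, not reinstall it on every CI run.
RUN cargo install cargo-cache --no-default-features --features ci-autoclean
```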