Experiment report for 2021-08-23-cloning stuck for days on 15 min #1237
Comments
@jonathanmetzman thoughts?
I see a weird error.
Requesting a new experiment in 2021-08-31-cloning.
Thank you.
FYI, that experiment is still building and has some fuzzer build failures, so it is retrying; I expect the experiment to start sometime later in the day. FuzzBench is not broken, as another experiment from @vanhauser-thc is running successfully: https://www.fuzzbench.com/reports/experimental/2021-09-01-aflpp/index.html. Let's keep an eye on it and file an issue if needed in the next 2 days. The only thing that might break it is the large size of the experiment.
Depending on the number of concurrent builds and the scheduling of builds, it may take some hours to build, especially the […]
Thank you for your availability.
So it seems that the experiment 2021-08-31-cloning is still stuck at 15 min for something like 2 days. Since #1225 should be merged, my guess is that, unless something in the infrastructure has broken, our high memory consumption may be the problem. For example […]
I think this experiment was run in the end.
Yes, thank you for closing the issue.
It seems that the report for the experiment 2021-08-23-cloning has been stuck at 15 min for something like 3 days.
We have two hypotheses about what could have gone wrong:
1. Upgrade Ubuntu version (oss-fuzz#6180) seems to have upgraded the Ubuntu version used by the gcr.io/oss-fuzz-base/base-builder images, and our PR was merged just before Pin images to specific base-builder and base-clang build (#1225), which prevents that upgrade. This race may have caused some inconsistencies or broken our builds (see the pinning sketch after this list). We have now merged the fix in our fork; should we resubmit the PR?
2. The high memory requirements of our experiment somehow broke or froze the containers that run our fuzzers (see the memory-limit sketch at the end). In that case, we would resubmit the PR with lower memory requirements.
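For reference, a minimal sketch of what pinning the builder image could look like, assuming only the standard Docker CLI; the actual mechanism in #1225 may differ, and the resulting FROM line is only illustrative:

```bash
# Minimal sketch (assumption: standard Docker CLI is available).
# Resolve the digest that the base-builder tag currently points to, so a
# fuzzer Dockerfile can pin to that exact build instead of floating on the
# tag that oss-fuzz#6180 moved to a newer Ubuntu.
docker pull gcr.io/oss-fuzz-base/base-builder:latest
DIGEST=$(docker inspect --format '{{index .RepoDigests 0}}' \
    gcr.io/oss-fuzz-base/base-builder:latest)

# A fuzzer Dockerfile would then start from this pinned reference
# (illustrative only; #1225 may pin the images differently):
echo "FROM ${DIGEST}"
```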
Let us know how we should proceed, and thank you for all your work.
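As a rough illustration of the second hypothesis, this sketch caps a fuzzer container's memory with standard Docker flags; the image name and the limit are placeholders, and FuzzBench's actual runner invocation is likely different:

```bash
# Minimal sketch (assumptions: standard Docker CLI; the image name and the
# 12g limit are placeholders, not FuzzBench's real runner setup).
# Capping memory makes an over-consuming fuzz target get OOM-killed inside
# the container rather than wedging the whole runner.
docker run --rm \
    --memory=12g \
    --memory-swap=12g \
    my-fuzzer-runner-image
```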