Cleanup usage of kubernetes-release-pull in kubernetes presubmits #18789
Comments
we should test this in a canary just because this stuff is old and brittle and I can't remember why we were doing this anymore 🙃
which shouldn't be needed since we already have them under the local path (see test-infra/kubetest/extract_k8s.go, line 450 at c4628a3)
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
still worth doing?
/remove-lifecycle rotten
sadly it looks like those GCS links have been GC'd. Our presubmit job configuration is being tested out in the canary job: #20427
/milestone v1.21 |
We have a successful run at https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-e2e-gce-no-stage/1352340847076577280. Not sure why the total test duration is higher compared to the baseline run, but we at least saved 154 seconds of stage time (which should be the only delta here) and 1.84 GiB of unnecessary GCS uploads.
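To put the per-run numbers above in perspective, here is a back-of-the-envelope sketch in Go. The per-run figures (154 seconds of stage time, 1.84 GiB of uploads) come from the comparison above; the daily run count is purely hypothetical and only used for illustration:

```go
package main

import "fmt"

// dailySavings returns hours of stage time and GiB of uploads saved per day,
// given the per-run figures from the canary comparison in this thread and a
// hypothetical number of presubmit runs per day.
func dailySavings(runs int) (hours, gib float64) {
	const (
		stageSecondsPerRun = 154  // saved stage time per run (from the comparison above)
		uploadGiBPerRun    = 1.84 // unnecessary GCS upload per run
	)
	return float64(stageSecondsPerRun*runs) / 3600, uploadGiBPerRun * float64(runs)
}

func main() {
	h, g := dailySavings(100) // 100 runs/day is purely illustrative
	fmt.Printf("per day: %.1f hours of stage time, %.1f GiB of uploads\n", h, g)
	// prints: per day: 4.3 hours of stage time, 184.0 GiB of uploads
}
```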
/priority important-soon |
I'm not sure we ever got no-stage working? It's hard to follow at this point.
#28176 renamed the test job, testing in kubernetes/kubernetes#126563 |
It does, it will stage to a generated bucket under the rented boskos project (which the boskos janitors should clean up if they don't already), so we can carefully start dropping these I think ... very belatedly. |
Beginning bulk migration in #33259, starting with a subset of optional, non-blocking, not always_run jobs. We have to drop both flags. You can see sample runs in kubernetes/kubernetes#126563.
If anyone wants to help:
NOTE: spiffxp and amwat don't work on Kubernetes anymore. I'm taking over this problem. |
#33278 does everything but the one remaining PR blocking job, for which we'll wait a bit and check some more things |
Once we have test results we can do #33280, and then I'll delete the bucket |
This is done, I just need to follow-up with eliminating that bucket. |
Done! |
What should be cleaned up or changed:

We stage builds to `gs://kubernetes-release-pull` in almost every presubmit job. But from what I can tell, nothing is actually consuming those builds, since the jobs also use `--extract=local`. It's non-trivial overhead to upload the release tars in every presubmit, and we should remove all the non-required usages.
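A minimal sketch of the proposed cleanup in Go (this is not code from test-infra; the helper name and the example arg list are made up for illustration): drop any `--stage=gs://kubernetes-release-pull` flag from a job's kubetest args, but only when `--extract=local` is also present, since that is the condition under which nothing consumes the staged tars.

```go
package main

import (
	"fmt"
	"strings"
)

// dropRedundantStage returns args without any --stage flag pointing at
// kubernetes-release-pull, but only if --extract=local is present
// (otherwise the staged tars might actually be consumed by something).
func dropRedundantStage(args []string) []string {
	extractLocal := false
	for _, a := range args {
		if a == "--extract=local" {
			extractLocal = true
		}
	}
	if !extractLocal {
		return args
	}
	out := make([]string, 0, len(args))
	for _, a := range args {
		if strings.HasPrefix(a, "--stage=gs://kubernetes-release-pull") {
			continue // the redundant staging flag this issue wants removed
		}
		out = append(out, a)
	}
	return out
}

func main() {
	// Illustrative arg list, not a real job config.
	args := []string{
		"--build=bazel",
		"--stage=gs://kubernetes-release-pull/ci/pull-kubernetes-e2e-gce",
		"--extract=local",
		"--up", "--test", "--down",
	}
	fmt.Println(strings.Join(dropRedundantStage(args), " "))
	// prints: --build=bazel --extract=local --up --test --down
}
```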
Provide any links for context:
- https://cs.k8s.io/?q=kubernetes-release-pull&i=nope&files=&repos=
- test-infra/kubetest/extract_k8s.go, lines 449 to 466 at c4628a3
- Random GCE provider job: https://prow.k8s.io/view/gcs/kubernetes-jenkins/pr-logs/directory/pull-kubernetes-e2e-gce/1293275406807339008#1:build-log.txt%3A903
/cc @spiffxp @BenTheElder @MushuEE
EDIT(@spiffxp): I made a list of the offending jobs going off the criteria `--extract=local` and `--stage=gs://kubernetes-release-pull/*`. Entries were listed as `job@branch` or just `job`. For the `--provider=aws` jobs (kops), it remains to be seen whether they need `--stage` or not.

EDIT(@BenTheElder): I removed the outdated checklist and instead I'm going to provide a search: https://github.com/search?q=repo%3Akubernetes%2Ftest-infra+%22--stage%3Dgs%3A%2F%2Fkubernetes-release-pull%22&type=code
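The detection criteria from these edits can be sketched as a small Go check (a hypothetical helper that approximates the code search above, not code that exists in test-infra):

```go
package main

import (
	"fmt"
	"strings"
)

// isOffending reports whether a job's args match the cleanup criteria from
// this issue: --extract=local together with staging to kubernetes-release-pull.
func isOffending(args []string) bool {
	extract, stage := false, false
	for _, a := range args {
		switch {
		case a == "--extract=local":
			extract = true
		case strings.HasPrefix(a, "--stage=gs://kubernetes-release-pull"):
			stage = true
		}
	}
	return extract && stage
}

func main() {
	fmt.Println(isOffending([]string{"--extract=local", "--stage=gs://kubernetes-release-pull/ci"})) // true
	fmt.Println(isOffending([]string{"--extract=local"}))                                            // false
}
```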