ci-kubernetes-e2e-gce-scale-performance has been continuously testing the same stale k8s version since 10-29 #19838
Comments
#19839 for showing the right version on testgrid
Thanks, Maciek. Great finding! We discovered it only by sheer luck. If Maciek hadn't been debugging some other issue today, we might have missed that change for weeks, and it would have rendered our scale tests useless.
/assign @justaugustus Stephen - as you mentioned on Slack - I'm assigning it to you.
PR opened to revert the change: #19841
FYI @kubernetes/ci-signal
It's fixed at least for scalability tests - closing. Thanks for fixing, Stephen! /close
@wojtek-t: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
Thanks for reporting back on your side, @wojtek-t! For completeness, dropping in a snippet of the PR description from #19841:
I'll plan to send a note out to the broader community early next week (by then, the remaining changes should have died down). /sig release testing scalability |
What happened:
Starting from
https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/1321859659652403200
all ci-kubernetes-e2e-gce-scale-performance runs have been using the stale k8s version v1.20.0-beta.0.54+2729b8e3751434.
Moreover, the commit number shown on https://k8s-testgrid.appspot.com/sig-scalability-gce#gce-master-scale-performance keeps changing and doesn't match the actual version that is being tested.
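One way to confirm which version a given run actually used is to read its `started.json` from the public results bucket. This is a minimal verification sketch, not part of the job itself; the run ID is the first affected run from the prow link above, and the version field names checked are assumptions for illustration:

```python
import json
import urllib.request

# Hypothetical check: fetch the run's started.json from the public
# kubernetes-jenkins GCS bucket and print any version-like fields.
# The field names below are assumptions, not a guaranteed schema.
RUN_ID = "1321859659652403200"  # first affected run, per the prow link above
URL = (
    "https://storage.googleapis.com/kubernetes-jenkins/logs/"
    f"ci-kubernetes-e2e-gce-scale-performance/{RUN_ID}/started.json"
)

with urllib.request.urlopen(URL, timeout=10) as resp:
    started = json.load(resp)

for field in ("version", "repo-version", "job-version"):
    if field in started:
        print(f"{field}: {started[field]}")
```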
What you expected to happen:
How to reproduce it (as minimally and precisely as possible):
Please provide links to example occurrences, if any:
e.g. the latest run https://prow.k8s.io/view/gcs/kubernetes-jenkins/logs/ci-kubernetes-e2e-gce-scale-performance/1323762317799723008
Anything else we need to know?:
#19660 is most likely the culprit; it moves fast builds from kubernetes-release-dev, which is used by the job, to k8s-release-dev. A quick way to compare what the two buckets advertise is sketched below.
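For context, the job resolves the build to test from a "latest" marker in a GCS bucket, so if the marker in the old bucket stops being updated, the job keeps reusing the last value it saw. A minimal sketch to compare the two buckets; the bucket names come from the issue, while the `ci/latest-fast.txt` marker path is an assumption for illustration:

```python
import urllib.request

# Compare the version marker in the old and new buckets. Bucket names are from
# the issue; the "ci/latest-fast.txt" marker path is an illustrative assumption.
for bucket in ("kubernetes-release-dev", "k8s-release-dev"):
    url = f"https://storage.googleapis.com/{bucket}/ci/latest-fast.txt"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            print(f"{bucket}: {resp.read().decode().strip()}")
    except OSError as exc:
        print(f"{bucket}: could not read marker ({exc})")
```

If the old bucket's marker is frozen at v1.20.0-beta.0.54+2729b8e3751434 while the new bucket keeps advancing, that would match the behavior described above.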
/cc @mm4tt
/cc @wojtek-t
/cc @cpanato
/cc @justaugustus