🐛 Fix TestReconcileMachinePoolScaleToFromZero flakes #9745
Conversation
Welcome @loktev-d!
Hi @loktev-d. Thanks for your PR.

I'm waiting for a kubernetes-sigs member to verify that this patch is reasonable to test. If it is, they should reply with `/ok-to-test`. Once the patch is verified, the new status will be reflected by the `ok-to-test` label.

I understand the commands that are listed here. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/ok-to-test
LGTM label has been added. Git tree hash: f639ae756554f420b207318eceddd5ff090ae953
Also removed namespace from cleanup because envtest does not support namespace deletion and the test was failing.
/retest
/lgtm

I think we want to keep running the flaky test several times once you squash, to ensure that we have a consistent green signal.
LGTM label has been added. Git tree hash: 2044bf3dac08871aed71ff16607b60e5973c585f
In general envtest supports deleting Namespaces. Do you know why exactly it fails?
@sbueringer Here is the error: https://prow.k8s.io/view/gs/kubernetes-jenkins/pr-logs/pull/kubernetes-sigs_cluster-api/9745/pull-cluster-api-test-main/1729532220189708288

The Kubebuilder docs (https://book.kubebuilder.io/reference/envtest.html) say that envtest does not support namespace deletion: a deleted namespace is just put into a Terminating state and never actually reclaimed. Removing the namespace helped, so I thought that was the cause.
Sorry, my bad, I was wrong. I thought it works because we are doing it in similar tests, but they are probably using the fake client and not envtest. It makes sense that namespaces get stuck in Terminating when there is no controller-manager running to remove the Kubernetes finalizer.

/lgtm

Can you please squash the commits? I'll merge afterwards.
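As a minimal sketch of the resulting cleanup (assuming CAPI's envtest helper `CleanupAndWait`; object names such as `cluster` and `machinePool` are illustrative, not copied from the PR):

```go
// Before: the namespace was included in cleanup. Under envtest there is no
// kube-controller-manager to strip the "kubernetes" finalizer, so the
// namespace hangs in Terminating and the wait never completes:
//   g.Expect(env.CleanupAndWait(ctx, cluster, machinePool, ns)).To(Succeed())

// After: clean up only the objects themselves and leave the namespace alone.
g.Expect(env.CleanupAndWait(ctx, cluster, machinePool)).To(Succeed())
```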
I think we're fine. Tests are green and while we don't 100% know if it resolves the flake, the PR is definitely an improvement and we can check in CI after merge if the flake still occurs.
Replace env.Create and env.Cleanup with env.CreateAndWait and env.CleanupAndWait, remove namespace from cleanup.
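As a hedged before/after illustration of that commit (exact call sites and variable names in the test are assumptions, not copied from the diff):

```go
// Before: env.Create returns as soon as the API server persists the object,
// so a subsequent read through the cached client can race the informer
// sync -- the suspected source of the flake:
//   g.Expect(env.Create(ctx, node)).To(Succeed())

// After: env.CreateAndWait blocks until the created object is visible
// through the cached client before the test proceeds:
g.Expect(env.CreateAndWait(ctx, node)).To(Succeed())
```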
@sbueringer Done
Thank you very much!!!

/approve

Let's try to cherry-pick.
/cherry-pick release-1.6
@sbueringer: once the present PR merges, I will cherry-pick it on top of release-1.6 in a new PR and assign it to you. In response to this:

> /cherry-pick release-1.6
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: sbueringer

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing `/approve` in a comment. Approvers can cancel approval by writing `/approve cancel` in a comment.
/cherry-pick release-1.5
@sbueringer: once the present PR merges, I will cherry-pick it on top of release-1.5 in a new PR and assign it to you. In response to this:

> /cherry-pick release-1.5
@sbueringer: new pull request created: #9822
@sbueringer: new pull request created: #9823
What this PR does / why we need it:
I've managed to reproduce the flakes on my local machine. The test seems to flake due to random cache sync delays when creating a node with envtest; the CAPI testing documentation recommends using `CreateAndWait` for such cases (a conceptual sketch of the pattern follows at the end of this description).

Which issue(s) this PR fixes:
Fixes #8993
/area machinepool
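For context, a conceptual sketch of what a `CreateAndWait`-style helper does, assuming it follows the create-then-poll pattern described above (the real helper lives in CAPI's `internal/test/envtest` package and may differ):

```go
package envtestsketch

import (
	"context"
	"time"

	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/util/wait"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// createAndWait creates obj and then polls the (cached) client until the
// object is readable, so a test never races the informer cache sync.
func createAndWait(ctx context.Context, c client.Client, obj client.Object) error {
	if err := c.Create(ctx, obj); err != nil {
		return err
	}
	key := client.ObjectKeyFromObject(obj)
	return wait.PollUntilContextTimeout(ctx, 100*time.Millisecond, 10*time.Second, true,
		func(ctx context.Context) (bool, error) {
			if err := c.Get(ctx, key, obj); err != nil {
				if apierrors.IsNotFound(err) {
					return false, nil // not visible in the cache yet; keep polling
				}
				return false, err
			}
			return true, nil
		})
}
```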