flaking unit test in TestReconcileMachinePoolMachines
#11070
Comments
/area machinepool
Yup. I saw a bunch of flakes around MachinePool unit tests as well.
/triage accepted
/help
@sbueringer: Guidelines: Please ensure that the issue body includes answers to the following questions:
For more details on the requirements of such an issue, please see here and ensure that they are met. If this request no longer meets these requirements, the label can be removed. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/assign cahillsf
Cannot reproduce this issue locally. Have opened a draft that seems to use preferred methods in this unit test, see PR for details. Hopefully this will improve the stability of this test.
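For context (not stated in this thread): a common way to stabilize assertions like this one is to poll with Gomega's Eventually instead of checking object counts once, immediately after calling Reconcile. The sketch below is only an illustration under that assumption; the package name, helper name, labels, and timeouts are hypothetical, and it is not necessarily the change made in #11124.

```go
package machinepool_test

import (
	"context"

	. "github.com/onsi/gomega"
	clusterv1 "sigs.k8s.io/cluster-api/api/v1beta1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// waitForMachines is a hypothetical helper: rather than asserting the Machine
// count a single time right after Reconcile, it polls until the expected
// number of Machines is visible through the client or the timeout expires.
func waitForMachines(ctx context.Context, g *WithT, c client.Client, namespace string, labels map[string]string, want int) {
	g.Eventually(func(g Gomega) {
		machineList := &clusterv1.MachineList{}
		// List the Machines matching the MachinePool's label selector.
		g.Expect(c.List(ctx, machineList,
			client.InNamespace(namespace),
			client.MatchingLabels(labels),
		)).To(Succeed())
		g.Expect(machineList.Items).To(HaveLen(want))
	}, "10s", "100ms").Should(Succeed())
}
```

In a test this would be called with `g := NewWithT(t)` and the test environment's client, replacing a one-shot `g.Expect(machineList.Items).To(HaveLen(2))`; whether the actual fix took this route is only visible in the PR itself.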
Would be great if some folks familiar with Machine Pools / MachinePool Machines can review #11124 (cc @Jont828 @willie-yao).
/reopen
I assume we want to keep this issue open for now, as we're not sure if the PR will fix all flakes.
@sbueringer: Reopened this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Yep, sounds good, will track the test and revisit.
Edit: adding k8s-triage link: https://storage.googleapis.com/k8s-triage/index.html?text=TestReconcileMachinePoolMachines&job=.*cluster-api-(test%7Ce2e)-(mink8s-)*main
Revisiting this: the test hasn't flaked since. If we update the date to today, the failures fall outside the default lookback window: https://storage.googleapis.com/k8s-triage/index.html?date=2024-09-18&text=TestReconcileMachinePoolMachines&job=.*cluster-api-(test%7Ce2e)-(mink8s-)*main
Not sure how long we want to wait before closing out this issue, @sbueringer?
I think we can close the issue. The flake was pretty frequent before, so I think we have enough data to be sure it's fixed. Thx for fixing this flake!
/close
@sbueringer: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
Which jobs are flaking?
These failures are apparent in periodic-cluster-api-test-mink8s-main and periodic-cluster-api-test-main.
Which tests are flaking?
TestReconcileMachinePoolMachines/Reconcile_MachinePool_Machines/Should_create_two_machines_if_two_infra_machines_exist
Since when has it been flaking?
At least since 2024-07-06: https://storage.googleapis.com/k8s-triage/index.html?date=2024-07-20&text=TestReconcileMachinePoolMachines%2FReconcile_MachinePool_Machines%2FShould_create_two_machines_if_two_infra_machines_exist&job=.*cluster-api.*(test%7Ce2e)-(mink8s-)*main&xjob=.*-provider-.*
Testgrid link
https://prow.k8s.io/view/gs/kubernetes-jenkins/logs/periodic-cluster-api-test-mink8s-main/1824877164462346240
Reason for failure (if possible)
No response
Anything else we need to know?
No response
Label(s) to be applied
/kind flake
One or more /area label. See https://github.com/kubernetes-sigs/cluster-api/labels?q=area for the list of labels.