🌱 test: add PreWaitForControlplaneToBeUpgraded to ClusterUpgradeConformanceSpec #11145
Conversation
/test help
@chrischdi: The specified target(s) for
The following commands are available to trigger optional jobs:
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
/test pull-cluster-api-e2e-main
/assign @sbueringer @fabriziopandini
Thanks for implementing this test! Just a few nits from my side.
test/e2e/cluster_upgrade_test.go (Outdated)

```go
Expect(managementClusterProxy.GetClient().Get(ctx, client.ObjectKeyFromObject(cluster), cluster)).To(Succeed())

// This replaces the WaitForControlPlaneMachinesToBeUpgraded function and additionally:
// * checks that kube-proxy is healthy
```
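For readers skimming the thread, the wait being discussed polls until every control-plane Machine runs the target version and kube-proxy is healthy. A minimal sketch of that polling structure, assuming Gomega's `Eventually` and two hypothetical helpers (`getControlPlaneMachines`, `kubeProxyPodsAreHealthy`) plus placeholder timeout/interval variables; this is not the PR's exact implementation:

```go
// Poll until all control-plane Machines are upgraded and kube-proxy is healthy.
// getControlPlaneMachines, kubeProxyPodsAreHealthy, workloadClusterClient,
// expectedKubeProxyPods, waitTimeout and waitInterval are assumed to exist
// in the surrounding test; they are illustrative names only.
Eventually(func(g Gomega) {
	machines := getControlPlaneMachines(ctx, managementClusterProxy.GetClient(), cluster)

	// Every control-plane Machine must run the target version and report a healthy node.
	for _, m := range machines {
		g.Expect(*m.Spec.Version).To(Equal(cluster.Spec.Topology.Version))
		g.Expect(conditions.IsTrue(&m, clusterv1.MachineNodeHealthyCondition)).To(BeTrue())
	}

	// Additionally, kube-proxy must be healthy across the workload cluster.
	healthy, err := kubeProxyPodsAreHealthy(ctx, workloadClusterClient, expectedKubeProxyPods)
	g.Expect(err).ToNot(HaveOccurred())
	g.Expect(healthy).To(BeTrue())
}, waitTimeout, waitInterval).Should(Succeed())
```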
What about adding a note explaining that we are doing this test in order to ensure that non-static pods remain healthy on CP machines during upgrade?
Yes please
```go
var upgraded int64
deletingMachinesWithPreDrainHook := []clusterv1.Machine{}
for _, m := range machines {
	if *m.Spec.Version == cluster.Spec.Topology.Version && conditions.IsTrue(&m, clusterv1.MachineNodeHealthyCondition) {
```
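The snippet is truncated at the `if`. A hedged completion, continuing from that line: the pre-drain part is an assumption inferred from the `deletingMachinesWithPreDrainHook` variable, not the PR's exact code, and it assumes a `strings` import plus Cluster API's `clusterv1.PreDrainDeleteHookAnnotationPrefix` constant:

```go
		upgraded++
	}
	// Track machines that are being deleted but still blocked by a
	// pre-drain lifecycle hook annotation, e.g.
	// "pre-drain.delete.hook.machine.cluster.x-k8s.io/kcp-pre-drain".
	if !m.DeletionTimestamp.IsZero() {
		for annotation := range m.Annotations {
			if strings.HasPrefix(annotation, clusterv1.PreDrainDeleteHookAnnotationPrefix) {
				deletingMachinesWithPreDrainHook = append(deletingMachinesWithPreDrainHook, m)
				break
			}
		}
	}
}
```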
q: why are we checking clusterv1.MachineNodeHealthyCondition? As far as I remember it only checks for a dummy condition not to exist, so if I'm not wrong it doesn't really give added value 🤔
Wait what. For a dummy condition to not exist?
```go
// MachineNodeHealthyCondition provides info about the operational state of the Kubernetes node hosted on the machine by summarizing node conditions.
// If the conditions defined in a Kubernetes node (i.e., NodeReady, NodeMemoryPressure, NodeDiskPressure, NodePIDPressure, and NodeNetworkUnavailable) are in a healthy state, it will be set to True.
MachineNodeHealthyCondition ConditionType = "NodeHealthy"
```
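To make concrete what that summary covers, a hedged sketch of checking the listed node conditions directly; the helper name is hypothetical and `corev1` is `k8s.io/api/core/v1`:

```go
// nodeIsHealthy mirrors the doc comment above: NodeReady must be True,
// and the pressure/network-unavailable conditions must be False.
func nodeIsHealthy(node *corev1.Node) bool {
	for _, cond := range node.Status.Conditions {
		switch cond.Type {
		case corev1.NodeReady:
			if cond.Status != corev1.ConditionTrue {
				return false
			}
		case corev1.NodeMemoryPressure, corev1.NodeDiskPressure,
			corev1.NodePIDPressure, corev1.NodeNetworkUnavailable:
			if cond.Status != corev1.ConditionFalse {
				return false
			}
		}
	}
	return true
}
```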
Yep, I was expecting this to check exactly that, as described in the comment.
This is how we are configuring MHC in E2E tests:

```yaml
machineHealthCheck:
  maxUnhealthy: 100%
  unhealthyConditions:
  - type: e2e.remediation.condition
    status: "False"
    timeout: 20s
```

So (in E2E tests only) MHC is testing for a dummy e2e.remediation.condition, not for NodeReady, NodeMemoryPressure etc.
But this is the MachineNodeHealthyCondition, not the MachineHealthCheckSucceeded condition.
It is set to true here:

```go
conditions.MarkTrue(machine, clusterv1.MachineNodeHealthyCondition)
```

This should have nothing to do with MHCs.
test/e2e/cluster_upgrade_test.go (Outdated)

```go
	}
}

// Check if the expected number of kube-proxy pods exist and all of them are healthy.
```
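As a concrete illustration of that comment, a hedged sketch of such a check. The function name and the `expectedPods` parameter are assumptions; the `k8s-app=kube-proxy` label in `kube-system` is the upstream kubeadm default for the kube-proxy DaemonSet:

```go
package e2e

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// kubeProxyPodsAreHealthy is a hypothetical helper: it lists kube-proxy pods
// across the whole workload cluster and verifies count, phase, and readiness.
func kubeProxyPodsAreHealthy(ctx context.Context, c client.Client, expectedPods int) (bool, error) {
	podList := &corev1.PodList{}
	// kube-proxy runs as a DaemonSet in kube-system with label k8s-app=kube-proxy.
	if err := c.List(ctx, podList,
		client.InNamespace(metav1.NamespaceSystem),
		client.MatchingLabels{"k8s-app": "kube-proxy"},
	); err != nil {
		return false, err
	}
	// One pod per node is expected (old and new CP nodes plus workers).
	if len(podList.Items) != expectedPods {
		return false, nil
	}
	for i := range podList.Items {
		pod := podList.Items[i]
		if pod.Status.Phase != corev1.PodRunning {
			return false, nil
		}
		for _, cond := range pod.Status.Conditions {
			if cond.Type == corev1.PodReady && cond.Status != corev1.ConditionTrue {
				return false, nil
			}
		}
	}
	return true, nil
}
```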
Maybe let's add a note specifying that we are checking kube-proxy both on old and new CP nodes, as well as on workers (across the entire cluster).
/test pull-cluster-api-e2e-main
Force-pushed from ad2be97 to 32152a3 (commit message: "…s and kube-proxy being healthy via a pre-drain hook")
/test pull-cluster-api-e2e-main rebase
/test pull-cluster-api-e2e-main
Last nits from my side
/assign @fabriziopandini
/test pull-cluster-api-e2e-main
flake
/retest
Thx!! Really nice improvement
/lgtm
LGTM label has been added. Git tree hash: 216aa8bf9643340fc73cd3b02f603ca74ebb80ab
Great work!
/lgtm
/approve
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: fabriziopandini

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing /approve in a comment.
/cherry-pick release-1.8
This additional validation found this issue: #11296. Let's also add it to release-1.8.
@sbueringer: #11145 failed to apply on top of branch "release-1.8":
In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
I'll do a manual cherry-pick.
…rmanceSpec (kubernetes-sigs#11145)

* test: add PreWaitForControlplaneToBeUpgraded to ClusterUpgradeConformanceSpec
* test: add template for kcp-pre-drain
* test: adjust multi-controlplane quickstart test to check for all nodes and kube-proxy being healthy via a pre-drain hook
* lint fix
* Review fixes
* review fixes
* review fixes
* review fix
What this PR does / why we need it:
Implements additional checks to ensure the cluster stays operational during an upgrade.
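For context on the pre-drain hook mentioned in the commits: Cluster API pauses draining of a Machine that carries an annotation with the `pre-drain.delete.hook.machine.cluster.x-k8s.io/` prefix. A hedged sketch of what such an annotation could look like on a control-plane Machine; the Machine name and annotation value are illustrative, and the hook name is borrowed from the kcp-pre-drain template mentioned above:

```yaml
apiVersion: cluster.x-k8s.io/v1beta1
kind: Machine
metadata:
  name: my-cluster-control-plane-abcde   # illustrative name
  annotations:
    # Machine deletion pauses before drain until this annotation is removed,
    # giving the test a window to verify nodes and kube-proxy stay healthy.
    pre-drain.delete.hook.machine.cluster.x-k8s.io/kcp-pre-drain: e2e
```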
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #
/area e2e-testing