[WIP] Deploy CSI driver as a deployment and support multi master & leader e… #72
Conversation
@@ -29,9 +26,11 @@ spec:
       - name: csi-attacher
         image: quay.io/k8scsi/csi-attacher:v1.1.1
         args:
-          - "--v=4"
+          - "--v=5"
Log level 4 should be sufficient for the attacher and provisioner containers.
  replicas: 1
  updateStrategy:
updateStrategy -> RollingUpdate will be useful later for upgrades. Can you keep that?
Shouldn't we add the new files at …?
	}

	if !*enableLeaderElection {
		run(context.TODO())
In main() it is generally expected that you will use context.Background(), since there is no higher level to provide one. context.TODO() is intended for places where a context should be passed in, but the plumbing hasn't been done yet.
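A minimal sketch of the suggested pattern, assuming a hypothetical --leader-election flag (the flag name in this PR may differ):

```go
package main

import (
	"context"
	"flag"
)

func main() {
	// Hypothetical flag name, for illustration only.
	enableLeaderElection := flag.Bool("leader-election", false, "Enable leader election.")
	flag.Parse()

	run := func(ctx context.Context) {
		// ... start the syncer and block until ctx is canceled ...
		<-ctx.Done()
	}

	if !*enableLeaderElection {
		// main() has no caller to hand down a context, so Background()
		// is the conventional root; TODO() would signal that the
		// context plumbing is still unfinished.
		run(context.Background())
	}
}
```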
	} else {
		k8sClient, err := k8s.NewClient()
		if err != nil {
			klog.Errorf("Creating Kubernetes client failed. Err: %v", err)
Suggestion: you can use klog.Exitf() instead of Errorf followed by Exit.
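A small sketch of that suggestion, using standard client-go construction in place of this repo's k8s.NewClient helper (an assumption, made so the example is self-contained):

```go
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/klog"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		// klog.Exitf logs the message and calls os.Exit(1) in one step,
		// replacing the Errorf + os.Exit(1) pair.
		klog.Exitf("Creating in-cluster config failed. Err: %v", err)
	}
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		klog.Exitf("Creating Kubernetes client failed. Err: %v", err)
	}
}
```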
		le := leaderelection.NewLeaderElection(k8sClient, lockName, run)

		if err := le.Run(); err != nil {
			klog.Fatalf("Error initializing leader election: %v", err)
Suggestion: Fatalf() prints a stack trace, and Exitf() does not. Exitf() may be more appropriate here.
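For illustration, a sketch of the difference between the two calls:

```go
package main

import (
	"errors"

	"k8s.io/klog"
)

func main() {
	err := errors.New("lock already held")

	// Fatalf logs the message plus stack traces of all goroutines and
	// exits with code 255 -- useful for genuinely unexpected states.
	// klog.Fatalf("Error initializing leader election: %v", err)

	// Exitf logs the message without a stack trace and exits with
	// code 1, which reads more cleanly for expected startup failures.
	klog.Exitf("Error initializing leader election: %v", err)
}
```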
	klog.Errorf("Error initializing Metadata Syncer")
	os.Exit(1)
	run := func(ctx context.Context) {
		if err := metadataSyncer.Init(); err != nil {
Is metadataSyncer.Init() a blocking call that runs the syncer forever?
	run := func(ctx context.Context) {
		if err := metadataSyncer.Init(); err != nil {
			klog.Errorf("Error initializing Metadata Syncer")
			os.Exit(1)
		}
	}
The context that gets passed into your run func will be canceled when this process loses leadership. It looks like you don't pass that into metadataSyncer.Init, so we will have no way to tear things down if we lose leadership. In most core k8s components, we react to loss of leadership with os.Exit() so that the container gets restarted cleanly. A sketch of that pattern follows.
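This sketch assumes metadataSyncer.Init were changed to accept the leader-election context (a hypothetical signature, not what the PR currently has), with a stub standing in for the real syncer:

```go
package main

import (
	"context"
	"os"

	"k8s.io/klog"
)

// syncerStub stands in for the real metadata syncer; the ctx parameter on
// Init is the hypothetical signature change being suggested above.
type syncerStub struct{}

func (s *syncerStub) Init(ctx context.Context) error {
	// ... start informers etc., wiring ctx through so work stops on cancel ...
	return nil
}

func main() {
	metadataSyncer := &syncerStub{}

	run := func(ctx context.Context) {
		// Pass the leader-election context down so the syncer can be
		// torn down when leadership is lost.
		if err := metadataSyncer.Init(ctx); err != nil {
			klog.Exitf("Error initializing Metadata Syncer: %v", err)
		}
		// ctx is canceled when this process loses leadership; exit so
		// the container gets restarted cleanly, as core k8s components do.
		<-ctx.Done()
		klog.Error("Lost leadership, exiting")
		os.Exit(0)
	}

	run(context.Background())
}
```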
@chethanv28: PR needs rebase.
The required change to enable leader election and HA for the vSphere CSI driver is included in #151.
What this PR does / why we need it:
This change enables leader election for the syncer by using the implementation provided by csi-lib-utils (https://github.com/kubernetes-csi/external-provisioner/blob/master//vendor/github.com/kubernetes-csi/csi-lib-utils/leaderelection/leader_election.go).
This change includes RBAC and deployment YAML changes to enable leader election for vsphere-syncer, external-provisioner, and external-attacher.
With this change we will have HA enabled for the CSI driver.
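For context, a minimal end-to-end sketch of the wiring described above, using the csi-lib-utils leaderelection package; the flag name, lock name, and client construction are assumptions for illustration, not necessarily what this PR does:

```go
package main

import (
	"context"
	"flag"

	"github.com/kubernetes-csi/csi-lib-utils/leaderelection"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/klog"
)

func main() {
	// Hypothetical flag name; the PR's actual flag may differ.
	enableLeaderElection := flag.Bool("leader-election", false, "Enable leader election.")
	flag.Parse()

	run := func(ctx context.Context) {
		// ... initialize and run the syncer until ctx is canceled ...
	}

	if !*enableLeaderElection {
		run(context.Background())
		return
	}

	// Standard client-go construction, used here in place of the repo's
	// internal k8s.NewClient helper.
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Exitf("Creating in-cluster config failed. Err: %v", err)
	}
	k8sClient, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		klog.Exitf("Creating Kubernetes client failed. Err: %v", err)
	}

	// Only one replica at a time acquires the lease named by lockName
	// and runs the workload; the other replicas wait to take over.
	lockName := "vsphere-syncer" // assumed lock name, for illustration
	le := leaderelection.NewLeaderElection(k8sClient, lockName, run)
	if err := le.Run(); err != nil {
		klog.Exitf("Error initializing leader election: %v", err)
	}
}
```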
Special notes for your reviewer:
Release note:
Test Results (Sanity):