Feature/migrate to kubebuilder v3 #27
Conversation
This will also need changes in the documentation, helm charts.
This will also need changes in the documentation and Helm charts.
Thanks for taking this @itamar-marom. Also, to test your PR: what k8s version are you testing on? It would be good to note it and add it to the compatibility matrix.
Hi @AdheipSingh, thanks for already looking at the PR. It is still not ready (that's why the WIP) and I don't want you to waste your time :). I'll ping you when it's ready and reply to your comments, thank you!
I'm currently dealing with this issue: kubernetes-sigs/kubebuilder#2026 - this is why it is taking more time.
Let me know if you hit any other blockers, happy to brainstorm together.
I fixed it. Wrote about it here.
Doing some more testing and will update you tomorrow. Also, I need to change the docs (installation).
added last newline Co-authored-by: Cyril Corbon <corboncyril@gmail.com>
newline at the end Co-authored-by: Cyril Corbon <corboncyril@gmail.com>
change permissions to events Co-authored-by: Cyril Corbon <corboncyril@gmail.com>
{{- if .Values.rbac.create }}
{{- $env := .Values.env }}
{{- if and ($env.WATCH_NAMESPACE) (ne $env.WATCH_NAMESPACE "default") }}
# Split WATCH_NAMESPACE by commas and loop on them
{{- $watchedNamespaces := (split "," $env.WATCH_NAMESPACE) -}}
{{- range $watchedNamespaces }}
---
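For context, the range body that the snippet above opens would render one namespaced Role per entry in WATCH_NAMESPACE. A minimal sketch of what that might look like (the Role name, rule set, and closing of the conditionals are assumptions for illustration, not the actual chart contents):

```yaml
{{- range $watchedNamespaces }}
---
# Sketch: one Role per watched namespace. Name and rules are
# illustrative assumptions, not the real chart contents.
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: druid-operator-manager-role
  namespace: {{ . }}
rules:
  - apiGroups: [""]
    resources: ["pods", "configmaps", "services"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
{{- end }}
{{- end }}
{{- end }}
```

Note that Helm's `split` returns a dict (keys `_0`, `_1`, ...), so `range` here iterates its values, i.e. the namespace strings.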
@itamar-marom there is a conditional check, which is needed to scope the operator. The operator can watch multiple namespaces, a single namespace, or all namespaces. Depending on this, either a Role or a ClusterRole is created.
OK, I understand. Fixed it, but changed it a little so there will be no extra role druid-operator-manager-role-sc:
- In the case of WATCH_NAMESPACES, the default ClusterRole will be created with only StorageClass permissions.
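A ClusterRole carrying only the cluster-scoped StorageClass permissions might look roughly like this (a minimal sketch; the name and verb list are assumptions, not the actual chart output):

```yaml
# Sketch: when WATCH_NAMESPACE scopes the operator to namespaces,
# only cluster-scoped resources (storageclasses) stay in a ClusterRole.
# Name and verbs are illustrative assumptions.
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: druid-operator-manager-role
rules:
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
```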
@itamar-marom I will give this another look. Also, is this role change backward compatible? Do we need to change anything on existing operator roles?
Generally, the role permissions are the same, also naming-wise, if you look at the chart and the role.yaml under the deploy directory.
This is a bit tricky since it really depends on how the user deployed the operator. In the case of Helm, it will be responsible for changing the RBAC to the correct one. In the case of a regular deployment, the user should notice that the names are equal between the old and the new deployment. By default, the old deployment name is druid-operator and the new one is druid-operator-controller-manager. Also, the namespace is different by default; everything else is the same.
Looking at this again, I will make the migration documentation clearer.
I added more detail in the migration docs. Ultimately, it is important to check that the naming of the objects is the same in both deployments.
cc @cintoSunny @harinirajendran if you can help in reviewing this too.
LGTM with the PR, I'll do some final review and tests, and wait in case others want to comment or review.
Fixes #24
Description
Currently, the project is not following Kubebuilder's native structure. This customization can be a real pain for new developers who are familiar with Kubebuilder, and it also makes creating new APIs difficult.
This PR updates Kubebuilder to v3, including moving back to Kubebuilder's native structure, following the migration guide.
This PR is something that has to be done before this project gains more users and becomes bigger. This project is a mutation of the Operator SDK and the Kubebuilder project.
In this PR I'm migrating the project to Kubebuilder while also upgrading Kubebuilder to v3, in order to avoid the pain of a second migration later.
Something notable here is that autoscaling/v2beta2 is now deprecated on new clusters (for example, update your kind cluster and try) - therefore, the project now uses autoscaling/v2 HPAs.
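For illustration, a minimal HPA on the autoscaling/v2 API this PR moves to (the resource and target names are hypothetical; only the apiVersion and the metrics shape matter here):

```yaml
# Minimal autoscaling/v2 HorizontalPodAutoscaler sketch.
# Names are illustrative, not taken from this repo.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: druid-historical-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet
    name: druid-historical
  minReplicas: 2
  maxReplicas: 5
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 60
```

The main difference from the deprecated beta versions is the structured `metrics` list with an explicit `target` block.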
The deployment also changed significantly, since Kubebuilder deploys more resources that benefit us and help us manage the controller much more easily.
I will note that Kubebuilder is a Kubernetes community project and is becoming the de facto standard for building controllers - we should follow its guidelines.
Tested on:
This PR has:
Key changed/added files in this PR
- The deploy structure changed to config and deploys more resources, following Kubebuilder best practices.
- The Makefile changed to support the changes.
- main.go and the Reconcile function went through some small changes.
- config directory