[UPDATE] use official image from MS #376
Conversation
@HappyTobi Thank you for your contribution.
Thank you @HappyTobi for your contribution. Before I can start building your PR, a member of the organization must set the required label(s) {'reviewed/ok-to-test'}. Once started, you can check the build status in the PR checks section below.
Thanks @HappyTobi!
Are we certain that v1.1.1 is fully compatible with the release-v1.21 and release-v1.22 branches of the in-tree Azure provider?
Also, can you please provide a proper release note?
charts/images.yaml
Diff excerpt (the gardener fork entry is replaced by the upstream image):

```yaml
sourceRepository: https://github.com/kubernetes-sigs/cloud-provider-azure
repository: mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager
tag: "v1.1.1"
targetVersion: "1.21.x"
- name: cloud-controller-manager
sourceRepository: github.com/gardener/cloud-provider-azure
repository: eu.gcr.io/gardener-project/kubernetes/cloud-provider-azure
tag: "v1.22.0"
sourceRepository: https://github.com/kubernetes-sigs/cloud-provider-azure
repository: mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager
tag: "v1.1.1"
targetVersion: ">= 1.22"
```
Suggested change:

```yaml
sourceRepository: github.com/kubernetes-sigs/cloud-provider-azure
repository: mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager
tag: "v1.1.1"
targetVersion: ">= 1.21"
```

(This would collapse the two entries into a single one covering all versions from 1.21 on.)
If I read the matrix correctly, it should be safe to use for version v1.22 and higher. For v1.21 we should probably use v1.0.5 of the image.
The out-of-tree CCM should basically be a superset of the in-tree one. Since v1.20 the in-tree provider only receives bug fixes, and these are first implemented out-of-tree and then ported back.
Both versions work for 1.21, but the parts required for using the cluster on vmss-flex were also part of v1.0.5.
I will change images.yaml to use v1.0.5 for k8s 1.21.
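For reference, a minimal sketch of what the two charts/images.yaml entries could look like after that change, assuming the field layout from the diff above (the exact indentation and entry structure are assumptions on my side):

```yaml
- name: cloud-controller-manager
  sourceRepository: https://github.com/kubernetes-sigs/cloud-provider-azure
  repository: mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager
  tag: "v1.0.5"      # version discussed for k8s 1.21, incl. the vmss-flex parts
  targetVersion: "1.21.x"
- name: cloud-controller-manager
  sourceRepository: https://github.com/kubernetes-sigs/cloud-provider-azure
  repository: mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager
  tag: "v1.1.1"      # per the compatibility matrix, safe for v1.22 and higher
  targetVersion: ">= 1.22"
```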
All tests are working with v1.0.5 and k8s 1.21. I updated the PR.
/invite @ialidzhikov
Thanks for the PR!
Basically I have the same question/concern about backwards compatibility as in #376 (review). If we are sure that there are no backwards-compatibility issues, and if we have tested thoroughly that our output qualification passes with this change, then lgtm. Another option (IMO a much safer one) would be to start using the out-of-tree CCM from K8s v1.23 (or K8s v1.22), as we already have the forks in place for K8s v1.21 and K8s v1.22. I am just putting this option on the table because Azure itself supports several deployment models with regard to networking; the CCMs usually have a number of provider-specific annotations to opt in/out of special handling, and if the out-of-tree CCM code base is a new one, then IMO it gets a little risky to roll out such a change all of a sudden. But as I said, if the change is well tested and compatibility is ensured, then lgtm from my side.
If the only motivation behind this PR is to fix #374, another option to tackle #374 would be to revert the CCM version back to a working one until our upstream PR kubernetes/kubernetes#105185 is merged, cherry-picked, released and adopted on our side.
Also, can you please provide a proper release note?
An appropriate release note is still missing.
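To illustrate the provider-specific annotations mentioned above, here is one widely documented Azure example; the Service name and ports are hypothetical. Both the in-tree and the out-of-tree CCM need to interpret such annotations consistently for the rollout to be safe:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: internal-app   # hypothetical name
  annotations:
    # Opt-in to an internal (non-public) Azure load balancer.
    service.beta.kubernetes.io/azure-load-balancer-internal: "true"
spec:
  type: LoadBalancer
  selector:
    app: internal-app
  ports:
  - port: 80
    targetPort: 8080
```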
...al/seed-controlplane/charts/cloud-controller-manager/templates/cloud-controller-manager.yaml
Proper release notes:
After a discussion we had, we have decided to go with @ialidzhikov's proposal and begin adoption with Kubernetes v1.23. We will adapt the PR to the new versions once they are released; until then it will remain open.
/hold until we have k8s
@HappyTobi You need to rebase this pull request onto the latest master branch. Please check.
/close - I will take this over as part of the 1.23 (ref gardener/gardener#5102)
How to categorize this PR?
/area robustness
/kind enhancement
/platform azure
What this PR does / why we need it:
The official implementation was moved from the in-tree provider to the kubernetes-sigs repository, so it is easier to fetch it directly from there.
Which issue(s) this PR fixes:
Fixes: #373
This also addresses multiple issues with shoots that were deployed on vmo / vmss-flex.
(In the last 4-6 months we have seen several issues that were already fixed in the kubernetes-sigs repository.)
Special notes for your reviewer:
Which way do you prefer for running the binary in the container: just move the flags to the "args" key of the container spec, or change/add a new entrypoint? A sketch of both options follows below.
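A minimal sketch of the two options as I understand them; the binary name inside the image is an assumption, and the flags are only examples:

```yaml
containers:
- name: cloud-controller-manager
  image: mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager:v1.1.1
  # Option 1: keep the image's default entrypoint and only pass flags via "args".
  args:
  - --cloud-provider=azure
  - --leader-elect=true
  # Option 2: override the entrypoint explicitly via "command"
  # (assumed binary name; would be used instead of the image default).
  # command:
  # - cloud-controller-manager
```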
Release note:
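For completeness, a hypothetical example of what the requested release note could look like, assuming the usual Gardener release-note block format (category and target group in the fence line); the wording is illustrative only, not the PR's actual note:

```improvement operator
The Azure cloud-controller-manager is now consumed from the official upstream image `mcr.microsoft.com/oss/kubernetes/azure-cloud-controller-manager` instead of the Gardener fork.
```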