Relax hyperkube version match requirement? #278

Closed
kkimdev opened this issue Feb 16, 2019 · 5 comments

kkimdev commented Feb 16, 2019

Currently users are supposed to match hyperkube's version to the Kubernetes version via `--set scheduler.kubeSchedulerImage=gcr.io/google-containers/hyperkube:${KUBE_VERSION}`.
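
For reference, one way to do this pinning by hand (the release and chart names are illustrative, and the version-parsing one-liner is an assumption, not project tooling):

```shell
# Read the API server's version, e.g. "v1.13.4" (parsing is illustrative).
KUBE_VERSION=$(kubectl version --short | awk '/Server Version/ {print $3}')

# Helm 2-style install, pinning hyperkube to the same version.
helm install pingcap/tidb-operator --name tidb-operator \
  --set scheduler.kubeSchedulerImage=gcr.io/google-containers/hyperkube:${KUBE_VERSION}
```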

However, some managed Kubernetes services provide automatic Kubernetes version upgrades, which can introduce a version mismatch overnight. It would be great if this exact-version-match requirement could be relaxed.

I don't know how technically feasible this is, though.

@gregwebs (Contributor)

I think it should be possible for us to default to matching the version K8s is running (and even track it if it changes).

@weekface (Contributor)

@kkimdev The pods created by TiDB Operator are scheduled by TiDB Scheduler (kube-scheduler + tidb-scheduler). We recommend keeping the two versions consistent, but they can actually differ.
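
For context, tidb-scheduler runs alongside kube-scheduler as a scheduler extender, which is why it is the hyperkube tag (the kube-scheduler binary) that has to track the cluster version. A rough sketch of such a scheduler policy file; the URL, port, and field values here are assumptions, not the chart's exact config (`httpTimeout` is in nanoseconds, 30s here):

```json
{
  "kind": "Policy",
  "apiVersion": "v1",
  "extenders": [
    {
      "urlPrefix": "http://127.0.0.1:10262/scheduler",
      "filterVerb": "filter",
      "weight": 1,
      "httpTimeout": 30000000000,
      "enableHttps": false
    }
  ]
}
```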

@gregwebs (Contributor)

Major version changes of K8s will definitely break this setup.
We can use either helm `Capabilities.KubeVersion` or a MutatingAdmissionWebhook to set the image tag to match K8s, so that the user does not have to configure this parameter.
The helm approach would require the user to do a `helm upgrade` when upgrading the K8s version.
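
A minimal sketch of the helm approach, assuming a values key like `scheduler.kubeSchedulerImage` and Helm 2's `.Capabilities.KubeVersion.GitVersion` (the template and value names are illustrative, not the chart's actual layout):

```yaml
# Hypothetical excerpt from the scheduler Deployment template.
# If the user pins an image, use it; otherwise fall back to the version
# of the API server that helm is talking to.
containers:
  - name: kube-scheduler
    image: {{ .Values.scheduler.kubeSchedulerImage | default (printf "gcr.io/google-containers/hyperkube:%s" .Capabilities.KubeVersion.GitVersion) }}
```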

Otherwise we would need some code to monitor the K8s version and change the image tag when the K8s version changes.

@gregwebs (Contributor)

We now default the helm chart to try to match the K8s version, and it works on GKE. @kkimdev, can you try it out on your K8s install?
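
One rough way to check the result (the deployment name and jsonpath are assumptions):

```shell
# Compare the rendered scheduler image tag against the API server version.
kubectl get deployment tidb-scheduler -o jsonpath='{..image}'; echo
kubectl version --short   # the hyperkube tag should match "Server Version"
```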

kkimdev (Author) commented Mar 22, 2019

It worked well on both GKE and kubeadm-dind-cluster. Thanks!

kkimdev closed this as completed Mar 22, 2019