
How to upstream? #97

Closed
luxas opened this issue Aug 6, 2016 · 2 comments


luxas commented Aug 6, 2016

Hi,
I'd like to know your thoughts on upstreaming.
How is this going to be integrated with the new turn-up UX that's being developed in sig-cluster-lifecycle?

Really nice project; I'd like to make it even more portable.


aaronlevy commented Aug 8, 2016

@luxas I believe a core component of the functionality here could be replaced with something like a kubelet pod api. See kubernetes/kubernetes#28138, which serves as something of an umbrella issue.

What this might look like:

  1. The initial node starts the kubelet, pointing it at an api-server (or servers) that doesn't yet exist (see the sketch after this list).
  2. Pod manifests for the control plane (api-server, scheduler, controller-manager) are pushed to the kubelet api.
  3. Once the api-server is running as a pod, daemonset/deployment objects for the control plane are pushed to the api-server.
  4. The higher-order objects (daemonsets/deployments) "adopt" the initially running pods, and we are now self-hosted.
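
A minimal sketch of step 1, assuming a layout like bootkube's (the apiserver address, kubeconfig path, and flag values are assumptions; --api-servers and --kubeconfig were the relevant kubelet flags around Kubernetes 1.3):

# Start the kubelet pointed at an apiserver address that is not serving yet;
# the kubelet keeps retrying until the control plane comes up in steps 2-3.
kubelet \
  --api-servers=https://172.17.4.101:443 \
  --kubeconfig=/etc/kubernetes/kubeconfig \
  --allow-privileged=true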

Implementation-wise, this should be relatively flexible, but one option is simply using kubectl:

# Create initial control-plane pods; retry until the kubelet api accepts them
until kubectl -s "${kubelet_api}" create -f manifests/pods; do
    sleep 5
done

# Create api objects (daemonsets/deployments) which will adopt those pods;
# retry until the self-hosted api-server starts answering
until kubectl create -f manifests/objects; do
    sleep 5
done
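
For step 4, adoption falls out of label selectors: the pushed deployment/daemonset selects the same labels the bootstrap pods already carry. A minimal sketch for the scheduler (the names, namespace, image, and flags are illustrative assumptions; extensions/v1beta1 was the Deployment API version at the time, and it defaults the selector to the template labels):

# A Deployment whose (defaulted) selector matches the labels on the bootstrap
# scheduler pod, so the Deployment "adopts" the already-running pod.
kubectl create -f - <<'EOF'
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-scheduler
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        k8s-app: kube-scheduler
    spec:
      containers:
      - name: kube-scheduler
        image: quay.io/coreos/hyperkube:v1.3.0_coreos.0
        command:
        - /hyperkube
        - scheduler
        - --master=http://127.0.0.1:8080
EOF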

If we wanted to wrap that in some kind of kube bootstrap command, that would be fine. But hopefully the process is simple enough that additional tooling is minimal, or unnecessary altogether.
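
Purely for illustration, such a wrapper (the kube-bootstrap name and retry interval are assumptions) could stay very small:

#!/usr/bin/env bash
# Hypothetical "kube-bootstrap": retry each phase until the endpoint accepts it.
retry() { until "$@"; do sleep 5; done; }

retry kubectl -s "${kubelet_api}" create -f manifests/pods   # bootstrap pods
retry kubectl create -f manifests/objects                    # adopting objects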

From here, I'm a proponent of the discovery api proposal: kubernetes/kubernetes#28422

So the UX for all subsequent nodes would potentially look something like:

$ kubelet --cluster=v1alpha1/${CLUSTER_ID}

aaronlevy commented

This is a bit dated at this point, but the general plan was for kubeadm to adopt similar self-hosted installation methods (and for this project to eventually move to using kubeadm directly). Current tracking issue: kubernetes/kubeadm#127

Closing this in favor of the above issue, with the "upstreaming" plan being to move this functionality into kubeadm.
