Make it easy to run Kubernetes on top of the Kubelet (aka self-hosting) #246
Pulling this out from #167. Something I'd be happy to work on.
The kubelet already supports reading config from a file; does that work?
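For context, a minimal sketch (not from this thread) of the kind of pod file the kubelet can read directly, with no apiserver involved. The path, pod name, and image are illustrative assumptions, and the flag that points the kubelet at such files has changed names across kubelet versions:

```yaml
# Hypothetical pod file, e.g. /etc/kubernetes/manifests/nginx.yaml,
# placed in the directory the kubelet watches for static pods.
apiVersion: v1
kind: Pod
metadata:
  name: nginx-static        # assumed name; the kubelet runs this without an apiserver
spec:
  containers:
  - name: nginx
    image: nginx:1.25       # assumed image/tag
    ports:
    - containerPort: 80
```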
Yeah, I think the work to do here is:
As a further item (possibly as a separate issue), being able to change code in your devenv and either run a one-line command (hack/update-local) or have the image/source automatically reloaded would be valuable for reducing the code-test loop time. Probably the former, though.
Self-hosting proposal: Kubernetes components are just applications, too. They have much the same needs for packaging/image-building, configuration, deployment, auth[nz], naming/discovery, process management, and so on. Leveraging the systems/APIs we build for our users avoids having to build and maintain two systems that do the same thing, and helps us eat our own dogfood. The recipe for building a bootstrappable system is fairly straightforward.
A more concrete sketch, starting with the master components: the only tricky part is transferring the etcd state. We don't have great solutions for stateful services in general yet (#260, #1515), but if etcd is just running on the same host or the same PD, it would pick the state up from the volume; the data could also be replicated to other instances. A step towards this could be to run the master components on the Kubelet using a local pod file. Self-hosting the Kubelet itself is a separate matter.
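A hedged sketch of what such a local pod file could look like for etcd, keeping state on a hostPath volume so that a restarted pod on the same host picks it right back up, per the point above. The image tag and paths here are assumptions, not details from this thread:

```yaml
# Hypothetical static pod file for etcd; state lives on the host, not in the container.
apiVersion: v1
kind: Pod
metadata:
  name: etcd
  namespace: kube-system
spec:
  hostNetwork: true
  containers:
  - name: etcd
    image: quay.io/coreos/etcd:v3.2.24   # assumed image/tag
    command:
    - etcd
    - --data-dir=/var/lib/etcd
    volumeMounts:
    - name: data
      mountPath: /var/lib/etcd
  volumes:
  - name: data
    hostPath:
      path: /var/lib/etcd                # survives pod restarts on the same host
```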
There was an example of that for 0.4 here: https://github.com/GoogleCloudPlatform/kubernetes/blob/master/build/run-images/bootstrap/run.sh#L22
Last time I tried, you couldn't run a kubelet in a pod with kubelet --runonce, because the child kubelet would kill its parent: lacking namespace information, it thought the parent was a leftover k8s container it didn't know about.
CC: @saad-ali
Hello everyone, I would like to work on this project during GSoC '15. I've gone through setting up the base dev environment and starting a local cluster, and I'm going through the kubelet and apiserver code right now. Which of the future points suggested by @bgrant0607 and others do you think are critical and should be pursued for this project? The bootstrapping part definitely seems interesting, but there are a lot of kubelet-related suggestions across the whole issue queue, and I'm assuming the bootstrapping process has a lot to do with components other than the kubelet too. Also, how should I proceed in order to contribute to the component dockerization work done so far by @jbeda? That is, what parts have been covered, and where should I focus now so as to further his work through this project?
@vipulnayyar Thanks for your interest! Early next week I plan to flesh out more of the details of the projects that GSoC candidates have expressed interest in, and I'll update this issue. My current thinking is that this project would focus on running the master components (apiserver, controller-manager, etc.) in pods. A necessary first step is to properly containerize them. We have made previous attempts at that, but didn't push them through to completion, for various reasons.
@bgrant0607 Related to this topic: based on my past experience as a GSoC student, you also need to figure out the application template that students should follow when submitting their GSoC application for Kubernetes.
I created a wiki page about participation: https://github.com/GoogleCloudPlatform/kubernetes/wiki/Google-Summer-of-Code-(GSoC)
- GSoC project ideas are labeled kind/gsoc.
- "Starter project" ideas are labeled help-wanted (note that not all are necessarily small/easy).
- Bootstrapping-related issues are labeled in an obvious way.
Work has started to run etcd in a container/pod (#4442); the apiserver, controller-manager, and scheduler also need to be handled. Pushing pod files via Salt is a reasonable starting point, but I'd love to be able to post the pods to the Kubelet and then have the apiserver automatically pick them up (this latter bit is ongoing -- #4090). Being able to run the master components with high availability (#473) is related; I think self-hosting them is essentially a prerequisite. Self-hosting the Kubelet is almost an entirely separate project.
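To make the "master components in pods" idea concrete, here is a rough sketch of an apiserver pod file of the kind that could be pushed via Salt or posted to the Kubelet. The flags shown (--etcd-servers, --service-cluster-ip-range) are real kube-apiserver flags, but the values, image location, and overall shape are assumptions for illustration:

```yaml
# Hypothetical static pod file for the apiserver, colocated with etcd on the master.
apiVersion: v1
kind: Pod
metadata:
  name: kube-apiserver
  namespace: kube-system
spec:
  hostNetwork: true        # reachable on the master's address; no cluster networking needed
  containers:
  - name: kube-apiserver
    image: registry.example.com/kube-apiserver:v1.9.0   # assumed image location
    command:
    - kube-apiserver
    - --etcd-servers=http://127.0.0.1:2379    # talks to the etcd pod on the same host
    - --service-cluster-ip-range=10.0.0.0/16
```

Once a pod is posted to the Kubelet this way, the mirror-pod work referenced above (#4090) is what would make such kubelet-managed pods visible through the apiserver.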
See also #5011. Sorry, @vipulnayyar. Someone has started working on this one, so I'm going to remove it from the GSoC list.
I feel like the primary task is largely done, and then some. Here is the current status of the self-hosted and bootkube thread: https://groups.google.com/d/msg/kubernetes-sig-cluster-lifecycle/p9QFxw-7NKE/jeYJF1hBAwAJ
cc: @kubernetes/sig-cluster-lifecycle-misc
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or @fejta.
/assign luxas |
@timothysc has been working on the bootstrap checkpointing feature for the node to make this work easily with upstream k8s. Also, kubeadm has self-hosting support (alpha in v1.9, expected to be beta in v1.10).
Given the plethora of incantations that exist today, I'm going to close this root issue because it no longer tracks the details. We are working in sig-cluster-lifecycle to refine this on our road to GA, and since this issue no longer tracks state, I'm closing it.