---
title: single-node-installation-bootstrap-in-place
authors:
- "@dhellmann"
- "@eranco"
- "@romfreiman"
- "@tsorya"
reviewers:
- TBD
- "@mrunalp"
- "@markmc"
- "@deads2k"
- "@wking"
- "@eparis"
- "@hexfusion"
approvers:
- TBD
creation-date: 2020-12-13
last-updated: 2020-12-13
status: implementable
see-also:
- https://github.com/openshift/enhancements/pull/560
- https://github.com/openshift/enhancements/pull/302
---

# Single Node Production Edge Installer - bootstrap in place


## Release Signoff Checklist

- [x] Enhancement is `implementable`
- [ ] Design details are appropriately documented from clear requirements
- [ ] Test plan is defined
- [ ] Graduation criteria for dev preview, tech preview, GA
- [ ] User-facing documentation is created in [openshift-docs](https://github.com/openshift/openshift-docs/)

## Summary

As we add the new [`single-node production deployment`](https://github.com/openshift/enhancements/pull/560/files),
we need a way to install such a cluster without depending on an extra node for bootstrap.

This enhancement describes the flow for installing Single Node OpenShift using a liveCD that performs the bootstrap logic and then reboots to become the single node.

## Motivation

Currently, all OpenShift installations use an auxiliary bootstrap node.
The bootstrap node creates a temporary control plane that is required for launching the actual cluster.

Removing the bootstrap node is key for the 5G RAN use case and, more generally, for single-node OCP clusters in edge use cases.

Requiring an auxiliary node for installing Single Node OpenShift introduces several problems:
1. An additional node is needed; in most cases there is no spare hardware/VM at the edge site.
2. External dependencies are required:
   a. A load balancer (only for the bootstrap phase)
   b. DNS (configured per installation)
3. Bare Metal IPI cannot be used:
   a. It adds irrelevant dependencies - VIPs, keepalived, mDNS
   b. It requires the bootstrap node and the Single Node OpenShift to be on the same L2 network

### Goals

* This enhancement describes an approach for installing Single Node OpenShift
for production use.
* Make minimal changes to the OpenShift installer; this implementation shouldn't affect existing deployment flows.
* Installation should result in a clean Single Node OpenShift, without bootstrap leftovers.
* Rely on a liveCD, so the full installation can be triggered using virtual media.
* Self-contained bootstrap: the bootstrap flow is expressed as a single Ignition config which is layered on top of a pristine RHCOS image. It's important that we limit our use of this pattern to just the Ignition configs, as opposed to including other assets directly in the ISO image.

### Non-Goals

* This enhancement does not address a similar installation flow for 3-node clusters.
* This enhancement does not address high-availability for single-node
deployments.
* This enhancement does not address single-node-developer cluster-profile installation.

## Proposal

Execute the bootstrap flow on a liveCD.
Enrich the master ignition with the bootstrap control plane static pods and the other required assets.
Write the enriched master ignition and RHCOS to disk and reboot.
Post reboot, the node completes the installation on its own.

### User Stories

#### As a user, I can deploy OpenShift in a supported single-node configuration

A user will be able to run the OpenShift installer to create a single-node
ignition configuration.
The user will boot an RHCOS liveCD with this ignition config to create a single-node deployment.

### Implementation Details/Notes/Constraints

The openshift installer `create ignition-configs` command will generate a `bootstrap-in-place.ign`
file, when the number of replicas for the control plane (in the `install-config.yaml`) is `1`.
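
For illustration only, an `install-config.yaml` along the lines of the sketch below (all field values are placeholders) would cause `create ignition-configs` to also emit `bootstrap-in-place.ign`:

```sh
# Sketch: a single-node install-config.yaml with placeholder values, followed
# by the standard ignition-configs generation step.
mkdir -p sno-assets
cat > sno-assets/install-config.yaml <<'EOF'
apiVersion: v1
baseDomain: example.com
metadata:
  name: sno-cluster
controlPlane:
  name: master
  replicas: 1          # a single control plane replica triggers bootstrap-in-place.ign
compute:
- name: worker
  replicas: 0
platform:
  none: {}
pullSecret: '<pull-secret>'
sshKey: '<ssh-public-key>'
EOF

openshift-install create ignition-configs --dir=sno-assets
```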

The `bootstrap-in-place.ign` will be embedded into an RHCOS liveCD using the `coreos-installer` embed command.
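
As a rough sketch (the exact `coreos-installer` subcommand and flags depend on the version in use, and the ISO names are placeholders):

```sh
# Sketch: embed the bootstrap-in-place ignition config into an RHCOS live ISO.
coreos-installer iso ignition embed \
  --ignition-file sno-assets/bootstrap-in-place.ign \
  --output rhcos-sno-live.iso \
  rhcos-live.x86_64.iso
```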

The user will boot a machine with this liveCD. The liveCD will execute a flow similar to that of a bootstrap node in a regular installation:
`bootkube.sh` will perform all the rendering logic.
The bootstrap static pods should be generated in a way that allows the control plane operators to identify them and either continue in a controlled way to the next revision, or keep them as the correct revision and reuse them.

`cluster-bootstrap` will apply all the required manifests (under `/opt/openshift/manifests/`).

Bootkube will get the master ignition from the machine-config-server and
enrich it with the control plane static pod manifests and all required resources, including etcd data.
`bootkube.sh` will then write the enriched master ignition along with RHCOS to disk.
At this point bootkube will reboot the node and let it finish the cluster creation.

Post reboot:
The kubelet service will start the control plane static pods.
Kubelet will send a CSR (see below) and join the cluster.
CVO will deploy all cluster operators.
The control plane operators will roll out a new revision (if necessary).

#### Openshift-installer

We will add logic to the installer to create the `bootstrap-in-place.ign` ignition config.
This ignition config will diverge from the bootstrap ignition in the following bootkube changes (sketched below):
1. Start cluster-bootstrap without required pods (`--required-pods=''`).
2. Run cluster-bootstrap with the iBIP entrypoint to enrich the master ignition.
3. Write the RHCOS image and the enriched master ignition to disk.
4. Reboot the node.
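
For illustration, the bootstrap-in-place tail of `bootkube.sh` might look roughly like the following sketch. The target disk (`/dev/sda`) and ignition file paths are assumptions made for readability; only `--required-pods=''` is taken from this proposal, and `coreos-installer` flags may vary by version.

```sh
# Rough sketch of the proposed bootstrap-in-place steps at the end of bootkube.sh.

# 1. Apply the manifests without waiting for pods that would normally be
#    scheduled on master nodes.
cluster-bootstrap start --required-pods=''

# 2. Enrich the master ignition with the bootstrap control plane static pods,
#    their secrets/configs, and the etcd data. This uses the proposed
#    cluster-bootstrap iBIP entrypoint (sketched in the Cluster-bootstrap
#    section below); it is assumed to produce /opt/openshift/master-updated.ign.

# 3. Write the RHCOS image and the enriched master ignition to the target disk
#    (/dev/sda is only an example).
coreos-installer install --ignition-file /opt/openshift/master-updated.ign /dev/sda

# 4. Reboot into the installed system to let it finish cluster creation.
systemctl reboot
```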

#### Cluster-bootstrap

By default, `cluster-bootstrap` starts the bootstrap control plane and creates all the resources from the manifests under `/opt/openshift/manifests`.
`cluster-bootstrap` also waits for a list of required pods to be ready; these pods are expected to start running on the master nodes.
When running bootstrap-in-place, there is no master node available to run those pods. `cluster-bootstrap` should apply the manifests and tear down the control plane; if it fails to apply some of the manifests, it should return an error.


`cluster-bootstrap` will have a new entrypoint `iBIP`, which will get the master ignition as input and enrich it with the control plane static pod manifests and all required resources, including etcd data.
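
Since this entrypoint does not exist yet, any invocation is necessarily hypothetical; the sketch below only illustrates the intended inputs and outputs (the subcommand name, flags, and paths are assumptions):

```sh
# Hypothetical invocation of the proposed iBIP entrypoint: read the master
# ignition fetched from the machine-config-server, add the bootstrap static pod
# manifests, their secrets/configs, and the etcd data, and write the enriched
# ignition for bootkube.sh to install to disk.
cluster-bootstrap ibip \
  --asset-dir=/opt/openshift \
  --input=/opt/openshift/master.ign \
  --output=/opt/openshift/master-updated.ign
```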

#### Bootstrap / Control plane static pods
The control plane components we will add to the master ignition are (to be placed under `/etc/kubernetes/manifests`):
1. etcd-pod
2. kube-apiserver-pod
3. kube-controller-manager-pod
4. kube-scheduler-pod

Control plane required resources to be added to the ignition:
1. `/var/lib/etcd`
2. `/etc/kubernetes/bootstrap-configs`
3. `/opt/openshift/tls/*` (`/etc/kubernetes/bootstrap-secrets`)
4. `/opt/openshift/auth/kubeconfig-loopback` (`/etc/kubernetes/bootstrap-secrets/kubeconfig`)

Note: `/etc/kubernetes/bootstrap-secrets` and `/etc/kubernetes/bootstrap-configs` will be deleted post reboot once the OCP control plane is ready.

The control plane operators (that will run on the node post reboot) will manage the rollout of new revisions of the control plane pods.
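
For illustration, a single enriched entry in the master Ignition config could look roughly like the fragment below. This is a hand-written sketch, not the output of any existing tool; the Ignition spec version, file mode, and base64 placeholder are assumptions:

```sh
# Sketch: one "files" entry of the enriched master ignition, carrying a
# bootstrap-generated static pod manifest over to the installed node.
cat <<'EOF' > master-ignition-fragment.json
{
  "ignition": { "version": "3.1.0" },
  "storage": {
    "files": [
      {
        "path": "/etc/kubernetes/manifests/etcd-pod.yaml",
        "mode": 384,
        "contents": {
          "source": "data:text/plain;base64,<base64-encoded etcd static pod manifest>"
        }
      }
    ]
  }
}
EOF
```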

#### Etcd data

In order to have a viable, working etcd post reboot,
we will take a snapshot of etcd and add it to the master ignition.
Post reboot we will restore the etcd member from the snapshot.

Another option is to stop the etcd pod (by moving the static pod manifest out of `/etc/kubernetes/manifests`),
letting etcd save its state and exit. Once etcd is down, we will add the `/var/lib/etcd` directory to the master ignition.
After the reboot, etcd should start with all the data it had prior to the reboot.
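
For the snapshot-based option, the mechanics would be roughly as follows. This is a sketch only: the endpoint, certificate paths, and snapshot location are placeholders, and the membership flags normally passed to a restore are omitted for brevity:

```sh
# Sketch: snapshot etcd on the bootstrap (before reboot) and restore it on the
# single node (after reboot). Endpoint and certificate paths are illustrative.

# Before the reboot, on the live bootstrap environment:
ETCDCTL_API=3 etcdctl \
  --endpoints=https://localhost:2379 \
  --cacert=/opt/openshift/tls/etcd-ca-bundle.crt \
  --cert=/opt/openshift/tls/etcd-client.crt \
  --key=/opt/openshift/tls/etcd-client.key \
  snapshot save /opt/openshift/etcd-snapshot.db

# After the reboot, before the etcd static pod starts:
ETCDCTL_API=3 etcdctl snapshot restore /opt/openshift/etcd-snapshot.db \
  --data-dir=/var/lib/etcd
```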

#### Post reboot

We will add a new post-reboot service for approving the kubelet and node certificate signing requests (CSRs).
This service will also clean up the bootstrap static pod resources once the OCP control plane is ready.
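
A minimal sketch of what such a post-reboot service could run is shown below. The kubeconfig path is the loopback kubeconfig mentioned above; the readiness check, sleep interval, and cleanup trigger are assumptions:

```sh
# Sketch of the post-reboot helper: approve pending kubelet/node CSRs until the
# node has joined, then remove the bootstrap-time resources.
export KUBECONFIG=/etc/kubernetes/bootstrap-secrets/kubeconfig

until oc get node "$(hostname)" >/dev/null 2>&1; do
  # Approve any pending certificate signing requests (client and serving).
  for csr in $(oc get csr --no-headers | awk '{print $1}'); do
    oc adm certificate approve "$csr" || true
  done
  sleep 10
done

# In the real service, cleanup would wait until the OCP control plane is ready;
# shown here unconditionally for brevity.
rm -rf /etc/kubernetes/bootstrap-secrets /etc/kubernetes/bootstrap-configs
```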

### Risks and Mitigations

*What are the risks of this proposal and how do we mitigate. Think broadly. For
example, consider both security and how this will impact the larger OKD
ecosystem.*

*How will security be reviewed and by whom? How will UX be reviewed and by whom?*

## Design Details

### Open Questions

1. Which platforms should it support? (just start with `none`)
2. What should the bootable installation artifact be? Perhaps the installer should produce the liveCD?

### Test Plan

In order to claim full support for this configuration, we must have
CI coverage informing the release. An end-to-end job using the iBIP installation flow
and running an appropriate subset of the standard OpenShift tests
will be created and configured to block accepting release images
unless it passes.

That end-to-end job should also be run against pull requests for
the installer and cluster-bootstrap.

### Graduation Criteria

**Note:** *Section not required until targeted at a release.*

Define graduation milestones.

These may be defined in terms of API maturity, or as something else. Initial proposal
should keep this high-level with a focus on what signals will be looked at to
determine graduation.

Consider the following in developing the graduation criteria for this
enhancement:

- Maturity levels
- [`alpha`, `beta`, `stable` in upstream Kubernetes][maturity-levels]
- `Dev Preview`, `Tech Preview`, `GA` in OpenShift
- [Deprecation policy][deprecation-policy]

Clearly define what graduation means by either linking to the [API doc definition](https://kubernetes.io/docs/concepts/overview/kubernetes-api/#api-versioning),
or by redefining what graduation means.

In general, we try to use the same stages (alpha, beta, GA), regardless how the functionality is accessed.

[maturity-levels]: https://git.k8s.io/community/contributors/devel/sig-architecture/api_changes.md#alpha-beta-and-stable-versions
[deprecation-policy]: https://kubernetes.io/docs/reference/using-api/deprecation-policy/

#### Examples

##### Dev Preview -> Tech Preview

- Ability to utilize the enhancement end to end
- End user documentation, relative API stability
- Sufficient test coverage
- Gather feedback from users rather than just developers

##### Tech Preview -> GA

- More testing (upgrade, downgrade, scale)
- Sufficient time for feedback
- Available by default

**For non-optional features moving to GA, the graduation criteria must include
end to end tests.**

## Implementation History

Major milestones in the life cycle of a proposal should be tracked in `Implementation
History`.

## Drawbacks

1. The API will be unavailable from time to time during the installation.
2. `coreos-installer` cannot be used in cloud environments.

## Alternatives

### Installing using a remote bootstrap node

Run the bootstrap node as a VM in a hub cluster.
This approach is appealing because it keeps the current installation flow.
However, it requires external dependencies and has other drawbacks:
1. It will require a load balancer and DNS per installation.
2. Deployment runs remotely over an L3 connection (high latency (up to 150ms), low bandwidth in some cases), which isn't optimal for the etcd cluster (one member runs on the bootstrap node during the installation).
3. Running the bootstrap on the hub cluster presents a resource scaling issue (~50*(8GB+4 cores)), limiting ACM capacity.
