The OpenShift Installer is designed to help users, ranging from novices to experts, create OpenShift clusters in various environments. By default, the installer acts as an installation wizard, prompting the user for values that it cannot determine on its own and providing reasonable defaults for everything else. For more advanced users, the installer provides facilities for varying levels of customization.
On supported platforms, the installer is also capable of provisioning the underlying infrastructure for the cluster. It is recommended that most users make use of this functionality in order to avoid having to provision their own infrastructure. For other platforms or in scenarios where installer-created infrastructure would be incompatible, the installer can stop short of creating the infrastructure, and allow the user to provision their own infrastructure using the cluster assets generated by the installer.
OpenShift is unique in that its management extends all the way down to the operating system itself. Every machine boots with a configuration which references resources hosted in the cluster it is joining. This allows the cluster to manage itself as updates are applied. A downside to this approach, however, is that new clusters have no way of starting without external help - every machine in the to-be-created cluster is waiting on the to-be-created cluster.
OpenShift breaks this dependency loop using a temporary bootstrap machine. This bootstrap machine is booted with a concrete Ignition Config which describes how to create the cluster. This machine acts as a temporary control plane whose sole purpose is launching the rest of the cluster.
The main assets generated by the installer are the Ignition Configs for the bootstrap, master, and worker machines. Given these three configs (and correctly configured infrastructure), it is possible to start an OpenShift cluster. The process for bootstrapping a cluster looks like the following:
- The bootstrap machine boots and starts hosting the remote resources required for the master machines to boot.
- The master machines fetch the remote resources from the bootstrap machine and finish booting.
- The master machines use the bootstrap machine to form an etcd cluster.
- The bootstrap machine starts a temporary Kubernetes control plane using the newly-created etcd cluster.
- The temporary control plane schedules the production control plane to the master machines.
- The bootstrap machine injects OpenShift-specific components via the temporary control plane.
- The temporary control plane shuts down, leaving just the production control plane.
- The installer tears down the bootstrap machine.
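The first two steps work because each machine boots with only a small "pointer" Ignition config that tells it where to fetch its real configuration. A minimal sketch of what such a pointer might look like, assuming Ignition spec v3 and an illustrative cluster domain (the exact endpoint and certificate values vary by platform and release):

```json
{
  "ignition": {
    "version": "3.2.0",
    "config": {
      "merge": [
        {
          "source": "https://api-int.example-cluster.example.com:22623/config/master"
        }
      ]
    },
    "security": {
      "tls": {
        "certificateAuthorities": [
          {
            "source": "data:text/plain;charset=utf-8;base64,LS0tLS1CRUdJTi..."
          }
        ]
      }
    }
  }
}
```

During bootstrap, that endpoint is served by the bootstrap machine; once the production control plane is up, the cluster itself serves it, which is what allows the cluster to manage its own machines.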
The result of this bootstrapping process is a fully running OpenShift cluster. The cluster then downloads and configures the remaining components needed for day-to-day operation, including the creation of worker machines on supported platforms.
While striving to remain simple and easy to use, the installer allows many aspects of the clusters it creates to be customized. It is helpful to understand certain key concepts before attempting to customize the installation.
The OpenShift Installer operates on the notion of creating and destroying targets. Similar to other tools which operate on a graph of dependencies (e.g. make, systemd), each target represents a subset of the dependencies in the graph. The main target in the installer creates a cluster, but the other targets allow the user to interrupt this process and consume or modify the intermediate artifacts (e.g. the Kubernetes manifests that will be installed into the cluster). Only the immediate dependencies of a target are written to disk by the installer, but the installer can be invoked multiple times.
The following targets can be created by the installer:
`install-config`
- The install config contains the main parameters for the installation process. This configuration provides the user with more options than the interactive prompts and comes pre-populated with default values.

`manifests`
- This target outputs all of the Kubernetes manifests that will be installed on the cluster.

`ignition-configs`
- These are the three Ignition Configs for the bootstrap, master, and worker machines.

`cluster`
- This target provisions the cluster and its associated infrastructure.
The following targets can be destroyed by the installer:
`cluster`
- This destroys the created cluster and its associated infrastructure.

`bootstrap`
- This destroys the bootstrap infrastructure.
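A typical lifecycle chains these targets together. A sketch, assuming an asset directory named cluster-0 (note that `create cluster` normally tears down the bootstrap infrastructure itself, so the explicit `destroy bootstrap` is mainly useful in user-provisioned flows):

```sh
openshift-install --dir=cluster-0 create cluster     # the main target: provision everything
openshift-install --dir=cluster-0 destroy bootstrap  # usually automatic; explicit in UPI flows
openshift-install --dir=cluster-0 destroy cluster    # tear down the cluster when finished
```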
In order to allow users to customize their installation, the installer can be invoked multiple times. The state is stored in a hidden file in the asset directory and contains all of the intermediate artifacts. This allows the installer to pause during the installation and wait for the user to modify intermediate artifacts.
For example, you can create an install config and save it in a cluster-agnostic location:
```sh
openshift-install --dir=initial create install-config
mv initial/install-config.yaml .
rm -rf initial
```
You can use the saved install-config for future clusters by copying it into the asset directory and then invoking the installer:
```sh
mkdir cluster-0
cp install-config.yaml cluster-0/
openshift-install --dir=cluster-0 create cluster
```
Supplying a previously-generated install-config like this is explicitly part of the stable installer API.
Note that the installer will consume `install-config.yaml` from the asset directory. At any point before running `destroy cluster`, `install-config.yaml` can be regenerated by running `openshift-install --dir=cluster-0 create install-config`.
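If you are curious, the stored state can be inspected directly; a quick sketch, assuming the hidden file name used by recent installer releases (it is an internal detail, not a stable interface):

```sh
# List the intermediate assets recorded in the installer's hidden state file:
jq 'keys' cluster-0/.openshift_install_state.json
```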
You can also edit the assets in the asset directory during a single run. For example, you can adjust the cluster-version operator's configuration:
```sh
mkdir cluster-1
cp install-config.yaml cluster-1/
openshift-install --dir=cluster-1 create manifests # warning: this target is unstable
"${EDITOR}" cluster-1/manifests/cvo-overrides.yaml
openshift-install --dir=cluster-1 create cluster
```
As the unstable warning suggests, the presence of the `manifests` target, as well as the names and content of its output, is an unstable installer API. It is occasionally useful to make alterations like this as one-off changes, but don't expect them to work on subsequent installer releases.
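For reference, `cvo-overrides.yaml` contains the ClusterVersion resource, so a one-off edit of the kind shown above might add an entry under `spec.overrides` to stop the cluster-version operator from managing a particular component. A hedged sketch (the component named here is purely illustrative):

```yaml
apiVersion: config.openshift.io/v1
kind: ClusterVersion
metadata:
  name: version
spec:
  overrides:
  # Leave this deployment unmanaged by the CVO (illustrative component):
  - kind: Deployment
    group: apps
    name: example-operator
    namespace: openshift-example-operator
    unmanaged: true
```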
The `openshift-install` binary contains pinned versions of RHEL CoreOS "bootimages" (e.g. the OpenStack `qcow2`, the AWS AMI, and the bare-metal `.iso`). Fully automated installs use these by default.

For UPI (User-Provisioned Infrastructure) installs, you can use the `openshift-install coreos print-stream-json` command to access information about the bootimages in CoreOS Stream Metadata format. For example, this command will print the `x86_64` AMI for `us-west-1`:
```console
$ openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image'
ami-0c548bdf93b74cd59
```
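In a UPI flow you would typically feed that value into your own provisioning tooling. For example, capturing it in a shell variable (the region and architecture are just the ones from the example above):

```sh
AMI=$(openshift-install coreos print-stream-json \
  | jq -r '.architectures.x86_64.images.aws.regions["us-west-1"].image')
echo "${AMI}"  # e.g. pass this as a CloudFormation or Terraform parameter
```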
For on-premise clouds (e.g. OpenStack) with UPI installs, you may need to manually copy a bootimage into the infrastructure. Here's an example command to print the `x86_64` `qcow2` file for `openstack`:
```console
$ openshift-install coreos print-stream-json | jq -r '.architectures.x86_64.artifacts.openstack.formats["qcow2.gz"]'
{
  "disk": {
    "location": "https://releases-art-rhcos.svc.ci.openshift.org/art/storage/releases/rhcos-4.8/48.83.202102230316-0/x86_64/rhcos-48.83.202102230316-0-openstack.x86_64.qcow2.gz",
    "signature": "https://releases-art-rhcos.svc.ci.openshift.org/art/storage/releases/rhcos-4.8/48.83.202102230316-0/x86_64/rhcos-48.83.202102230316-0-openstack.x86_64.qcow2.gz.sig",
    "sha256": "abc2add9746eb7be82e6919ec13aad8e9eae8cf073d8da6126d7c95ea0dee962",
    "uncompressed-sha256": "9ed73a4e415ac670535c2188221e5a4a5f3e945bc2e03a65b1ed4fc76e5db6f2"
  }
}
```
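Because the stream metadata includes checksums, a UPI workflow can verify the downloaded image before uploading it. A sketch, assuming `curl`, `jq`, and `sha256sum` are available:

```sh
# Query the stream metadata for the image location and its published digest:
url=$(openshift-install coreos print-stream-json \
  | jq -r '.architectures.x86_64.artifacts.openstack.formats["qcow2.gz"].disk.location')
sha=$(openshift-install coreos print-stream-json \
  | jq -r '.architectures.x86_64.artifacts.openstack.formats["qcow2.gz"].disk.sha256')

# Download the compressed image and verify it against that digest:
curl -LO "${url}"
echo "${sha}  $(basename "${url}")" | sha256sum --check
```

The verified image can then be decompressed and uploaded with your platform's usual tooling (e.g. the `openstack image create` command for OpenStack).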