This repository explores the different ways to install OpenShift.
In this repository, I will focus on the Agent-based installation. This is a subcommand of the OpenShift Container Platform installer that generates a bootable ISO image containing all of the information required to deploy an OpenShift Container Platform cluster, using an available release image.
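For a single-node (SNO) deployment, the installer consumes two input files, install-config.yaml and agent-config.yaml. A minimal sketch of both (all values are illustrative, and the agent-config apiVersion may differ between OCP releases):
# install-config.yaml (illustrative values)
apiVersion: v1
baseDomain: example.com
metadata:
  name: sno
controlPlane:
  name: master
  replicas: 1
compute:
- name: worker
  replicas: 0
networking:
  networkType: OVNKubernetes
  machineNetwork:
  - cidr: 192.168.122.0/24
platform:
  none: {}
pullSecret: '<pull-secret>'
sshKey: '<ssh-public-key>'
# agent-config.yaml (illustrative values; apiVersion may vary by release)
apiVersion: v1alpha1
kind: AgentConfig
metadata:
  name: sno
rendezvousIP: 192.168.122.10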
kcli create plan -f kcli-plan-acm.yaml acm # Create the OCP plan with name acm
kcli start plan acm
kcli stop plan acm
kcli get vm
kcli delete plan acm
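A kcli plan file is just a YAML map of VM names to their parameters. A hypothetical sketch of what kcli-plan-acm.yaml could contain (keys follow kcli's parameter names; values are illustrative):
acm-node:
  numcpus: 8
  memory: 16384
  disks:
  - size: 120
  nets:
  - default
  iso: agent.x86_64.iso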
ℹ️ Then, you can rename the VM to sno-node after installation.
Alternatively, use the govc vm.create command:
source ./ocp-sno/env-vars && govc vm.create --dc=SDDC-Datacenter -c 8 -m=16384 -folder=$(govc ls /SDDC-Datacenter/vm/Workloads) -net=$(govc ls /SDDC-Datacenter/network | grep segment) -on=false test
govc device.cdrom.add -vm test
source ./ocp-sno/env-vars && envsubst < ./ocp-sno/install-config-template.yaml > ./agent/install-config.yaml
source ./ocp-sno/env-vars && envsubst < ./ocp-sno/agent-config-template.yaml > ./agent/agent-config.yaml
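The env-vars file is just a list of exported variables that envsubst substitutes into the templates. A hypothetical example (variable names must match the placeholders in the *-template.yaml files):
export CLUSTER_NAME=sno
export BASE_DOMAIN=example.com
export SSH_PUB_KEY="ssh-rsa AAAA..."
export PULL_SECRET='{"auths": {...}}'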
ℹ️ Remember that NMState must be installed on the node where you generate the ISO: sudo dnf install nmstate
openshift-install --dir ./agent agent create image
We use the govc CLI.
source ocp-sno/env-vars
# List all VMs
govc ls -l=true $(govc ls /SDDC-Datacenter/vm/Workloads)
# Get VM info
govc vm.info sno-node
govc datastore.upload --dc=SDDC-Datacenter --ds=WorkloadDatastore ./agent/agent.x86_64.iso agent2/alvaro-agent.x86_64.iso
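Once the ISO is in the datastore, attach it to the VM's CD-ROM drive and power the VM on (a sketch; without -device, govc uses the first CD-ROM device it finds):
govc device.cdrom.insert -vm sno-node -ds WorkloadDatastore agent2/alvaro-agent.x86_64.iso
govc vm.power -on sno-node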
source ./ocp/env-vars && envsubst < ./ocp/install-config-template.yaml > ./agent/install-config.yaml
source ./ocp/env-vars && envsubst < ./ocp/agent-config-template.yaml > ./agent/agent-config.yaml
ℹ️ Remember that NMState must be installed on the node where you generate the ISO: sudo dnf install nmstate
openshift-install --dir ./agent agent create image
We use the govc CLI.
source ocp/env-vars
# List all VMs
govc ls -l=true $(govc ls /SDDC-Datacenter/vm/Workloads)
# Get VM info
govc vm.info sno-node
govc datastore.upload --dc=SDDC-Datacenter --ds=WorkloadDatastore ./agent/agent.x86_64.iso agent2/alvaro-ha/agent.x86_64.iso
cd Ansible
ansible-playbook -vvv -i inventory create-vms.yaml
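The inventory for VMware provisioning typically just targets localhost, because the VMware modules talk to the vCenter API rather than connecting to hosts over SSH. A hypothetical minimal inventory:
[local]
localhost ansible_connection=local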
openshift-install --dir ./agent agent wait-for bootstrap-complete --log-level=debug
openshift-install --dir ./agent agent wait-for install-complete --log-level=debug
ℹ️ How to connect to the cluster:
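The installer stores the admin credentials under the assets directory, so you can connect with:
export KUBECONFIG=./agent/auth/kubeconfig
oc get nodes
# Password for the kubeadmin user in the web console:
cat ./agent/auth/kubeadmin-password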
Automation execution environments are container images on which all automation in Red Hat Ansible Automation Platform is run.
# Clean previous subscription (back up the Satellite rhsm.conf and restore the original one)
sudo mv /etc/rhsm/rhsm.conf /etc/rhsm/rhsm.conf.satellite-backup
sudo mv /etc/rhsm/rhsm.conf.kat-backup /etc/rhsm/rhsm.conf
sudo subscription-manager clean
# Subscribe to Red Hat CDN
sudo subscription-manager register
sudo subscription-manager repos --enable=ansible-automation-platform-2.4-for-rhel-8-x86_64-rpms # For ansible-builder
sudo subscription-manager repos --enable=rhocp-4.12-for-rhel-8-x86_64-rpms # For the build itself
sudo dnf upgrade
# Add EPEL to RHEL 8
sudo subscription-manager repos --enable codeready-builder-for-rhel-8-$(arch)-rpms
sudo dnf install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
The previous commands are based on: https://access.redhat.com/solutions/253273
ℹ️ You need to obtain a token from here and add it to the ansible.cfg file.
# Installation
sudo dnf install ansible-core ansible-navigator ansible-builder
# Add the podman credentials to all the container registries
mkdir $HOME/.docker
# cp mirror/auth.json $XDG_RUNTIME_DIR/containers/auth.json # This is ephemeral
cp mirror/auth.json $HOME/.docker/config.json # This is persistent
# Download the OCP tools with https://github.com/jtudelag/ocp-disconnected/blob/main/scripts/download-ocp-tools.sh
./scripts/download-ocp-tools.sh
mkdir files
cp <location-of-the-tar> files
# Create the container image
ansible-builder build -v 3 --tag quay.io/alopezme/ee-ocp-vmware:latest
# Run the playbook
ansible-navigator run create-vms.yaml --eei quay.io/alopezme/ee-ocp-vmware:latest --pp never -b -m stdout
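ansible-builder reads the image definition from an execution-environment.yml file in the working directory. A minimal sketch of such a definition (version 3 schema; base image and file names are illustrative):
version: 3
images:
  base_image:
    name: registry.redhat.io/ansible-automation-platform-24/ee-minimal-rhel8:latest
dependencies:
  galaxy: requirements.yml   # collections, e.g. community.vmware
  python: requirements.txt   # e.g. pyvmomi
additional_build_files:
- src: files
  dest: files                # the OCP tools tarball copied above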
Documentation:
# Install
cd ocp4-abi/ansible/ocp4-abi-installation
./install-ocp4.sh
# Check status
KUBECONFIG=my_cluster_ansible/../auth/kubeconfig oc get nodes
openshift-install --dir my_cluster_ansible/.. agent wait-for bootstrap-complete --log-level=debug
ansible-vault view vars/vault.yml
ansible-vault edit vars/vault.yml
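If the vault file does not exist yet, the standard ansible-vault commands create or encrypt it:
ansible-vault create vars/vault.yml    # create a new encrypted file
ansible-vault encrypt vars/vault.yml   # encrypt an existing plaintext file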
# Login to the Red Hat Registry using your Customer Portal credentials
mkdir $XDG_RUNTIME_DIR/containers
mkdir $HOME/.docker
cp mirror/auth.json $XDG_RUNTIME_DIR/containers/auth.json # This is ephemeral
cp mirror/auth.json $HOME/.docker/config.json # This is persistent
# Generate initial config file (Then modify manually the location and the catalog version)
oc mirror init --registry example.com/mirror/oc-mirror-metadata > ./mirror/imageset-config.yaml
# Mirror it locally
oc mirror -v 3 --config ./mirror/imageset-config.yaml file://mirror-to-disk
# After copying it to the new folder, generate the registry
oc mirror --from=./mirror_seq1_000000.tar docker://registry.example:5000
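For reference, after adjusting the storage location and the catalog version, the imageset-config.yaml looks roughly like this (the package list is illustrative):
kind: ImageSetConfiguration
apiVersion: mirror.openshift.io/v1alpha2
storageConfig:
  registry:
    imageURL: example.com/mirror/oc-mirror-metadata
    skipTLS: false
mirror:
  platform:
    channels:
    - name: stable-4.12
  operators:
  - catalog: registry.redhat.io/redhat/redhat-operator-index:v4.12
    packages:
    - name: advanced-cluster-management   # illustrative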
The previous process is based on this documentation:
At some point, you will need to check the latest versions of your installed operators to decide whether to upgrade them. The oc-mirror command is the tool for this, but you will find it quite slow, as it has to download the catalog container image every time it checks the version of an operator. For that reason, I have created a hack to mirror the catalog images locally and point oc-mirror at them.
First of all, use the normal catalog command to check the names of the catalog images. For example, this is the execution for OCP 4.12:
$ oc mirror list operators --catalogs --version=4.12
Available OpenShift OperatorHub catalogs:
OpenShift 4.12:
registry.redhat.io/redhat/redhat-operator-index:v4.12
registry.redhat.io/redhat/certified-operator-index:v4.12
registry.redhat.io/redhat/community-operator-index:v4.12
registry.redhat.io/redhat/redhat-marketplace-index:v4.12
ℹ️ It took me around 2 minutes with a good internet connection.
Now, we need to deploy a local container registry to sync the container images locally. I'm using jtudelag's script, which I synced here:
$ ./catalog-check-versions/local-registry-deploy.sh
$ podman login localhost:5000
$ podman login registry.redhat.io
podman pull registry.redhat.io/redhat/redhat-operator-index:v4.12
podman pull registry.redhat.io/redhat/certified-operator-index:v4.12
podman pull registry.redhat.io/redhat/community-operator-index:v4.12
ℹ️ If you get the following error:
podman tag registry.redhat.io/redhat/redhat-operator-index:v4.12 localhost:5000/redhat/redhat-operator-index:v4.12
podman tag registry.redhat.io/redhat/certified-operator-index:v4.12 localhost:5000/redhat/certified-operator-index:v4.12
podman tag registry.redhat.io/redhat/community-operator-index:v4.12 localhost:5000/redhat/community-operator-index:v4.12
podman push --remove-signatures localhost:5000/redhat/redhat-operator-index:v4.12
podman push --remove-signatures localhost:5000/redhat/certified-operator-index:v4.12
podman push --remove-signatures localhost:5000/redhat/community-operator-index:v4.12
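You can verify that the three catalogs landed in the local registry using the standard registry v2 API:
curl -s http://localhost:5000/v2/_catalog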
jq '{name: .packageName, channel: .channelName, version}' operators.json
From now on, always execute the scripts pointing to the local catalog images instead of the external ones.
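For example, to list the available versions of a single operator against the local mirror (the package name is illustrative):
oc mirror list operators --catalog=localhost:5000/redhat/redhat-operator-index:v4.12 --package=advanced-cluster-management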
ℹ️ Comparing times, oc-mirror list operators takes 5 minutes against the external registry and less than a minute against the local one.
This script will give you all the versions that you are looking for:
./catalog-check-versions/retrieve-versions.sh