OpenShift Origin v1.5.1 hyper-converged infrastructure deployment tutorial (deploying containerized Gluster storage with Atomic Host and OpenShift)
A step-by-step tutorial on deploying a hyper-converged infrastructure with OpenShift Origin v1.5.1 and Gluster on CentOS Atomic Host.
Host | OS | IP | Cores | RAM | dev/vda (system) | dev/vdb (docker) | dev/vdc (gluster) |
---|---|---|---|---|---|---|---|
installer.openshift151.amsokol.me | CentOS Minimal | 192.168.151.10 | 2 | 2048 MB | 64 GB | - | - |
master-01.openshift151.amsokol.me | CentOS Atomic | 192.168.151.11 | 2 | 4096 MB | 64 GB | 128 GB | - |
node-1-01.openshift151.amsokol.me | CentOS Atomic | 192.168.151.101 | 2 | 4096 MB | 64 GB | 128 GB | 256 GB |
node-1-02.openshift151.amsokol.me | CentOS Atomic | 192.168.151.102 | 2 | 4096 MB | 64 GB | 128 GB | 256 GB |
node-2-01.openshift151.amsokol.me | CentOS Atomic | 192.168.151.201 | 2 | 4096 MB | 64 GB | 128 GB | 256 GB |
node-2-02.openshift151.amsokol.me | CentOS Atomic | 192.168.151.202 | 2 | 4096 MB | 64 GB | 128 GB | 256 GB |
- CentOS Atomic (tested with `CentOS-Atomic-Host-7.1704-Installer.iso`): http://cloud.centos.org/centos/7/atomic/images/
- CentOS Minimal (tested with `CentOS-7-x86_64-Minimal-1704-01.iso`): https://buildlogs.centos.org/rolling/7/isos/x86_64/
- Set DNS records from the table above.
- Set `*.app.openshift151.amsokol.me` to 192.168.151.101
- Set `openshift151.amsokol.me` to 192.168.151.11

You need only the root account on `installer` and `master-01`. All commands should be run as root!
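If you do not already run DNS for the lab, the records above can be sketched as a dnsmasq configuration fragment (hypothetical — any DNS server with wildcard support works; hostnames and IPs are taken from the table):

```
# Fixed records from the host table
host-record=installer.openshift151.amsokol.me,192.168.151.10
host-record=master-01.openshift151.amsokol.me,192.168.151.11
host-record=node-1-01.openshift151.amsokol.me,192.168.151.101
host-record=node-1-02.openshift151.amsokol.me,192.168.151.102
host-record=node-2-01.openshift151.amsokol.me,192.168.151.201
host-record=node-2-02.openshift151.amsokol.me,192.168.151.202
# API endpoint
host-record=openshift151.amsokol.me,192.168.151.11
# Wildcard for application routes (*.app.openshift151.amsokol.me)
address=/app.openshift151.amsokol.me/192.168.151.101
```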
- Install OS
- SSH as root and run:
# atomic host upgrade
# reboot
- SSH as root and run:
# systemctl stop docker
# atomic storage reset
# atomic storage modify --driver devicemapper --add-device /dev/vdb --vgroup vg-docker
# systemctl start docker
- Run as root:
# cat <<EOF >> /etc/sysctl.conf
vm.overcommit_memory = 1
vm.panic_on_oom = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
# swapoff -a
# reboot
# docker info
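Before rebooting, you may want to confirm that all four kernel keys actually landed in `/etc/sysctl.conf`. A minimal sketch (run here against an inline copy of the fragment so it works anywhere; on a real host point `conf` at the file itself):

```shell
# Verify the four kernel keys are present. On a real host use:
#   conf=$(cat /etc/sysctl.conf)
conf="vm.overcommit_memory = 1
vm.panic_on_oom = 0
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1"
ok_count=0
for key in vm.overcommit_memory vm.panic_on_oom \
           net.bridge.bridge-nf-call-ip6tables net.bridge.bridge-nf-call-iptables; do
  if printf '%s\n' "$conf" | grep -q "^$key"; then
    echo "$key: ok"
    ok_count=$((ok_count + 1))
  fi
done
```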
- Install OS
- SSH as root and run:
# yum -y update && yum -y clean all
# reboot
- SSH as root
- Run (leave all passwords empty):
# ssh-keygen
- Run (enter the root password for each server):
# for host in master-01.openshift151.amsokol.me \
node-1-01.openshift151.amsokol.me \
node-1-02.openshift151.amsokol.me \
node-2-01.openshift151.amsokol.me \
node-2-02.openshift151.amsokol.me; \
do ssh-copy-id -i ~/.ssh/id_rsa.pub $host; \
done
- Run:
# yum -y install centos-release-openshift-origin
# yum -y install git python-cryptography pyOpenSSL httpd-tools ansible
# yum -y clean all
# cd ~
# git clone https://github.com/openshift/openshift-ansible
# git clone https://github.com/amsokol/openshift-lab01-hyper-converged.git
- SSH as root to `installer`
- Check if all nodes are ready:
# cd ~
# ansible -i openshift-lab01-hyper-converged/inventory-lab02.toml nodes -a '/usr/bin/rpm-ostree status'
- Start installation:
# ansible-playbook -i openshift-lab01-hyper-converged/inventory-lab02.toml openshift-ansible/playbooks/byo/config.yml
[Optional, just FYI] Redeploy master certificates (you need your own domain instead of amsokol.me):
- SSH as root to `installer`
- Uncomment the two lines below `# Redeploy master certificates` in the `inventory-lab02.properties` file:
openshift_master_named_certificates=[{"certfile": "/root/openshift.amsokol.me.crt", "keyfile": "/root/openshift.amsokol.me.key", "names":["openshift.amsokol.me"]}]
openshift_master_overwrite_named_certificates=true
- Create the certificate and key files (`openshift.amsokol.me.crt` and `openshift.amsokol.me.key`, as referenced above) on https://www.startssl.com/
- Copy `openshift.amsokol.me.crt` and `openshift.amsokol.me.key` to the `/root` folder on `installer`
- Run installation:
# ansible-playbook -i openshift-lab01-hyper-converged/inventory-lab02.toml openshift-ansible/playbooks/byo/openshift-cluster/redeploy-master-certificates.yml
- SSH as root to `installer`
- Add the `admin` user with a password:
# ansible -i openshift-lab01-hyper-converged/inventory-lab02.toml masters -a "sed -i '$ a `htpasswd -n admin`' /etc/origin/master/htpasswd"
# ansible -i openshift-lab01-hyper-converged/inventory-lab02.toml masters -a 'oc adm policy add-cluster-role-to-user cluster-admin admin'
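Note that `htpasswd -n admin` prompts for the password interactively. For illustration, here is a non-interactive sketch of the entry it appends (assumes `openssl` is available; `changeme` is a placeholder, not a suggested password):

```shell
# Build an "admin:<apr1-md5-hash>" line, the same format `htpasswd -n admin`
# emits for /etc/origin/master/htpasswd. PASSWORD is a placeholder.
PASSWORD='changeme'
entry="admin:$(openssl passwd -apr1 "$PASSWORD")"
echo "$entry"
```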
- SSH as root to `installer`
- Add the `amsokol` user with a password:
# ansible -i openshift-lab01-hyper-converged/inventory-lab02.toml masters -a "sed -i '$ a `htpasswd -n amsokol`' /etc/origin/master/htpasswd"
- [Optional] Give `amsokol` direct access to OpenShift's Docker registry:
# ansible -i openshift-lab01-hyper-converged/inventory-lab02.toml masters -a "oc adm policy add-role-to-user system:registry amsokol"
# ansible -i openshift-lab01-hyper-converged/inventory-lab02.toml masters -a "oc adm policy add-role-to-user system:image-builder amsokol"
- SSH as root to `installer` and run:
# yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-7.noarch.rpm
# yum -y install heketi-templates heketi-client
- Copy all files from `/usr/share/heketi/templates` (on `installer`) to `/root/heketi/templates` (on `master-01`; create the `/root/heketi/templates` directory there first)
- On each of the `node-1-01`, `node-1-02`, `node-2-01`, `node-2-02` hosts, add the following rules to `/etc/sysconfig/iptables` and reboot:
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24007 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 24008 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport 2222 -j ACCEPT
-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m multiport --dports 49152:49251 -j ACCEPT
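The same four rules can be generated with a small loop, which helps avoid typos when editing `/etc/sysconfig/iptables` on all four nodes (a sketch: it only prints the rules, it does not install them):

```shell
# Print the Gluster-related firewall rules (the three single ports plus
# the brick port range listed above) for pasting into
# /etc/sysconfig/iptables on each node.
rules=""
for port in 24007 24008 2222; do
  rules="$rules-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m tcp --dport $port -j ACCEPT
"
done
rules="$rules-A OS_FIREWALL_ALLOW -p tcp -m state --state NEW -m multiport --dports 49152:49251 -j ACCEPT"
printf '%s\n' "$rules"
```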
- [Workaround for Heketi issue #656] On each of `node-1-01`, `node-1-02`, `node-2-01`, `node-2-02`, run the following as root:
# systemctl stop rpcbind.socket
# systemctl disable rpcbind.socket
- SSH as root to `master-01` and run:
# oc new-project aplo
# oc project aplo
# oc adm policy add-scc-to-user privileged -z default
# oc create -f /root/heketi/templates
# oc process glusterfs -p GLUSTERFS_NODE=node-1-01.openshift151.amsokol.me | oc create -f -
# oc process glusterfs -p GLUSTERFS_NODE=node-1-02.openshift151.amsokol.me | oc create -f -
# oc process glusterfs -p GLUSTERFS_NODE=node-2-01.openshift151.amsokol.me | oc create -f -
# oc process glusterfs -p GLUSTERFS_NODE=node-2-02.openshift151.amsokol.me | oc create -f -
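The four `oc process` calls above can be collapsed into a loop. Here is a dry-run sketch that only prints the command for each node; to execute, run the printed lines on `master-01` (or pipe `oc process` straight into `oc create` as above):

```shell
# Dry run: print the per-node `oc process | oc create` command for each
# Gluster node in the host table.
cmds=""
for node in node-1-01 node-1-02 node-2-01 node-2-02; do
  cmd="oc process glusterfs -p GLUSTERFS_NODE=${node}.openshift151.amsokol.me | oc create -f -"
  echo "$cmd"
  cmds="$cmds$cmd
"
done
```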
- Wait until all pods are created
- Run (replace `<admin_password>` with the `admin` password you set when creating the account):
# oc process deploy-heketi \
-p HEKETI_KUBE_NAMESPACE=aplo \
-p HEKETI_KUBE_APIHOST=https://openshift151.amsokol.me:8443 \
-p HEKETI_KUBE_INSECURE=y \
-p HEKETI_KUBE_USER=admin \
-p HEKETI_KUBE_PASSWORD=<admin_password> | oc create -f -
- Wait until the pod is created and test the result:
# curl http://deploy-heketi-aplo.app.openshift151.amsokol.me/hello
- Run:
# oc adm policy add-role-to-user admin system:serviceaccount:aplo:default -n aplo
- SSH as root to `installer` and run:
# export HEKETI_CLI_SERVER=http://deploy-heketi-aplo.app.openshift151.amsokol.me:80
# heketi-cli topology load --json=openshift-lab01-hyper-converged/gluster-topology.json
# heketi-cli setup-openshift-heketi-storage
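The actual `gluster-topology.json` ships in the cloned repository. As a sketch of the heketi topology file format, a layout matching the host table (zone 1 for the `node-1-*` hosts, zone 2 for `node-2-*`, with `/dev/vdc` as the raw Gluster device) would look roughly like:

```json
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["node-1-01.openshift151.amsokol.me"],
              "storage": ["192.168.151.101"]
            },
            "zone": 1
          },
          "devices": ["/dev/vdc"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["node-1-02.openshift151.amsokol.me"],
              "storage": ["192.168.151.102"]
            },
            "zone": 1
          },
          "devices": ["/dev/vdc"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["node-2-01.openshift151.amsokol.me"],
              "storage": ["192.168.151.201"]
            },
            "zone": 2
          },
          "devices": ["/dev/vdc"]
        },
        {
          "node": {
            "hostnames": {
              "manage": ["node-2-02.openshift151.amsokol.me"],
              "storage": ["192.168.151.202"]
            },
            "zone": 2
          },
          "devices": ["/dev/vdc"]
        }
      ]
    }
  ]
}
```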
- Copy `heketi-storage.json` from `/root` (on `installer`) to `/root` (on `master-01`)
- SSH as root to `master-01` and run:
# oc create -f heketi-storage.json
# oc delete all,job,template,secret --selector="deploy-heketi"
- Run (replace `<admin_password>` with the `admin` password you set when creating the account):
# oc process heketi \
-p HEKETI_KUBE_NAMESPACE=aplo \
-p HEKETI_KUBE_APIHOST=https://openshift151.amsokol.me:8443 \
-p HEKETI_KUBE_INSECURE=y \
-p HEKETI_KUBE_USER=admin \
-p HEKETI_KUBE_PASSWORD=<admin_password> | oc create -f -
- Wait until the pod is created and test the result:
# curl http://heketi-aplo.app.openshift151.amsokol.me/hello
- SSH as root to `installer` and run:
# export HEKETI_CLI_SERVER=http://heketi-aplo.app.openshift151.amsokol.me:80
# heketi-cli topology info
- Copy `glusterfs-storageclass.yaml` from `/root/openshift-lab01-hyper-converged` (on `installer`) to `/root` (on `master-01`)
- SSH as root to `master-01` and run:
# oc create -f glusterfs-storageclass.yaml
- Login as `admin` (the account you created above) to https://openshift151.amsokol.me:8443
- Open the `default` project
- Create storage ('Storage Classes' = 'slow', 'Name' = 'docker-registry-claim', 'Access Mode' = 'Shared Access', 'Size' = 50 GiB)
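The console steps above can also be done from the CLI. A sketch of the equivalent PersistentVolumeClaim, assuming the storage class created from `glusterfs-storageclass.yaml` is named `slow` and mapping 'Shared Access' to `ReadWriteMany` (the annotation form is what OpenShift 1.5-era clusters use):

```yaml
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: docker-registry-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: slow
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 50Gi
```

Save it to a file (e.g. `docker-registry-claim.yaml`, a hypothetical name) and create it with `oc create -f docker-registry-claim.yaml -n default`.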
- SSH as root to `master-01` and run:
# oc project default
# oc volume deploymentconfigs/docker-registry --add --name=registry-storage -t pvc --claim-name=docker-registry-claim --overwrite