All the commands in the requirements section must be run on each node. To do that I used tmux with the `:setw synchronize-panes on` option.
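For reference, a minimal tmux sketch for driving all the nodes at once (session name and pane layout are just an example; ssh to a different node in each pane before enabling synchronization):

```bash
tmux new-session -d -s k8s-setup
tmux split-window -h -t k8s-setup                         # one pane per node
tmux split-window -v -t k8s-setup
tmux set-window-option -t k8s-setup synchronize-panes on  # mirror keystrokes to all panes
tmux attach -t k8s-setup
```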
Check that MAC addresses and product_uuid are unique on every node, and disable swap:

ip link
sudo cat /sys/class/dmi/id/product_uuid
swapoff -a
- Add the Kubernetes repository
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
- Set SELinux in permissive mode (effectively disabling it)
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet
- Make sure that the `br_netfilter` module is loaded before this step with `lsmod | grep br_netfilter`. To load it explicitly: `modprobe br_netfilter`
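To keep the module loaded across reboots as well (a common companion step, not strictly part of the list above):

```bash
modprobe br_netfilter                             # load it now
echo br_netfilter > /etc/modules-load.d/k8s.conf  # load it on every boot
lsmod | grep br_netfilter                         # verify
```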
- Install required packages
yum install yum-utils device-mapper-persistent-data lvm2
- Add Docker repository
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
- Install Docker CE
yum update && yum install docker-ce-18.06.2.ce
- Create /etc/docker directory
mkdir /etc/docker
- Setup daemon:

```bash
cat > /etc/docker/daemon.json <<EOF
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "storage-opts": [
    "overlay2.override_kernel_check=true"
  ]
}
EOF

mkdir -p /etc/systemd/system/docker.service.d
```
- Restart Docker:

```bash
systemctl daemon-reload
systemctl restart docker
```
- Set `/proc/sys/net/bridge/bridge-nf-call-iptables` to `1` by running `sysctl net.bridge.bridge-nf-call-iptables=1` to pass bridged IPv4 traffic to iptables' chains (persisting it is sketched below)
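Setting it with `sysctl` only lasts until reboot; a sketch for making it persistent (the file name is arbitrary):

```bash
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system   # reload settings from all sysctl configuration files
```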
With `kubeadm init`, add `--apiserver-advertise-address` with the node's IP if there are multiple interfaces, and `--pod-network-cidr=10.244.0.0/16` to make flannel work:

kubeadm init --apiserver-advertise-address=<MASTER_IP> --pod-network-cidr=10.244.0.0/16
Then, as a normal user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install the flannel pod network add-on:

```shell
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
```
As the output of kubeadm says:
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.60.101:6443 --token kbrccr.gkqno5vco54n1ilc \
--discovery-token-ca-cert-hash sha256:2d16a996778f4a003d23725ec64bf6070fce61f49e96bc74123ccf98bcd6a08
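The token in that output expires after 24 hours; a fresh join command can always be regenerated on the master:

```bash
kubeadm token create --print-join-command
```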
To use `kubectl` from nodes other than the master, copy `/etc/kubernetes/admin.conf` to that node and choose a way to use it (a sketch for combining several kubeconfigs follows this list):

- `kubectl --kubeconfig ./admin.conf get nodes`
- `export KUBECONFIG=/etc/kubernetes/admin.conf`
- copy it inside `$HOME/.kube/config`
- from a Vagrant VM: `vagrant scp master1:/home/vagrant/.kube/config /home/giogio/.kube/vagrant-conf`
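A sketch for combining the copied config with an existing one (paths taken from the steps above; the merged file name is just an example):

```bash
export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/vagrant-conf
kubectl config view --flatten > $HOME/.kube/merged-config
kubectl --kubeconfig $HOME/.kube/merged-config get nodes
```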
See Pod not found
As suggested in the official GitLab documentation, the best way to deploy GitLab on Kubernetes is with a Helm chart.
- Download your desired version
- Unpack it (`tar -zxvf helm-v3.0.0-linux-amd64.tgz`)
- Find the `helm` binary in the unpacked directory, and move it to its desired destination (`mv linux-amd64/helm /usr/local/bin/helm`)
kubectl -n kube-system create serviceaccount tiller
kubectl create clusterrolebinding tiller \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:tiller
helm init --service-account tiller
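To verify that tiller actually came up before continuing (`tiller-deploy` is the deployment name `helm init` creates by default):

```bash
kubectl -n kube-system rollout status deployment tiller-deploy
helm version   # should report both the client and the server (tiller) version
```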
See Troubleshooting/tiller for problems.
See apiVersion error and Use the edited chart.
Then see The page is not reachable to edit the ingress service.
Check with `k describe nodes` and `journalctl -u kubelet` on each node. In case of errors, check whether the flannel pods are present (see the sketch below), or run:

swapoff -a
kubeadm reset
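A quick way to check the flannel pods (the `app=flannel` label is what the upstream kube-flannel manifest uses; adjust if yours differs):

```bash
kubectl -n kube-system get pods -l app=flannel -o wide    # expect one pod per node, all Running
kubectl -n kube-system logs -l app=flannel --tail=20      # recent logs from the daemonset pods
```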
In case of `Error: forwarding ports: error upgrading connection: unable to upgrade connection: pod does not exist`: install tiller locally, see https://rimusz.net/tillerless-helm
Error: validation failed: unable to recognize "": no matches for kind "Deployment" in version "extensions/v1beta1"
We have to edit the apiVersion, so:

helm fetch --untar gitlab/gitlab

- change each `extensions/v1beta1` in the deployment resources to `apps/v1` (a grep/sed sketch follows the upgrade command below)

helm upgrade --install gitlab ./gitlab \
  --timeout 600 \
  --set global.hosts.domain=local \
  --set global.hosts.externalIP=192.168.60.202 \
  --set global.edition=ce \
  --set certmanager-issuer.email=me@example.com
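A rough way to find and rewrite those apiVersions in the unpacked chart (assumes the chart sits in `./gitlab` as above; note that `apps/v1` Deployments also require a `spec.selector`, so review the templates afterwards):

```bash
# list templates still referencing the removed API group, then rewrite them in place
grep -rl 'extensions/v1beta1' ./gitlab | xargs sed -i 's|extensions/v1beta1|apps/v1|g'
```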
With `k get svc` we will see that the external IP of the ingress controller is stuck in pending. We have to edit the service with `k edit svc gitlab-nginx-ingress-controller` and add the same IP we used as `externalIP` before:

externalIPs:
- 192.168.60.102
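The same change can be applied without the interactive edit, with a merge patch (service name and IP taken from above; add `-n <namespace>` if the release is not in the default namespace):

```bash
kubectl patch svc gitlab-nginx-ingress-controller \
  --type merge -p '{"spec":{"externalIPs":["192.168.60.102"]}}'
```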
- Pasting `Environment="KUBELET_EXTRA_ARGS=--node-ip=VAGRANT_VM_EXTERNAL_IP_HERE"` inside `/etc/systemd/system/kubelet.service.d/10-kubeadm.conf` and restarting the kubelet gives an error (`systemctl status kubelet`)
- Adding the `--node-ip` flag in `/var/lib/kubelet/kubeadm-flags.env` gives no error, but the main error persists (see the sketch below)
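For reference, a sketch of that second attempt on a worker (the existing flags in the file vary per setup; only `--node-ip` is appended):

```bash
# /var/lib/kubelet/kubeadm-flags.env ends up looking roughly like:
#   KUBELET_KUBEADM_ARGS=--cgroup-driver=systemd --network-plugin=cni --node-ip=192.168.60.102
systemctl daemon-reload
systemctl restart kubelet
systemctl status kubelet   # reports no error, but the original problem remains
```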
To destroy all the Vagrant VMs:

for i in `vagrant global-status | grep virtualbox | awk '{ print $1 }'` ; do vagrant destroy $i ; done
`No route to host` when trying to ping/curl a pod in the same namespace.

kubectl label nodes worker2 w=2

and the same for worker1.

kubectl run nginx --restart=Never --image=nginx --dry-run -o yaml > pod.yml

Ping from one pod to the other doesn't work.
- Disable the firewall
- Check `ip route`
- Enable IPv4 forwarding (see the commands below)
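On CentOS these map roughly to (a sketch; long-term you would rather open the needed ports than disable firewalld entirely):

```bash
systemctl stop firewalld && systemctl disable firewalld   # disable the firewall
ip route                                                  # expect one route per node/pod CIDR
sysctl -w net.ipv4.ip_forward=1                           # enable IPv4 forwarding
```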
minikube start --cpus 3 --memory 8192
With a local values file:

helm repo add gitlab https://charts.gitlab.io/
helm repo update
helm upgrade --install gitlab ./gitlab \
  --timeout 600 \
  -f minikube-values.yml \
  --set global.hosts.domain=$(minikube ip).nip.io \
  --set global.hosts.externalIP=$(minikube ip)

Or with the upstream minimal minikube example values:

helm repo add gitlab https://charts.gitlab.io/
helm repo update
helm upgrade --install gitlab ./gitlab \
  --timeout 600 \
  -f https://gitlab.com/gitlab-org/charts/gitlab/raw/master/examples/values-minikube-minimum.yaml \
  --set global.hosts.domain=$(minikube ip).nip.io \
  --set global.hosts.externalIP=$(minikube ip)
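Once the release is up, the initial root password lives in a secret named after the release; with the release name `gitlab` used above that should be `gitlab-gitlab-initial-root-password` (name assumed from the chart's convention):

```bash
kubectl get secret gitlab-gitlab-initial-root-password \
  -o jsonpath='{.data.password}' | base64 --decode; echo
```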