
run_sdk_container: Clean up a bit and add mounting of custom volumes #1306

Merged
merged 2 commits into from
Nov 2, 2023

Conversation

krnowak
Member

@krnowak commented Oct 25, 2023

Cleanups:

  • Make cosmetic fixes in the help output.
  • Drop quotes around variables in assignments, where they are usually
    unnecessary.
  • Use [[ ]] instead of [ ] so that conditions are not forced through
    string evaluation.
  • Use arrays instead of relying on whitespace splitting of strings, as
    was previously the case for invoking docker and for building the GPG
    volume flags passed to docker.
  • Make sure that cleanup code and trap strings quote variables
    properly.
  • Add a "call_docker" function so callers need not juggle the "docker"
    variable and the new "docker_a" variable when invoking docker. The
    "docker" variable should no longer be used directly, but it is kept
    in case other scripts still rely on it.
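The array and quoting points above can be sketched as follows. The `docker_a` and `call_docker` names come from the PR description; the specific flags, paths, and helper contents here are illustrative assumptions, not the script's actual code:

```shell
#!/bin/bash
set -euo pipefail

# The docker invocation kept as an array (the PR's docker_a variable);
# an array preserves arguments containing whitespace, unlike a string
# that later gets word-split.
docker_a=( docker )

# Build the GPG volume flags as an array too, instead of one big string
# relying on whitespace splitting. Paths are illustrative.
gnupg_volume_opts=( --volume "${HOME}/.gnupg:/root/.gnupg" )

# Single entry point for invoking docker, so callers never touch the
# docker/docker_a variables directly.
function call_docker() {
    "${docker_a[@]}" "${@}"
}

# [[ ]] evaluates its operands without the word splitting and globbing
# that [ ] inherits from ordinary command argument parsing.
if [[ ${#gnupg_volume_opts[@]} -gt 0 ]]; then
    echo "gpg flags: ${gnupg_volume_opts[*]}"
fi

# Quote variables baked into trap strings, so a path with spaces is
# still removed cleanly when the trap fires.
tmpdir=$(mktemp -d)
trap "rm -rf $(printf '%q' "${tmpdir}")" EXIT
```

The wrapper also makes it easy to swap the underlying command (e.g. `sudo docker` or `podman`) in one place by changing only the array.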

Adds a -m <src>:<dest> flag to mount host directories inside the SDK container.
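A minimal sketch of how the new flag's argument could be split into a docker volume option. The `-m <src>:<dest>` syntax is from the PR; the helper name and parsing details below are assumptions:

```shell
#!/bin/bash
set -euo pipefail

# Collected volume flags that would later be passed to docker run.
extra_volume_opts=()

# Hypothetical helper: split "src:dest" at the first colon and append
# a corresponding --volume flag to the array.
add_custom_mount() {
    local spec=$1
    local src=${spec%%:*}
    local dest=${spec#*:}
    extra_volume_opts+=( --volume "${src}:${dest}" )
}

# e.g. run_sdk_container -m /home/user/artifacts:/mnt/artifacts
# (paths are illustrative)
add_custom_mount "/home/user/artifacts:/mnt/artifacts"

printf '%s\n' "${extra_volume_opts[@]}"
```

Because the flags accumulate in an array, repeating `-m` for several mounts composes naturally with the array-based docker invocation described above.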

@krnowak temporarily deployed to development October 25, 2023 12:54 — with GitHub Actions Inactive
@krnowak requested a review from a team October 25, 2023 13:21
@krnowak temporarily deployed to development October 26, 2023 09:32 — with GitHub Actions Inactive
@github-actions

Test report for 3767.0.0+nightly-20231024-2100 / amd64 arm64

Platforms tested : qemu_uefi-amd64 qemu_update-amd64 qemu_uefi-arm64 qemu_update-arm64

ok bpf.execsnoop 🟢 Succeeded: qemu_uefi-amd64 (1)

ok bpf.local-gadget 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cgroupv1 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.multipart-mime 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.script 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid0.data 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid0.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid1.data 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid1.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.discovery 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.etcdctlv3 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.v2-backup-restore 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.filesystem 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.flannel.udp 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.flannel.vxlan 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.instantiated.enable-unit 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.kargs 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.luks 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.indirect 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.indirect.new 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.regular 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.regular.new 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.reuse 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.wipe 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.symlink 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.translation 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.btrfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.ext4root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.groups 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.once 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.sethostname 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.users 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.xfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.btrfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.ext4root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.users 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.xfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.ext4checkexisting 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.swap 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.vfat 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.install.cloudinit 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.internet 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.locksmith.cluster 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.misc.falco 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.network.initramfs.second-boot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.network.listeners 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.network.wireguard 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.omaha.ping 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.osreset.ignition-rerun 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.overlay.cleanup 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.swap_activation 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.sysext.boot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.sysext.fallbackdownload # SKIP 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.toolbox.dnf-install 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.update.badverity 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.update.grubnop 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.update.payload 🟢 Succeeded: qemu_update-amd64 (1); qemu_update-arm64 (1)

ok cl.update.reboot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.users.shells 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.verity 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.auth.verify 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.groups 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.once 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (3) ❌ Failed: qemu_uefi-arm64 (1, 2)

                Diagnostic output for qemu_uefi-arm64, run 2
    L1: " Error: _execution.go:140: Couldn't reboot machine: machine '27fdcbc5-d9e6-4d36-afbd-04029823d357' failed basic checks: some systemd units failed:"
    L2: "??? ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache"
    L3: "status: "
    L4: "journal:-- No entries --"
    L5: "harness.go:583: Found systemd unit failed to start (?[0;1;39mldconfig.s???0m - Rebuild Dynamic Linker Cache. ) on machine 27fdcbc5-d9e6-4d36-afbd-04029823d357 console'"
    L6: " "
                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: _execution.go:140: Couldn't reboot machine: machine '0483c3f2-6bfc-4185-902b-409c860b83a5' failed basic checks: some systemd units failed:"
    L3: "??? ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache"
    L4: "status: "
    L5: "journal:-- No entries --"
    L6: "harness.go:583: Found systemd unit failed to start (?[0;1;39mldconfig.s???0m - Rebuild Dynamic Linker Cache. ) on machine 0483c3f2-6bfc-4185-902b-409c860b83a5 console'"
    L7: " "

ok coreos.ignition.resource.local 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.remote 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.s3.versioned 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.security.tls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.sethostname 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.systemd.enable-service 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.locksmith.reboot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.locksmith.tls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.selinux.boolean 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.selinux.enforce 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (2) ❌ Failed: qemu_uefi-arm64 (1)

                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: _selinux.go:115: failed to reboot machine: machine '6ee2653b-9cf1-4292-97fe-d0eb5cccde05' failed basic checks: some systemd units failed:"
    L3: "??? ldconfig.service loaded failed failed Rebuild Dynamic Linker Cache"
    L4: "status: "
    L5: "journal:-- No entries --"
    L6: "harness.go:583: Found systemd unit failed to start (?[0;1;39mldconfig.s???0m - Rebuild Dynamic Linker Cache. ) on machine 6ee2653b-9cf1-4292-97fe-d0eb5cccde05 console'"
    L7: " "

ok coreos.tls.fetch-urls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.update.badusr 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok devcontainer.docker 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok devcontainer.systemd-nspawn 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.btrfs-storage 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.containerd-restart 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.enable-service.sysext 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.lib-coreos-dockerd-compat 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.network 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.selinux 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.userns 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok extra-test.[first_dual].cl.update.payload 🟢 Succeeded: qemu_update-amd64 (1); qemu_update-arm64 (1)

ok kubeadm.v1.25.10.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (2) ❌ Failed: qemu_uefi-arm64 (1)

                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: _cluster.go:125: I1026 13:05:44.662410    1581 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.25"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.25.15"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.25.15"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.25.15"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.25.15"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.8"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L9: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.9.3"
    L10: "cluster.go:125: I1026 13:06:03.298289    1744 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.25"
    L11: "cluster.go:125: [init] Using Kubernetes version: v1.25.15"
    L12: "cluster.go:125: [preflight] Running pre-flight checks"
    L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L15: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L16: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L17: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L18: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L19: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.8?1]"
    L20: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L21: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L22: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L23: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L27: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L28: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L29: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L30: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L31: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L33: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L34: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L35: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L36: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L37: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L41: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L42: "cluster.go:125: [apiclient] All control plane components are healthy after 6.004747 seconds"
    L43: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L44: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L45: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L46: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L47: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L48: "cluster.go:125: [bootstrap-token] Using token: xyk2ay.igov2u19dl4y36ba"
    L49: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L50: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L53: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L54: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L55: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L56: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L57: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L58: "cluster.go:125: "
    L59: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L60: "cluster.go:125: "
    L61: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L62: "cluster.go:125: "
    L63: "cluster.go:125:   mkdir -p $HOME/.kube"
    L64: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L65: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L66: "cluster.go:125: "
    L67: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L68: "cluster.go:125: "
    L69: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L70: "cluster.go:125: "
    L71: "cluster.go:125: You should now deploy a pod network to the cluster."
    L72: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L73: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L74: "cluster.go:125: "
    L75: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L76: "cluster.go:125: "
    L77: "cluster.go:125: kubeadm join 10.0.0.81:6443 --token xyk2ay.igov2u19dl4y36ba _"
    L78: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:e1990f45aa5cba91b7e77240d7bd0f1d52af08f8586e676a452f4ac851894ea4 "
    L79: "cluster.go:125: namespace/tigera-operator created"
    L80: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L81: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created"
    L82: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L83: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L84: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L85: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L86: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L87: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L88: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L89: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L90: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L91: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L92: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L93: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L94: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L95: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L96: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L97: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L98: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L99: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L100: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L101: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L102: "cluster.go:125: serviceaccount/tigera-operator created"
    L103: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L104: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L105: "cluster.go:125: deployment.apps/tigera-operator created"
    L106: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
    L107: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
    L108: "cluster.go:125: installation.operator.tigera.io/default created"
    L109: "cluster.go:125: apiserver.operator.tigera.io/default created"
    L110: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L111: "--- FAIL: kubeadm.v1.25.10.calico.base/nginx_deployment (93.77s)"
    L112: "kubeadm.go:319: nginx is not deployed: ready replicas should be equal to 1: null_"
    L113: " "

ok kubeadm.v1.25.10.calico.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.25.10.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1)

                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: _cluster.go:125: I1026 13:08:06.412907    1610 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.25"
    L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.25.15"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.25.15"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.25.15"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.25.15"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.8"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.9.3"
    L9: "cluster.go:125: I1026 13:08:16.395619    1762 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.25"
    L10: "cluster.go:125: [init] Using Kubernetes version: v1.25.15"
    L11: "cluster.go:125: [preflight] Running pre-flight checks"
    L12: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L13: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L14: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L15: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L16: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L17: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L18: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.1?18]"
    L19: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L20: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L21: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L22: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L23: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L27: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L28: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L29: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L30: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L31: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L33: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L34: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L35: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L36: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L37: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L40: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L41: "cluster.go:125: [apiclient] All control plane components are healthy after 4.502463 seconds"
    L42: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L43: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L44: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L45: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L46: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L47: "cluster.go:125: [bootstrap-token] Using token: 26f6rr.hp3jvrzvxjqybl24"
    L48: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L49: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L50: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L53: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L54: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L55: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L56: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L57: "cluster.go:125: "
    L58: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L59: "cluster.go:125: "
    L60: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L61: "cluster.go:125: "
    L62: "cluster.go:125:   mkdir -p $HOME/.kube"
    L63: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L64: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L65: "cluster.go:125: "
    L66: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L67: "cluster.go:125: "
    L68: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L69: "cluster.go:125: "
    L70: "cluster.go:125: You should now deploy a pod network to the cluster."
    L71: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L72: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L73: "cluster.go:125: "
    L74: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L75: "cluster.go:125: "
    L76: "cluster.go:125: kubeadm join 10.0.0.118:6443 --token 26f6rr.hp3jvrzvxjqybl24 _"
    L77: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:9c2a905ea199c02e1129d385e8ba2bc9f8b5108d81171d0bf7f6d5cdedeb3e2a "
    L78: "cluster.go:125: i  Using Cilium version 1.12.1"
    L79: "cluster.go:125: ? Auto-detected cluster name: kubernetes"
    L80: "cluster.go:125: ? Auto-detected datapath mode: tunnel"
    L81: "cluster.go:125: ? Auto-detected kube-proxy has been installed"
    L82: "cluster.go:125: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.1 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-?cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vx?lan"
    L83: "cluster.go:125: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
    L84: "cluster.go:125: ? Created CA in secret cilium-ca"
    L85: "cluster.go:125: ? Generating certificates for Hubble..."
    L86: "cluster.go:125: ? Creating Service accounts..."
    L87: "cluster.go:125: ? Creating Cluster roles..."
    L88: "cluster.go:125: ? Creating ConfigMap for Cilium version 1.12.1..."
    L89: "cluster.go:125: i Manual overwrite in ConfigMap: enable-endpoint-routes=true"
    L90: "cluster.go:125: i Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
    L91: "cluster.go:125: ? Creating Agent DaemonSet..."
    L92: "cluster.go:125: ? Creating Operator Deployment..."
    L93: "cluster.go:125: ? Waiting for Cilium to be installed and ready..."
    L94: "cluster.go:125: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
    L95: "cluster.go:125:     /¯¯\"
    L96: "cluster.go:125:  /¯¯\__/¯¯\    Cilium:         OK"
    L97: "cluster.go:125:  \__/¯¯\__/    Operator:       OK"
    L98: "cluster.go:125:  /¯¯\__/¯¯\    Hubble:         disabled"
    L99: "cluster.go:125:  \__/¯¯\__/    ClusterMesh:    disabled"
    L100: "cluster.go:125:     \__/"
    L101: "cluster.go:125: "
    L102: "cluster.go:125: Deployment       cilium-operator    "
    L103: "cluster.go:125: DaemonSet        cilium             "
    L104: "cluster.go:125: Containers:      cilium             "
    L105: "cluster.go:125:                  cilium-operator    "
    L106: "cluster.go:125: Cluster Pods:    0/0 managed by Cilium"
    L107: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service."
    L108: "--- FAIL: kubeadm.v1.25.10.cilium.base/node_readiness (91.69s)"
    L109: "kubeadm.go:301: nodes are not ready: ready nodes should be equal to 2: 1_"
    L110: " "
    L111: "  "

ok kubeadm.v1.25.10.cilium.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.25.10.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.25.10.flannel.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.26.5.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.26.5.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (2) ❌ Failed: qemu_uefi-arm64 (1)

                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: _cluster.go:125: I1026 13:11:42.734574    1559 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.26"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.26.10"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.26.10"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.26.10"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.26.10"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L9: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.9.3"
    L10: "cluster.go:125: I1026 13:11:58.728266    1719 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.26"
    L11: "cluster.go:125: [init] Using Kubernetes version: v1.26.10"
    L12: "cluster.go:125: [preflight] Running pre-flight checks"
    L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L15: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L16: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L17: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L18: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L19: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.91]"
    L20: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L21: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L22: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L23: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L27: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L28: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L29: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L30: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L31: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L33: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L34: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L35: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L36: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L37: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L38: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L41: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L42: "cluster.go:125: [apiclient] All control plane components are healthy after 6.503501 seconds"
    L43: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L44: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L45: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L46: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L47: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L48: "cluster.go:125: [bootstrap-token] Using token: 9gwsi4.lshoxtb1hcddwi4n"
    L49: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L50: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L53: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L54: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L55: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L56: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L57: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L58: "cluster.go:125: "
    L59: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L60: "cluster.go:125: "
    L61: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L62: "cluster.go:125: "
    L63: "cluster.go:125:   mkdir -p $HOME/.kube"
    L64: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L65: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L66: "cluster.go:125: "
    L67: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L68: "cluster.go:125: "
    L69: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L70: "cluster.go:125: "
    L71: "cluster.go:125: You should now deploy a pod network to the cluster."
    L72: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L73: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L74: "cluster.go:125: "
    L75: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L76: "cluster.go:125: "
    L77: "cluster.go:125: kubeadm join 10.0.0.91:6443 --token 9gwsi4.lshoxtb1hcddwi4n _"
    L78: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:0089b3f21e3a8f63ddb026c108542c076453bfbf696de0984818b001f065edd8 "
    L79: "cluster.go:125: i  Using Cilium version 1.12.5"
    L80: "cluster.go:125: ? Auto-detected cluster name: kubernetes"
    L81: "cluster.go:125: ? Auto-detected datapath mode: tunnel"
    L82: "cluster.go:125: ? Auto-detected kube-proxy has been installed"
    L83: "cluster.go:125: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.5 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vxlan"
    L84: "cluster.go:125: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
    L85: "cluster.go:125: ? Created CA in secret cilium-ca"
    L86: "cluster.go:125: ? Generating certificates for Hubble..."
    L87: "cluster.go:125: ? Creating Service accounts..."
    L88: "cluster.go:125: ? Creating Cluster roles..."
    L89: "cluster.go:125: ? Creating ConfigMap for Cilium version 1.12.5..."
    L90: "cluster.go:125: i  Manual overwrite in ConfigMap: enable-endpoint-routes=true"
    L91: "cluster.go:125: i  Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
    L92: "cluster.go:125: ? Creating Agent DaemonSet..."
    L93: "cluster.go:125: ? Creating Operator Deployment..."
    L94: "cluster.go:125: ? Waiting for Cilium to be installed and ready..."
    L95: "cluster.go:125: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
    L96: "cluster.go:125:     /¯¯\"
    L97: "cluster.go:125:  /¯¯\__/¯¯\    Cilium:         OK"
    L98: "cluster.go:125:  \__/¯¯\__/    Operator:       OK"
    L99: "cluster.go:125:  /¯¯\__/¯¯\    Hubble:         disabled"
    L100: "cluster.go:125:  \__/¯¯\__/    ClusterMesh:    disabled"
    L101: "cluster.go:125:     \__/"
    L102: "cluster.go:125: "
    L103: "cluster.go:125: Deployment       cilium-operator    "
    L104: "cluster.go:125: DaemonSet        cilium             "
    L105: "cluster.go:125: Containers:      cilium             "
    L106: "cluster.go:125:                  cilium-operator    "
    L107: "cluster.go:125: Cluster Pods:    0/0 managed by Cilium"
    L108: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service."
    L109: "harness.go:583: Found emergency shell on machine 3c0b1b93-c892-4938-8373-d1331a3cff68 console"
    L110: "harness.go:583: Found systemd unit failed to start (ignition-f…es.service - Ignition (files). ) on machine 3c0b1b93-c892-4938-8373-d1331a3cff68 console"
    L111: "harness.go:583: Found systemd dependency unit failed to start (igni…te.target - Ignition Complete. ) on machine 3c0b1b93-c892-4938-8373-d1331a3cff68 console_"
    L112: " "

ok kubeadm.v1.26.5.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.27.2.calico.base 🟢 Succeeded: qemu_uefi-amd64 (3); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1, 2)

                Diagnostic output for qemu_uefi-amd64, run 2
    L1: " Error: _cluster.go:125: I1026 13:17:21.766372    1612 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.27"
    L2: "cluster.go:125: W1026 13:17:21.944713    1612 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.7, falling back to the nearest etcd version (3.5.7-0)"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.27.7"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.27.7"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.27.7"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.27.7"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.7-0"
    L9: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1"
    L10: "cluster.go:125: I1026 13:17:31.986627    1766 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.27"
    L11: "cluster.go:125: [init] Using Kubernetes version: v1.27.7"
    L12: "cluster.go:125: [preflight] Running pre-flight checks"
    L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L15: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L16: "cluster.go:125: W1026 13:17:32.291742    1766 checks.go:835] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is recommended that using __registry.k8s.io/pause:3.9__ as the CRI sandbox image."
    L17: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L18: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L19: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L20: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.5]"
    L21: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L22: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L23: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L28: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L29: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L30: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L31: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L33: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L34: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L35: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L36: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L37: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L38: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L42: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L43: "cluster.go:125: [apiclient] All control plane components are healthy after 4.502960 seconds"
    L44: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L45: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L46: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L47: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L48: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L49: "cluster.go:125: [bootstrap-token] Using token: utvies.c1jpp0wrr0rw2364"
    L50: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L53: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L54: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L55: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L56: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L57: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L58: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L59: "cluster.go:125: "
    L60: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L61: "cluster.go:125: "
    L62: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L63: "cluster.go:125: "
    L64: "cluster.go:125:   mkdir -p $HOME/.kube"
    L65: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L66: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L67: "cluster.go:125: "
    L68: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L69: "cluster.go:125: "
    L70: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L71: "cluster.go:125: "
    L72: "cluster.go:125: You should now deploy a pod network to the cluster."
    L73: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L74: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L75: "cluster.go:125: "
    L76: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L77: "cluster.go:125: "
    L78: "cluster.go:125: kubeadm join 10.0.0.5:6443 --token utvies.c1jpp0wrr0rw2364 _"
    L79: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:c38045834460f8b03d5ab5f6179038932cc753e82a839a25791b2a9f6b7c758d "
    L80: "cluster.go:125: namespace/tigera-operator created"
    L81: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L82: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created"
    L83: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L84: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L85: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L86: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L87: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L88: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L89: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L90: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L91: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L92: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L93: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L94: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L95: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L96: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L97: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L98: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L99: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L100: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L101: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L102: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L103: "cluster.go:125: serviceaccount/tigera-operator created"
    L104: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L105: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L106: "cluster.go:125: deployment.apps/tigera-operator created"
    L107: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
    L108: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
    L109: "cluster.go:125: installation.operator.tigera.io/default created"
    L110: "cluster.go:125: apiserver.operator.tigera.io/default created"
    L111: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service."
    L112: "--- FAIL: kubeadm.v1.27.2.calico.base/nginx_deployment (92.69s)"
    L113: "kubeadm.go:319: nginx is not deployed: ready replicas should be equal to 1: null_"
    L114: " "
                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: _cluster.go:125: I1026 13:06:24.048270    1607 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.27"
    L2: "cluster.go:125: W1026 13:06:24.374870    1607 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.7, falling back to the nearest etcd version (3.5.7-0)"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.27.7"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.27.7"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.27.7"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.27.7"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L8: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.7-0"
    L9: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1"
    L10: "cluster.go:125: I1026 13:06:35.307199    1766 version.go:256] remote version is much newer: v1.28.3; falling back to: stable-1.27"
    L11: "cluster.go:125: [init] Using Kubernetes version: v1.27.7"
    L12: "cluster.go:125: [preflight] Running pre-flight checks"
    L13: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L14: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L15: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L16: "cluster.go:125: W1026 13:06:35.698080    1766 checks.go:835] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is recommended that using __registry.k8s.io/pause:3.9__ as the CRI sandbox image."
    L17: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L18: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L19: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L20: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.101]"
    L21: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L22: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L23: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L26: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L27: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L28: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L29: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L30: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L31: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L32: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L33: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L34: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L35: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L36: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L37: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L38: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L39: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L40: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L41: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L42: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L43: "cluster.go:125: [apiclient] All control plane components are healthy after 4.001805 seconds"
    L44: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L45: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L46: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L47: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L48: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L49: "cluster.go:125: [bootstrap-token] Using token: dhncj5.i8ozuadszx2vknsq"
    L50: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L52: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L53: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L54: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L55: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L56: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L57: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L58: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L59: "cluster.go:125: "
    L60: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L61: "cluster.go:125: "
    L62: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L63: "cluster.go:125: "
    L64: "cluster.go:125:   mkdir -p $HOME/.kube"
    L65: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L66: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L67: "cluster.go:125: "
    L68: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L69: "cluster.go:125: "
    L70: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L71: "cluster.go:125: "
    L72: "cluster.go:125: You should now deploy a pod network to the cluster."
    L73: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L74: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L75: "cluster.go:125: "
    L76: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L77: "cluster.go:125: "
    L78: "cluster.go:125: kubeadm join 10.0.0.101:6443 --token dhncj5.i8ozuadszx2vknsq _"
    L79: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:340b579a10f56677fcd9b27b5a350dcfbf2b47ba1d96c48d5daf5767c5186aa1 "
    L80: "cluster.go:125: namespace/tigera-operator created"
    L81: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L82: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created"
    L83: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L84: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L85: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L86: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L87: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L88: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L89: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L90: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L91: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L92: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L93: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L94: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L95: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L96: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L97: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L98: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L99: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L100: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L101: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L102: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L103: "cluster.go:125: serviceaccount/tigera-operator created"
    L104: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L105: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L106: "cluster.go:125: deployment.apps/tigera-operator created"
    L107: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
    L108: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
    L109: "cluster.go:125: installation.operator.tigera.io/default created"
    L110: "cluster.go:125: apiserver.operator.tigera.io/default created"
    L111: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service."
    L112: "--- FAIL: kubeadm.v1.27.2.calico.base/nginx_deployment (92.61s)"
    L113: "kubeadm.go:319: nginx is not deployed: ready replicas should be equal to 1: null_"
    L114: " "
    L115: "  "

ok kubeadm.v1.27.2.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.27.2.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.28.1.calico.base 🟢 Succeeded: qemu_uefi-amd64 (4); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1, 2, 3)

                Diagnostic output for qemu_uefi-amd64, run 3
    L1: " Error: _cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.28.3"
    L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.28.3"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.28.3"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.28.3"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.9-0"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1"
    L8: "cluster.go:125: [init] Using Kubernetes version: v1.28.3"
    L9: "cluster.go:125: [preflight] Running pre-flight checks"
    L10: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L11: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L12: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L13: "cluster.go:125: W1026 13:21:04.885960    1801 checks.go:835] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is recommended that using __registry.k8s.io/pause:3.9__ as the CRI sandbox image."
    L14: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L15: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L16: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L17: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.4]"
    L18: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L19: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L20: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L21: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L22: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L23: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L26: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L27: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L28: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L29: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L30: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L31: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L32: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L33: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L34: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L35: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L36: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L37: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L38: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L39: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L40: "cluster.go:125: [apiclient] All control plane components are healthy after 4.002423 seconds"
    L41: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L42: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L43: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L44: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L45: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L46: "cluster.go:125: [bootstrap-token] Using token: 5r4olx.inovtl2drh25x8da"
    L47: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L48: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L49: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L50: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L52: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L53: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L54: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L55: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L56: "cluster.go:125: "
    L57: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L58: "cluster.go:125: "
    L59: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L60: "cluster.go:125: "
    L61: "cluster.go:125:   mkdir -p $HOME/.kube"
    L62: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L63: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L64: "cluster.go:125: "
    L65: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L66: "cluster.go:125: "
    L67: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L68: "cluster.go:125: "
    L69: "cluster.go:125: You should now deploy a pod network to the cluster."
    L70: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L71: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L72: "cluster.go:125: "
    L73: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L74: "cluster.go:125: "
    L75: "cluster.go:125: kubeadm join 10.0.0.4:6443 --token 5r4olx.inovtl2drh25x8da _"
    L76: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:bbeb238b803edacd780ddbabd071a1060942bd21a0434054d1a9562545b9b347 "
    L77: "cluster.go:125: namespace/tigera-operator created"
    L78: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L79: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created"
    L80: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L81: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L82: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L83: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L84: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L85: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L86: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L87: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L88: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L89: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L90: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L91: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L92: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L93: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L94: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L95: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L96: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L97: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L98: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L99: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L100: "cluster.go:125: serviceaccount/tigera-operator created"
    L101: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L102: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L103: "cluster.go:125: deployment.apps/tigera-operator created"
    L104: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
    L105: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
    L106: "cluster.go:125: installation.operator.tigera.io/default created"
    L107: "cluster.go:125: apiserver.operator.tigera.io/default created"
    L108: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service."
    L109: "--- FAIL: kubeadm.v1.28.1.calico.base/nginx_deployment (92.10s)"
    L110: "kubeadm.go:319: nginx is not deployed: ready replicas should be equal to 1: null_"
    L111: " "
                Diagnostic output for qemu_uefi-amd64, run 2
    L1: " Error: _cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.28.3"
    L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.28.3"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.28.3"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.28.3"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.9-0"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1"
    L8: "cluster.go:125: [init] Using Kubernetes version: v1.28.3"
    L9: "cluster.go:125: [preflight] Running pre-flight checks"
    L10: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L11: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L12: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L13: "cluster.go:125: W1026 13:17:35.177518    1794 checks.go:835] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is recommended that using __registry.k8s.io/pause:3.9__ as the CRI sandbox image."
    L14: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L15: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L16: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L17: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.7]"
    L18: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L19: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L20: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L21: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L22: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L23: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L26: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L27: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L28: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L29: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L30: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L31: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L32: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L33: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L34: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L35: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L36: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L37: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L38: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L39: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L40: "cluster.go:125: [apiclient] All control plane components are healthy after 4.001977 seconds"
    L41: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L42: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L43: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L44: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L45: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L46: "cluster.go:125: [bootstrap-token] Using token: bp2r8a.5tu6lhj0kb8ej5y1"
    L47: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L48: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L49: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L50: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L52: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L53: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L54: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L55: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L56: "cluster.go:125: "
    L57: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L58: "cluster.go:125: "
    L59: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L60: "cluster.go:125: "
    L61: "cluster.go:125:   mkdir -p $HOME/.kube"
    L62: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L63: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L64: "cluster.go:125: "
    L65: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L66: "cluster.go:125: "
    L67: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L68: "cluster.go:125: "
    L69: "cluster.go:125: You should now deploy a pod network to the cluster."
    L70: "cluster.go:125: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L71: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L72: "cluster.go:125: "
    L73: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L74: "cluster.go:125: "
    L75: "cluster.go:125: kubeadm join 10.0.0.7:6443 --token bp2r8a.5tu6lhj0kb8ej5y1 _"
    L76: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:b1bd5a6f5d32bc1fe76796f78310208c952e6085fa5c3e22c6deb367610f05a7 "
    L77: "cluster.go:125: namespace/tigera-operator created"
    L78: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L79: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created"
    L80: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L81: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L82: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L83: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L84: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L85: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L86: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L87: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L88: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L89: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L90: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L91: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L92: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L93: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L94: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L95: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L96: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L97: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L98: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L99: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L100: "cluster.go:125: serviceaccount/tigera-operator created"
    L101: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L102: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L103: "cluster.go:125: deployment.apps/tigera-operator created"
    L104: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
    L105: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
    L106: "cluster.go:125: installation.operator.tigera.io/default created"
    L107: "cluster.go:125: apiserver.operator.tigera.io/default created"
    L108: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service."
    L109: "--- FAIL: kubeadm.v1.28.1.calico.base/nginx_deployment (92.11s)"
    L110: "kubeadm.go:319: nginx is not deployed: ready replicas should be equal to 1: null_"
    L111: " "
                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: _cluster.go:125: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.28.3"
    L2: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.28.3"
    L3: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.28.3"
    L4: "cluster.go:125: [config/images] Pulled registry.k8s.io/kube-proxy:v1.28.3"
    L5: "cluster.go:125: [config/images] Pulled registry.k8s.io/pause:3.9"
    L6: "cluster.go:125: [config/images] Pulled registry.k8s.io/etcd:3.5.9-0"
    L7: "cluster.go:125: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1"
    L8: "cluster.go:125: [init] Using Kubernetes version: v1.28.3"
    L9: "cluster.go:125: [preflight] Running pre-flight checks"
    L10: "cluster.go:125: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L11: "cluster.go:125: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L12: "cluster.go:125: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L13: "cluster.go:125: W1026 13:02:08.876180    1805 checks.go:835] detected that the sandbox image __registry.k8s.io/pause:3.8__ of the container runtime is inconsistent with that used by kubeadm. It is recommended that using __registry.k8s.io/pause:3.9__ as the CRI sandbox image."
    L14: "cluster.go:125: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L15: "cluster.go:125: [certs] Generating __ca__ certificate and key"
    L16: "cluster.go:125: [certs] Generating __apiserver__ certificate and key"
    L17: "cluster.go:125: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.43]"
    L18: "cluster.go:125: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L19: "cluster.go:125: [certs] Generating __front-proxy-ca__ certificate and key"
    L20: "cluster.go:125: [certs] Generating __front-proxy-client__ certificate and key"
    L21: "cluster.go:125: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L22: "cluster.go:125: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L23: "cluster.go:125: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L24: "cluster.go:125: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L25: "cluster.go:125: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L26: "cluster.go:125: [certs] Generating __sa__ key and public key"
    L27: "cluster.go:125: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L28: "cluster.go:125: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L29: "cluster.go:125: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L30: "cluster.go:125: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L31: "cluster.go:125: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L32: "cluster.go:125: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L33: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L34: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L35: "cluster.go:125: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L36: "cluster.go:125: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L37: "cluster.go:125: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L38: "cluster.go:125: [kubelet-start] Starting the kubelet"
    L39: "cluster.go:125: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L40: "cluster.go:125: [apiclient] All control plane components are healthy after 4.501730 seconds"
    L41: "cluster.go:125: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L42: "cluster.go:125: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L43: "cluster.go:125: [upload-certs] Skipping phase. Please see --upload-certs"
    L44: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L45: "cluster.go:125: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L46: "cluster.go:125: [bootstrap-token] Using token: k5lfd5.y0c378u5c2fcvgzg"
    L47: "cluster.go:125: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L48: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L49: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L50: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L51: "cluster.go:125: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L52: "cluster.go:125: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L53: "cluster.go:125: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L54: "cluster.go:125: [addons] Applied essential addon: CoreDNS"
    L55: "cluster.go:125: [addons] Applied essential addon: kube-proxy"
    L56: "cluster.go:125: "
    L57: "cluster.go:125: Your Kubernetes control-plane has initialized successfully!"
    L58: "cluster.go:125: "
    L59: "cluster.go:125: To start using your cluster, you need to run the following as a regular user:"
    L60: "cluster.go:125: "
    L61: "cluster.go:125:   mkdir -p $HOME/.kube"
    L62: "cluster.go:125:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L63: "cluster.go:125:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L64: "cluster.go:125: "
    L65: "cluster.go:125: Alternatively, if you are the root user, you can run:"
    L66: "cluster.go:125: "
    L67: "cluster.go:125:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L68: "cluster.go:125: "
    L69: "cluster.go:125: You should now deploy a pod network to the cluster."
    L70: "cluster.go:125: Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:"
    L71: "cluster.go:125:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L72: "cluster.go:125: "
    L73: "cluster.go:125: Then you can join any number of worker nodes by running the following on each as root:"
    L74: "cluster.go:125: "
    L75: "cluster.go:125: kubeadm join 10.0.0.43:6443 --token k5lfd5.y0c378u5c2fcvgzg \"
    L76: "cluster.go:125:  --discovery-token-ca-cert-hash sha256:4513f3d567502828cfeb9edcffa5f6ba2b76ad9d5d254d1c4c60bc06e040687f "
    L77: "cluster.go:125: namespace/tigera-operator created"
    L78: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L79: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgpfilters.crd.projectcalico.org created"
    L80: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L81: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L82: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L83: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L84: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L85: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L86: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L87: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L88: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L89: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L90: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L91: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L92: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L93: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L94: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L95: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L96: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L97: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L98: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L99: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L100: "cluster.go:125: serviceaccount/tigera-operator created"
    L101: "cluster.go:125: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L102: "cluster.go:125: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L103: "cluster.go:125: deployment.apps/tigera-operator created"
    L104: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
    L105: "cluster.go:125: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
    L106: "cluster.go:125: installation.operator.tigera.io/default created"
    L107: "cluster.go:125: apiserver.operator.tigera.io/default created"
    L108: "cluster.go:125: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /etc/systemd/system/kubelet.service."
    L109: "--- FAIL: kubeadm.v1.28.1.calico.base/nginx_deployment (92.17s)"
    L110: "kubeadm.go:319: nginx is not deployed: ready replicas should be equal to 1: null"
    L111: " "
    L112: "  "

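The `null` in the failing assertion above is worth noting: the Kubernetes API omits `status.readyReplicas` from a Deployment's status while no replicas are ready, so a decoder sees null/nil rather than 0. The sketch below illustrates the kind of readiness check the test performs; it is a hypothetical simplification, not mantle's actual `kubeadm.go` code.

```python
import json

def nginx_ready(deployment_json: str, want: int = 1) -> bool:
    """Return True if the Deployment reports `want` ready replicas.

    Kubernetes omits status.readyReplicas entirely while it is zero,
    so a missing key (JSON null / Python None) means "not ready".
    """
    status = json.loads(deployment_json).get("status", {})
    ready = status.get("readyReplicas")  # None until at least one pod is ready
    return ready == want

# While no pod is ready, readyReplicas is absent -> None, not 0.
pending = json.dumps({"status": {"replicas": 1}})
done = json.dumps({"status": {"replicas": 1, "readyReplicas": 1}})

print(nginx_ready(pending))  # False
print(nginx_ready(done))     # True
```

A check that treats the missing field as an error (instead of retrying) produces exactly the `ready replicas should be equal to 1: null` message seen in the log when the pod never becomes ready within the timeout.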
ok kubeadm.v1.28.1.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.28.1.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.nfs.v3 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.nfs.v4 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.ntp 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok misc.fips 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok packages 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.custom-docker.sysext 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.custom-oem 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.disable-containerd 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.disable-docker 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok sysext.simple 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.journal.remote 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.journal.user 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.sysusers.gshadow 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

@krnowak krnowak added the main label Nov 1, 2023
@t-lo t-lo left a comment

Github CI is green, changes LGTM. Thank you Krzesimir!

@krnowak krnowak merged commit def3c96 into main Nov 2, 2023
8 checks passed
@krnowak krnowak deleted the krnowak/mount-in-sdk branch November 2, 2023 13:36