==> Audit <== |--------------|--------------------------------|----------|----------------|---------|---------------------|---------------------| | Command | Args | Profile | User | Version | Start Time | End Time | |--------------|--------------------------------|----------|----------------|---------|---------------------|---------------------| | config | defaults driver | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 13:02 EDT | 15 Aug 24 13:02 EDT | | config | set driver hyperv | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 13:02 EDT | 15 Aug 24 13:02 EDT | | help | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 13:02 EDT | 15 Aug 24 13:02 EDT | | config | get driver | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 13:04 EDT | 15 Aug 24 13:04 EDT | | config | set driver hyperv | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 13:04 EDT | 15 Aug 24 13:04 EDT | | start | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 13:04 EDT | | | start | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 13:06 EDT | | | start | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 13:53 EDT | 15 Aug 24 13:58 EDT | | kubectl | -- get pods -A | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 14:01 EDT | 15 Aug 24 14:01 EDT | | update-check | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 15:42 EDT | 15 Aug 24 15:42 EDT | | tunnel | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 16:48 EDT | | | dashboard | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 16:49 EDT | | | dashboard | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 16:49 EDT | | | tunnel | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 16:50 EDT | | | ssh | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 16:55 EDT | | | stop | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 17:22 EDT | 15 Aug 24 17:23 EDT | | update-check | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 17:23 EDT | 15 Aug 24 17:23 EDT | | start | --hyperv-use-external-switch | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 17:26 EDT | 15 Aug 24 17:28 EDT | | config | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 17:27 EDT | 15 Aug 24 17:27 EDT | | config | get hyperv-virtual-switch | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 17:28 EDT | | | kubectl | -- get pods -A | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 17:28 EDT | 15 Aug 24 17:28 EDT | | kubectl | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 17:28 EDT | 15 Aug 24 17:28 EDT | | kubectl | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 17:35 EDT | 15 Aug 24 17:35 EDT | | kubectl | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 17:35 EDT | 15 Aug 24 17:35 EDT | | kubectl | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 17:35 EDT | 15 Aug 24 17:35 EDT | | kubectl | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 17:37 EDT | 15 Aug 24 17:37 EDT | | stop | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 17:41 EDT | 15 Aug 24 17:41 EDT | | start | --hyperv-virtual-switch bridge | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 17:44 EDT | 15 Aug 24 17:46 EDT | | ssh | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 17:50 EDT | | | kubectl | -- get pods -A | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 18:04 EDT | | | start | --hyperv-virtual-switch bridge | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 18:04 EDT | | | stop | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 18:08 EDT | 15 Aug 24 18:08 EDT | | start | --hyperv-virtual-switch bridge | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 18:08 EDT | 15 Aug 24 18:11 EDT | | tunnel | | minikube | 
TOASTER\rlyshw | v1.33.1 | 15 Aug 24 18:12 EDT | | | tunnel | --cleanup | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 18:14 EDT | | | ip | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 18:17 EDT | 15 Aug 24 18:17 EDT | | tunnel | --cleanup | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 18:43 EDT | | | tunnel | --cleanup | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 18:43 EDT | | | dashboard | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 18:49 EDT | | | tunnel | --cleanup | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 19:21 EDT | | | dashboard | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 19:21 EDT | | | tunnel | --cleanup | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 19:21 EDT | | | update-check | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 22:13 EDT | 15 Aug 24 22:13 EDT | | update-check | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 22:15 EDT | 15 Aug 24 22:15 EDT | | update-check | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 22:22 EDT | 15 Aug 24 22:22 EDT | | tunnel | --bind-address=* | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 23:40 EDT | | | update-check | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 23:40 EDT | 15 Aug 24 23:40 EDT | | tunnel | --bind-address=* | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 23:43 EDT | | | update-check | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 23:45 EDT | 15 Aug 24 23:45 EDT | | tunnel | --bind-address=* | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 23:45 EDT | | | ssh | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 23:49 EDT | 15 Aug 24 23:49 EDT | | tunnel | -c | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 23:49 EDT | | | tunnel | --bind-address=192.168.1.103 | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 23:52 EDT | | | ssh | | minikube | TOASTER\rlyshw | v1.33.1 | 15 Aug 24 23:52 EDT | | | tunnel | --bind-address=192.168.1.103 | minikube | TOASTER\rlyshw | v1.33.1 | 16 Aug 24 00:00 EDT | | | update-check | | minikube | TOASTER\rlyshw | v1.33.1 | 16 Aug 24 09:43 EDT | 16 Aug 24 09:43 EDT | | ip | | minikube | TOASTER\rlyshw | v1.33.1 | 16 Aug 24 09:44 EDT | | | start | --hyperv-virtual-switch bridge | minikube | TOASTER\rlyshw | v1.33.1 | 16 Aug 24 09:44 EDT | | | start | --hyperv-virtual-switch bridge | minikube | TOASTER\rlyshw | v1.33.1 | 16 Aug 24 09:44 EDT | 16 Aug 24 09:46 EDT | | tunnel | --bind-address=* | minikube | TOASTER\rlyshw | v1.33.1 | 16 Aug 24 09:46 EDT | | |--------------|--------------------------------|----------|----------------|---------|---------------------|---------------------| ==> Last Start <== Log file created at: 2024/08/16 09:44:39 Running on machine: toaster Binary: Built with gc go1.22.1 for windows/amd64 Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg I0816 09:44:39.147128 18084 out.go:291] Setting OutFile to fd 104 ... I0816 09:44:39.148136 18084 out.go:304] Setting ErrFile to fd 108... 
I0816 09:44:45.977827 18084 out.go:298] Setting JSON to false I0816 09:44:45.987897 18084 start.go:129] hostinfo: {"hostname":"toaster","uptime":34612,"bootTime":1723781273,"procs":352,"os":"windows","platform":"Microsoft Windows 11 Pro","platformFamily":"Standalone Workstation","platformVersion":"10.0.22621.4037 Build 22621.4037","kernelVersion":"10.0.22621.4037 Build 22621.4037","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"3ac04ef7-a6f6-493e-8ec8-7e76431f82e5"} W0816 09:44:45.988491 18084 start.go:137] gopshost.Virtualization returned error: not implemented yet I0816 09:44:45.990019 18084 out.go:177] 😄 minikube v1.33.1 on Microsoft Windows 11 Pro 10.0.22621.4037 Build 22621.4037 I0816 09:44:45.991608 18084 notify.go:220] Checking for updates... I0816 09:44:45.992651 18084 config.go:182] Loaded profile config "minikube": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0 I0816 09:44:45.993703 18084 driver.go:392] Setting default libvirt URI to qemu:///system I0816 09:44:49.111601 18084 out.go:177] ✨ Using the hyperv driver based on existing profile I0816 09:44:49.113269 18084 start.go:297] selected driver: hyperv I0816 09:44:49.113269 18084 start.go:901] validating driver "hyperv" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch:bridge HypervUseExternalSwitch:true HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.1.103 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\rlyshw.TOASTER:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} I0816 09:44:49.113269 18084 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: 
Version:} I0816 09:44:49.138832 18084 cni.go:84] Creating CNI manager for "" I0816 09:44:49.138832 18084 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge I0816 09:44:49.138832 18084 start.go:340] cluster config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch:bridge HypervUseExternalSwitch:true HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.1.103 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\rlyshw.TOASTER:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} I0816 09:44:49.139350 18084 iso.go:125] acquiring lock: {Name:mk21186367211b433efd10be15dd5d4df6ba7b90 Clock:{} Delay:500ms Timeout:10m0s Cancel:} I0816 09:44:49.140825 18084 out.go:177] 👍 Starting "minikube" primary control-plane node in "minikube" cluster I0816 09:44:49.142896 18084 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker I0816 09:44:49.142896 18084 preload.go:147] Found local preload: C:\Users\rlyshw.TOASTER\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 I0816 09:44:49.142896 18084 cache.go:56] Caching tarball of preloaded images I0816 09:44:49.143416 18084 preload.go:173] Found C:\Users\rlyshw.TOASTER\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download I0816 09:44:49.143416 18084 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker I0816 09:44:49.143416 18084 profile.go:143] Saving config to C:\Users\rlyshw.TOASTER\.minikube\profiles\minikube\config.json ... 
I0816 09:44:49.145440 18084 start.go:360] acquireMachinesLock for minikube: {Name:mk252b0243f2f49fe2efa9f6af910402736f1d87 Clock:{} Delay:500ms Timeout:13m0s Cancel:} I0816 09:44:49.145440 18084 start.go:364] duration metric: took 0s to acquireMachinesLock for "minikube" I0816 09:44:49.145440 18084 start.go:96] Skipping create...Using existing machine configuration I0816 09:44:49.145440 18084 fix.go:54] fixHost starting: I0816 09:44:49.145958 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:44:50.681436 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:44:50.681436 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:44:50.681436 18084 fix.go:112] recreateIfNeeded on minikube: state=Running err= W0816 09:44:50.681436 18084 fix.go:138] unexpected machine state, will restart: I0816 09:44:50.682462 18084 out.go:177] 🏃 Updating the running hyperv "minikube" VM ... I0816 09:44:50.684532 18084 machine.go:94] provisionDockerMachine start ... I0816 09:44:50.684532 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:44:51.932331 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:44:51.932331 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:44:51.932331 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0816 09:44:53.390009 18084 main.go:141] libmachine: [stdout =====>] : 192.168.1.103 I0816 09:44:53.390009 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:44:53.421099 18084 main.go:141] libmachine: Using SSH client type: native I0816 09:44:53.426465 18084 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0xa1a3c0] 0xa1cfa0 [] 0s} 192.168.1.103 22 } I0816 09:44:53.426465 18084 main.go:141] libmachine: About to run SSH command: hostname I0816 09:44:53.525945 18084 main.go:141] libmachine: SSH cmd err, output: : minikube I0816 09:44:53.525945 18084 buildroot.go:166] provisioning hostname "minikube" I0816 09:44:53.525945 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:44:54.753780 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:44:54.753780 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:44:54.753780 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0816 09:44:56.156837 18084 main.go:141] libmachine: [stdout =====>] : 192.168.1.103 I0816 09:44:56.156837 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:44:56.188638 18084 main.go:141] libmachine: Using SSH client type: native I0816 09:44:56.188638 18084 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0xa1a3c0] 0xa1cfa0 [] 0s} 192.168.1.103 22 } I0816 09:44:56.188638 18084 main.go:141] libmachine: About to run SSH command: sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname I0816 09:44:56.295596 18084 main.go:141] libmachine: SSH cmd err, output: : minikube I0816 09:44:56.295596 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe 
-NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:44:57.460175 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:44:57.460175 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:44:57.460175 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0816 09:44:58.825154 18084 main.go:141] libmachine: [stdout =====>] : 192.168.1.103 I0816 09:44:58.825154 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:44:58.854820 18084 main.go:141] libmachine: Using SSH client type: native I0816 09:44:58.855361 18084 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0xa1a3c0] 0xa1cfa0 [] 0s} 192.168.1.103 22 } I0816 09:44:58.855361 18084 main.go:141] libmachine: About to run SSH command: if ! grep -xq '.*\sminikube' /etc/hosts; then if grep -xq '127.0.1.1\s.*' /etc/hosts; then sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts; else echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts; fi fi I0816 09:44:58.957130 18084 main.go:141] libmachine: SSH cmd err, output: : I0816 09:44:58.957130 18084 buildroot.go:172] set auth options {CertDir:C:\Users\rlyshw.TOASTER\.minikube CaCertPath:C:\Users\rlyshw.TOASTER\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\rlyshw.TOASTER\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\rlyshw.TOASTER\.minikube\machines\server.pem ServerKeyPath:C:\Users\rlyshw.TOASTER\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\rlyshw.TOASTER\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\rlyshw.TOASTER\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\rlyshw.TOASTER\.minikube} I0816 09:44:58.957130 18084 buildroot.go:174] setting up certificates I0816 09:44:58.957130 18084 provision.go:84] configureAuth start I0816 09:44:58.957130 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:45:00.107664 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:45:00.107664 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:00.107664 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0816 09:45:01.497649 18084 main.go:141] libmachine: [stdout =====>] : 192.168.1.103 I0816 09:45:01.497649 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:01.497649 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:45:02.683472 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:45:02.683472 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:02.683472 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0816 09:45:04.105085 18084 main.go:141] libmachine: [stdout =====>] : 192.168.1.103 I0816 09:45:04.105085 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:04.105085 18084 provision.go:143] copyHostCerts I0816 09:45:04.110348 18084 exec_runner.go:144] found 
C:\Users\rlyshw.TOASTER\.minikube/ca.pem, removing ... I0816 09:45:04.110348 18084 exec_runner.go:203] rm: C:\Users\rlyshw.TOASTER\.minikube\ca.pem I0816 09:45:04.110902 18084 exec_runner.go:151] cp: C:\Users\rlyshw.TOASTER\.minikube\certs\ca.pem --> C:\Users\rlyshw.TOASTER\.minikube/ca.pem (1078 bytes) I0816 09:45:04.116988 18084 exec_runner.go:144] found C:\Users\rlyshw.TOASTER\.minikube/cert.pem, removing ... I0816 09:45:04.116988 18084 exec_runner.go:203] rm: C:\Users\rlyshw.TOASTER\.minikube\cert.pem I0816 09:45:04.117416 18084 exec_runner.go:151] cp: C:\Users\rlyshw.TOASTER\.minikube\certs\cert.pem --> C:\Users\rlyshw.TOASTER\.minikube/cert.pem (1123 bytes) I0816 09:45:04.123502 18084 exec_runner.go:144] found C:\Users\rlyshw.TOASTER\.minikube/key.pem, removing ... I0816 09:45:04.123502 18084 exec_runner.go:203] rm: C:\Users\rlyshw.TOASTER\.minikube\key.pem I0816 09:45:04.124053 18084 exec_runner.go:151] cp: C:\Users\rlyshw.TOASTER\.minikube\certs\key.pem --> C:\Users\rlyshw.TOASTER\.minikube/key.pem (1679 bytes) I0816 09:45:04.124842 18084 provision.go:117] generating server cert: C:\Users\rlyshw.TOASTER\.minikube\machines\server.pem ca-key=C:\Users\rlyshw.TOASTER\.minikube\certs\ca.pem private-key=C:\Users\rlyshw.TOASTER\.minikube\certs\ca-key.pem org=rlyshw.minikube san=[127.0.0.1 192.168.1.103 localhost minikube] I0816 09:45:04.209726 18084 provision.go:177] copyRemoteCerts I0816 09:45:04.298175 18084 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker I0816 09:45:04.298701 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:45:05.503043 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:45:05.503043 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:05.503043 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0816 09:45:06.899446 18084 main.go:141] libmachine: [stdout =====>] : 192.168.1.103 I0816 09:45:06.899446 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:06.899446 18084 sshutil.go:53] new ssh client: &{IP:192.168.1.103 Port:22 SSHKeyPath:C:\Users\rlyshw.TOASTER\.minikube\machines\minikube\id_rsa Username:docker} I0816 09:45:06.975110 18084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (2.6769227s) I0816 09:45:06.975667 18084 ssh_runner.go:362] scp C:\Users\rlyshw.TOASTER\.minikube\machines\server.pem --> /etc/docker/server.pem (1180 bytes) I0816 09:45:06.990897 18084 ssh_runner.go:362] scp C:\Users\rlyshw.TOASTER\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes) I0816 09:45:07.005549 18084 ssh_runner.go:362] scp C:\Users\rlyshw.TOASTER\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes) I0816 09:45:07.020511 18084 provision.go:87] duration metric: took 8.0633436s to configureAuth I0816 09:45:07.020511 18084 buildroot.go:189] setting minikube options for container-runtime I0816 09:45:07.021016 18084 config.go:182] Loaded profile config "minikube": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0 I0816 09:45:07.021016 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:45:08.191402 18084 main.go:141] libmachine: [stdout =====>] : Running 
I0816 09:45:08.191402 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:08.191402 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0816 09:45:09.660101 18084 main.go:141] libmachine: [stdout =====>] : 192.168.1.103 I0816 09:45:09.660101 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:09.693717 18084 main.go:141] libmachine: Using SSH client type: native I0816 09:45:09.693717 18084 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0xa1a3c0] 0xa1cfa0 [] 0s} 192.168.1.103 22 } I0816 09:45:09.693717 18084 main.go:141] libmachine: About to run SSH command: df --output=fstype / | tail -n 1 I0816 09:45:09.791702 18084 main.go:141] libmachine: SSH cmd err, output: : tmpfs I0816 09:45:09.791702 18084 buildroot.go:70] root file system type: tmpfs I0816 09:45:09.791702 18084 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ... I0816 09:45:09.791702 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:45:10.993168 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:45:10.993168 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:10.993168 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0816 09:45:12.350841 18084 main.go:141] libmachine: [stdout =====>] : 192.168.1.103 I0816 09:45:12.350841 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:12.382418 18084 main.go:141] libmachine: Using SSH client type: native I0816 09:45:12.382931 18084 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0xa1a3c0] 0xa1cfa0 [] 0s} 192.168.1.103 22 } I0816 09:45:12.382931 18084 main.go:141] libmachine: About to run SSH command: sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com After=network.target minikube-automount.service docker.socket Requires= minikube-automount.service docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. 
ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP \$MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target " | sudo tee /lib/systemd/system/docker.service.new I0816 09:45:12.487836 18084 main.go:141] libmachine: SSH cmd err, output: : [Unit] Description=Docker Application Container Engine Documentation=https://docs.docker.com After=network.target minikube-automount.service docker.socket Requires= minikube-automount.service docker.socket StartLimitBurst=3 StartLimitIntervalSec=60 [Service] Type=notify Restart=on-failure # This file is a systemd drop-in unit that inherits from the base dockerd configuration. # The base configuration already specifies an 'ExecStart=...' command. The first directive # here is to clear out that command inherited from the base configuration. Without this, # the command from the base configuration and the command specified here are treated as # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd # will catch this invalid input and refuse to start the service with an error like: # Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services. # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other # container runtimes. If left unlimited, it may result in OOM issues with MySQL. ExecStart= ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 ExecReload=/bin/kill -s HUP $MAINPID # Having non-zero Limit*s causes performance problems due to accounting overhead # in the kernel. We recommend using cgroups to do container-local accounting. LimitNOFILE=infinity LimitNPROC=infinity LimitCORE=infinity # Uncomment TasksMax if your systemd version supports it. # Only systemd 226 and above support this version. 
TasksMax=infinity TimeoutStartSec=0 # set delegate yes so that systemd does not reset the cgroups of docker containers Delegate=yes # kill only the docker process, not all processes in the cgroup KillMode=process [Install] WantedBy=multi-user.target I0816 09:45:12.487836 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:45:13.642162 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:45:13.642162 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:13.642162 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0816 09:45:15.018658 18084 main.go:141] libmachine: [stdout =====>] : 192.168.1.103 I0816 09:45:15.018658 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:15.077442 18084 main.go:141] libmachine: Using SSH client type: native I0816 09:45:15.077961 18084 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0xa1a3c0] 0xa1cfa0 [] 0s} 192.168.1.103 22 } I0816 09:45:15.077961 18084 main.go:141] libmachine: About to run SSH command: sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; } I0816 09:45:15.193443 18084 main.go:141] libmachine: SSH cmd err, output: : I0816 09:45:15.193443 18084 machine.go:97] duration metric: took 24.5087987s to provisionDockerMachine I0816 09:45:15.193443 18084 start.go:293] postStartSetup for "minikube" (driver="hyperv") I0816 09:45:15.193443 18084 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs] I0816 09:45:15.403694 18084 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs I0816 09:45:15.403694 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:45:16.818974 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:45:16.818974 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:16.818974 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0816 09:45:18.563305 18084 main.go:141] libmachine: [stdout =====>] : 192.168.1.103 I0816 09:45:18.563305 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:18.563305 18084 sshutil.go:53] new ssh client: &{IP:192.168.1.103 Port:22 SSHKeyPath:C:\Users\rlyshw.TOASTER\.minikube\machines\minikube\id_rsa Username:docker} I0816 09:45:18.643123 18084 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: 
(3.2394134s) I0816 09:45:18.758237 18084 ssh_runner.go:195] Run: cat /etc/os-release I0816 09:45:18.760924 18084 info.go:137] Remote host: Buildroot 2023.02.9 I0816 09:45:18.760924 18084 filesync.go:126] Scanning C:\Users\rlyshw.TOASTER\.minikube\addons for local assets ... I0816 09:45:18.761428 18084 filesync.go:126] Scanning C:\Users\rlyshw.TOASTER\.minikube\files for local assets ... I0816 09:45:18.761428 18084 start.go:296] duration metric: took 3.5679686s for postStartSetup I0816 09:45:18.761428 18084 fix.go:56] duration metric: took 29.6158518s for fixHost I0816 09:45:18.761428 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:45:20.097566 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:45:20.097566 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:20.097566 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0816 09:45:21.575453 18084 main.go:141] libmachine: [stdout =====>] : 192.168.1.103 I0816 09:45:21.575453 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:21.608670 18084 main.go:141] libmachine: Using SSH client type: native I0816 09:45:21.609188 18084 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0xa1a3c0] 0xa1cfa0 [] 0s} 192.168.1.103 22 } I0816 09:45:21.609188 18084 main.go:141] libmachine: About to run SSH command: date +%!s(MISSING).%!N(MISSING) I0816 09:45:21.715517 18084 main.go:141] libmachine: SSH cmd err, output: : 1723815921.742476916 I0816 09:45:21.715517 18084 fix.go:216] guest clock: 1723815921.742476916 I0816 09:45:21.715517 18084 fix.go:229] Guest: 2024-08-16 09:45:21.742476916 -0400 EDT Remote: 2024-08-16 09:45:18.7614287 -0400 EDT m=+39.792570501 (delta=2.981048216s) I0816 09:45:21.715517 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:45:22.963099 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:45:22.963099 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:22.963099 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0816 09:45:24.413707 18084 main.go:141] libmachine: [stdout =====>] : 192.168.1.103 I0816 09:45:24.413707 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:24.448933 18084 main.go:141] libmachine: Using SSH client type: native I0816 09:45:24.449452 18084 main.go:141] libmachine: &{{{ 0 [] [] []} docker [0xa1a3c0] 0xa1cfa0 [] 0s} 192.168.1.103 22 } I0816 09:45:24.449452 18084 main.go:141] libmachine: About to run SSH command: sudo date -s @1723815921 I0816 09:45:24.550502 18084 main.go:141] libmachine: SSH cmd err, output: : Fri Aug 16 13:45:21 UTC 2024 I0816 09:45:24.550502 18084 fix.go:236] clock set: Fri Aug 16 13:45:21 UTC 2024 (err=) I0816 09:45:24.550502 18084 start.go:83] releasing machines lock for "minikube", held for 35.4048991s I0816 09:45:24.550502 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:45:25.733195 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:45:25.733195 
18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:25.733195 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0816 09:45:27.095504 18084 main.go:141] libmachine: [stdout =====>] : 192.168.1.103 I0816 09:45:27.095504 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:27.135934 18084 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/ I0816 09:45:27.135934 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:45:27.202114 18084 ssh_runner.go:195] Run: cat /version.json I0816 09:45:27.202114 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:45:28.381698 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:45:28.381698 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:28.381698 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0816 09:45:28.452858 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:45:28.452858 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:28.452858 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0816 09:45:29.812789 18084 main.go:141] libmachine: [stdout =====>] : 192.168.1.103 I0816 09:45:29.812789 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:29.812789 18084 sshutil.go:53] new ssh client: &{IP:192.168.1.103 Port:22 SSHKeyPath:C:\Users\rlyshw.TOASTER\.minikube\machines\minikube\id_rsa Username:docker} I0816 09:45:29.908518 18084 main.go:141] libmachine: [stdout =====>] : 192.168.1.103 I0816 09:45:29.908518 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:45:29.908518 18084 sshutil.go:53] new ssh client: &{IP:192.168.1.103 Port:22 SSHKeyPath:C:\Users\rlyshw.TOASTER\.minikube\machines\minikube\id_rsa Username:docker} I0816 09:45:31.885548 18084 ssh_runner.go:235] Completed: cat /version.json: (4.683412s) I0816 09:45:31.885548 18084 ssh_runner.go:235] Completed: curl -sS -m 2 https://registry.k8s.io/: (4.7495919s) W0816 09:45:31.885548 18084 start.go:860] [curl -sS -m 2 https://registry.k8s.io/] failed: curl -sS -m 2 https://registry.k8s.io/: Process exited with status 28 stdout: stderr: curl: (28) Resolving timed out after 2001 milliseconds W0816 09:45:31.885548 18084 out.go:239] ❗ This VM is having trouble accessing https://registry.k8s.io W0816 09:45:31.885548 18084 out.go:239] 💡 To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/ I0816 09:45:31.973245 18084 ssh_runner.go:195] Run: systemctl --version I0816 09:45:32.065028 18084 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*" W0816 09:45:32.068067 18084 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found I0816 09:45:32.157910 18084 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec 
sh -c "sudo mv {} {}.mk_disabled" ; I0816 09:45:32.164314 18084 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable I0816 09:45:32.164314 18084 start.go:494] detecting cgroup driver to use... I0816 09:45:32.164314 18084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock " | sudo tee /etc/crictl.yaml" I0816 09:45:32.263801 18084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml" I0816 09:45:32.362820 18084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml" I0816 09:45:32.369672 18084 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver... I0816 09:45:32.460451 18084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml" I0816 09:45:32.560016 18084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml" I0816 09:45:32.661084 18084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml" I0816 09:45:32.758536 18084 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml" I0816 09:45:32.854773 18084 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk" I0816 09:45:32.947781 18084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml" I0816 09:45:33.042750 18084 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml" I0816 09:45:33.137474 18084 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1 enable_unprivileged_ports = true|' /etc/containerd/config.toml" I0816 09:45:33.234349 18084 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables I0816 09:45:33.330989 18084 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward" I0816 09:45:33.434503 18084 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0816 09:45:33.711340 18084 ssh_runner.go:195] Run: sudo systemctl restart containerd I0816 09:45:33.724662 18084 start.go:494] detecting cgroup driver to use... 
I0816 09:45:33.814193 18084 ssh_runner.go:195] Run: sudo systemctl cat docker.service I0816 09:45:33.917164 18084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0816 09:45:34.016969 18084 ssh_runner.go:195] Run: sudo systemctl stop -f containerd I0816 09:45:34.119747 18084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd I0816 09:45:34.218724 18084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio I0816 09:45:34.226989 18084 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock " | sudo tee /etc/crictl.yaml" I0816 09:45:34.326770 18084 ssh_runner.go:195] Run: which cri-dockerd I0816 09:45:34.418765 18084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d I0816 09:45:34.424970 18084 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes) I0816 09:45:34.527378 18084 ssh_runner.go:195] Run: sudo systemctl unmask docker.service I0816 09:45:34.815023 18084 ssh_runner.go:195] Run: sudo systemctl enable docker.socket I0816 09:45:34.999308 18084 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver... I0816 09:45:34.999308 18084 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes) I0816 09:45:35.099749 18084 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0816 09:45:35.372766 18084 ssh_runner.go:195] Run: sudo systemctl restart docker I0816 09:45:47.931564 18084 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.5587404s) I0816 09:45:48.022578 18084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket I0816 09:45:48.120549 18084 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket I0816 09:45:48.221339 18084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service I0816 09:45:48.318713 18084 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket I0816 09:45:48.524163 18084 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket I0816 09:45:48.724910 18084 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0816 09:45:48.925916 18084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket I0816 09:45:49.028854 18084 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service I0816 09:45:49.136127 18084 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0816 09:45:49.367672 18084 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service I0816 09:45:49.420725 18084 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock I0816 09:45:49.506603 18084 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock I0816 09:45:49.509602 18084 start.go:562] Will wait 60s for crictl version I0816 09:45:49.598620 18084 ssh_runner.go:195] Run: which crictl I0816 09:45:49.688594 18084 ssh_runner.go:195] Run: sudo /usr/bin/crictl version I0816 09:45:49.710023 18084 start.go:578] Version: 0.1.0 RuntimeName: docker RuntimeVersion: 26.0.2 RuntimeApiVersion: v1 I0816 09:45:49.788873 18084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0816 09:45:49.879134 18084 ssh_runner.go:195] Run: docker version --format {{.Server.Version}} I0816 09:45:49.894742 18084 out.go:204] 🐳 Preparing Kubernetes v1.30.0 on Docker 26.0.2 ... 
I0816 09:45:49.894742 18084 ip.go:172] getIPForInterface: searching for "vEthernet (Default Switch)" I0816 09:45:49.901053 18084 ip.go:186] "Local Area Connection" does not match prefix "vEthernet (Default Switch)" I0816 09:45:49.901053 18084 ip.go:186] "vEthernet (Virtual Switch (automatic))" does not match prefix "vEthernet (Default Switch)" I0816 09:45:49.901053 18084 ip.go:186] "vEthernet (Bridge)" does not match prefix "vEthernet (Default Switch)" I0816 09:45:49.901053 18084 ip.go:186] "Ethernet 4" does not match prefix "vEthernet (Default Switch)" I0816 09:45:49.901053 18084 ip.go:186] "OpenVPN Connect DCO Adapter" does not match prefix "vEthernet (Default Switch)" I0816 09:45:49.901053 18084 ip.go:186] "Local Area Connection* 11" does not match prefix "vEthernet (Default Switch)" I0816 09:45:49.901053 18084 ip.go:186] "Bluetooth Network Connection" does not match prefix "vEthernet (Default Switch)" I0816 09:45:49.901053 18084 ip.go:186] "Wi-Fi" does not match prefix "vEthernet (Default Switch)" I0816 09:45:49.901053 18084 ip.go:186] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)" I0816 09:45:49.901053 18084 ip.go:186] "Teredo Tunneling Pseudo-Interface" does not match prefix "vEthernet (Default Switch)" I0816 09:45:49.901053 18084 ip.go:181] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)" I0816 09:45:49.901053 18084 ip.go:207] Found interface: {Index:74 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:b2:3a:fa Flags:up|broadcast|multicast|running} I0816 09:45:49.908217 18084 ip.go:210] interface addr: fe80::876f:e06c:6c8f:9008/64 I0816 09:45:49.908217 18084 ip.go:210] interface addr: 172.20.240.1/20 I0816 09:45:49.996657 18084 ssh_runner.go:195] Run: grep 172.20.240.1 host.minikube.internal$ /etc/hosts I0816 09:45:49.999325 18084 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.20.240.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts"" I0816 09:45:50.006605 18084 kubeadm.go:877] updating cluster {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch:bridge HypervUseExternalSwitch:true HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.1.103 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true 
default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\rlyshw.TOASTER:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ... I0816 09:45:50.006605 18084 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker I0816 09:45:50.086421 18084 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0816 09:45:50.100912 18084 docker.go:685] Got preloaded images: -- stdout -- quay.io/minio/operator:v6.0.2 bitnami/keycloak:25.0.2-debian-12-r2 quay.io/jetstack/cert-manager-controller:v1.15.2 quay.io/jetstack/cert-manager-webhook:v1.15.2 quay.io/jetstack/cert-manager-cainjector:v1.15.2 quay.io/jetstack/cert-manager-startupapicheck:v1.15.2 bitnami/postgresql:16.3.0-debian-12-r23 kong/gateway-operator:1.3 kong:3.7.1 kong/kubernetes-ingress-controller:3.2.0 registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 quay.io/oauth2-proxy/oauth2-proxy:v7.6.0 registry.k8s.io/etcd:3.5.12-0 kong/go-echo:latest registry.k8s.io/coredns/coredns:v1.11.1 busybox:latest registry.k8s.io/pause:3.9 kubernetesui/dashboard: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.3.0 kubernetesui/metrics-scraper: gcr.io/k8s-minikube/storage-provisioner:v5 gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0 -- /stdout -- I0816 09:45:50.100912 18084 docker.go:615] Images already preloaded, skipping extraction I0816 09:45:50.180791 18084 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}} I0816 09:45:50.191856 18084 docker.go:685] Got preloaded images: -- stdout -- quay.io/minio/operator:v6.0.2 bitnami/keycloak:25.0.2-debian-12-r2 quay.io/jetstack/cert-manager-controller:v1.15.2 quay.io/jetstack/cert-manager-cainjector:v1.15.2 quay.io/jetstack/cert-manager-webhook:v1.15.2 quay.io/jetstack/cert-manager-startupapicheck:v1.15.2 bitnami/postgresql:16.3.0-debian-12-r23 kong/gateway-operator:1.3 kong:3.7.1 kong/kubernetes-ingress-controller:3.2.0 registry.k8s.io/kube-apiserver:v1.30.0 registry.k8s.io/kube-controller-manager:v1.30.0 registry.k8s.io/kube-scheduler:v1.30.0 registry.k8s.io/kube-proxy:v1.30.0 quay.io/oauth2-proxy/oauth2-proxy:v7.6.0 registry.k8s.io/etcd:3.5.12-0 kong/go-echo:latest registry.k8s.io/coredns/coredns:v1.11.1 busybox:latest registry.k8s.io/pause:3.9 kubernetesui/dashboard: registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.3.0 kubernetesui/metrics-scraper: gcr.io/k8s-minikube/storage-provisioner:v5 gcr.io/kubebuilder/kube-rbac-proxy:v0.8.0 -- /stdout -- I0816 09:45:50.191856 18084 cache_images.go:84] Images are preloaded, skipping loading I0816 09:45:50.191856 18084 kubeadm.go:928] updating node { 192.168.1.103 8443 v1.30.0 docker true true} ... 
I0816 09:45:50.191856 18084 kubeadm.go:940] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=minikube --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.1.103

[Install]
 config:
{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
I0816 09:45:50.277751 18084 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0816 09:45:50.290349 18084 cni.go:84] Creating CNI manager for ""
I0816 09:45:50.290349 18084 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
I0816 09:45:50.290349 18084 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
I0816 09:45:50.290349 18084 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.1.103 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.1.103"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.1.103 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
I0816 09:45:50.290349 18084 kubeadm.go:187] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.1.103
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.1.103
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.1.103"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.30.0
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
hairpinMode: hairpin-veth
runtimeRequestTimeout: 15m
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%"
  nodefs.inodesFree: "0%"
  imagefs.available: "0%"
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0816 09:45:50.379856 18084 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
I0816 09:45:50.385426 18084 binaries.go:44] Found k8s binaries, skipping transfer
I0816 09:45:50.477646 18084 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0816 09:45:50.484066 18084 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
I0816 09:45:50.500131 18084 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0816 09:45:50.510891 18084 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
I0816 09:45:50.619492 18084 ssh_runner.go:195] Run: grep 192.168.1.103 control-plane.minikube.internal$ /etc/hosts
I0816 09:45:50.715060 18084 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0816 09:45:51.024243 18084 ssh_runner.go:195] Run: sudo systemctl start kubelet
I0816 09:45:51.035403 18084 certs.go:68] Setting up C:\Users\rlyshw.TOASTER\.minikube\profiles\minikube for IP: 192.168.1.103
I0816 09:45:51.035403 18084 certs.go:194] generating shared ca certs ...
I0816 09:45:51.035403 18084 certs.go:226] acquiring lock for ca certs: {Name:mk6af2cd2b5e6cc47da34d8b59a04c617ac9ebef Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0816 09:45:51.041097 18084 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\rlyshw.TOASTER\.minikube\ca.key
I0816 09:45:51.051456 18084 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\rlyshw.TOASTER\.minikube\proxy-client-ca.key
I0816 09:45:51.051456 18084 certs.go:256] generating profile certs ...
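The kubeadm.go:940 line above shows the kubelet systemd drop-in that minikube renders and then copies to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (the 308-byte scp a few lines later). Below is a minimal Go sketch of rendering that drop-in from the values seen in the log; the template here is illustrative and hand-written, not minikube's own generator.

package main

import (
	"os"
	"text/template"
)

// Flag set copied from the kubeadm.go:940 line above.
const dropin = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("10-kubeadm.conf").Parse(dropin))
	// Values taken from the log: cluster v1.30.0, node "minikube", IP 192.168.1.103.
	if err := t.Execute(os.Stdout, map[string]string{
		"KubernetesVersion": "v1.30.0",
		"NodeName":          "minikube",
		"NodeIP":            "192.168.1.103",
	}); err != nil {
		panic(err)
	}
}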
I0816 09:45:51.052488 18084 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\rlyshw.TOASTER\.minikube\profiles\minikube\client.key I0816 09:45:51.063849 18084 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\rlyshw.TOASTER\.minikube\profiles\minikube\apiserver.key.31eb9311 I0816 09:45:51.076095 18084 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\rlyshw.TOASTER\.minikube\profiles\minikube\proxy-client.key I0816 09:45:51.079191 18084 certs.go:484] found cert: C:\Users\rlyshw.TOASTER\.minikube\certs\ca-key.pem (1679 bytes) I0816 09:45:51.079707 18084 certs.go:484] found cert: C:\Users\rlyshw.TOASTER\.minikube\certs\ca.pem (1078 bytes) I0816 09:45:51.079707 18084 certs.go:484] found cert: C:\Users\rlyshw.TOASTER\.minikube\certs\cert.pem (1123 bytes) I0816 09:45:51.080225 18084 certs.go:484] found cert: C:\Users\rlyshw.TOASTER\.minikube\certs\key.pem (1679 bytes) I0816 09:45:51.081307 18084 ssh_runner.go:362] scp C:\Users\rlyshw.TOASTER\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes) I0816 09:45:51.126365 18084 ssh_runner.go:362] scp C:\Users\rlyshw.TOASTER\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes) I0816 09:45:51.146142 18084 ssh_runner.go:362] scp C:\Users\rlyshw.TOASTER\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes) I0816 09:45:51.167365 18084 ssh_runner.go:362] scp C:\Users\rlyshw.TOASTER\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes) I0816 09:45:51.186815 18084 ssh_runner.go:362] scp C:\Users\rlyshw.TOASTER\.minikube\profiles\minikube\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1411 bytes) I0816 09:45:51.217115 18084 ssh_runner.go:362] scp C:\Users\rlyshw.TOASTER\.minikube\profiles\minikube\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes) I0816 09:45:51.238422 18084 ssh_runner.go:362] scp C:\Users\rlyshw.TOASTER\.minikube\profiles\minikube\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes) I0816 09:45:51.265142 18084 ssh_runner.go:362] scp C:\Users\rlyshw.TOASTER\.minikube\profiles\minikube\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes) I0816 09:45:51.301325 18084 ssh_runner.go:362] scp C:\Users\rlyshw.TOASTER\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes) I0816 09:45:51.332520 18084 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes) I0816 09:45:51.409487 18084 ssh_runner.go:195] Run: openssl version I0816 09:45:51.502298 18084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem" I0816 09:45:51.595811 18084 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem I0816 09:45:51.603750 18084 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Aug 15 17:57 /usr/share/ca-certificates/minikubeCA.pem I0816 09:45:51.672425 18084 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem I0816 09:45:51.773807 18084 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0" I0816 09:45:51.878476 18084 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt I0816 09:45:51.949006 18084 ssh_runner.go:195] Run: openssl x509 -noout -in 
/var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400 I0816 09:45:52.020557 18084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 I0816 09:45:52.087637 18084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400 I0816 09:45:52.160216 18084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400 I0816 09:45:52.226642 18084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400 I0816 09:45:52.304267 18084 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400 I0816 09:45:52.308618 18084 kubeadm.go:391] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.33.1-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.44@sha256:eb04641328b06c5c4a14f4348470e1046bbcf9c2cbc551486e343d3a49db557e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch:bridge HypervUseExternalSwitch:true HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:minikube Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.1.103 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\rlyshw.TOASTER:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} I0816 09:45:52.388073 18084 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0816 09:45:52.486368 18084 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd W0816 09:45:52.492145 18084 kubeadm.go:404] apiserver tunnel failed: apiserver port not set I0816 09:45:52.492145 18084 kubeadm.go:407] found existing configuration files, will attempt cluster restart I0816 09:45:52.492145 18084 kubeadm.go:587] restartPrimaryControlPlane start ... 
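The repeated "openssl x509 -noout -in <cert> -checkend 86400" runs above ask one question per control-plane certificate: does it expire within the next 24 hours? The same check expressed in Go, as a minimal sketch (the certificate path is whatever you pass on the command line, e.g. a copy of apiserver-kubelet-client.crt; this is not minikube's own code).

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Certificate path supplied as the first argument.
	data, err := os.ReadFile(os.Args[1])
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Equivalent of -checkend 86400: fail if the cert expires within 24 hours.
	deadline := time.Now().Add(24 * time.Hour)
	if cert.NotAfter.Before(deadline) {
		fmt.Printf("expires within 24h: NotAfter=%s\n", cert.NotAfter)
		os.Exit(1)
	}
	fmt.Printf("still valid past 24h: NotAfter=%s\n", cert.NotAfter)
}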
I0816 09:45:52.582956 18084 ssh_runner.go:195] Run: sudo test -d /data/minikube I0816 09:45:52.588774 18084 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1 stdout: stderr: I0816 09:45:52.589295 18084 kubeconfig.go:125] found "minikube" server: "https://192.168.1.103:8443" I0816 09:45:52.679088 18084 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new I0816 09:45:52.686520 18084 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.1.103 I0816 09:45:52.686520 18084 kubeadm.go:1154] stopping kube-system containers ... I0816 09:45:52.771764 18084 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}} I0816 09:45:52.797123 18084 docker.go:483] Stopping containers: [a8ead16e9305 aeb942683261 a67a428613f7 798274a40fd1 c467ae72fd26 b36e17224ad0 5749acba7066 33fb99024cce a5fb020776eb 0bb71be71321 624b4b48c7bf 55c4e5dedfcd 5a78c66e22bd 2868ee09107d 9831e8c52d83 5d55b3690c44 9eaff52efaca d53f2c25449a 2ea8f4ff8c5a 43214bb4c0ae 0ec7ae98c573 6f51efa3893b 8412f410be1f fcebd2457096 75915c4e6ec9 e3059bd8bc96 563217b814dc 41dc2ef5fbb3 8d03a69c878c 46b10ca5db4b 15f1825e8a69 11cc7e7a0b17 93a5d79e0d21 5b8dec991942 f70cad5edd6f a1c9ecea3837 c66d9791ca83 177beb012405] I0816 09:45:52.877228 18084 ssh_runner.go:195] Run: docker stop a8ead16e9305 aeb942683261 a67a428613f7 798274a40fd1 c467ae72fd26 b36e17224ad0 5749acba7066 33fb99024cce a5fb020776eb 0bb71be71321 624b4b48c7bf 55c4e5dedfcd 5a78c66e22bd 2868ee09107d 9831e8c52d83 5d55b3690c44 9eaff52efaca d53f2c25449a 2ea8f4ff8c5a 43214bb4c0ae 0ec7ae98c573 6f51efa3893b 8412f410be1f fcebd2457096 75915c4e6ec9 e3059bd8bc96 563217b814dc 41dc2ef5fbb3 8d03a69c878c 46b10ca5db4b 15f1825e8a69 11cc7e7a0b17 93a5d79e0d21 5b8dec991942 f70cad5edd6f a1c9ecea3837 c66d9791ca83 177beb012405 I0816 09:46:03.018442 18084 ssh_runner.go:235] Completed: docker stop a8ead16e9305 aeb942683261 a67a428613f7 798274a40fd1 c467ae72fd26 b36e17224ad0 5749acba7066 33fb99024cce a5fb020776eb 0bb71be71321 624b4b48c7bf 55c4e5dedfcd 5a78c66e22bd 2868ee09107d 9831e8c52d83 5d55b3690c44 9eaff52efaca d53f2c25449a 2ea8f4ff8c5a 43214bb4c0ae 0ec7ae98c573 6f51efa3893b 8412f410be1f fcebd2457096 75915c4e6ec9 e3059bd8bc96 563217b814dc 41dc2ef5fbb3 8d03a69c878c 46b10ca5db4b 15f1825e8a69 11cc7e7a0b17 93a5d79e0d21 5b8dec991942 f70cad5edd6f a1c9ecea3837 c66d9791ca83 177beb012405: (10.1411672s) I0816 09:46:03.107875 18084 ssh_runner.go:195] Run: sudo systemctl stop kubelet I0816 09:46:03.234128 18084 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf I0816 09:46:03.240626 18084 kubeadm.go:156] found existing configuration files: -rw------- 1 root root 5651 Aug 15 22:10 /etc/kubernetes/admin.conf -rw------- 1 root root 5657 Aug 15 22:10 /etc/kubernetes/controller-manager.conf -rw------- 1 root root 5655 Aug 15 22:10 /etc/kubernetes/kubelet.conf -rw------- 1 root root 5601 Aug 15 22:10 /etc/kubernetes/scheduler.conf I0816 09:46:03.329967 18084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf I0816 09:46:03.424457 18084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf I0816 09:46:03.517785 18084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf I0816 
09:46:03.524970 18084 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1 stdout: stderr: I0816 09:46:03.619159 18084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf I0816 09:46:03.712971 18084 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf I0816 09:46:03.717354 18084 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1 stdout: stderr: I0816 09:46:03.810174 18084 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf I0816 09:46:03.905915 18084 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml I0816 09:46:03.912424 18084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml" I0816 09:46:03.942259 18084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml" I0816 09:46:04.460652 18084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml" I0816 09:46:04.667919 18084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml" I0816 09:46:04.714771 18084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml" I0816 09:46:04.770901 18084 api_server.go:52] waiting for apiserver process to appear ... I0816 09:46:04.860896 18084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0816 09:46:05.369672 18084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0816 09:46:05.380624 18084 api_server.go:72] duration metric: took 609.7205ms to wait for apiserver process to appear ... I0816 09:46:05.380624 18084 api_server.go:88] waiting for apiserver healthz status ... I0816 09:46:05.380624 18084 api_server.go:253] Checking apiserver healthz at https://192.168.1.103:8443/healthz ... I0816 09:46:07.524058 18084 api_server.go:279] https://192.168.1.103:8443/healthz returned 403: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} W0816 09:46:07.524058 18084 api_server.go:103] status: https://192.168.1.103:8443/healthz returned error 403: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} I0816 09:46:07.524314 18084 api_server.go:253] Checking apiserver healthz at https://192.168.1.103:8443/healthz ... 
I0816 09:46:07.559858 18084 api_server.go:279] https://192.168.1.103:8443/healthz returned 403: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} W0816 09:46:07.559858 18084 api_server.go:103] status: https://192.168.1.103:8443/healthz returned error 403: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403} I0816 09:46:07.884676 18084 api_server.go:253] Checking apiserver healthz at https://192.168.1.103:8443/healthz ... I0816 09:46:07.894808 18084 api_server.go:279] https://192.168.1.103:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok healthz check failed W0816 09:46:07.894808 18084 api_server.go:103] status: https://192.168.1.103:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok 
[+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok healthz check failed I0816 09:46:08.381795 18084 api_server.go:253] Checking apiserver healthz at https://192.168.1.103:8443/healthz ... I0816 09:46:08.400286 18084 api_server.go:279] https://192.168.1.103:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok healthz check failed W0816 09:46:08.400286 18084 api_server.go:103] status: https://192.168.1.103:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok 
[+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok healthz check failed I0816 09:46:08.895253 18084 api_server.go:253] Checking apiserver healthz at https://192.168.1.103:8443/healthz ... I0816 09:46:08.901335 18084 api_server.go:279] https://192.168.1.103:8443/healthz returned 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/apiservice-discovery-controller ok [+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok healthz check failed W0816 09:46:08.901335 18084 api_server.go:103] status: https://192.168.1.103:8443/healthz returned error 500: [+]ping ok [+]log ok [+]etcd ok [+]poststarthook/start-apiserver-admission-initializer ok [+]poststarthook/generic-apiserver-start-informers ok [+]poststarthook/priority-and-fairness-config-consumer ok [+]poststarthook/priority-and-fairness-filter ok [+]poststarthook/storage-object-count-tracker-hook ok [+]poststarthook/start-apiextensions-informers ok [+]poststarthook/start-apiextensions-controllers ok [+]poststarthook/crd-informer-synced ok [+]poststarthook/start-service-ip-repair-controllers ok [-]poststarthook/rbac/bootstrap-roles failed: reason withheld [+]poststarthook/scheduling/bootstrap-system-priority-classes ok [+]poststarthook/priority-and-fairness-config-producer ok [+]poststarthook/start-system-namespaces-controller ok [+]poststarthook/bootstrap-controller ok [+]poststarthook/start-cluster-authentication-info-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-controller ok [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok [+]poststarthook/start-legacy-token-tracking-controller ok [+]poststarthook/aggregator-reload-proxy-client-cert ok [+]poststarthook/start-kube-aggregator-informers ok [+]poststarthook/apiservice-registration-controller ok [+]poststarthook/apiservice-status-available-controller ok [+]poststarthook/apiservice-discovery-controller ok 
[+]poststarthook/kube-apiserver-autoregistration ok [+]autoregister-completion ok [+]poststarthook/apiservice-openapi-controller ok [+]poststarthook/apiservice-openapiv3-controller ok healthz check failed I0816 09:46:09.387499 18084 api_server.go:253] Checking apiserver healthz at https://192.168.1.103:8443/healthz ... I0816 09:46:09.390660 18084 api_server.go:279] https://192.168.1.103:8443/healthz returned 200: ok I0816 09:46:09.395259 18084 api_server.go:141] control plane version: v1.30.0 I0816 09:46:09.395259 18084 api_server.go:131] duration metric: took 4.0146166s to wait for apiserver health ... I0816 09:46:09.395259 18084 cni.go:84] Creating CNI manager for "" I0816 09:46:09.395259 18084 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge I0816 09:46:09.396307 18084 out.go:177] 🔗 Configuring bridge CNI (Container Networking Interface) ... I0816 09:46:09.486944 18084 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d I0816 09:46:09.493016 18084 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes) I0816 09:46:09.506536 18084 system_pods.go:43] waiting for kube-system pods to appear ... I0816 09:46:09.512031 18084 system_pods.go:59] 7 kube-system pods found I0816 09:46:09.512031 18084 system_pods.go:61] "coredns-7db6d8ff4d-l9cjz" [b7027c61-e8c4-4cba-8352-2aabf147190d] Running I0816 09:46:09.512031 18084 system_pods.go:61] "etcd-minikube" [35734ecc-f35d-421f-87bc-573fc34f5c5f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd]) I0816 09:46:09.512031 18084 system_pods.go:61] "kube-apiserver-minikube" [99a7a576-4eeb-4169-9fda-dde18d03fa81] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver]) I0816 09:46:09.512031 18084 system_pods.go:61] "kube-controller-manager-minikube" [dc77ba7d-3405-451a-9aa9-c28d1b19fd98] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager]) I0816 09:46:09.512031 18084 system_pods.go:61] "kube-proxy-b8lgl" [cdc977a3-4771-4ebd-8900-0af885aaa3bd] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy]) I0816 09:46:09.512031 18084 system_pods.go:61] "kube-scheduler-minikube" [3f4804bb-2519-42df-834f-28f13a4c6056] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler]) I0816 09:46:09.512031 18084 system_pods.go:61] "storage-provisioner" [61e112d0-de76-467d-8ba5-ab6f6f49bbf6] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner]) I0816 09:46:09.512031 18084 system_pods.go:74] duration metric: took 5.4946ms to wait for pod list to return data ... I0816 09:46:09.512031 18084 node_conditions.go:102] verifying NodePressure condition ... 
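The healthz probes above follow a simple progression: an unauthenticated GET to https://192.168.1.103:8443/healthz returns 403 while RBAC bootstrap has not yet granted anonymous access, then 500 while some poststarthooks are still failing, and finally 200 once the apiserver is ready. A minimal Go sketch of that polling loop (TLS verification is skipped because no client certificate is configured; this is an illustration, not minikube's own code):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// With no client certificate the apiserver treats us as system:anonymous,
	// which is why the log shows 403s before the RBAC bootstrap completes.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	const url = "https://192.168.1.103:8443/healthz"
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err != nil {
			fmt.Printf("request failed (%v), retrying\n", err)
		} else {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz ok: %s\n", body)
				return
			}
			// 403 (anonymous) and 500 (poststarthooks failing) both mean "not ready yet".
			fmt.Printf("not ready: %d\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for healthz")
}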
I0816 09:46:09.514775 18084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki I0816 09:46:09.514775 18084 node_conditions.go:123] node cpu capacity is 2 I0816 09:46:09.514775 18084 node_conditions.go:105] duration metric: took 2.7445ms to run NodePressure ... I0816 09:46:09.514775 18084 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml" I0816 09:46:12.876743 18084 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml": (3.3619526s) I0816 09:46:12.876743 18084 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj" I0816 09:46:12.884268 18084 ops.go:34] apiserver oom_adj: -16 I0816 09:46:12.884268 18084 kubeadm.go:591] duration metric: took 20.3920289s to restartPrimaryControlPlane I0816 09:46:12.884268 18084 kubeadm.go:393] duration metric: took 20.5755549s to StartCluster I0816 09:46:12.884268 18084 settings.go:142] acquiring lock: {Name:mk2db990b9a0edae9ff85fb3fa49f6b8b9f65136 Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0816 09:46:12.884268 18084 settings.go:150] Updating kubeconfig: C:\Users\rlyshw.TOASTER\.kube\config I0816 09:46:12.885329 18084 lock.go:35] WriteFile acquiring C:\Users\rlyshw.TOASTER\.kube\config: {Name:mk25f50d75880b086e318af04d88f7c40ffbf9de Clock:{} Delay:500ms Timeout:1m0s Cancel:} I0816 09:46:12.886372 18084 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.1.103 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} I0816 09:46:12.887433 18084 out.go:177] 🔎 Verifying Kubernetes components... I0816 09:46:12.886372 18084 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:true default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] I0816 09:46:12.887433 18084 addons.go:69] Setting storage-provisioner=true in profile "minikube" I0816 09:46:12.887433 18084 addons.go:234] Setting addon storage-provisioner=true in "minikube" I0816 09:46:12.887433 18084 addons.go:69] Setting dashboard=true in profile "minikube" W0816 09:46:12.891584 18084 addons.go:243] addon storage-provisioner should already be in state true I0816 09:46:12.891584 18084 addons.go:234] Setting addon dashboard=true in "minikube" I0816 09:46:12.887433 18084 addons.go:69] Setting default-storageclass=true in profile "minikube" W0816 09:46:12.891584 18084 addons.go:243] addon dashboard should already be in state true I0816 09:46:12.887969 18084 config.go:182] Loaded profile config "minikube": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.30.0 I0816 09:46:12.891584 18084 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube" I0816 09:46:12.893157 18084 host.go:66] Checking if 
"minikube" exists ... I0816 09:46:12.893157 18084 host.go:66] Checking if "minikube" exists ... I0816 09:46:12.893671 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:46:12.894705 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:46:12.894705 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:46:13.046216 18084 ssh_runner.go:195] Run: sudo systemctl daemon-reload I0816 09:46:13.531491 18084 ssh_runner.go:195] Run: sudo systemctl start kubelet I0816 09:46:13.543518 18084 api_server.go:52] waiting for apiserver process to appear ... I0816 09:46:13.659339 18084 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.* I0816 09:46:13.671871 18084 api_server.go:72] duration metric: took 785.4959ms to wait for apiserver process to appear ... I0816 09:46:13.671871 18084 api_server.go:88] waiting for apiserver healthz status ... I0816 09:46:13.671871 18084 api_server.go:253] Checking apiserver healthz at https://192.168.1.103:8443/healthz ... I0816 09:46:13.682761 18084 api_server.go:279] https://192.168.1.103:8443/healthz returned 200: ok I0816 09:46:13.684680 18084 api_server.go:141] control plane version: v1.30.0 I0816 09:46:13.684680 18084 api_server.go:131] duration metric: took 12.8089ms to wait for apiserver health ... I0816 09:46:13.684680 18084 system_pods.go:43] waiting for kube-system pods to appear ... I0816 09:46:13.692494 18084 system_pods.go:59] 7 kube-system pods found I0816 09:46:13.692494 18084 system_pods.go:61] "coredns-7db6d8ff4d-l9cjz" [b7027c61-e8c4-4cba-8352-2aabf147190d] Running I0816 09:46:13.692494 18084 system_pods.go:61] "etcd-minikube" [35734ecc-f35d-421f-87bc-573fc34f5c5f] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd]) I0816 09:46:13.692494 18084 system_pods.go:61] "kube-apiserver-minikube" [99a7a576-4eeb-4169-9fda-dde18d03fa81] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver]) I0816 09:46:13.692494 18084 system_pods.go:61] "kube-controller-manager-minikube" [dc77ba7d-3405-451a-9aa9-c28d1b19fd98] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager]) I0816 09:46:13.692494 18084 system_pods.go:61] "kube-proxy-b8lgl" [cdc977a3-4771-4ebd-8900-0af885aaa3bd] Running I0816 09:46:13.692494 18084 system_pods.go:61] "kube-scheduler-minikube" [3f4804bb-2519-42df-834f-28f13a4c6056] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler]) I0816 09:46:13.692494 18084 system_pods.go:61] "storage-provisioner" [61e112d0-de76-467d-8ba5-ab6f6f49bbf6] Running I0816 09:46:13.692494 18084 system_pods.go:74] duration metric: took 7.814ms to wait for pod list to return data ... 
I0816 09:46:13.692494 18084 kubeadm.go:576] duration metric: took 806.1188ms to wait for: map[apiserver:true system_pods:true] I0816 09:46:13.692494 18084 node_conditions.go:102] verifying NodePressure condition ... I0816 09:46:13.695369 18084 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki I0816 09:46:13.695369 18084 node_conditions.go:123] node cpu capacity is 2 I0816 09:46:13.695369 18084 node_conditions.go:105] duration metric: took 2.8744ms to run NodePressure ... I0816 09:46:13.695369 18084 start.go:240] waiting for startup goroutines ... I0816 09:46:14.526976 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:46:14.526976 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:46:14.528566 18084 out.go:177] ▪ Using image docker.io/kubernetesui/dashboard:v2.7.0 I0816 09:46:14.529622 18084 out.go:177] ▪ Using image docker.io/kubernetesui/metrics-scraper:v1.0.8 I0816 09:46:14.530659 18084 addons.go:426] installing /etc/kubernetes/addons/dashboard-ns.yaml I0816 09:46:14.530659 18084 ssh_runner.go:362] scp dashboard/dashboard-ns.yaml --> /etc/kubernetes/addons/dashboard-ns.yaml (759 bytes) I0816 09:46:14.530659 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:46:14.557745 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:46:14.557745 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:46:14.559876 18084 addons.go:234] Setting addon default-storageclass=true in "minikube" W0816 09:46:14.559876 18084 addons.go:243] addon default-storageclass should already be in state true I0816 09:46:14.560397 18084 host.go:66] Checking if "minikube" exists ... I0816 09:46:14.561440 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:46:14.574534 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:46:14.574534 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:46:14.576111 18084 out.go:177] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5 I0816 09:46:14.577180 18084 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml I0816 09:46:14.577180 18084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes) I0816 09:46:14.577180 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube ).state I0816 09:46:15.939691 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:46:15.939691 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:46:15.939691 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0816 09:46:15.969108 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:46:15.969108 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:46:15.969108 18084 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml I0816 09:46:15.969108 18084 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes) I0816 09:46:15.969108 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM minikube 
).state I0816 09:46:15.979715 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:46:15.979715 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:46:15.979715 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0816 09:46:17.273392 18084 main.go:141] libmachine: [stdout =====>] : Running I0816 09:46:17.273392 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:46:17.273392 18084 main.go:141] libmachine: [executing ==>] : C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM minikube ).networkadapters[0]).ipaddresses[0] I0816 09:46:17.444004 18084 main.go:141] libmachine: [stdout =====>] : 192.168.1.103 I0816 09:46:17.444004 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:46:17.444004 18084 sshutil.go:53] new ssh client: &{IP:192.168.1.103 Port:22 SSHKeyPath:C:\Users\rlyshw.TOASTER\.minikube\machines\minikube\id_rsa Username:docker} I0816 09:46:17.497565 18084 main.go:141] libmachine: [stdout =====>] : 192.168.1.103 I0816 09:46:17.497565 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:46:17.497565 18084 sshutil.go:53] new ssh client: &{IP:192.168.1.103 Port:22 SSHKeyPath:C:\Users\rlyshw.TOASTER\.minikube\machines\minikube\id_rsa Username:docker} I0816 09:46:17.535983 18084 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrole.yaml I0816 09:46:17.535983 18084 ssh_runner.go:362] scp dashboard/dashboard-clusterrole.yaml --> /etc/kubernetes/addons/dashboard-clusterrole.yaml (1001 bytes) I0816 09:46:17.546115 18084 addons.go:426] installing /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml I0816 09:46:17.546115 18084 ssh_runner.go:362] scp dashboard/dashboard-clusterrolebinding.yaml --> /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml (1018 bytes) I0816 09:46:17.556511 18084 addons.go:426] installing /etc/kubernetes/addons/dashboard-configmap.yaml I0816 09:46:17.556511 18084 ssh_runner.go:362] scp dashboard/dashboard-configmap.yaml --> /etc/kubernetes/addons/dashboard-configmap.yaml (837 bytes) I0816 09:46:17.567055 18084 addons.go:426] installing /etc/kubernetes/addons/dashboard-dp.yaml I0816 09:46:17.567055 18084 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/dashboard-dp.yaml (4288 bytes) I0816 09:46:17.577231 18084 addons.go:426] installing /etc/kubernetes/addons/dashboard-role.yaml I0816 09:46:17.577231 18084 ssh_runner.go:362] scp dashboard/dashboard-role.yaml --> /etc/kubernetes/addons/dashboard-role.yaml (1724 bytes) I0816 09:46:17.588897 18084 addons.go:426] installing /etc/kubernetes/addons/dashboard-rolebinding.yaml I0816 09:46:17.588897 18084 ssh_runner.go:362] scp dashboard/dashboard-rolebinding.yaml --> /etc/kubernetes/addons/dashboard-rolebinding.yaml (1046 bytes) I0816 09:46:17.598966 18084 addons.go:426] installing /etc/kubernetes/addons/dashboard-sa.yaml I0816 09:46:17.598966 18084 ssh_runner.go:362] scp dashboard/dashboard-sa.yaml --> /etc/kubernetes/addons/dashboard-sa.yaml (837 bytes) I0816 09:46:17.609375 18084 addons.go:426] installing /etc/kubernetes/addons/dashboard-secret.yaml I0816 09:46:17.609375 18084 ssh_runner.go:362] scp dashboard/dashboard-secret.yaml --> /etc/kubernetes/addons/dashboard-secret.yaml (1389 bytes) I0816 09:46:17.619675 18084 addons.go:426] installing /etc/kubernetes/addons/dashboard-svc.yaml I0816 09:46:17.619675 18084 ssh_runner.go:362] scp 
dashboard/dashboard-svc.yaml --> /etc/kubernetes/addons/dashboard-svc.yaml (1294 bytes) I0816 09:46:17.673589 18084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml I0816 09:46:17.724757 18084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml I0816 09:46:18.683016 18084 main.go:141] libmachine: [stdout =====>] : 192.168.1.103 I0816 09:46:18.683016 18084 main.go:141] libmachine: [stderr =====>] : I0816 09:46:18.683016 18084 sshutil.go:53] new ssh client: &{IP:192.168.1.103 Port:22 SSHKeyPath:C:\Users\rlyshw.TOASTER\.minikube\machines\minikube\id_rsa Username:docker} I0816 09:46:18.855126 18084 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml I0816 09:46:21.778907 18084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/dashboard-ns.yaml -f /etc/kubernetes/addons/dashboard-clusterrole.yaml -f /etc/kubernetes/addons/dashboard-clusterrolebinding.yaml -f /etc/kubernetes/addons/dashboard-configmap.yaml -f /etc/kubernetes/addons/dashboard-dp.yaml -f /etc/kubernetes/addons/dashboard-role.yaml -f /etc/kubernetes/addons/dashboard-rolebinding.yaml -f /etc/kubernetes/addons/dashboard-sa.yaml -f /etc/kubernetes/addons/dashboard-secret.yaml -f /etc/kubernetes/addons/dashboard-svc.yaml: (4.0541314s) I0816 09:46:21.780352 18084 out.go:177] 💡 Some dashboard features require the metrics-server addon. To enable all features please run: minikube addons enable metrics-server I0816 09:46:21.778907 18084 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.9237681s) I0816 09:46:21.784086 18084 out.go:177] 🌟 Enabled addons: storage-provisioner, dashboard, default-storageclass I0816 09:46:21.786273 18084 addons.go:505] duration metric: took 8.8998604s for enable addons: enabled=[storage-provisioner dashboard default-storageclass] I0816 09:46:21.786273 18084 start.go:245] waiting for cluster config update ... I0816 09:46:21.786273 18084 start.go:254] writing updated cluster config ... I0816 09:46:21.875338 18084 ssh_runner.go:195] Run: rm -f paused I0816 09:46:21.910265 18084 out.go:177] 💡 kubectl not found. If you need it, try: 'minikube kubectl -- get pods -A' I0816 09:46:21.911315 18084 out.go:177] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default ==> Docker <== Aug 16 13:46:05 minikube dockerd[403391]: time="2024-08-16T13:46:05.343406640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:05 minikube dockerd[403391]: time="2024-08-16T13:46:05.343465641Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:05 minikube dockerd[403391]: time="2024-08-16T13:46:05.358797070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 16 13:46:05 minikube dockerd[403391]: time="2024-08-16T13:46:05.358855671Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 16 13:46:05 minikube dockerd[403391]: time="2024-08-16T13:46:05.358881471Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:05 minikube dockerd[403391]: time="2024-08-16T13:46:05.358935672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:05 minikube dockerd[403391]: time="2024-08-16T13:46:05.362557802Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 16 13:46:05 minikube dockerd[403391]: time="2024-08-16T13:46:05.363414409Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 16 13:46:05 minikube dockerd[403391]: time="2024-08-16T13:46:05.363553311Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:05 minikube dockerd[403391]: time="2024-08-16T13:46:05.363663612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:07 minikube cri-dockerd[404372]: time="2024-08-16T13:46:07Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}" Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.240256124Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.240540526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.241067030Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.241118831Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.263949824Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.263992424Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.264003024Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.264058625Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.308430700Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.308473400Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.308487800Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.308541201Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.343457296Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.344081901Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.344149002Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.344256103Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.357062911Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.357126812Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.357146912Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.357210312Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.359641733Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.362927861Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.362940861Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.363000661Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.366567091Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.366676992Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.366730393Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.366973295Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.382698128Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.382733328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.382740428Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.382809129Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.386336658Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.386378559Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.386390059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.386439059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:08 minikube cri-dockerd[404372]: time="2024-08-16T13:46:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/99b296ba3d986ad32249339d86c09c42595cb5920d4dbaed5de7a15f699a865b/resolv.conf as [nameserver 192.168.1.1]" Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.610840656Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.610917557Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.610933457Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:46:08 minikube dockerd[403391]: time="2024-08-16T13:46:08.611009458Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:51:18 minikube dockerd[403385]: time="2024-08-16T13:51:18.918865908Z" level=info msg="ignoring event" container=8431e0a7c43ea07f818018df8785354bbb0a5529b77dc68c03af5bbf16b1cc02 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete" Aug 16 13:51:18 minikube dockerd[403391]: time="2024-08-16T13:51:18.919776516Z" level=info msg="shim disconnected" id=8431e0a7c43ea07f818018df8785354bbb0a5529b77dc68c03af5bbf16b1cc02 namespace=moby Aug 16 13:51:18 minikube dockerd[403391]: time="2024-08-16T13:51:18.920087319Z" level=warning msg="cleaning up after shim disconnected" id=8431e0a7c43ea07f818018df8785354bbb0a5529b77dc68c03af5bbf16b1cc02 namespace=moby Aug 16 13:51:18 minikube dockerd[403391]: time="2024-08-16T13:51:18.920117219Z" level=info msg="cleaning up dead shim" namespace=moby Aug 16 13:51:37 minikube dockerd[403391]: time="2024-08-16T13:51:37.886890918Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Aug 16 13:51:37 minikube dockerd[403391]: time="2024-08-16T13:51:37.887536724Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Aug 16 13:51:37 minikube dockerd[403391]: time="2024-08-16T13:51:37.887609224Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Aug 16 13:51:37 minikube dockerd[403391]: time="2024-08-16T13:51:37.887737825Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
==> container status <==
CONTAINER IMAGE CREATED STATE NAME ATTEMPT POD ID POD
acd4c5a7188d9 9cfcf4d72e564 47 seconds ago Running controller 4 ef4e8579c4f87 controlplane-cnmp-cnmp-chart-gtw-4ntfl-7ghtg-5c6686c665-r78qq
b3904ae55b6f5 6e38f40d628db 6 minutes ago Running storage-provisioner 12 99b296ba3d986 storage-provisioner
5caa8637c1de2 6e53747a73a04 6 minutes ago Running cert-manager-cainjector 2 c0b0b4bc82e85 cnmp-cert-manager-cainjector-77999c57f4-m977l
777f268efaa25 9095b2e049e11 6 minutes ago Running postgresql 1 39b425ef019d5 cnmp-postgresql-0
92d14f6a89a3d eed7100048ed4 6 minutes ago Running manager 2 35247d6517f44 cnmp-kgo-controller-manager-54df87f4d4-zk6vx
8431e0a7c43ea 9cfcf4d72e564 6 minutes ago Exited controller 3 ef4e8579c4f87 controlplane-cnmp-cnmp-chart-gtw-4ntfl-7ghtg-5c6686c665-r78qq
91c1c9f95d3ca cf975c3730660 6 minutes ago Running keycloak 1 ec45b890c24c7 cnmp-keycloak-0
7d53204cc4000 c5c6a16763cb7 6 minutes ago Running oauth2-proxy 1 613460bc1ac03 cnmp-oauth2-proxy-8d6c6ff98-lxcrs
2d855d5da74b4 07655ddf2eebe 6 minutes ago Running kubernetes-dashboard 9 78564dbe9c634 kubernetes-dashboard-779776cb65-bqxdz
519c1956680a9 a0bf559e280cf 6 minutes ago Running kube-proxy 5 53a008e2f6863 kube-proxy-b8lgl
b6bfd1733ef13 3861cfcd7c04c 6 minutes ago Running etcd 2 966a827920179 etcd-minikube
3e6dd8663c0f6 c42f13656d0b2 6 minutes ago Running kube-apiserver 3 ed1426a4d01f9 kube-apiserver-minikube
a7205e26ed9e0 c7aad43836fa5 6 minutes ago Running kube-controller-manager 9 55fe2ae3665d0 kube-controller-manager-minikube
a0b843ee6595c 71cba45f5492e 6 minutes ago Running mo 1 3b1e7c661d27b minio-operator-7b8bb7db5b-nfsbm
2647a9fa17dfd cbb01a7bd410d 6 minutes ago Running coredns 5 2f2c107c0bf81 coredns-7db6d8ff4d-l9cjz
990983e53213c 115053965e86b 6 minutes ago Running dashboard-metrics-scraper 4 bc1e576c1457a dashboard-metrics-scraper-b5fc48f67-2ngr5
37b758cb6c7e4 259c8277fcbbc 6 minutes ago Running kube-scheduler 6 f30ef70b07559 kube-scheduler-minikube
4c0396e33d245 6e53747a73a04 6 minutes ago Exited cert-manager-cainjector 1 c0b0b4bc82e85 cnmp-cert-manager-cainjector-77999c57f4-m977l
e4cb2337c92dc 110898f7403fe 6 minutes ago Running app 1 e84b8bf0a3235 cnmp-app-b84fb7ccf-t5tpj
9349a683fccce 07655ddf2eebe 6 minutes ago Exited kubernetes-dashboard 8 78564dbe9c634 kubernetes-dashboard-779776cb65-bqxdz
d659309e0093e 6e38f40d628db 6 minutes ago Exited storage-provisioner 11 b36e17224ad00 storage-provisioner
614cd00b04021 cf4e67952bb8c 6 minutes ago Running cert-manager-webhook 1 a5287b99d2b36 cnmp-cert-manager-webhook-667c4b7c54-fs62f
15868def86279 ad393d6a4d1b1 6 minutes ago Running kube-rbac-proxy 1 35247d6517f44 cnmp-kgo-controller-manager-54df87f4d4-zk6vx
31f0b3227d8b6 71cba45f5492e 6 minutes ago Running mo 1 51d0e0ae19129 minio-operator-7b8bb7db5b-krq5f
a8ead16e93051 cbb01a7bd410d 6 minutes ago Exited coredns 4 5749acba70668 coredns-7db6d8ff4d-l9cjz
aeb942683261e c42f13656d0b2 6 minutes ago Exited kube-apiserver 2 c467ae72fd265 kube-apiserver-minikube
b1ef547c3c7ac 8047104cfac02 6 minutes ago Running proxy 1 b2c0baae300a5 dataplane-cnmp-cnmp-chart-gtw-m4tfz-rqg8f-5cc6994589-xqfm5
80814fb4f2565 cf975c3730660 6 minutes ago Exited prepare-write-dirs 1 ec45b890c24c7 cnmp-keycloak-0
a67a428613f71 3861cfcd7c04c 6 minutes ago Exited etcd 1 a5fb020776eb5 etcd-minikube
27fa4c323f99e eed7100048ed4 6 minutes ago Exited manager 1 35247d6517f44 cnmp-kgo-controller-manager-54df87f4d4-zk6vx
798274a40fd16 c7aad43836fa5 6 minutes ago Exited kube-controller-manager 8 0bb71be713212 kube-controller-manager-minikube
f286b07685996 1b5d9daaccb1c 6 minutes ago Running cert-manager-controller 1 02229d4829124 cnmp-cert-manager-fd565d748-rsr6g
33fb99024cce6 259c8277fcbbc 6 minutes ago Exited kube-scheduler 5 55c4e5dedfcda kube-scheduler-minikube
624b4b48c7bf4 a0bf559e280cf 6 minutes ago Exited kube-proxy 4 9831e8c52d83c kube-proxy-b8lgl
d17fe204b53d0 cf975c3730660 13 hours ago Exited keycloak 0 b14f04d63a490 cnmp-keycloak-0
e2c2c96549ef2 9095b2e049e11 14 hours ago Exited postgresql 0 603ec755f6ceb cnmp-postgresql-0
144f446b405c7 8047104cfac02 14 hours ago Exited proxy 0 e0e481485900e dataplane-cnmp-cnmp-chart-gtw-m4tfz-rqg8f-5cc6994589-xqfm5
9db175eb7e7c1 1b5d9daaccb1c 14 hours ago Exited cert-manager-controller 0 902b129ed6dd1 cnmp-cert-manager-fd565d748-rsr6g
e1708865252e5 c5c6a16763cb7 14 hours ago Exited oauth2-proxy 0 5a69f61f724cd cnmp-oauth2-proxy-8d6c6ff98-lxcrs
d92ed599e8824 ad393d6a4d1b1 14 hours ago Exited kube-rbac-proxy 0 0d1cf50e05006 cnmp-kgo-controller-manager-54df87f4d4-zk6vx
d4c9421c4671c 71cba45f5492e 14 hours ago Exited mo 0 fb51f41a07ae6 minio-operator-7b8bb7db5b-krq5f
c4eb1556737d9 71cba45f5492e 14 hours ago Exited mo 0 ba38c4205e0ff minio-operator-7b8bb7db5b-nfsbm
f4bc93f1b6187 cf4e67952bb8c 14 hours ago Exited cert-manager-webhook 0 ed328e0d98f22 cnmp-cert-manager-webhook-667c4b7c54-fs62f
b87c1532dc87b 110898f7403fe 14 hours ago Exited app 0 079e94743a941 cnmp-app-b84fb7ccf-t5tpj
58e71e2c9335d 115053965e86b 16 hours ago Exited dashboard-metrics-scraper 3 edd84a9c2c20d dashboard-metrics-scraper-b5fc48f67-2ngr5
==> coredns [2647a9fa17df] <==
[INFO] 10.244.0.80:60198 - 14298 "AAAA IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.default.svc.cluster.local. udp 126 false 1232" NXDOMAIN qr,aa,rd 208 0.0000343s
[INFO] 10.244.0.80:42342 - 52049 "A IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.svc.cluster.local. udp 118 false 1232" NXDOMAIN qr,aa,rd 200 0.000038601s
[INFO] 10.244.0.80:38213 - 40936 "AAAA IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.svc.cluster.local. udp 118 false 1232" NXDOMAIN qr,aa,rd 200 0.0000272s
[INFO] 10.244.0.80:55933 - 57388 "A IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.cluster.local. udp 114 false 1232" NXDOMAIN qr,aa,rd 196 0.0000919s
[INFO] 10.244.0.80:45882 - 58157 "AAAA IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.cluster.local. udp 114 false 1232" NXDOMAIN qr,aa,rd 196 0.000091801s
[INFO] 10.244.0.80:47255 - 26019 "AAAA IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc. udp 100 false 1232" NOERROR qr,rd,ra 89 0.000714906s
[INFO] 10.244.0.80:38026 - 24522 "A IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc. udp 100 false 1232" NOERROR qr,rd,ra 89 0.000701906s
[INFO] 10.244.0.80:57162 - 18846 "A IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.default.svc.cluster.local. udp 126 false 1232" NXDOMAIN qr,aa,rd 208 0.000090201s
[INFO] 10.244.0.80:50691 - 26998 "AAAA IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.default.svc.cluster.local. udp 126 false 1232" NXDOMAIN qr,aa,rd 208 0.000080401s
[INFO] 10.244.0.80:58897 - 31637 "AAAA IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.svc.cluster.local.
udp 118 false 1232" NXDOMAIN qr,aa,rd 200 0.0000484s [INFO] 10.244.0.80:44306 - 16354 "A IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.svc.cluster.local. udp 118 false 1232" NXDOMAIN qr,aa,rd 200 0.000077701s [INFO] 10.244.0.80:42304 - 43097 "AAAA IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.cluster.local. udp 114 false 1232" NXDOMAIN qr,aa,rd 196 0.0000439s [INFO] 10.244.0.80:46384 - 52218 "A IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.cluster.local. udp 114 false 1232" NXDOMAIN qr,aa,rd 196 0.0000267s [INFO] 10.244.0.80:53696 - 24964 "A IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc. udp 100 false 1232" NOERROR qr,rd,ra 89 0.000739006s [INFO] 10.244.0.80:48141 - 13055 "AAAA IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc. udp 100 false 1232" NOERROR qr,rd,ra 89 0.000951208s [INFO] 10.244.0.80:39278 - 27777 "A IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.default.svc.cluster.local. udp 126 false 1232" NXDOMAIN qr,aa,rd 208 0.000064701s [INFO] 10.244.0.80:41035 - 17887 "AAAA IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.default.svc.cluster.local. udp 126 false 1232" NXDOMAIN qr,aa,rd 208 0.000047901s [INFO] 10.244.0.80:49829 - 22222 "AAAA IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.svc.cluster.local. udp 118 false 1232" NXDOMAIN qr,aa,rd 200 0.0000544s [INFO] 10.244.0.80:60863 - 42860 "A IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.svc.cluster.local. udp 118 false 1232" NXDOMAIN qr,aa,rd 200 0.000133101s [INFO] 10.244.0.80:57073 - 22333 "AAAA IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.cluster.local. udp 114 false 1232" NXDOMAIN qr,aa,rd 196 0.0000316s [INFO] 10.244.0.80:32774 - 5502 "A IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.cluster.local. udp 114 false 1232" NXDOMAIN qr,aa,rd 196 0.000059501s [INFO] 10.244.0.80:41058 - 41427 "AAAA IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc. udp 100 false 1232" NOERROR qr,rd,ra 89 0.000690106s [INFO] 10.244.0.80:45951 - 5372 "A IN 10-244-0-76.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc. udp 100 false 1232" NOERROR qr,rd,ra 89 0.000778907s [INFO] 10.244.0.80:59452 - 60732 "A IN 10-244-0-85.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.default.svc.cluster.local. udp 126 false 1232" NXDOMAIN qr,aa,rd 208 0.000109201s [INFO] 10.244.0.80:43925 - 65380 "AAAA IN 10-244-0-85.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.default.svc.cluster.local. udp 126 false 1232" NXDOMAIN qr,aa,rd 208 0.000180101s [INFO] 10.244.0.80:60881 - 49574 "A IN 10-244-0-85.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.svc.cluster.local. udp 118 false 1232" NXDOMAIN qr,aa,rd 200 0.0000641s [INFO] 10.244.0.80:51817 - 50629 "AAAA IN 10-244-0-85.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.svc.cluster.local. udp 118 false 1232" NXDOMAIN qr,aa,rd 200 0.0000481s [INFO] 10.244.0.80:57417 - 20222 "A IN 10-244-0-85.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.cluster.local. udp 114 false 1232" NOERROR qr,aa,rd 204 0.0000689s [INFO] 10.244.0.80:45385 - 38040 "AAAA IN 10-244-0-85.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.cluster.local. 
udp 114 false 1232" NOERROR qr,aa,rd 196 0.0000691s [INFO] 10.244.0.80:41987 - 56247 "AAAA IN kong-hf.konghq.com.default.svc.cluster.local. udp 73 false 1232" NXDOMAIN qr,aa,rd 155 0.000130601s [INFO] 10.244.0.80:45997 - 30953 "A IN kong-hf.konghq.com.default.svc.cluster.local. udp 73 false 1232" NXDOMAIN qr,aa,rd 155 0.000200202s [INFO] 10.244.0.80:49677 - 86 "AAAA IN kong-hf.konghq.com.svc.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.0000738s [INFO] 10.244.0.80:41362 - 53867 "A IN kong-hf.konghq.com.svc.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000109101s [INFO] 10.244.0.80:51434 - 2479 "AAAA IN kong-hf.konghq.com.cluster.local. udp 61 false 1232" NXDOMAIN qr,aa,rd 143 0.0000473s [INFO] 10.244.0.80:57899 - 60440 "A IN kong-hf.konghq.com.cluster.local. udp 61 false 1232" NXDOMAIN qr,aa,rd 143 0.000050501s [INFO] 10.244.0.80:36287 - 43357 "AAAA IN kong-hf.konghq.com. udp 47 false 1232" NOERROR qr,rd,ra 36 0.000721607s [INFO] 10.244.0.80:40498 - 24965 "A IN kong-hf.konghq.com. udp 47 false 1232" NOERROR qr,rd,ra 104 0.027265837s [INFO] 10.244.0.84:40960 - 20381 "AAAA IN cnmp-keycloak-headless.default.svc.cluster.local.default.svc.cluster.local. udp 92 false 512" NXDOMAIN qr,aa,rd 185 0.000121401s [INFO] 10.244.0.84:40960 - 58526 "A IN cnmp-keycloak-headless.default.svc.cluster.local.default.svc.cluster.local. udp 92 false 512" NXDOMAIN qr,aa,rd 185 0.000171701s [INFO] 10.244.0.84:38895 - 35570 "AAAA IN cnmp-keycloak-headless.default.svc.cluster.local.svc.cluster.local. udp 84 false 512" NXDOMAIN qr,aa,rd 177 0.000061501s [INFO] 10.244.0.84:38895 - 34032 "A IN cnmp-keycloak-headless.default.svc.cluster.local.svc.cluster.local. udp 84 false 512" NXDOMAIN qr,aa,rd 177 0.000055201s [INFO] 10.244.0.84:51405 - 28935 "AAAA IN cnmp-keycloak-headless.default.svc.cluster.local.cluster.local. udp 80 false 512" NXDOMAIN qr,aa,rd 173 0.0000537s [INFO] 10.244.0.84:51405 - 5547 "A IN cnmp-keycloak-headless.default.svc.cluster.local.cluster.local. udp 80 false 512" NXDOMAIN qr,aa,rd 173 0.000088001s [INFO] 10.244.0.84:58280 - 29342 "AAAA IN cnmp-keycloak-headless.default.svc.cluster.local. udp 66 false 512" NOERROR qr,aa,rd 159 0.000070701s [INFO] 10.244.0.84:58280 - 16935 "A IN cnmp-keycloak-headless.default.svc.cluster.local. udp 66 false 512" NOERROR qr,aa,rd 130 0.0000759s [INFO] 10.244.0.80:56336 - 54650 "AAAA IN 10-244-0-85.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.default.svc.cluster.local. udp 126 false 1232" NXDOMAIN qr,aa,rd 208 0.0000529s [INFO] 10.244.0.80:51694 - 47870 "A IN 10-244-0-85.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.default.svc.cluster.local. udp 126 false 1232" NXDOMAIN qr,aa,rd 208 0.0000302s [INFO] 10.244.0.80:33350 - 57289 "AAAA IN 10-244-0-85.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.svc.cluster.local. udp 118 false 1232" NXDOMAIN qr,aa,rd 200 0.0000241s [INFO] 10.244.0.80:38668 - 35308 "A IN 10-244-0-85.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.svc.cluster.local. udp 118 false 1232" NXDOMAIN qr,aa,rd 200 0.0000276s [INFO] 10.244.0.80:47814 - 9564 "AAAA IN 10-244-0-85.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.cluster.local. udp 114 false 1232" NOERROR qr,aa,rd 196 0.0000543s [INFO] 10.244.0.80:60500 - 9229 "A IN 10-244-0-85.dataplane-admin-cnmp-cnmp-chart-gtw-m4tfz-qrvvm.default.svc.cluster.local. udp 114 false 1232" NOERROR qr,aa,rd 204 0.0000411s [INFO] 10.244.0.85:59900 - 55499 "SRV IN kong-hf.konghq.com.default.svc.cluster.local. 
udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000130501s [INFO] 10.244.0.85:49490 - 15334 "SRV IN kong-hf.konghq.com.svc.cluster.local. udp 54 false 512" NXDOMAIN qr,aa,rd 147 0.0000534s [INFO] 10.244.0.85:60124 - 30589 "SRV IN kong-hf.konghq.com.cluster.local. udp 50 false 512" NXDOMAIN qr,aa,rd 143 0.000044901s [INFO] 10.244.0.85:50228 - 45097 "SRV IN kong-hf.konghq.com. udp 36 false 512" NOERROR qr,rd,ra 133 0.027794242s [INFO] 10.244.0.85:49789 - 10888 "A IN kong-hf.konghq.com.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.0000543s [INFO] 10.244.0.85:37835 - 64399 "A IN kong-hf.konghq.com.svc.cluster.local. udp 54 false 512" NXDOMAIN qr,aa,rd 147 0.000045801s [INFO] 10.244.0.85:43935 - 5189 "A IN kong-hf.konghq.com.cluster.local. udp 50 false 512" NXDOMAIN qr,aa,rd 143 0.0000436s [INFO] 10.244.0.85:46829 - 13951 "A IN kong-hf.konghq.com. udp 36 false 512" NOERROR qr,aa,rd,ra 104 0.0000475s [INFO] 10.244.0.85:44332 - 2172 "A IN kong-hf.konghq.com. udp 36 false 512" NOERROR qr,rd,ra 104 0.001040709s ==> coredns [a8ead16e9305] <== [INFO] SIGTERM: Shutting down servers then terminating [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [INFO] plugin/kubernetes: waiting for Kubernetes API before starting server [WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API .:53 [INFO] plugin/reload: Running configuration SHA512 = b87dcf6bfb51c5cfb3f9c8a0d7a5953778afc259e72563777d9c6118b606f39dbe33d6395a5792478452a819e4a204caccc3d28dce81905f1557c188cb04e663 CoreDNS-1.11.1 linux/amd64, go1.20.7, ae2bbc2 [INFO] plugin/health: Going into lameduck mode for 5s [INFO] 127.0.0.1:35969 - 62784 "HINFO IN 498699160972150162.450616272882034773. 
udp 55 false 512" NOERROR qr,rd,ra 130 0.010904406s ==> describe nodes <== Name: minikube Roles: control-plane Labels: beta.kubernetes.io/arch=amd64 beta.kubernetes.io/os=linux kubernetes.io/arch=amd64 kubernetes.io/hostname=minikube kubernetes.io/os=linux minikube.k8s.io/commit=5883c09216182566a63dff4c326a6fc9ed2982ff minikube.k8s.io/name=minikube minikube.k8s.io/primary=true minikube.k8s.io/updated_at=2024_08_15T13_58_02_0700 minikube.k8s.io/version=v1.33.1 node-role.kubernetes.io/control-plane= node.kubernetes.io/exclude-from-external-load-balancers= Annotations: kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock node.alpha.kubernetes.io/ttl: 0 volumes.kubernetes.io/controller-managed-attach-detach: true CreationTimestamp: Thu, 15 Aug 2024 17:57:59 +0000 Taints: Unschedulable: false Lease: HolderIdentity: minikube AcquireTime: RenewTime: Fri, 16 Aug 2024 13:52:16 +0000 Conditions: Type Status LastHeartbeatTime LastTransitionTime Reason Message ---- ------ ----------------- ------------------ ------ ------- MemoryPressure False Fri, 16 Aug 2024 13:51:14 +0000 Thu, 15 Aug 2024 17:57:58 +0000 KubeletHasSufficientMemory kubelet has sufficient memory available DiskPressure False Fri, 16 Aug 2024 13:51:14 +0000 Thu, 15 Aug 2024 17:57:58 +0000 KubeletHasNoDiskPressure kubelet has no disk pressure PIDPressure False Fri, 16 Aug 2024 13:51:14 +0000 Thu, 15 Aug 2024 17:57:58 +0000 KubeletHasSufficientPID kubelet has sufficient PID available Ready True Fri, 16 Aug 2024 13:51:14 +0000 Thu, 15 Aug 2024 21:46:13 +0000 KubeletReady kubelet is posting ready status Addresses: InternalIP: 192.168.1.103 Hostname: minikube Capacity: cpu: 2 ephemeral-storage: 17734596Ki hugepages-2Mi: 0 memory: 5923996Ki pods: 110 Allocatable: cpu: 2 ephemeral-storage: 17734596Ki hugepages-2Mi: 0 memory: 5923996Ki pods: 110 System Info: Machine ID: 68795ecd3edc4bb9aa0f27fe91d40746 System UUID: af5d1b84-ec16-9241-af88-add706e9ef8a Boot ID: 352276f5-6d64-4844-a80e-824b9efa5508 Kernel Version: 5.10.207 OS Image: Buildroot 2023.02.9 Operating System: linux Architecture: amd64 Container Runtime Version: docker://26.0.2 Kubelet Version: v1.30.0 Kube-Proxy Version: v1.30.0 PodCIDR: 10.244.0.0/24 PodCIDRs: 10.244.0.0/24 Non-terminated Pods: (21 in total) Namespace Name CPU Requests CPU Limits Memory Requests Memory Limits Age --------- ---- ------------ ---------- --------------- ------------- --- default cnmp-app-b84fb7ccf-t5tpj 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 13h default cnmp-cert-manager-cainjector-77999c57f4-m977l 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 13h default cnmp-cert-manager-fd565d748-rsr6g 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 13h default cnmp-cert-manager-webhook-667c4b7c54-fs62f 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 13h default cnmp-keycloak-0 500m (25%!)(MISSING) 750m (37%!)(MISSING) 512Mi (8%!)(MISSING) 768Mi (13%!)(MISSING) 12h default cnmp-kgo-controller-manager-54df87f4d4-zk6vx 15m (0%!)(MISSING) 1 (50%!)(MISSING) 192Mi (3%!)(MISSING) 384Mi (6%!)(MISSING) 13h default cnmp-oauth2-proxy-8d6c6ff98-lxcrs 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 13h default cnmp-postgresql-0 100m (5%!)(MISSING) 150m (7%!)(MISSING) 128Mi (2%!)(MISSING) 192Mi (3%!)(MISSING) 13h default controlplane-cnmp-cnmp-chart-gtw-4ntfl-7ghtg-5c6686c665-r78qq 100m (5%!)(MISSING) 200m (10%!)(MISSING) 20Mi (0%!)(MISSING) 100Mi (1%!)(MISSING) 13h default 
dataplane-cnmp-cnmp-chart-gtw-m4tfz-rqg8f-5cc6994589-xqfm5 100m (5%!)(MISSING) 1 (50%!)(MISSING) 20Mi (0%!)(MISSING) 1000Mi (17%!)(MISSING) 13h default minio-operator-7b8bb7db5b-krq5f 200m (10%!)(MISSING) 0 (0%!)(MISSING) 256Mi (4%!)(MISSING) 0 (0%!)(MISSING) 13h default minio-operator-7b8bb7db5b-nfsbm 200m (10%!)(MISSING) 0 (0%!)(MISSING) 256Mi (4%!)(MISSING) 0 (0%!)(MISSING) 13h kube-system coredns-7db6d8ff4d-l9cjz 100m (5%!)(MISSING) 0 (0%!)(MISSING) 70Mi (1%!)(MISSING) 170Mi (2%!)(MISSING) 19h kube-system etcd-minikube 100m (5%!)(MISSING) 0 (0%!)(MISSING) 100Mi (1%!)(MISSING) 0 (0%!)(MISSING) 15h kube-system kube-apiserver-minikube 250m (12%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 15h kube-system kube-controller-manager-minikube 200m (10%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 19h kube-system kube-proxy-b8lgl 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 19h kube-system kube-scheduler-minikube 100m (5%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 19h kube-system storage-provisioner 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 19h kubernetes-dashboard dashboard-metrics-scraper-b5fc48f67-2ngr5 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 17h kubernetes-dashboard kubernetes-dashboard-779776cb65-bqxdz 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 0 (0%!)(MISSING) 17h Allocated resources: (Total limits may be over 100 percent, i.e., overcommitted.) Resource Requests Limits -------- -------- ------ cpu 1965m (98%!)(MISSING) 3100m (155%!)(MISSING) memory 1554Mi (26%!)(MISSING) 2614Mi (45%!)(MISSING) ephemeral-storage 1100Mi (6%!)(MISSING) 4Gi (23%!)(MISSING) hugepages-2Mi 0 (0%!)(MISSING) 0 (0%!)(MISSING) Events: Type Reason Age From Message ---- ------ ---- ---- ------- Normal Starting 6m16s kube-proxy Normal Starting 6m20s kubelet Starting kubelet. 
Normal NodeHasSufficientMemory 6m20s (x8 over 6m20s) kubelet Node minikube status is now: NodeHasSufficientMemory Normal NodeHasNoDiskPressure 6m20s (x8 over 6m20s) kubelet Node minikube status is now: NodeHasNoDiskPressure Normal NodeHasSufficientPID 6m20s (x7 over 6m20s) kubelet Node minikube status is now: NodeHasSufficientPID Normal NodeAllocatableEnforced 6m20s kubelet Updated Node Allocatable limit across pods Normal RegisteredNode 6m4s node-controller Node minikube event: Registered Node minikube in Controller ==> dmesg <== [Aug15 22:14] systemd-fstab-generator[693]: Ignoring "noauto" option for root device [ +0.113224] systemd-fstab-generator[706]: Ignoring "noauto" option for root device [Aug15 22:15] systemd-fstab-generator[1078]: Ignoring "noauto" option for root device [ +0.040819] kauditd_printk_skb: 73 callbacks suppressed [ +1.185890] systemd-fstab-generator[1117]: Ignoring "noauto" option for root device [ +0.166361] systemd-fstab-generator[1129]: Ignoring "noauto" option for root device [ +0.185422] systemd-fstab-generator[1143]: Ignoring "noauto" option for root device [ +2.731927] systemd-fstab-generator[1347]: Ignoring "noauto" option for root device [ +0.166993] systemd-fstab-generator[1359]: Ignoring "noauto" option for root device [ +0.183253] systemd-fstab-generator[1371]: Ignoring "noauto" option for root device [ +0.383643] systemd-fstab-generator[1386]: Ignoring "noauto" option for root device [ +0.045224] kauditd_printk_skb: 183 callbacks suppressed [ +1.514965] systemd-fstab-generator[1493]: Ignoring "noauto" option for root device [ +4.106879] systemd-fstab-generator[1634]: Ignoring "noauto" option for root device [ +0.039434] kauditd_printk_skb: 34 callbacks suppressed [ +21.764428] kauditd_printk_skb: 52 callbacks suppressed [Aug15 22:16] systemd-fstab-generator[5315]: Ignoring "noauto" option for root device [ +0.120200] kauditd_printk_skb: 167 callbacks suppressed [ +6.473099] kauditd_printk_skb: 12 callbacks suppressed [Aug15 22:17] kauditd_printk_skb: 7 callbacks suppressed [Aug15 22:24] kauditd_printk_skb: 5 callbacks suppressed [Aug15 22:30] kauditd_printk_skb: 21 callbacks suppressed [ +16.594182] kauditd_printk_skb: 20 callbacks suppressed [Aug15 22:31] kauditd_printk_skb: 8 callbacks suppressed [Aug15 22:46] kauditd_printk_skb: 5 callbacks suppressed [ +5.214278] kauditd_printk_skb: 200 callbacks suppressed [Aug15 22:47] kauditd_printk_skb: 2 callbacks suppressed [Aug15 22:58] kauditd_printk_skb: 64 callbacks suppressed [Aug15 23:48] kauditd_printk_skb: 21 callbacks suppressed [Aug16 00:04] kauditd_printk_skb: 78 callbacks suppressed [Aug16 00:05] kauditd_printk_skb: 20 callbacks suppressed [Aug16 00:09] kauditd_printk_skb: 16 callbacks suppressed [ +32.672486] kauditd_printk_skb: 5 callbacks suppressed [ +10.617782] kauditd_printk_skb: 200 callbacks suppressed [Aug16 00:10] kauditd_printk_skb: 34 callbacks suppressed [Aug16 00:11] kauditd_printk_skb: 3 callbacks suppressed [Aug16 00:12] kauditd_printk_skb: 2 callbacks suppressed [Aug16 00:13] kauditd_printk_skb: 2 callbacks suppressed [ +56.168245] kauditd_printk_skb: 14 callbacks suppressed [Aug16 00:14] kauditd_printk_skb: 76 callbacks suppressed [Aug16 01:07] kauditd_printk_skb: 27 callbacks suppressed [Aug16 02:26] hrtimer: interrupt took 3662441 ns [Aug16 13:45] systemd-fstab-generator[402190]: Ignoring "noauto" option for root device [ +1.111346] systemd-fstab-generator[402226]: Ignoring "noauto" option for root device [ +0.269954] systemd-fstab-generator[402238]: Ignoring "noauto" option for 
root device [ +0.287361] systemd-fstab-generator[402252]: Ignoring "noauto" option for root device [ +5.416353] kauditd_printk_skb: 132 callbacks suppressed [ +7.785031] systemd-fstab-generator[403956]: Ignoring "noauto" option for root device [ +0.201582] systemd-fstab-generator[403968]: Ignoring "noauto" option for root device [ +0.203951] systemd-fstab-generator[403980]: Ignoring "noauto" option for root device [ +0.420072] systemd-fstab-generator[404065]: Ignoring "noauto" option for root device [ +1.512359] kauditd_printk_skb: 179 callbacks suppressed [ +0.078038] systemd-fstab-generator[405394]: Ignoring "noauto" option for root device [ +5.246710] kauditd_printk_skb: 308 callbacks suppressed [Aug16 13:46] kauditd_printk_skb: 13 callbacks suppressed [ +2.076128] systemd-fstab-generator[410345]: Ignoring "noauto" option for root device [ +3.691459] kauditd_printk_skb: 68 callbacks suppressed [ +4.865672] systemd-fstab-generator[411223]: Ignoring "noauto" option for root device [ +0.293275] kauditd_printk_skb: 42 callbacks suppressed [ +6.698005] kauditd_printk_skb: 12 callbacks suppressed ==> etcd [a67a428613f7] <== {"level":"warn","ts":"2024-08-16T13:45:52.288016Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."} {"level":"info","ts":"2024-08-16T13:45:52.288072Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.1.103:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.1.103:2380","--initial-cluster=minikube=https://192.168.1.103:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.1.103:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.1.103:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]} {"level":"info","ts":"2024-08-16T13:45:52.288122Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"} {"level":"warn","ts":"2024-08-16T13:45:52.288137Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. 
This is not recommended for production."} {"level":"info","ts":"2024-08-16T13:45:52.288142Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.1.103:2380"]} {"level":"info","ts":"2024-08-16T13:45:52.288158Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2024-08-16T13:45:52.28861Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.1.103:2379"]} {"level":"info","ts":"2024-08-16T13:45:52.288675Z","caller":"embed/etcd.go:308","msg":"starting an etcd server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.1.103:2380"],"listen-peer-urls":["https://192.168.1.103:2380"],"advertise-client-urls":["https://192.168.1.103:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.1.103:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"} {"level":"info","ts":"2024-08-16T13:45:52.304411Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"15.60653ms"} ==> etcd [b6bfd1733ef1] <== {"level":"warn","ts":"2024-08-16T13:46:05.467921Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. 
This is not recommended for production."} {"level":"info","ts":"2024-08-16T13:46:05.469086Z","caller":"etcdmain/etcd.go:73","msg":"Running: ","args":["etcd","--advertise-client-urls=https://192.168.1.103:2379","--cert-file=/var/lib/minikube/certs/etcd/server.crt","--client-cert-auth=true","--data-dir=/var/lib/minikube/etcd","--experimental-initial-corrupt-check=true","--experimental-watch-progress-notify-interval=5s","--initial-advertise-peer-urls=https://192.168.1.103:2380","--initial-cluster=minikube=https://192.168.1.103:2380","--key-file=/var/lib/minikube/certs/etcd/server.key","--listen-client-urls=https://127.0.0.1:2379,https://192.168.1.103:2379","--listen-metrics-urls=http://127.0.0.1:2381","--listen-peer-urls=https://192.168.1.103:2380","--name=minikube","--peer-cert-file=/var/lib/minikube/certs/etcd/peer.crt","--peer-client-cert-auth=true","--peer-key-file=/var/lib/minikube/certs/etcd/peer.key","--peer-trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt","--proxy-refresh-interval=70000","--snapshot-count=10000","--trusted-ca-file=/var/lib/minikube/certs/etcd/ca.crt"]} {"level":"info","ts":"2024-08-16T13:46:05.46917Z","caller":"etcdmain/etcd.go:116","msg":"server has been already initialized","data-dir":"/var/lib/minikube/etcd","dir-type":"member"} {"level":"warn","ts":"2024-08-16T13:46:05.469227Z","caller":"embed/config.go:679","msg":"Running http and grpc server on single port. This is not recommended for production."} {"level":"info","ts":"2024-08-16T13:46:05.469259Z","caller":"embed/etcd.go:127","msg":"configuring peer listeners","listen-peer-urls":["https://192.168.1.103:2380"]} {"level":"info","ts":"2024-08-16T13:46:05.469308Z","caller":"embed/etcd.go:494","msg":"starting with peer TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/peer.crt, key = /var/lib/minikube/certs/etcd/peer.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2024-08-16T13:46:05.469755Z","caller":"embed/etcd.go:135","msg":"configuring client listeners","listen-client-urls":["https://127.0.0.1:2379","https://192.168.1.103:2379"]} {"level":"info","ts":"2024-08-16T13:46:05.469881Z","caller":"embed/etcd.go:308","msg":"starting an etcd 
server","etcd-version":"3.5.12","git-sha":"e7b3bb6cc","go-version":"go1.20.13","go-os":"linux","go-arch":"amd64","max-cpu-set":2,"max-cpu-available":2,"member-initialized":true,"name":"minikube","data-dir":"/var/lib/minikube/etcd","wal-dir":"","wal-dir-dedicated":"","member-dir":"/var/lib/minikube/etcd/member","force-new-cluster":false,"heartbeat-interval":"100ms","election-timeout":"1s","initial-election-tick-advance":true,"snapshot-count":10000,"max-wals":5,"max-snapshots":5,"snapshot-catchup-entries":5000,"initial-advertise-peer-urls":["https://192.168.1.103:2380"],"listen-peer-urls":["https://192.168.1.103:2380"],"advertise-client-urls":["https://192.168.1.103:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.1.103:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"],"cors":["*"],"host-whitelist":["*"],"initial-cluster":"","initial-cluster-state":"new","initial-cluster-token":"","quota-backend-bytes":2147483648,"max-request-bytes":1572864,"max-concurrent-streams":4294967295,"pre-vote":true,"initial-corrupt-check":true,"corrupt-check-time-interval":"0s","compact-check-time-enabled":false,"compact-check-time-interval":"1m0s","auto-compaction-mode":"periodic","auto-compaction-retention":"0s","auto-compaction-interval":"0s","discovery-url":"","discovery-proxy":"","downgrade-check-interval":"5s"} {"level":"info","ts":"2024-08-16T13:46:05.476738Z","caller":"etcdserver/backend.go:81","msg":"opened backend db","path":"/var/lib/minikube/etcd/member/snap/db","took":"6.652756ms"} {"level":"info","ts":"2024-08-16T13:46:06.367932Z","caller":"etcdserver/server.go:511","msg":"recovered v2 store from snapshot","snapshot-index":150015,"snapshot-size":"8.2 kB"} {"level":"info","ts":"2024-08-16T13:46:06.367987Z","caller":"etcdserver/server.go:524","msg":"recovered v3 backend from snapshot","backend-size-bytes":20201472,"backend-size":"20 MB","backend-size-in-use-bytes":6135808,"backend-size-in-use":"6.1 MB"} {"level":"info","ts":"2024-08-16T13:46:06.450534Z","caller":"etcdserver/raft.go:530","msg":"restarting local member","cluster-id":"b27c1ce736eb2c61","local-member-id":"543df1ffee8cbcde","commit-index":151367} {"level":"info","ts":"2024-08-16T13:46:06.450704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"543df1ffee8cbcde switched to configuration voters=(6070273954286451934)"} {"level":"info","ts":"2024-08-16T13:46:06.450736Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"543df1ffee8cbcde became follower at term 5"} {"level":"info","ts":"2024-08-16T13:46:06.450747Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 543df1ffee8cbcde [peers: [543df1ffee8cbcde], term: 5, commit: 151367, applied: 150015, lastindex: 151367, lastterm: 5]"} {"level":"info","ts":"2024-08-16T13:46:06.450858Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"} {"level":"info","ts":"2024-08-16T13:46:06.450875Z","caller":"membership/cluster.go:278","msg":"recovered/added member from store","cluster-id":"b27c1ce736eb2c61","local-member-id":"543df1ffee8cbcde","recovered-remote-peer-id":"543df1ffee8cbcde","recovered-remote-peer-urls":["https://172.26.143.14:2380"]} {"level":"info","ts":"2024-08-16T13:46:06.450881Z","caller":"membership/cluster.go:287","msg":"set cluster version from store","cluster-version":"3.5"} {"level":"warn","ts":"2024-08-16T13:46:06.454685Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"} 
{"level":"info","ts":"2024-08-16T13:46:06.456382Z","caller":"mvcc/kvstore.go:341","msg":"restored last compact revision","meta-bucket-name":"meta","meta-bucket-name-key":"finishedCompactRev","restored-compact-revision":135206} {"level":"info","ts":"2024-08-16T13:46:06.459819Z","caller":"mvcc/kvstore.go:407","msg":"kvstore restored","current-rev":136436} {"level":"info","ts":"2024-08-16T13:46:06.462984Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"} {"level":"info","ts":"2024-08-16T13:46:06.466207Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"543df1ffee8cbcde","timeout":"7s"} {"level":"info","ts":"2024-08-16T13:46:06.466896Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"543df1ffee8cbcde"} {"level":"info","ts":"2024-08-16T13:46:06.466935Z","caller":"etcdserver/server.go:851","msg":"starting etcd server","local-member-id":"543df1ffee8cbcde","local-server-version":"3.5.12","cluster-id":"b27c1ce736eb2c61","cluster-version":"3.5"} {"level":"info","ts":"2024-08-16T13:46:06.467102Z","caller":"etcdserver/server.go:744","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"543df1ffee8cbcde","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"} {"level":"info","ts":"2024-08-16T13:46:06.4672Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"} {"level":"info","ts":"2024-08-16T13:46:06.467282Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"} {"level":"info","ts":"2024-08-16T13:46:06.46735Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"} {"level":"info","ts":"2024-08-16T13:46:06.468368Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]} {"level":"info","ts":"2024-08-16T13:46:06.468468Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"543df1ffee8cbcde","initial-advertise-peer-urls":["https://192.168.1.103:2380"],"listen-peer-urls":["https://192.168.1.103:2380"],"advertise-client-urls":["https://192.168.1.103:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.1.103:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]} {"level":"info","ts":"2024-08-16T13:46:06.46849Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"} {"level":"info","ts":"2024-08-16T13:46:06.468563Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.1.103:2380"} {"level":"info","ts":"2024-08-16T13:46:06.468577Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.1.103:2380"} {"level":"info","ts":"2024-08-16T13:46:06.851117Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"543df1ffee8cbcde is starting a new election at term 5"} 
{"level":"info","ts":"2024-08-16T13:46:06.851154Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"543df1ffee8cbcde became pre-candidate at term 5"} {"level":"info","ts":"2024-08-16T13:46:06.851179Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"543df1ffee8cbcde received MsgPreVoteResp from 543df1ffee8cbcde at term 5"} {"level":"info","ts":"2024-08-16T13:46:06.851189Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"543df1ffee8cbcde became candidate at term 6"} {"level":"info","ts":"2024-08-16T13:46:06.851194Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"543df1ffee8cbcde received MsgVoteResp from 543df1ffee8cbcde at term 6"} {"level":"info","ts":"2024-08-16T13:46:06.8512Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"543df1ffee8cbcde became leader at term 6"} {"level":"info","ts":"2024-08-16T13:46:06.851205Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 543df1ffee8cbcde elected leader 543df1ffee8cbcde at term 6"} {"level":"info","ts":"2024-08-16T13:46:06.858241Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"543df1ffee8cbcde","local-member-attributes":"{Name:minikube ClientURLs:[https://192.168.1.103:2379]}","request-path":"/0/members/543df1ffee8cbcde/attributes","cluster-id":"b27c1ce736eb2c61","publish-timeout":"7s"} {"level":"info","ts":"2024-08-16T13:46:06.85831Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"} {"level":"info","ts":"2024-08-16T13:46:06.858365Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"} {"level":"info","ts":"2024-08-16T13:46:06.858523Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"} {"level":"info","ts":"2024-08-16T13:46:06.858554Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"} {"level":"info","ts":"2024-08-16T13:46:06.859513Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.1.103:2379"} {"level":"info","ts":"2024-08-16T13:46:06.859719Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"} ==> kernel <== 13:52:24 up 15:38, 0 users, load average: 0.03, 0.73, 0.61 Linux minikube 5.10.207 #1 SMP Thu May 9 02:07:35 UTC 2024 x86_64 GNU/Linux PRETTY_NAME="Buildroot 2023.02.9" ==> kube-apiserver [3e6dd8663c0f] <== I0816 13:46:07.682822 1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator] I0816 13:46:07.682833 1 policy_source.go:224] refreshing policies I0816 13:46:07.712117 1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller I0816 13:46:07.712148 1 cache.go:39] Caches are synced for AvailableConditionController controller I0816 13:46:07.712923 1 apf_controller.go:379] Running API Priority and Fairness config worker I0816 13:46:07.712932 1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process I0816 13:46:07.713050 1 handler.go:286] Adding GroupVersion cert-manager.io v1 to ResourceManager I0816 13:46:07.713098 1 handler.go:286] Adding GroupVersion configuration.konghq.com v1beta1 to ResourceManager I0816 13:46:07.713112 1 handler.go:286] Adding GroupVersion acme.cert-manager.io v1 to ResourceManager I0816 13:46:07.713175 1 
handler.go:286] Adding GroupVersion sts.min.io v1alpha1 to ResourceManager I0816 13:46:07.713202 1 handler.go:286] Adding GroupVersion sts.min.io v1beta1 to ResourceManager I0816 13:46:07.713213 1 handler.go:286] Adding GroupVersion minio.min.io v2 to ResourceManager I0816 13:46:07.713230 1 handler.go:286] Adding GroupVersion gateway.networking.k8s.io v1 to ResourceManager I0816 13:46:07.713243 1 handler.go:286] Adding GroupVersion gateway.networking.k8s.io v1beta1 to ResourceManager I0816 13:46:07.713274 1 handler.go:286] Adding GroupVersion configuration.konghq.com v1alpha1 to ResourceManager I0816 13:46:07.713322 1 handler.go:286] Adding GroupVersion job.min.io v1alpha1 to ResourceManager I0816 13:46:07.713416 1 handler.go:286] Adding GroupVersion configuration.konghq.com v1 to ResourceManager I0816 13:46:07.713447 1 handler.go:286] Adding GroupVersion gateway-operator.konghq.com v1alpha1 to ResourceManager I0816 13:46:07.713466 1 handler.go:286] Adding GroupVersion gateway-operator.konghq.com v1beta1 to ResourceManager I0816 13:46:07.713516 1 cache.go:39] Caches are synced for APIServiceRegistrationController controller I0816 13:46:07.718058 1 shared_informer.go:320] Caches are synced for crd-autoregister I0816 13:46:07.718614 1 aggregator.go:165] initial CRD sync complete... I0816 13:46:07.718639 1 autoregister_controller.go:141] Starting autoregister controller I0816 13:46:07.718644 1 cache.go:32] Waiting for caches to sync for autoregister controller I0816 13:46:07.718647 1 cache.go:39] Caches are synced for autoregister controller I0816 13:46:07.737979 1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io I0816 13:46:07.779091 1 handler_discovery.go:447] Starting ResourceDiscoveryManager I0816 13:46:08.616428 1 storage_scheduling.go:111] all system priority classes are created successfully or already exist. 
E0816 13:46:08.919039 1 storage.go:475] Address {10.244.0.43 0xc00ec39580 0xc00fb7f960} isn't valid (pod ip(s) doesn't match endpoint ip, skipping: [{10.244.0.93}] vs 10.244.0.43 (kubernetes-dashboard/dashboard-metrics-scraper-b5fc48f67-2ngr5)) E0816 13:46:08.919060 1 storage.go:485] Failed to find a valid address, skipping subset: &{[{10.244.0.43 0xc00ec39580 0xc00fb7f960}] [] [{ 8000 TCP }]} I0816 13:46:09.674100 1 controller.go:615] quota admission added evaluator for: serviceaccounts I0816 13:46:09.688967 1 controller.go:615] quota admission added evaluator for: deployments.apps W0816 13:46:12.823652 1 dispatcher.go:205] Failed calling webhook, failing open services.validation.ingress-controller.konghq.com: failed calling webhook "services.validation.ingress-controller.konghq.com": failed to call webhook: Post "https://controlplane-webhook-cnmp-cnmp-chart-gtw-4ntfl-jlw82.default.svc:8080/?timeout=10s": dial tcp 10.106.214.108:8080: connect: no route to host E0816 13:46:12.823673 1 dispatcher.go:213] failed calling webhook "services.validation.ingress-controller.konghq.com": failed to call webhook: Post "https://controlplane-webhook-cnmp-cnmp-chart-gtw-4ntfl-jlw82.default.svc:8080/?timeout=10s": dial tcp 10.106.214.108:8080: connect: no route to host I0816 13:46:12.834723 1 trace.go:236] Trace[875608324]: "Update" accept:application/json, */*,audit-id:29a032ce-f35e-421a-83aa-6fd752895f2a,client:192.168.1.103,api-group:,api-version:v1,name:kube-dns,subresource:,namespace:kube-system,protocol:HTTP/2.0,resource:services,scope:resource,url:/api/v1/namespaces/kube-system/services/kube-dns,user-agent:kubeadm/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PUT (16-Aug-2024 13:46:09.706) (total time: 3127ms): Trace[875608324]: ["GuaranteedUpdate etcd3" audit-id:29a032ce-f35e-421a-83aa-6fd752895f2a,key:/services/specs/kube-system/kube-dns,type:*core.Service,resource:services 3127ms (13:46:09.706) Trace[875608324]: ---"About to Encode" 3117ms (13:46:12.824)] Trace[875608324]: ["Call validating webhook" configuration:cnmp-cnmp-chart-gtw-4ntfl,webhook:services.validation.ingress-controller.konghq.com,resource:/v1, Resource=services,subresource:,operation:UPDATE,UID:0086f786-b8a4-40fe-9ffd-747e5db7e771 3127ms (13:46:09.707)] Trace[875608324]: [3.127857151s] [3.127857151s] END I0816 13:46:12.850569 1 controller.go:615] quota admission added evaluator for: daemonsets.apps I0816 13:46:12.893007 1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io I0816 13:46:12.898481 1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io I0816 13:46:19.996711 1 controller.go:615] quota admission added evaluator for: endpoints I0816 13:46:20.100482 1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io W0816 13:46:21.651034 1 dispatcher.go:205] Failed calling webhook, failing open secrets.credentials.validation.ingress-controller.konghq.com: failed calling webhook "secrets.credentials.validation.ingress-controller.konghq.com": failed to call webhook: Post "https://controlplane-webhook-cnmp-cnmp-chart-gtw-4ntfl-jlw82.default.svc:8080/?timeout=10s": dial tcp 10.106.214.108:8080: connect: no route to host E0816 13:46:21.651067 1 dispatcher.go:213] failed calling webhook "secrets.credentials.validation.ingress-controller.konghq.com": failed to call webhook: Post "https://controlplane-webhook-cnmp-cnmp-chart-gtw-4ntfl-jlw82.default.svc:8080/?timeout=10s": dial tcp 10.106.214.108:8080: connect: no route to host 
W0816 13:46:21.651205 1 dispatcher.go:205] Failed calling webhook, failing open secrets.plugins.validation.ingress-controller.konghq.com: failed calling webhook "secrets.plugins.validation.ingress-controller.konghq.com": failed to call webhook: Post "https://controlplane-webhook-cnmp-cnmp-chart-gtw-4ntfl-jlw82.default.svc:8080/?timeout=10s": dial tcp 10.106.214.108:8080: connect: no route to host E0816 13:46:21.651241 1 dispatcher.go:213] failed calling webhook "secrets.plugins.validation.ingress-controller.konghq.com": failed to call webhook: Post "https://controlplane-webhook-cnmp-cnmp-chart-gtw-4ntfl-jlw82.default.svc:8080/?timeout=10s": dial tcp 10.106.214.108:8080: connect: no route to host I0816 13:46:21.654515 1 trace.go:236] Trace[1315130476]: "Patch" accept:application/json,audit-id:772671d4-b922-4952-a9bd-97b13fc53ce5,client:127.0.0.1,api-group:,api-version:v1,name:kubernetes-dashboard-csrf,subresource:,namespace:kubernetes-dashboard,protocol:HTTP/2.0,resource:secrets,scope:resource,url:/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf,user-agent:kubectl/v1.30.0 (linux/amd64) kubernetes/7c48c2b,verb:PATCH (16-Aug-2024 13:46:18.563) (total time: 3091ms): Trace[1315130476]: ["GuaranteedUpdate etcd3" audit-id:772671d4-b922-4952-a9bd-97b13fc53ce5,key:/secrets/kubernetes-dashboard/kubernetes-dashboard-csrf,type:*core.Secret,resource:secrets 3090ms (13:46:18.563) Trace[1315130476]: ---"About to Encode" 3088ms (13:46:21.652)] Trace[1315130476]: ["Call validating webhook" configuration:cnmp-cnmp-chart-gtw-4ntfl,webhook:secrets.plugins.validation.ingress-controller.konghq.com,resource:/v1, Resource=secrets,subresource:,operation:UPDATE,UID:ab792bce-37ba-4f82-9b31-b479624b309c 3090ms (13:46:18.564)] Trace[1315130476]: ["Call validating webhook" configuration:cnmp-cnmp-chart-gtw-4ntfl,webhook:secrets.credentials.validation.ingress-controller.konghq.com,resource:/v1, Resource=secrets,subresource:,operation:UPDATE,UID:8d788592-a9c1-4709-8c40-6cde38ec8991 3090ms (13:46:18.564)] Trace[1315130476]: [3.091011568s] [3.091011568s] END W0816 13:46:24.461719 1 dispatcher.go:205] Failed calling webhook, failing open services.validation.ingress-controller.konghq.com: failed calling webhook "services.validation.ingress-controller.konghq.com": failed to call webhook: Post "https://controlplane-webhook-cnmp-cnmp-chart-gtw-4ntfl-jlw82.default.svc:8080/?timeout=10s": dial tcp 10.106.214.108:8080: connect: connection refused E0816 13:46:24.461745 1 dispatcher.go:213] failed calling webhook "services.validation.ingress-controller.konghq.com": failed to call webhook: Post "https://controlplane-webhook-cnmp-cnmp-chart-gtw-4ntfl-jlw82.default.svc:8080/?timeout=10s": dial tcp 10.106.214.108:8080: connect: connection refused W0816 13:46:24.705982 1 dispatcher.go:217] Failed calling webhook, failing closed gateway-operator-validation.konghq.com: failed calling webhook "gateway-operator-validation.konghq.com": failed to call webhook: Post "https://gateway-operator-validating-webhook.default.svc:443/validate?timeout=5s": dial tcp 10.109.131.239:443: connect: connection refused W0816 13:46:24.719915 1 dispatcher.go:217] Failed calling webhook, failing closed gateway-operator-validation.konghq.com: failed calling webhook "gateway-operator-validation.konghq.com": failed to call webhook: Post "https://gateway-operator-validating-webhook.default.svc:443/validate?timeout=5s": dial tcp 10.109.131.239:443: connect: connection refused W0816 13:46:29.724816 1 dispatcher.go:217] Failed calling webhook, 
failing closed gateway-operator-validation.konghq.com: failed calling webhook "gateway-operator-validation.konghq.com": failed to call webhook: Post "https://gateway-operator-validating-webhook.default.svc:443/validate?timeout=5s": dial tcp 10.109.131.239:443: connect: connection refused I0816 13:46:34.731363 1 controller.go:615] quota admission added evaluator for: dataplanes.gateway-operator.konghq.com ==> kube-apiserver [aeb942683261] <== I0816 13:45:52.356775 1 options.go:221] external host was not specified, using 192.168.1.103 I0816 13:45:52.360317 1 server.go:148] Version: v1.30.0 I0816 13:45:52.360338 1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" I0816 13:45:53.111298 1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook. I0816 13:45:53.111312 1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota. I0816 13:45:53.111390 1 instance.go:299] Using reconciler: lease I0816 13:45:53.111635 1 shared_informer.go:313] Waiting for caches to sync for node_authorizer I0816 13:45:53.111675 1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator] W0816 13:45:53.112888 1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused" W0816 13:45:53.113081 1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused" W0816 13:45:53.113104 1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. 
Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"

==> kube-controller-manager [798274a40fd1] <==

==> kube-controller-manager [a7205e26ed9e] <==
I0816 13:46:19.914826 1 shared_informer.go:320] Caches are synced for TTL after finished
I0816 13:46:19.919854 1 shared_informer.go:320] Caches are synced for namespace
I0816 13:46:19.921830 1 shared_informer.go:320] Caches are synced for node
I0816 13:46:19.921916 1 range_allocator.go:175] "Sending events to api server" logger="node-ipam-controller"
I0816 13:46:19.921980 1 range_allocator.go:179] "Starting range CIDR allocator" logger="node-ipam-controller"
I0816 13:46:19.922021 1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
I0816 13:46:19.922060 1 shared_informer.go:320] Caches are synced for cidrallocator
I0816 13:46:19.922174 1 shared_informer.go:320] Caches are synced for disruption
I0816 13:46:19.924888 1 shared_informer.go:320] Caches are synced for PV protection
I0816 13:46:19.926423 1 shared_informer.go:320] Caches are synced for job
I0816 13:46:19.927554 1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
I0816 13:46:19.927647 1 shared_informer.go:320] Caches are synced for certificate-csrsigning-legacy-unknown
I0816 13:46:19.927616 1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
I0816 13:46:19.930832 1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
I0816 13:46:19.930976 1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
I0816 13:46:19.933816 1 shared_informer.go:320] Caches are synced for ReplicationController
I0816 13:46:19.937832 1 shared_informer.go:320] Caches are synced for cronjob
I0816 13:46:19.941376 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cnmp-kgo-controller-manager-54df87f4d4" duration="32.667477ms"
I0816 13:46:19.941534 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cnmp-kgo-controller-manager-54df87f4d4" duration="59µs"
I0816 13:46:19.943101 1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
I0816 13:46:19.945568 1 shared_informer.go:320] Caches are synced for deployment
I0816 13:46:19.949180 1 shared_informer.go:320] Caches are synced for endpoint
I0816 13:46:19.950488 1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
I0816 13:46:19.955108 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/controlplane-cnmp-cnmp-chart-gtw-4ntfl-7ghtg-5c6686c665" duration="46.09359ms"
I0816 13:46:19.957224 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/controlplane-cnmp-cnmp-chart-gtw-4ntfl-7ghtg-5c6686c665" duration="37.9µs"
I0816 13:46:19.951685 1 shared_informer.go:320] Caches are synced for TTL
I0816 13:46:19.955142 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/dataplane-cnmp-cnmp-chart-gtw-m4tfz-rqg8f-5cc6994589" duration="46.03959ms"
I0816 13:46:19.957791 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/dataplane-cnmp-cnmp-chart-gtw-m4tfz-rqg8f-5cc6994589" duration="27µs"
I0816 13:46:19.957745 1 shared_informer.go:320] Caches are synced for certificate-csrapproving
I0816 13:46:19.956324 1 shared_informer.go:320] Caches are synced for taint-eviction-controller
I0816 13:46:19.965846 1 shared_informer.go:320] Caches are synced for HPA
I0816 13:46:19.977054 1 shared_informer.go:320] Caches are synced for daemon sets
I0816 13:46:20.041702 1 shared_informer.go:320] Caches are synced for bootstrap_signer
I0816 13:46:20.044855 1 shared_informer.go:320] Caches are synced for crt configmap
I0816 13:46:20.064467 1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
I0816 13:46:20.077379 1 shared_informer.go:320] Caches are synced for endpoint_slice
I0816 13:46:20.079305 1 shared_informer.go:320] Caches are synced for ephemeral
I0816 13:46:20.108700 1 shared_informer.go:320] Caches are synced for stateful set
I0816 13:46:20.111232 1 shared_informer.go:320] Caches are synced for persistent volume
I0816 13:46:20.122157 1 shared_informer.go:320] Caches are synced for attach detach
I0816 13:46:20.123498 1 shared_informer.go:320] Caches are synced for expand
I0816 13:46:20.162762 1 shared_informer.go:320] Caches are synced for PVC protection
I0816 13:46:20.201605 1 shared_informer.go:320] Caches are synced for resource quota
I0816 13:46:20.210137 1 shared_informer.go:320] Caches are synced for taint
I0816 13:46:20.210204 1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
I0816 13:46:20.210252 1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="minikube"
I0816 13:46:20.210325 1 node_lifecycle_controller.go:1073] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
I0816 13:46:20.258100 1 shared_informer.go:320] Caches are synced for resource quota
I0816 13:46:20.590696 1 shared_informer.go:320] Caches are synced for garbage collector
I0816 13:46:20.635496 1 shared_informer.go:320] Caches are synced for garbage collector
I0816 13:46:20.635526 1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
I0816 13:46:30.182363 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cnmp-kgo-controller-manager-54df87f4d4" duration="19.830168ms"
I0816 13:46:30.183509 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/cnmp-kgo-controller-manager-54df87f4d4" duration="40µs"
I0816 13:51:19.462588 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/controlplane-cnmp-cnmp-chart-gtw-4ntfl-7ghtg-5c6686c665" duration="29.6µs"
I0816 13:51:26.174435 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/controlplane-cnmp-cnmp-chart-gtw-4ntfl-7ghtg-5c6686c665" duration="31.4µs"
I0816 13:51:38.642631 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/controlplane-cnmp-cnmp-chart-gtw-4ntfl-7ghtg-5c6686c665" duration="33.001µs"
I0816 13:51:46.381705 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/controlplane-cnmp-cnmp-chart-gtw-4ntfl-7ghtg-5c6686c665" duration="7.132062ms"
I0816 13:51:46.381810 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/controlplane-cnmp-cnmp-chart-gtw-4ntfl-7ghtg-5c6686c665" duration="41.401µs"
I0816 13:52:07.891247 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/dataplane-cnmp-cnmp-chart-gtw-m4tfz-rqg8f-5cc6994589" duration="8.073571ms"
I0816 13:52:07.891550 1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/dataplane-cnmp-cnmp-chart-gtw-m4tfz-rqg8f-5cc6994589" duration="29.501µs"

==> kube-proxy [519c1956680a] <==
I0816 13:46:08.320892 1 server_linux.go:69] "Using iptables proxy"
I0816 13:46:08.365010 1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.168.1.103"]
I0816 13:46:08.462404 1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
I0816 13:46:08.463568 1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
I0816 13:46:08.463618 1 server_linux.go:165] "Using iptables Proxier"
I0816 13:46:08.466133 1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
I0816 13:46:08.477966 1 server.go:872] "Version info" version="v1.30.0"
I0816 13:46:08.478210 1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
I0816 13:46:08.478772 1 config.go:192] "Starting service config controller"
I0816 13:46:08.478889 1 shared_informer.go:313] Waiting for caches to sync for service config
I0816 13:46:08.478938 1 config.go:101] "Starting endpoint slice config controller"
I0816 13:46:08.479057 1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
I0816 13:46:08.479906 1 config.go:319] "Starting node config controller"
I0816 13:46:08.484444 1 shared_informer.go:313] Waiting for caches to sync for node config
I0816 13:46:08.579851 1 shared_informer.go:320] Caches are synced for endpoint slice config
I0816 13:46:08.579933 1 shared_informer.go:320] Caches are synced for service config
I0816 13:46:08.585041 1 shared_informer.go:320] Caches are synced for node config

==> kube-proxy [624b4b48c7bf] <==
I0816 13:45:50.859929 1 server_linux.go:69] "Using iptables proxy"
E0816 13:45:50.863874 1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/minikube\": dial tcp 192.168.1.103:8443: connect: connection refused"
E0816 13:45:52.171698 1 server.go:1051] "Failed to retrieve node info" err="Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/minikube\": dial tcp 192.168.1.103:8443: connect: connection refused"

==> kube-scheduler [33fb99024cce] <==
I0816 13:45:51.697192 1 serving.go:380] Generated self-signed cert in-memory

==> kube-scheduler [37b758cb6c7e] <==
E0816 13:45:57.656572 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.1.103:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:45:57.806076 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.1.103:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:45:57.806104 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.1.103:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:45:58.045765 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.1.103:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:45:58.045826 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.1.103:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:45:58.337363 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.1.103:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:45:58.337402 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.1.103:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:45:58.567205 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.1.103:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:45:58.567263 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.1.103:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:45:58.637167 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.1.103:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:45:58.637213 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.1.103:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:45:58.652619 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.1.103:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:45:58.652638 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.1.103:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:45:58.682237 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.1.103:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:45:58.682274 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.1.103:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:45:58.752893 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.1.103:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:45:58.752933 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.1.103:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:45:58.871674 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.1.103:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:45:58.871743 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.1.103:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:45:58.970268 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.1.103:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:45:58.970310 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.1.103:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:45:59.019363 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.1.103:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:45:59.019448 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.1.103:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:45:59.127375 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.1.103:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:45:59.127442 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.1.103:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:45:59.371614 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.1.103:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:45:59.371637 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.1.103:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:45:59.380017 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.1.103:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:45:59.380074 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.1.103:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:46:01.144995 1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: Get "https://192.168.1.103:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:46:01.145020 1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.1.103:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:46:01.418467 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: Get "https://192.168.1.103:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:46:01.418507 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: Get "https://192.168.1.103:8443/api/v1/replicationcontrollers?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:46:01.882427 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://192.168.1.103:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:46:01.882478 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://192.168.1.103:8443/api/v1/nodes?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:46:02.246350 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: Get "https://192.168.1.103:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:46:02.246378 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.1.103:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:46:02.283008 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: Get "https://192.168.1.103:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:46:02.283045 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: Get "https://192.168.1.103:8443/api/v1/persistentvolumes?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:46:02.733727 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: Get "https://192.168.1.103:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:46:02.733752 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.1.103:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:46:02.873921 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: Get "https://192.168.1.103:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:46:02.874080 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: Get "https://192.168.1.103:8443/api/v1/pods?fieldSelector=status.phase%3DSucceeded%!C(MISSING)status.phase%3DFailed&limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:46:03.028559 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: Get "https://192.168.1.103:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:46:03.028596 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.1.103:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:46:03.173506 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: Get "https://192.168.1.103:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:46:03.173543 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: Get "https://192.168.1.103:8443/apis/apps/v1/statefulsets?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:46:03.481045 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: Get "https://192.168.1.103:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:46:03.481085 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: Get "https://192.168.1.103:8443/apis/storage.k8s.io/v1/storageclasses?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:46:03.701282 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: Get "https://192.168.1.103:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:46:03.701319 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://192.168.1.103:8443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:46:03.737722 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://192.168.1.103:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:46:03.737774 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.1.103:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:46:03.965529 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.1.103:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:46:03.965565 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: Get "https://192.168.1.103:8443/api/v1/persistentvolumeclaims?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:46:04.338369 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://192.168.1.103:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:46:04.338403 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.1.103:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
W0816 13:46:04.476269 1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: Get "https://192.168.1.103:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
E0816 13:46:04.476300 1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: Get "https://192.168.1.103:8443/apis/apps/v1/replicasets?limit=500&resourceVersion=0": dial tcp 192.168.1.103:8443: connect: connection refused
I0816 13:46:07.881846 1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file

==> kubelet <==
Aug 16 13:46:07 minikube kubelet[410352]: I0816 13:46:07.791085 410352 topology_manager.go:215] "Topology Admit Handler" podUID="fef04079-fe6c-435e-a8c1-7f930278e60a" podNamespace="default" podName="cnmp-keycloak-0"
Aug 16 13:46:07 minikube kubelet[410352]: I0816 13:46:07.813571 410352 kubelet_node_status.go:112] "Node was previously registered" node="minikube"
Aug 16 13:46:07 minikube kubelet[410352]: I0816 13:46:07.813704 410352 kubelet_node_status.go:76] "Successfully registered node" node="minikube"
Aug 16 13:46:07 minikube kubelet[410352]: I0816 13:46:07.814391 410352 kuberuntime_manager.go:1523] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
Aug 16 13:46:07 minikube kubelet[410352]: I0816 13:46:07.814904 410352 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
Aug 16 13:46:07 minikube kubelet[410352]: I0816 13:46:07.901879 410352 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
Aug 16 13:46:07 minikube kubelet[410352]: I0816 13:46:07.959076 410352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/61e112d0-de76-467d-8ba5-ab6f6f49bbf6-tmp\") pod \"storage-provisioner\" (UID: \"61e112d0-de76-467d-8ba5-ab6f6f49bbf6\") " pod="kube-system/storage-provisioner"
Aug 16 13:46:07 minikube kubelet[410352]: I0816 13:46:07.959171 410352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f13f6c1b-d84b-4ce9-a8e3-99c853e6aec4\" (UniqueName: \"kubernetes.io/host-path/22de8066-1b9c-40ed-976d-508902453d8d-pvc-f13f6c1b-d84b-4ce9-a8e3-99c853e6aec4\") pod \"cnmp-postgresql-0\" (UID: \"22de8066-1b9c-40ed-976d-508902453d8d\") " pod="default/cnmp-postgresql-0"
Aug 16 13:46:07 minikube kubelet[410352]: I0816 13:46:07.959271 410352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/cdc977a3-4771-4ebd-8900-0af885aaa3bd-xtables-lock\") pod \"kube-proxy-b8lgl\" (UID: \"cdc977a3-4771-4ebd-8900-0af885aaa3bd\") " pod="kube-system/kube-proxy-b8lgl"
Aug 16 13:46:07 minikube kubelet[410352]: I0816 13:46:07.959358 410352 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/cdc977a3-4771-4ebd-8900-0af885aaa3bd-lib-modules\") pod \"kube-proxy-b8lgl\" (UID: \"cdc977a3-4771-4ebd-8900-0af885aaa3bd\") " pod="kube-system/kube-proxy-b8lgl"
Aug 16 13:46:08 minikube kubelet[410352]: I0816 13:46:08.090621 410352 scope.go:117] "RemoveContainer" containerID="624b4b48c7bf4c20cb5161760c181c5b33c711f2f2f5bac0f8d645bc7ea5083f"
Aug 16 13:46:08 minikube kubelet[410352]: I0816 13:46:08.091722 410352 scope.go:117] "RemoveContainer" containerID="9349a683fccce13cb5997b42f5e13466d93bae5659188d5002c58bbdab933f5f"
Aug 16 13:46:08 minikube kubelet[410352]: I0816 13:46:08.091969 410352 scope.go:117] "RemoveContainer" containerID="e1708865252e5cef4284cd368f3b41c719b2597dd47dc3a4f94fbf8b4bca3b1f"
Aug 16 13:46:08 minikube kubelet[410352]: I0816 13:46:08.092166 410352 scope.go:117] "RemoveContainer" containerID="27fa4c323f99e55a30ac347cf9906dcbbd240fa72a30d46c19fec5bab472e487"
Aug 16 13:46:08 minikube kubelet[410352]: I0816 13:46:08.092691 410352 scope.go:117] "RemoveContainer" containerID="d17fe204b53d070c76a8df5d60a8e4ddd034ed985d4923e99b57cf576f11c6fe"
Aug 16 13:46:08 minikube kubelet[410352]: I0816 13:46:08.093297 410352 scope.go:117] "RemoveContainer" containerID="f9e7fe38abc43ba757cc63a74c9e80f02938a4d6ae827180d6d10710e2e14e42"
Aug 16 13:46:08 minikube kubelet[410352]: I0816 13:46:08.094353 410352 scope.go:117] "RemoveContainer" containerID="4c0396e33d2453e2593918cc9f7b5e1fde07ab3972813b5434000d71be172f97"
Aug 16 13:46:08 minikube kubelet[410352]: I0816 13:46:08.097919 410352 scope.go:117] "RemoveContainer" containerID="e2c2c96549ef23ca4a418e04c9399367c79ef4c71fb8fdd88a7faf9023b3e8c4"
Aug 16 13:46:08 minikube kubelet[410352]: I0816 13:46:08.524120 410352 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="99b296ba3d986ad32249339d86c09c42595cb5920d4dbaed5de7a15f699a865b"
Aug 16 13:46:08 minikube kubelet[410352]: E0816 13:46:08.634963 410352 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"kube-apiserver-minikube\" already exists" pod="kube-system/kube-apiserver-minikube"
Aug 16 13:46:08 minikube kubelet[410352]: E0816 13:46:08.643436 410352 kubelet.go:1928] "Failed creating a mirror pod for" err="pods \"etcd-minikube\" already exists" pod="kube-system/etcd-minikube"
Aug 16 13:46:11 minikube kubelet[410352]: I0816 13:46:11.271284 410352 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 16 13:46:13 minikube kubelet[410352]: I0816 13:46:13.636144 410352 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 16 13:46:16 minikube kubelet[410352]: I0816 13:46:16.363838 410352 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 16 13:47:04 minikube kubelet[410352]: E0816 13:47:04.837240 410352 iptables.go:577] "Could not set up iptables canary" err=<
Aug 16 13:47:04 minikube kubelet[410352]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Aug 16 13:47:04 minikube kubelet[410352]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 16 13:47:04 minikube kubelet[410352]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 16 13:47:04 minikube kubelet[410352]: > table="nat" chain="KUBE-KUBELET-CANARY"
Aug 16 13:48:04 minikube kubelet[410352]: E0816 13:48:04.835979 410352 iptables.go:577] "Could not set up iptables canary" err=<
Aug 16 13:48:04 minikube kubelet[410352]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Aug 16 13:48:04 minikube kubelet[410352]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 16 13:48:04 minikube kubelet[410352]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 16 13:48:04 minikube kubelet[410352]: > table="nat" chain="KUBE-KUBELET-CANARY"
Aug 16 13:49:04 minikube kubelet[410352]: E0816 13:49:04.836357 410352 iptables.go:577] "Could not set up iptables canary" err=<
Aug 16 13:49:04 minikube kubelet[410352]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Aug 16 13:49:04 minikube kubelet[410352]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 16 13:49:04 minikube kubelet[410352]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 16 13:49:04 minikube kubelet[410352]: > table="nat" chain="KUBE-KUBELET-CANARY"
Aug 16 13:50:04 minikube kubelet[410352]: E0816 13:50:04.836868 410352 iptables.go:577] "Could not set up iptables canary" err=<
Aug 16 13:50:04 minikube kubelet[410352]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Aug 16 13:50:04 minikube kubelet[410352]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 16 13:50:04 minikube kubelet[410352]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 16 13:50:04 minikube kubelet[410352]: > table="nat" chain="KUBE-KUBELET-CANARY"
Aug 16 13:51:04 minikube kubelet[410352]: E0816 13:51:04.838699 410352 iptables.go:577] "Could not set up iptables canary" err=<
Aug 16 13:51:04 minikube kubelet[410352]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Aug 16 13:51:04 minikube kubelet[410352]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 16 13:51:04 minikube kubelet[410352]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 16 13:51:04 minikube kubelet[410352]: > table="nat" chain="KUBE-KUBELET-CANARY"
Aug 16 13:51:19 minikube kubelet[410352]: I0816 13:51:19.450942 410352 scope.go:117] "RemoveContainer" containerID="f9e7fe38abc43ba757cc63a74c9e80f02938a4d6ae827180d6d10710e2e14e42"
Aug 16 13:51:19 minikube kubelet[410352]: I0816 13:51:19.451155 410352 scope.go:117] "RemoveContainer" containerID="8431e0a7c43ea07f818018df8785354bbb0a5529b77dc68c03af5bbf16b1cc02"
Aug 16 13:51:19 minikube kubelet[410352]: E0816 13:51:19.451837 410352 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller pod=controlplane-cnmp-cnmp-chart-gtw-4ntfl-7ghtg-5c6686c665-r78qq_default(88cf00e0-69ba-4cce-a4f3-b26f704f1b8e)\"" pod="default/controlplane-cnmp-cnmp-chart-gtw-4ntfl-7ghtg-5c6686c665-r78qq" podUID="88cf00e0-69ba-4cce-a4f3-b26f704f1b8e"
Aug 16 13:51:26 minikube kubelet[410352]: I0816 13:51:26.164706 410352 scope.go:117] "RemoveContainer" containerID="8431e0a7c43ea07f818018df8785354bbb0a5529b77dc68c03af5bbf16b1cc02"
Aug 16 13:51:26 minikube kubelet[410352]: E0816 13:51:26.165066 410352 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=controller pod=controlplane-cnmp-cnmp-chart-gtw-4ntfl-7ghtg-5c6686c665-r78qq_default(88cf00e0-69ba-4cce-a4f3-b26f704f1b8e)\"" pod="default/controlplane-cnmp-cnmp-chart-gtw-4ntfl-7ghtg-5c6686c665-r78qq" podUID="88cf00e0-69ba-4cce-a4f3-b26f704f1b8e"
Aug 16 13:51:37 minikube kubelet[410352]: I0816 13:51:37.828994 410352 scope.go:117] "RemoveContainer" containerID="8431e0a7c43ea07f818018df8785354bbb0a5529b77dc68c03af5bbf16b1cc02"
Aug 16 13:52:04 minikube kubelet[410352]: E0816 13:52:04.836282 410352 iptables.go:577] "Could not set up iptables canary" err=<
Aug 16 13:52:04 minikube kubelet[410352]: error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
Aug 16 13:52:04 minikube kubelet[410352]: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
Aug 16 13:52:04 minikube kubelet[410352]: Perhaps ip6tables or your kernel needs to be upgraded.
Aug 16 13:52:04 minikube kubelet[410352]: > table="nat" chain="KUBE-KUBELET-CANARY"

==> kubernetes-dashboard [2d855d5da74b] <==
2024/08/16 13:46:08 Starting overwatch
2024/08/16 13:46:08 Using namespace: kubernetes-dashboard
2024/08/16 13:46:08 Using in-cluster config to connect to apiserver
2024/08/16 13:46:08 Using secret token for csrf signing
2024/08/16 13:46:08 Initializing csrf token from kubernetes-dashboard-csrf secret
2024/08/16 13:46:08 Successful initial request to the apiserver, version: v1.30.0
2024/08/16 13:46:08 Generating JWE encryption key
2024/08/16 13:46:08 New synchronizer has been registered: kubernetes-dashboard-key-holder-kubernetes-dashboard. Starting
2024/08/16 13:46:08 Starting secret synchronizer for kubernetes-dashboard-key-holder in namespace kubernetes-dashboard
2024/08/16 13:46:08 Initializing JWE encryption key from synchronized object
2024/08/16 13:46:08 Creating in-cluster Sidecar client
2024/08/16 13:46:08 Serving insecurely on HTTP port: 9090
2024/08/16 13:46:08 Metric client health check failed: the server is currently unable to handle the request (get services dashboard-metrics-scraper). Retrying in 30 seconds.
2024/08/16 13:46:38 Successful request to sidecar

==> kubernetes-dashboard [9349a683fccc] <==
2024/08/16 13:45:54 Starting overwatch
panic: Get "https://10.96.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 10.96.0.1:443: connect: connection refused
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0xc0004cfae8)
	/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x30e
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
	/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0xc0001c4780)
	/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:527 +0x94
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0x19aba3a?)
	/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:495 +0x32
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
	/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:594
main.main()
	/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:96 +0x1cf
2024/08/16 13:45:54 Using namespace: kubernetes-dashboard
2024/08/16 13:45:54 Using in-cluster config to connect to apiserver
2024/08/16 13:45:54 Using secret token for csrf signing
2024/08/16 13:45:54 Initializing csrf token from kubernetes-dashboard-csrf secret

==> storage-provisioner [b3904ae55b6f] <==
I0816 13:46:08.687678 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
I0816 13:46:08.700376 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
I0816 13:46:08.701823 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
I0816 13:46:26.117167 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
I0816 13:46:26.117355 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"17e31045-658f-4c03-8502-af271be46e86", APIVersion:"v1", ResourceVersion:"136629", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_6e26a28d-1375-4da7-9312-d6c1820b1e54 became leader
I0816 13:46:26.117378 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_6e26a28d-1375-4da7-9312-d6c1820b1e54!
I0816 13:46:26.218382 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_6e26a28d-1375-4da7-9312-d6c1820b1e54!

==> storage-provisioner [d659309e0093] <==
I0816 13:45:53.625219 1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
F0816 13:45:53.626909 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: connect: connection refused