* ==> Audit <==
|---------|------------------------|----------|---------|---------|-------------------------------|-------------------------------|
| Command |          Args          | Profile  |  User   | Version |          Start Time           |           End Time            |
|---------|------------------------|----------|---------|---------|-------------------------------|-------------------------------|
| start   | --driver=docker        | minikube | meliwex | v1.25.2 | Sun, 17 Apr 2022 08:24:46 +04 | Sun, 17 Apr 2022 08:42:24 +04 |
| addons  | enable metrics-server  | minikube | meliwex | v1.25.2 | Sun, 17 Apr 2022 08:56:49 +04 | Sun, 17 Apr 2022 08:57:57 +04 |
| addons  | disable metrics-server | minikube | meliwex | v1.25.2 | Sun, 17 Apr 2022 09:01:13 +04 | Sun, 17 Apr 2022 09:01:14 +04 |
| addons  | enable metrics-server  | minikube | meliwex | v1.25.2 | Sun, 17 Apr 2022 09:03:47 +04 | Sun, 17 Apr 2022 09:04:14 +04 |
| addons  | disable metrics-server | minikube | meliwex | v1.25.2 | Sun, 17 Apr 2022 09:15:57 +04 | Sun, 17 Apr 2022 09:17:00 +04 |
| addons  | enable metrics-server  | minikube | meliwex | v1.25.2 | Sun, 17 Apr 2022 09:30:44 +04 | Sun, 17 Apr 2022 10:16:17 +04 |
| addons  | delete metrics-server  | minikube | meliwex | v1.25.2 | Sun, 17 Apr 2022 10:34:12 +04 | Sun, 17 Apr 2022 10:34:12 +04 |
| addons  | delete metrics-server  | minikube | meliwex | v1.25.2 | Sun, 17 Apr 2022 10:34:44 +04 | Sun, 17 Apr 2022 10:34:44 +04 |
| addons  | disable metrics-server | minikube | meliwex | v1.25.2 | Sun, 17 Apr 2022 10:35:01 +04 | Sun, 17 Apr 2022 10:36:03 +04 |
| delete  |                        | minikube | meliwex | v1.25.2 | Sun, 17 Apr 2022 10:49:26 +04 | Sun, 17 Apr 2022 10:49:58 +04 |
| start   | --driver=docker        | minikube | meliwex | v1.25.2 | Sun, 17 Apr 2022 10:51:58 +04 | Sun, 17 Apr 2022 11:01:26 +04 |
| addons  | enable metrics-server  | minikube | meliwex | v1.25.2 | Sun, 17 Apr 2022 11:01:51 +04 | Sun, 17 Apr 2022 11:02:29 +04 |
|---------|------------------------|----------|---------|---------|-------------------------------|-------------------------------|

* ==> Last Start <==
Log file created at: 2022/04/17 10:51:58
Running on machine: vmubuntu
Binary: Built with gc go1.17.7 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
I0417 10:51:58.575753   23558 out.go:297] Setting OutFile to fd 1 ...
I0417 10:51:58.575849   23558 out.go:349] isatty.IsTerminal(1) = true
I0417 10:51:58.575852   23558 out.go:310] Setting ErrFile to fd 2...
I0417 10:51:58.575856   23558 out.go:349] isatty.IsTerminal(2) = true
I0417 10:51:58.575943   23558 root.go:315] Updating PATH: /home/meliwex/.minikube/bin
I0417 10:51:58.576149   23558 out.go:304] Setting JSON to false
I0417 10:51:58.576840   23558 start.go:112] hostinfo: {"hostname":"vmubuntu","uptime":6295,"bootTime":1650172024,"procs":197,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.4.0-107-generic","kernelArch":"x86_64","virtualizationSystem":"vbox","virtualizationRole":"guest","hostId":"169838bb-a5ba-4c7b-ae01-bb108494d44b"}
I0417 10:51:58.576912   23558 start.go:122] virtualization: vbox guest
I0417 10:51:58.893136   23558 out.go:176] 😄  minikube v1.25.2 on Ubuntu 20.04 (vbox/amd64)
I0417 10:51:58.894378   23558 notify.go:193] Checking for updates...
I0417 10:51:58.894864   23558 driver.go:344] Setting default libvirt URI to qemu:///system
I0417 10:51:59.068429   23558 docker.go:132] docker version: linux-20.10.14
I0417 10:51:59.068551   23558 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0417 10:52:01.267441   23558 cli_runner.go:186] Completed: docker system info --format "{{json .}}": (2.198837766s)
I0417 10:52:01.268699   23558 info.go:263] docker info: {ID:KLQH:H5VG:4VV6:S2TF:S6DJ:OI4P:NL5R:P4IC:VPDF:VKGM:ZFN5:W7CK Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2022-04-17 10:51:59.113009686 +0400 +04 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.0-107-generic OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:2079461376 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:vmubuntu Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:}}
I0417 10:52:01.269224   23558 docker.go:237] overlay module found
I0417 10:52:01.536282   23558 out.go:176] ✨  Using the docker driver based on user configuration
I0417 10:52:01.537671   23558 start.go:281] selected driver: docker
I0417 10:52:01.537691   23558 start.go:798] validating driver "docker" against <nil>
I0417 10:52:01.537742   23558 start.go:809] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
I0417 10:52:01.549530   23558 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0417 10:52:01.733865   23558 info.go:263] docker info: {ID:KLQH:H5VG:4VV6:S2TF:S6DJ:OI4P:NL5R:P4IC:VPDF:VKGM:ZFN5:W7CK Containers:0 ContainersRunning:0 ContainersPaused:0 ContainersStopped:0 Images:7 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus: Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization: Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:false NFd:24 OomKillDisable:true NGoroutines:34 SystemTime:2022-04-17 10:52:01.61771674 +0400 +04 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.0-107-generic OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:2 MemTotal:2079461376 GenericResources: DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:vmubuntu Labels:[] ExperimentalBuild:false ServerVersion:20.10.14 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:3df54a852345ae127d1fa3092b95168e4a88e2f8 Expected:3df54a852345ae127d1fa3092b95168e4a88e2f8} RuncCommit:{ID:v1.0.3-0-gf46b6ba Expected:v1.0.3-0-gf46b6ba} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Docker Buildx Vendor:Docker Inc. Version:v0.8.1-docker] map[Name:scan Path:/usr/libexec/docker/cli-plugins/docker-scan SchemaVersion:0.1.0 ShortDescription:Docker Scan Vendor:Docker Inc. Version:v0.17.0]] Warnings:}}
I0417 10:52:01.734043   23558 start_flags.go:288] no existing cluster config was found, will generate one from the flags
I0417 10:52:02.023142   23558 out.go:176]
W0417 10:52:02.023612   23558 out.go:241] 🧯  The requested memory allocation of 1983MiB does not leave room for system overhead (total system memory: 1983MiB). You may face stability issues.
W0417 10:52:02.024132   23558 out.go:241] 💡  Suggestion: Start minikube with less memory allocated: 'minikube start --memory=1983mb'
I0417 10:52:02.382006   23558 out.go:176]
I0417 10:52:02.382259   23558 start_flags.go:369] Using suggested 1983MB memory alloc based on sys=1983MB, container=1983MB
I0417 10:52:02.511344   23558 start_flags.go:397] setting extra-config: kubelet.housekeeping-interval=5m
I0417 10:52:02.511522   23558 start_flags.go:813] Wait components to verify : map[apiserver:true system_pods:true]
I0417 10:52:02.511655   23558 cni.go:93] Creating CNI manager for ""
I0417 10:52:02.511669   23558 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0417 10:52:02.511689   23558 start_flags.go:302] config: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:1983 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/meliwex:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0417 10:52:02.890133   23558 out.go:176] 👍  Starting control plane node minikube in cluster minikube
I0417 10:52:02.898303   23558 cache.go:120] Beginning downloading kic base image for docker with docker
I0417 10:52:03.166953   23558 out.go:176] 🚜  Pulling base image ...
I0417 10:52:03.183113   23558 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
I0417 10:52:03.183124   23558 image.go:75] Checking for gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon
I0417 10:52:03.183267   23558 preload.go:148] Found local preload: /home/meliwex/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4
I0417 10:52:03.183286   23558 cache.go:57] Caching tarball of preloaded images
I0417 10:52:03.192736   23558 preload.go:174] Found /home/meliwex/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
I0417 10:52:03.192806   23558 cache.go:60] Finished verifying existence of preloaded tar for v1.23.3 on docker
I0417 10:52:03.193958   23558 profile.go:148] Saving config to /home/meliwex/.minikube/profiles/minikube/config.json ...
I0417 10:52:03.194023   23558 lock.go:35] WriteFile acquiring /home/meliwex/.minikube/profiles/minikube/config.json: {Name:mk0ae44c0292e4521e031573f681b21bb8b0aaec Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0417 10:52:03.303366   23558 image.go:79] Found gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 in local docker daemon, skipping pull
I0417 10:52:03.303407   23558 cache.go:142] gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 exists in daemon, skipping load
I0417 10:52:03.303422   23558 cache.go:208] Successfully downloaded all kic artifacts
I0417 10:52:03.303471   23558 start.go:313] acquiring machines lock for minikube: {Name:mk40b126510bd8916fc3e0287d8281576eedf672 Clock:{} Delay:500ms Timeout:10m0s Cancel:}
I0417 10:52:03.303615   23558 start.go:317] acquired machines lock for "minikube" in 119.057µs
I0417 10:52:03.303649   23558 start.go:89] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:1983 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/meliwex:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false} &{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0417 10:52:03.303839   23558 start.go:126] createHost starting for "" (driver="docker")
I0417 10:52:03.587472   23558 out.go:203] 🔥  Creating docker container (CPUs=2, Memory=1983MB) ...
I0417 10:52:03.589264   23558 start.go:160] libmachine.API.Create for "minikube" (driver="docker")
I0417 10:52:03.841561   23558 client.go:168] LocalClient.Create starting
I0417 10:52:03.843199   23558 main.go:130] libmachine: Reading certificate data from /home/meliwex/.minikube/certs/ca.pem
I0417 10:52:03.858498   23558 main.go:130] libmachine: Decoding PEM data...
I0417 10:52:03.858685   23558 main.go:130] libmachine: Parsing certificate...
I0417 10:52:03.858876   23558 main.go:130] libmachine: Reading certificate data from /home/meliwex/.minikube/certs/cert.pem
I0417 10:52:03.858957   23558 main.go:130] libmachine: Decoding PEM data...
I0417 10:52:03.858989   23558 main.go:130] libmachine: Parsing certificate...
I0417 10:52:03.860265   23558 cli_runner.go:133] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
W0417 10:52:03.920094   23558 cli_runner.go:180] docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}" returned with exit code 1
I0417 10:52:03.920146   23558 network_create.go:254] running [docker network inspect minikube] to gather additional debugging logs...
I0417 10:52:03.920158   23558 cli_runner.go:133] Run: docker network inspect minikube
W0417 10:52:03.952288   23558 cli_runner.go:180] docker network inspect minikube returned with exit code 1
I0417 10:52:03.952309   23558 network_create.go:257] error running [docker network inspect minikube]: docker network inspect minikube: exit status 1
stdout:
[]

stderr:
Error: No such network: minikube
I0417 10:52:03.952330   23558 network_create.go:259] output of [docker network inspect minikube]:
-- stdout --
[]

-- /stdout --

** stderr **
Error: No such network: minikube

** /stderr **
I0417 10:52:03.952376   23558 cli_runner.go:133] Run: docker network inspect bridge --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0417 10:52:03.985673   23558 network.go:288] reserving subnet 192.168.49.0 for 1m0s: &{mu:{state:0 sema:0} read:{v:{m:map[] amended:true}} dirty:map[192.168.49.0:0xc00088ea40] misses:0}
I0417 10:52:03.985705   23558 network.go:235] using free private subnet 192.168.49.0/24: &{IP:192.168.49.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.49.0/24 Gateway:192.168.49.1 ClientMin:192.168.49.2 ClientMax:192.168.49.254 Broadcast:192.168.49.255 Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:}}
I0417 10:52:03.985717   23558 network_create.go:106] attempt to create docker network minikube 192.168.49.0/24 with gateway 192.168.49.1 and MTU of 1500 ...
I0417 10:52:03.985765   23558 cli_runner.go:133] Run: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube
I0417 10:52:06.218855   23558 cli_runner.go:186] Completed: docker network create --driver=bridge --subnet=192.168.49.0/24 --gateway=192.168.49.1 -o --ip-masq -o --icc -o com.docker.network.driver.mtu=1500 --label=created_by.minikube.sigs.k8s.io=true minikube: (2.233063964s)
I0417 10:52:06.218871   23558 network_create.go:90] docker network minikube 192.168.49.0/24 created
I0417 10:52:06.218888   23558 kic.go:106] calculated static IP "192.168.49.2" for the "minikube" container
I0417 10:52:06.218940   23558 cli_runner.go:133] Run: docker ps -a --format {{.Names}}
I0417 10:52:06.251565   23558 cli_runner.go:133] Run: docker volume create minikube --label name.minikube.sigs.k8s.io=minikube --label created_by.minikube.sigs.k8s.io=true
I0417 10:52:06.903403   23558 oci.go:102] Successfully created a docker volume minikube
I0417 10:52:06.903575   23558 cli_runner.go:133] Run: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib
I0417 10:52:26.178847   23558 cli_runner.go:186] Completed: docker run --rm --name minikube-preload-sidecar --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --entrypoint /usr/bin/test -v minikube:/var gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -d /var/lib: (19.27521434s)
I0417 10:52:26.178869   23558 oci.go:106] Successfully prepared a docker volume minikube
I0417 10:52:26.178936   23558 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
I0417 10:52:26.178954   23558 kic.go:179] Starting extracting preloaded images to volume ...
I0417 10:52:26.179012   23558 cli_runner.go:133] Run: docker run --rm --entrypoint /usr/bin/tar -v /home/meliwex/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir
I0417 10:53:08.903020   23558 cli_runner.go:186] Completed: docker run --rm --entrypoint /usr/bin/tar -v /home/meliwex/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v17-v1.23.3-docker-overlay2-amd64.tar.lz4:/preloaded.tar:ro -v minikube:/extractDir gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 -I lz4 -xf /preloaded.tar -C /extractDir: (42.723842538s)
I0417 10:53:08.903137   23558 kic.go:188] duration metric: took 42.724145 seconds to extract preloaded images to volume
W0417 10:53:08.903333   23558 oci.go:135] Your kernel does not support swap limit capabilities or the cgroup is not mounted.
W0417 10:53:08.903363   23558 oci.go:119] Your kernel does not support memory limit capabilities or the cgroup is not mounted.
I0417 10:53:08.903650   23558 cli_runner.go:133] Run: docker info --format "'{{json .SecurityOptions}}'"
I0417 10:53:09.296808   23558 cli_runner.go:133] Run: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2
I0417 10:53:20.279917   23558 cli_runner.go:186] Completed: docker run -d -t --privileged --device /dev/fuse --security-opt seccomp=unconfined --tmpfs /tmp --tmpfs /run -v /lib/modules:/lib/modules:ro --hostname minikube --name minikube --label created_by.minikube.sigs.k8s.io=true --label name.minikube.sigs.k8s.io=minikube --label role.minikube.sigs.k8s.io= --label mode.minikube.sigs.k8s.io=minikube --network minikube --ip 192.168.49.2 --volume minikube:/var --security-opt apparmor=unconfined --cpus=2 -e container=docker --expose 8443 --publish=127.0.0.1::8443 --publish=127.0.0.1::22 --publish=127.0.0.1::2376 --publish=127.0.0.1::5000 --publish=127.0.0.1::32443 gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2: (10.983010979s)
I0417 10:53:20.280172   23558 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Running}}
I0417 10:53:20.328524   23558 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0417 10:53:20.366528   23558 cli_runner.go:133] Run: docker exec minikube stat /var/lib/dpkg/alternatives/iptables
I0417 10:53:21.021452   23558 oci.go:281] the created container "minikube" has a running status.
I0417 10:53:21.021486   23558 kic.go:210] Creating ssh key for kic: /home/meliwex/.minikube/machines/minikube/id_rsa...
I0417 10:53:21.107860   23558 kic_runner.go:191] docker (temp): /home/meliwex/.minikube/machines/minikube/id_rsa.pub --> /home/docker/.ssh/authorized_keys (381 bytes)
I0417 10:53:21.249587   23558 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0417 10:53:21.284718   23558 kic_runner.go:93] Run: chown docker:docker /home/docker/.ssh/authorized_keys
I0417 10:53:21.284731   23558 kic_runner.go:114] Args: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]
I0417 10:53:22.413341   23558 kic_runner.go:123] Done: [docker exec --privileged minikube chown docker:docker /home/docker/.ssh/authorized_keys]: (1.128584326s)
I0417 10:53:22.413462   23558 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0417 10:53:22.449785   23558 machine.go:88] provisioning docker machine ...
I0417 10:53:22.449814   23558 ubuntu.go:169] provisioning hostname "minikube"
I0417 10:53:22.449861   23558 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0417 10:53:22.483203   23558 main.go:130] libmachine: Using SSH client type: native
I0417 10:53:22.483423   23558 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a12c0] 0x7a43a0 [] 0s} 127.0.0.1 49162 }
I0417 10:53:22.483431   23558 main.go:130] libmachine: About to run SSH command:
sudo hostname minikube && echo "minikube" | sudo tee /etc/hostname
I0417 10:53:22.483965   23558 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51824->127.0.0.1:49162: read: connection reset by peer
I0417 10:53:25.486823   23558 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51828->127.0.0.1:49162: read: connection reset by peer
I0417 10:53:28.490007   23558 main.go:130] libmachine: Error dialing TCP: ssh: handshake failed: read tcp 127.0.0.1:51832->127.0.0.1:49162: read: connection reset by peer
I0417 10:53:34.522260   23558 main.go:130] libmachine: SSH cmd err, output: <nil>: minikube

I0417 10:53:34.522496   23558 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0417 10:53:34.580409   23558 main.go:130] libmachine: Using SSH client type: native
I0417 10:53:34.580543   23558 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a12c0] 0x7a43a0 [] 0s} 127.0.0.1 49162 }
I0417 10:53:34.580554   23558 main.go:130] libmachine: About to run SSH command:

		if ! grep -xq '.*\sminikube' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 minikube/g' /etc/hosts;
			else
				echo '127.0.1.1 minikube' | sudo tee -a /etc/hosts;
			fi
		fi
I0417 10:53:34.763029   23558 main.go:130] libmachine: SSH cmd err, output: <nil>:
I0417 10:53:34.763116   23558 ubuntu.go:175] set auth options {CertDir:/home/meliwex/.minikube CaCertPath:/home/meliwex/.minikube/certs/ca.pem CaPrivateKeyPath:/home/meliwex/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/meliwex/.minikube/machines/server.pem ServerKeyPath:/home/meliwex/.minikube/machines/server-key.pem ClientKeyPath:/home/meliwex/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/meliwex/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/meliwex/.minikube}
I0417 10:53:34.763195   23558 ubuntu.go:177] setting up certificates
I0417 10:53:34.763217   23558 provision.go:83] configureAuth start
I0417 10:53:34.763366   23558 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0417 10:53:34.825508   23558 provision.go:138] copyHostCerts
I0417 10:53:34.825580   23558 exec_runner.go:144] found /home/meliwex/.minikube/ca.pem, removing ...
I0417 10:53:34.825586   23558 exec_runner.go:207] rm: /home/meliwex/.minikube/ca.pem
I0417 10:53:34.825636   23558 exec_runner.go:151] cp: /home/meliwex/.minikube/certs/ca.pem --> /home/meliwex/.minikube/ca.pem (1078 bytes)
I0417 10:53:34.825707   23558 exec_runner.go:144] found /home/meliwex/.minikube/cert.pem, removing ...
I0417 10:53:34.825710 23558 exec_runner.go:207] rm: /home/meliwex/.minikube/cert.pem
I0417 10:53:34.825730 23558 exec_runner.go:151] cp: /home/meliwex/.minikube/certs/cert.pem --> /home/meliwex/.minikube/cert.pem (1123 bytes)
I0417 10:53:34.825772 23558 exec_runner.go:144] found /home/meliwex/.minikube/key.pem, removing ...
I0417 10:53:34.825774 23558 exec_runner.go:207] rm: /home/meliwex/.minikube/key.pem
I0417 10:53:34.825792 23558 exec_runner.go:151] cp: /home/meliwex/.minikube/certs/key.pem --> /home/meliwex/.minikube/key.pem (1679 bytes)
I0417 10:53:34.842447 23558 provision.go:112] generating server cert: /home/meliwex/.minikube/machines/server.pem ca-key=/home/meliwex/.minikube/certs/ca.pem private-key=/home/meliwex/.minikube/certs/ca-key.pem org=meliwex.minikube san=[192.168.49.2 127.0.0.1 localhost 127.0.0.1 minikube minikube]
I0417 10:53:34.939413 23558 provision.go:172] copyRemoteCerts
I0417 10:53:34.939453 23558 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
I0417 10:53:34.939520 23558 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0417 10:53:34.984468 23558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49162 SSHKeyPath:/home/meliwex/.minikube/machines/minikube/id_rsa Username:docker}
I0417 10:53:35.139317 23558 ssh_runner.go:362] scp /home/meliwex/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
I0417 10:53:35.251639 23558 ssh_runner.go:362] scp /home/meliwex/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
I0417 10:53:35.310348 23558 ssh_runner.go:362] scp /home/meliwex/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
I0417 10:53:35.335354 23558 provision.go:86] duration metric: configureAuth took 572.125693ms
I0417 10:53:35.335367 23558 ubuntu.go:193] setting minikube options for container-runtime
I0417 10:53:35.335536 23558 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
I0417 10:53:35.335574 23558 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0417 10:53:35.436063 23558 main.go:130] libmachine: Using SSH client type: native
I0417 10:53:35.436190 23558 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a12c0] 0x7a43a0 [] 0s} 127.0.0.1 49162 }
I0417 10:53:35.436197 23558 main.go:130] libmachine: About to run SSH command:
df --output=fstype / | tail -n 1
I0417 10:53:35.629026 23558 main.go:130] libmachine: SSH cmd err, output: : overlay
I0417 10:53:35.629060 23558 ubuntu.go:71] root file system type: overlay
I0417 10:53:35.630292 23558 provision.go:309] Updating docker unit: /lib/systemd/system/docker.service ...
I0417 10:53:35.630455 23558 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0417 10:53:35.791769 23558 main.go:130] libmachine: Using SSH client type: native
I0417 10:53:35.791946 23558 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a12c0] 0x7a43a0 [] 0s} 127.0.0.1 49162 }
I0417 10:53:35.792003 23558 main.go:130] libmachine: About to run SSH command:
sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP \$MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
" | sudo tee /lib/systemd/system/docker.service.new
I0417 10:53:35.935039 23558 main.go:130] libmachine: SSH cmd err, output: : [Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service containerd.service
Wants=network-online.target
Requires=docker.socket
StartLimitBurst=3
StartLimitIntervalSec=60

[Service]
Type=notify
Restart=on-failure

# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
# The base configuration already specifies an 'ExecStart=...' command. The first directive
# here is to clear out that command inherited from the base configuration. Without this,
# the command from the base configuration and the command specified here are treated as
# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
# will catch this invalid input and refuse to start the service with an error like:
# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.

# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
ExecReload=/bin/kill -s HUP $MAINPID

# Having non-zero Limit*s causes performance problems due to accounting overhead
# in the kernel. We recommend using cgroups to do container-local accounting.
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity

# Uncomment TasksMax if your systemd version supports it.
# Only systemd 226 and above support this version.
TasksMax=infinity
TimeoutStartSec=0

# set delegate yes so that systemd does not reset the cgroups of docker containers
Delegate=yes

# kill only the docker process, not all processes in the cgroup
KillMode=process

[Install]
WantedBy=multi-user.target
I0417 10:53:35.935105 23558 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0417 10:53:35.979001 23558 main.go:130] libmachine: Using SSH client type: native
I0417 10:53:35.979134 23558 main.go:130] libmachine: &{{{ 0 [] [] []} docker [0x7a12c0] 0x7a43a0 [] 0s} 127.0.0.1 49162 }
I0417 10:53:35.979146 23558 main.go:130] libmachine: About to run SSH command:
sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
I0417 10:53:55.407188 23558 main.go:130] libmachine: SSH cmd err, output: : --- /lib/systemd/system/docker.service	2021-12-13 11:43:42.000000000 +0000
+++ /lib/systemd/system/docker.service.new	2022-04-17 06:53:35.933035303 +0000
@@ -1,30 +1,32 @@
 [Unit]
 Description=Docker Application Container Engine
 Documentation=https://docs.docker.com
+BindsTo=containerd.service
 After=network-online.target firewalld.service containerd.service
 Wants=network-online.target
-Requires=docker.socket containerd.service
+Requires=docker.socket
+StartLimitBurst=3
+StartLimitIntervalSec=60
 
 [Service]
 Type=notify
-# the default is not to use systemd for cgroups because the delegate issues still
-# exists and systemd currently does not support the cgroup feature set required
-# for containers run by docker
-ExecStart=/usr/bin/dockerd -H fd:// --containerd=/run/containerd/containerd.sock
-ExecReload=/bin/kill -s HUP $MAINPID
-TimeoutSec=0
-RestartSec=2
-Restart=always
-
-# Note that StartLimit* options were moved from "Service" to "Unit" in systemd 229.
-# Both the old, and new location are accepted by systemd 229 and up, so using the old location
-# to make them work for either version of systemd.
-StartLimitBurst=3
+Restart=on-failure
-# Note that StartLimitInterval was renamed to StartLimitIntervalSec in systemd 230.
-# Both the old, and new name are accepted by systemd 230 and up, so using the old name to make
-# this option work for either version of systemd.
-StartLimitInterval=60s
+
+
+# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
+# The base configuration already specifies an 'ExecStart=...' command. The first directive
+# here is to clear out that command inherited from the base configuration. Without this,
+# the command from the base configuration and the command specified here are treated as
+# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
+# will catch this invalid input and refuse to start the service with an error like:
+# Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
+
+# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
+# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
+ExecStart=
+ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=docker --insecure-registry 10.96.0.0/12
+ExecReload=/bin/kill -s HUP $MAINPID
 
 # Having non-zero Limit*s causes performance problems due to accounting overhead
 # in the kernel. We recommend using cgroups to do container-local accounting.
@@ -32,16 +34,16 @@
 LimitNPROC=infinity
 LimitCORE=infinity
 
-# Comment TasksMax if your systemd version does not support it.
-# Only systemd 226 and above support this option.
+# Uncomment TasksMax if your systemd version supports it.
+# Only systemd 226 and above support this version.
 TasksMax=infinity
+TimeoutStartSec=0
 
 # set delegate yes so that systemd does not reset the cgroups of docker containers
 Delegate=yes
 
 # kill only the docker process, not all processes in the cgroup
 KillMode=process
-OOMScoreAdjust=-500
 
 [Install]
 WantedBy=multi-user.target
Synchronizing state of docker.service with SysV service script with /lib/systemd/systemd-sysv-install.
Executing: /lib/systemd/systemd-sysv-install enable docker
I0417 10:53:55.407206 23558 machine.go:91] provisioned docker machine in 32.957411465s
I0417 10:53:55.407213 23558 client.go:171] LocalClient.Create took 1m51.565637355s
I0417 10:53:55.407221 23558 start.go:168] duration metric: libmachine.API.Create for "minikube" took 1m51.817975946s
I0417 10:53:55.407227 23558 start.go:267] post-start starting for "minikube" (driver="docker")
I0417 10:53:55.407230 23558 start.go:277] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0417 10:53:55.407276 23558 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0417 10:53:55.407309 23558 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0417 10:53:55.454733 23558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49162 SSHKeyPath:/home/meliwex/.minikube/machines/minikube/id_rsa Username:docker}
I0417 10:53:55.622572 23558 ssh_runner.go:195] Run: cat /etc/os-release
I0417 10:53:55.633716 23558 main.go:130] libmachine: Couldn't set key PRIVACY_POLICY_URL, no corresponding struct field found
I0417 10:53:55.633748 23558 main.go:130] libmachine: Couldn't set key VERSION_CODENAME, no corresponding struct field found
I0417 10:53:55.633767 23558 main.go:130] libmachine: Couldn't set key UBUNTU_CODENAME, no corresponding struct field found
I0417 10:53:55.633778 23558 info.go:137] Remote host: Ubuntu 20.04.2 LTS
I0417 10:53:55.633817 23558 filesync.go:126] Scanning /home/meliwex/.minikube/addons for local assets ...
I0417 10:53:55.653908 23558 filesync.go:126] Scanning /home/meliwex/.minikube/files for local assets ...
I0417 10:53:55.655022 23558 start.go:270] post-start completed in 247.778471ms
I0417 10:53:55.656015 23558 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0417 10:53:55.714522 23558 profile.go:148] Saving config to /home/meliwex/.minikube/profiles/minikube/config.json ...
I0417 10:53:55.714814 23558 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
I0417 10:53:55.714865 23558 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0417 10:53:55.748449 23558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49162 SSHKeyPath:/home/meliwex/.minikube/machines/minikube/id_rsa Username:docker}
I0417 10:53:55.880023 23558 start.go:129] duration metric: createHost completed in 1m52.576151462s
I0417 10:53:55.880059 23558 start.go:80] releasing machines lock for "minikube", held for 1m52.576429543s
I0417 10:53:55.880276 23558 cli_runner.go:133] Run: docker container inspect -f "{{range .NetworkSettings.Networks}}{{.IPAddress}},{{.GlobalIPv6Address}}{{end}}" minikube
I0417 10:53:55.931613 23558 ssh_runner.go:195] Run: systemctl --version
I0417 10:53:55.931649 23558 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0417 10:53:55.931669 23558 ssh_runner.go:195] Run: curl -sS -m 2 https://k8s.gcr.io/
I0417 10:53:55.931707 23558 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0417 10:53:55.991054 23558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49162 SSHKeyPath:/home/meliwex/.minikube/machines/minikube/id_rsa Username:docker}
I0417 10:53:55.993298 23558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49162 SSHKeyPath:/home/meliwex/.minikube/machines/minikube/id_rsa Username:docker}
I0417 10:53:56.144575 23558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
I0417 10:53:56.930033 23558 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0417 10:53:56.976904 23558 cruntime.go:272] skipping containerd shutdown because we are bound to it
I0417 10:53:56.977183 23558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
I0417 10:53:57.029549 23558 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/dockershim.sock
image-endpoint: unix:///var/run/dockershim.sock
" | sudo tee /etc/crictl.yaml"
I0417 10:53:57.052640 23558 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
I0417 10:53:57.172529 23558 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
I0417 10:53:57.269342 23558 ssh_runner.go:195] Run: sudo systemctl cat docker.service
I0417 10:53:57.280836 23558 ssh_runner.go:195] Run: sudo systemctl daemon-reload
I0417 10:53:57.403993 23558 ssh_runner.go:195] Run: sudo systemctl start docker
I0417 10:53:57.416228 23558 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0417 10:53:58.846751 23558 ssh_runner.go:235] Completed: docker version --format {{.Server.Version}}: (1.430465812s)
I0417 10:53:58.846889 23558 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
I0417 10:53:59.162153 23558 out.go:203] 🐳 Preparing Kubernetes v1.23.3 on Docker 20.10.12 ...
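The docker.service update above uses a write/diff/swap idiom: render the candidate unit to a `.new` path, `diff` it against the live file, and only move it into place when the two differ. A minimal sketch of the same idiom, on scratch files in the current directory (the unit path and content below are made up for illustration; the real sequence additionally runs `systemctl daemon-reload`, `enable`, and `restart` under sudo):

```shell
#!/bin/sh
# Sketch of the write/diff/swap idiom from the provisioning log: write the
# candidate unit to <unit>.new and replace the live file only on a difference.
set -eu

unit=./docker.service   # stand-in for /lib/systemd/system/docker.service

printf '%s\n' '[Service]' 'ExecStart=' 'ExecStart=/usr/bin/dockerd -H fd://' > "$unit.new"

# diff exits non-zero when the files differ (or the old unit is missing),
# which triggers the swap; minikube then daemon-reloads and restarts docker.
diff -u "$unit" "$unit.new" 2>/dev/null || {
  mv "$unit.new" "$unit"
  echo "unit updated"
}
```

The doubled `ExecStart=` in the rendered unit is deliberate: as the unit's own comment notes, the empty first directive clears the command inherited from the base configuration, since systemd allows multiple `ExecStart=` lines only for `Type=oneshot` services.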
I0417 10:53:59.162459 23558 cli_runner.go:133] Run: docker network inspect minikube --format "{"Name": "{{.Name}}","Driver": "{{.Driver}}","Subnet": "{{range .IPAM.Config}}{{.Subnet}}{{end}}","Gateway": "{{range .IPAM.Config}}{{.Gateway}}{{end}}","MTU": {{if (index .Options "com.docker.network.driver.mtu")}}{{(index .Options "com.docker.network.driver.mtu")}}{{else}}0{{end}}, "ContainerIPs": [{{range $k,$v := .Containers }}"{{$v.IPv4Address}}",{{end}}]}"
I0417 10:53:59.194449 23558 ssh_runner.go:195] Run: grep 192.168.49.1 host.minikube.internal$ /etc/hosts
I0417 10:53:59.381631 23558 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.49.1 host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0417 10:53:59.720462 23558 out.go:176] ▪ kubelet.housekeeping-interval=5m
I0417 10:53:59.734526 23558 preload.go:132] Checking if preload exists for k8s version v1.23.3 and runtime docker
I0417 10:53:59.734764 23558 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0417 10:53:59.815355 23558 docker.go:606] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.23.3
k8s.gcr.io/kube-controller-manager:v1.23.3
k8s.gcr.io/kube-scheduler:v1.23.3
k8s.gcr.io/kube-proxy:v1.23.3
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.7
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0417 10:53:59.815365 23558 docker.go:537] Images already preloaded, skipping extraction
I0417 10:53:59.815411 23558 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
I0417 10:53:59.847877 23558 docker.go:606] Got preloaded images: -- stdout --
k8s.gcr.io/kube-apiserver:v1.23.3
k8s.gcr.io/kube-proxy:v1.23.3
k8s.gcr.io/kube-scheduler:v1.23.3
k8s.gcr.io/kube-controller-manager:v1.23.3
k8s.gcr.io/etcd:3.5.1-0
k8s.gcr.io/coredns/coredns:v1.8.6
k8s.gcr.io/pause:3.6
kubernetesui/dashboard:v2.3.1
kubernetesui/metrics-scraper:v1.0.7
gcr.io/k8s-minikube/storage-provisioner:v5

-- /stdout --
I0417 10:53:59.847888 23558 cache_images.go:84] Images are preloaded, skipping loading
I0417 10:53:59.847934 23558 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
I0417 10:54:02.057736 23558 ssh_runner.go:235] Completed: docker info --format {{.CgroupDriver}}: (2.209783819s)
I0417 10:54:02.057776 23558 cni.go:93] Creating CNI manager for ""
I0417 10:54:02.057783 23558 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0417 10:54:02.057796 23558 kubeadm.go:87] Using pod CIDR: 10.244.0.0/16
I0417 10:54:02.057812 23558 kubeadm.go:158] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.49.2 APIServerPort:8443 KubernetesVersion:v1.23.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:minikube NodeName:minikube DNSDomain:cluster.local CRISocket:/var/run/dockershim.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.49.2"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NoTaintMaster:true NodeIP:192.168.49.2 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[]}
I0417 10:54:02.057946 23558 kubeadm.go:162] kubeadm config:
apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.49.2
  bindPort: 8443
bootstrapTokens:
  - groups:
      - system:bootstrappers:kubeadm:default-node-token
    ttl: 24h0m0s
    usages:
      - signing
      - authentication
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: "minikube"
  kubeletExtraArgs:
    node-ip: 192.168.49.2
  taints: []
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
apiServer:
  certSANs: ["127.0.0.1", "localhost", "192.168.49.2"]
  extraArgs:
    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
controllerManager:
  extraArgs:
    allocate-node-cidrs: "true"
    leader-elect: "false"
scheduler:
  extraArgs:
    leader-elect: "false"
certificatesDir: /var/lib/minikube/certs
clusterName: mk
controlPlaneEndpoint: control-plane.minikube.internal:8443
etcd:
  local:
    dataDir: /var/lib/minikube/etcd
    extraArgs:
      proxy-refresh-interval: "70000"
kubernetesVersion: v1.23.3
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"
  serviceSubnet: 10.96.0.0/12
---
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
authentication:
  x509:
    clientCAFile: /var/lib/minikube/certs/ca.crt
cgroupDriver: cgroupfs
clusterDomain: "cluster.local"
# disable disk resource management by default
imageGCHighThresholdPercent: 100
evictionHard:
  nodefs.available: "0%!"(MISSING)
  nodefs.inodesFree: "0%!"(MISSING)
  imagefs.available: "0%!"(MISSING)
failSwapOn: false
staticPodPath: /etc/kubernetes/manifests
---
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
clusterCIDR: "10.244.0.0/16"
metricsBindAddress: 0.0.0.0:10249
conntrack:
  maxPerCore: 0
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
  tcpEstablishedTimeout: 0s
# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
  tcpCloseWaitTimeout: 0s
I0417 10:54:02.058033 23558 kubeadm.go:936] kubelet [Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/v1.23.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=docker --hostname-override=minikube --housekeeping-interval=5m --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.49.2

[Install]
config: {KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:}
I0417 10:54:02.058108 23558 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.23.3
I0417 10:54:02.091098 23558 binaries.go:44] Found k8s binaries, skipping transfer
I0417 10:54:02.091154 23558 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
I0417 10:54:02.100770 23558 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (361 bytes)
I0417 10:54:02.156345 23558 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
I0417 10:54:02.190200 23558 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2030 bytes)
I0417 10:54:02.216929 23558 ssh_runner.go:195] Run: grep 192.168.49.2 control-plane.minikube.internal$ /etc/hosts
I0417 10:54:02.220972 23558 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.49.2 control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
I0417 10:54:02.233414 23558 certs.go:54] Setting up /home/meliwex/.minikube/profiles/minikube for IP: 192.168.49.2
I0417 10:54:02.245275 23558 certs.go:182] skipping minikubeCA CA generation: /home/meliwex/.minikube/ca.key
I0417 10:54:02.245822 23558 certs.go:182] skipping proxyClientCA CA generation: /home/meliwex/.minikube/proxy-client-ca.key
I0417 10:54:02.245882 23558 certs.go:302] generating minikube-user signed cert: /home/meliwex/.minikube/profiles/minikube/client.key
I0417 10:54:02.245891 23558 crypto.go:68] Generating cert /home/meliwex/.minikube/profiles/minikube/client.crt with IP's: []
I0417 10:54:02.598597 23558 crypto.go:156] Writing cert to /home/meliwex/.minikube/profiles/minikube/client.crt ...
I0417 10:54:02.598611 23558 lock.go:35] WriteFile acquiring /home/meliwex/.minikube/profiles/minikube/client.crt: {Name:mk296eaaf7b0dd8a7c9452c5c90ec51a4e0464fa Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0417 10:54:02.598775 23558 crypto.go:164] Writing key to /home/meliwex/.minikube/profiles/minikube/client.key ...
I0417 10:54:02.598781 23558 lock.go:35] WriteFile acquiring /home/meliwex/.minikube/profiles/minikube/client.key: {Name:mkc7fce5e0af0f3c1c5d3d95329c48c6df98b069 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0417 10:54:02.598846 23558 certs.go:302] generating minikube signed cert: /home/meliwex/.minikube/profiles/minikube/apiserver.key.dd3b5fb2
I0417 10:54:02.598854 23558 crypto.go:68] Generating cert /home/meliwex/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 with IP's: [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]
I0417 10:54:02.784002 23558 crypto.go:156] Writing cert to /home/meliwex/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 ...
I0417 10:54:02.784014 23558 lock.go:35] WriteFile acquiring /home/meliwex/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2: {Name:mkbed6707a23f37a332c3a99ffb0d5b4fb7ac6dc Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0417 10:54:02.784139 23558 crypto.go:164] Writing key to /home/meliwex/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 ...
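The two `/etc/hosts` edits above (host.minikube.internal earlier, control-plane.minikube.internal here) use a remove-then-append idiom so the entry stays unique however many times provisioning reruns. The same pattern on a scratch file (the file name below is a made-up stand-in for /etc/hosts, so no sudo is involved):

```shell
#!/bin/sh
# Idempotent host-entry update, as in the log: drop any existing line for
# the name, append the desired mapping, then copy the rewritten file back.
set -eu

hosts=./hosts.test
printf '127.0.0.1 localhost\n192.168.49.1\thost.minikube.internal\n' > "$hosts"

tab="$(printf '\t')"
{ grep -v "${tab}host\.minikube\.internal\$" "$hosts"
  echo "192.168.49.1 host.minikube.internal"
} > /tmp/h.$$
cp /tmp/h.$$ "$hosts" && rm -f /tmp/h.$$
```

Writing to a temp file and copying it back (rather than redirecting onto the file being read) is what keeps the `grep -v` input intact while it runs.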
I0417 10:54:02.784144 23558 lock.go:35] WriteFile acquiring /home/meliwex/.minikube/profiles/minikube/apiserver.key.dd3b5fb2: {Name:mkc34667378e8378260e8df908674e93c8cf2109 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0417 10:54:02.784199 23558 certs.go:320] copying /home/meliwex/.minikube/profiles/minikube/apiserver.crt.dd3b5fb2 -> /home/meliwex/.minikube/profiles/minikube/apiserver.crt
I0417 10:54:02.784251 23558 certs.go:324] copying /home/meliwex/.minikube/profiles/minikube/apiserver.key.dd3b5fb2 -> /home/meliwex/.minikube/profiles/minikube/apiserver.key
I0417 10:54:02.784307 23558 certs.go:302] generating aggregator signed cert: /home/meliwex/.minikube/profiles/minikube/proxy-client.key
I0417 10:54:02.784316 23558 crypto.go:68] Generating cert /home/meliwex/.minikube/profiles/minikube/proxy-client.crt with IP's: []
I0417 10:54:02.934639 23558 crypto.go:156] Writing cert to /home/meliwex/.minikube/profiles/minikube/proxy-client.crt ...
I0417 10:54:02.934653 23558 lock.go:35] WriteFile acquiring /home/meliwex/.minikube/profiles/minikube/proxy-client.crt: {Name:mkc93fb724253bd3d9ac917932c902573f8d1dc7 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0417 10:54:02.934823 23558 crypto.go:164] Writing key to /home/meliwex/.minikube/profiles/minikube/proxy-client.key ...
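The apiserver certificate generated above carries the IP SANs [192.168.49.2 10.96.0.1 127.0.0.1 10.0.0.1]. A comparable throwaway certificate can be produced and inspected with openssl (this sketch assumes OpenSSL 1.1.1+ for `-addext`, uses scratch file names, and is self-signed rather than CA-signed as minikube's actually is):

```shell
#!/bin/sh
# Create a throwaway self-signed cert with the same IP SANs the log lists
# for the apiserver cert, then print the SAN extension to verify them.
set -eu

openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -keyout scratch.key -out scratch.crt \
  -subj "/CN=minikube" \
  -addext "subjectAltName=IP:192.168.49.2,IP:10.96.0.1,IP:127.0.0.1,IP:10.0.0.1"

# Print only the subjectAltName extension from the generated cert.
openssl x509 -in scratch.crt -noout -ext subjectAltName
```

10.96.0.1 is the first address of the 10.96.0.0/12 service CIDR (the in-cluster `kubernetes` service), which is why it appears alongside the node IP and loopback.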
I0417 10:54:02.934829 23558 lock.go:35] WriteFile acquiring /home/meliwex/.minikube/profiles/minikube/proxy-client.key: {Name:mk8002fbc9ac30c8cfe00a097bc46c008afedc29 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0417 10:54:02.942475 23558 certs.go:388] found cert: /home/meliwex/.minikube/certs/home/meliwex/.minikube/certs/ca-key.pem (1679 bytes)
I0417 10:54:02.942520 23558 certs.go:388] found cert: /home/meliwex/.minikube/certs/home/meliwex/.minikube/certs/ca.pem (1078 bytes)
I0417 10:54:02.942541 23558 certs.go:388] found cert: /home/meliwex/.minikube/certs/home/meliwex/.minikube/certs/cert.pem (1123 bytes)
I0417 10:54:02.942559 23558 certs.go:388] found cert: /home/meliwex/.minikube/certs/home/meliwex/.minikube/certs/key.pem (1679 bytes)
I0417 10:54:02.943077 23558 ssh_runner.go:362] scp /home/meliwex/.minikube/profiles/minikube/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1399 bytes)
I0417 10:54:02.964312 23558 ssh_runner.go:362] scp /home/meliwex/.minikube/profiles/minikube/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
I0417 10:54:02.984023 23558 ssh_runner.go:362] scp /home/meliwex/.minikube/profiles/minikube/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
I0417 10:54:03.003780 23558 ssh_runner.go:362] scp /home/meliwex/.minikube/profiles/minikube/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
I0417 10:54:03.023478 23558 ssh_runner.go:362] scp /home/meliwex/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
I0417 10:54:03.043759 23558 ssh_runner.go:362] scp /home/meliwex/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
I0417 10:54:03.063319 23558 ssh_runner.go:362] scp /home/meliwex/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
I0417 10:54:03.083122 23558 ssh_runner.go:362] scp /home/meliwex/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
I0417 10:54:03.102633 23558 ssh_runner.go:362] scp /home/meliwex/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
I0417 10:54:03.141920 23558 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
I0417 10:54:03.157663 23558 ssh_runner.go:195] Run: openssl version
I0417 10:54:03.248607 23558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
I0417 10:54:03.286265 23558 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
I0417 10:54:03.290339 23558 certs.go:431] hashing: -rw-r--r-- 1 root root 1111 Apr 17 04:34 /usr/share/ca-certificates/minikubeCA.pem
I0417 10:54:03.290375 23558 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
I0417 10:54:03.320946 23558 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
I0417 10:54:03.332650 23558 kubeadm.go:391] StartCluster: {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.30@sha256:02c921df998f95e849058af14de7045efc3954d90320967418a0d1f182bbc0b2 Memory:1983 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[{Component:kubelet Key:housekeeping-interval Value:5m}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/meliwex:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false}
I0417 10:54:03.332768 23558 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
I0417 10:54:03.510243 23558 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
I0417 10:54:03.738602 23558 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
I0417 10:54:03.758647 23558 kubeadm.go:221] ignoring SystemVerification for kubeadm because of docker driver
I0417 10:54:03.758701 23558 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
I0417 10:54:03.770436 23558 kubeadm.go:152] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
stdout:

stderr:
ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
I0417 10:54:03.770462 23558 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.23.3:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,Mem,SystemVerification,FileContent--proc-sys-net-bridge-bridge-nf-call-iptables"
I0417 10:54:08.902031 23558 out.go:203] ▪ Generating certificates and keys ...
I0417 10:54:12.084738 23558 out.go:203] ▪ Booting up control plane ...
I0417 10:57:55.327954 23558 out.go:203] ▪ Configuring RBAC rules ...
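The "config check failed, skipping stale config cleanup" entry above is not an error in the start itself: minikube probes all four expected kubeconfig files with a single `ls`, and a status-2 exit (any file missing) just means a fresh node with nothing stale to clean before `kubeadm init`. The branch in isolation, on made-up local paths:

```shell
#!/bin/sh
# One ls probes several required files at once; any missing file makes it
# exit non-zero, which is read as "fresh node, skip stale-config cleanup".
mkdir -p ./kube
touch ./kube/admin.conf   # only one of the four expected files exists

if ls -la ./kube/admin.conf ./kube/kubelet.conf \
      ./kube/controller-manager.conf ./kube/scheduler.conf >/dev/null 2>&1; then
  echo "existing config found: running stale config cleanup"
else
  echo "config check failed, skipping stale config cleanup"
fi
```

Here the second branch is taken, matching the log's behavior on a freshly created node.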
I0417 10:58:12.688188 23558 cni.go:93] Creating CNI manager for ""
I0417 10:58:12.688231 23558 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0417 10:58:12.945614 23558 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
I0417 10:58:13.055106 23558 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=362d5fdc0a3dbee389b3d3f1034e8023e72bd3a7 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2022_04_17T10_58_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig
I0417 10:58:13.418609 23558 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
I0417 10:58:14.665164 23558 ssh_runner.go:235] Completed: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj": (1.719487851s)
I0417 10:58:14.795638 23558 ops.go:34] apiserver oom_adj: -16
I0417 10:59:42.186371 23558 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.3/kubectl label nodes minikube.k8s.io/version=v1.25.2 minikube.k8s.io/commit=362d5fdc0a3dbee389b3d3f1034e8023e72bd3a7 minikube.k8s.io/name=minikube minikube.k8s.io/updated_at=2022_04_17T10_58_12_0700 minikube.k8s.io/primary=true --all --overwrite --kubeconfig=/var/lib/minikube/kubeconfig: (1m29.131232219s)
I0417 10:59:42.186418 23558 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.23.3/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig: (1m28.767781212s)
I0417 10:59:42.186427 23558 kubeadm.go:1020] duration metric: took 1m29.49804673s to wait for elevateKubeSystemPrivileges.
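The minikube-rbac step above is the CLI shorthand for a ClusterRoleBinding that grants cluster-admin to the default ServiceAccount in kube-system. Roughly the equivalent manifest, rendered to a local file since no live cluster is assumed here (this is the shape `kubectl create clusterrolebinding` with those flags produces, sketched by hand):

```shell
#!/bin/sh
# Render the approximate manifest behind:
#   kubectl create clusterrolebinding minikube-rbac \
#     --clusterrole=cluster-admin --serviceaccount=kube-system:default
cat > minikube-rbac.yaml <<'EOF'
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: minikube-rbac
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: default
  namespace: kube-system
EOF
echo "wrote minikube-rbac.yaml"
```

Applying such a manifest with `kubectl apply -f` would be the declarative equivalent of the imperative `kubectl create` call in the log.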
I0417 10:59:42.186435 23558 kubeadm.go:393] StartCluster complete in 5m38.853795349s
I0417 10:59:42.485788 23558 settings.go:142] acquiring lock: {Name:mk8e4fa775b57aa9899a67249ea9062610a8a216 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
I0417 10:59:42.485899 23558 settings.go:150] Updating kubeconfig: /home/meliwex/.kube/config
I0417 10:59:43.197777 23558 lock.go:35] WriteFile acquiring /home/meliwex/.kube/config: {Name:mka3924c2e431f5de4fc2f46fbb6657cc7922c60 Clock:{} Delay:500ms Timeout:1m0s Cancel:}
W0417 10:59:55.703718 23558 kapi.go:233] failed rescaling deployment, will retry: Operation cannot be fulfilled on deployments.apps "coredns": the object has been modified; please apply your changes to the latest version and try again
I0417 10:59:59.370570 23558 kapi.go:244] deployment "coredns" in namespace "kube-system" and context "minikube" rescaled to 1
I0417 10:59:59.370729 23558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
I0417 10:59:59.746341 23558 start.go:208] Will wait 6m0s for node &{Name: IP:192.168.49.2 Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}
I0417 11:00:00.561389 23558 out.go:176] 🔎 Verifying Kubernetes components...
I0417 11:00:00.561681 23558 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
I0417 10:59:59.756298 23558 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
I0417 11:00:00.149118 23558 addons.go:415] enableAddons start: toEnable=map[], additional=[]
I0417 11:00:00.562377 23558 addons.go:65] Setting storage-provisioner=true in profile "minikube"
I0417 11:00:00.562424 23558 addons.go:153] Setting addon storage-provisioner=true in "minikube"
W0417 11:00:00.562438 23558 addons.go:165] addon storage-provisioner should already be in state true
I0417 11:00:00.562518 23558 host.go:66] Checking if "minikube" exists ...
I0417 11:00:00.563386 23558 addons.go:65] Setting default-storageclass=true in profile "minikube"
I0417 11:00:00.563450 23558 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "minikube"
I0417 11:00:00.760868 23558 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0417 11:00:00.889087 23558 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0417 11:00:02.260243 23558 out.go:176] ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
I0417 11:00:02.260713 23558 addons.go:348] installing /etc/kubernetes/addons/storage-provisioner.yaml
I0417 11:00:02.260744 23558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
I0417 11:00:02.260907 23558 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0417 11:00:01.686831 23558 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml": (2.316049633s)
I0417 11:00:02.273737 23558 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
I0417 11:00:01.686863 23558 ssh_runner.go:235] Completed: sudo systemctl is-active --quiet service kubelet: (1.125164163s)
I0417 11:00:02.494014 23558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49162 SSHKeyPath:/home/meliwex/.minikube/machines/minikube/id_rsa Username:docker}
I0417 11:00:02.495066 23558 addons.go:153] Setting addon default-storageclass=true in "minikube"
W0417 11:00:02.495089 23558 addons.go:165] addon default-storageclass should already be in state true
I0417 11:00:02.495147 23558 host.go:66] Checking if "minikube" exists ...
I0417 11:00:02.496208 23558 cli_runner.go:133] Run: docker container inspect minikube --format={{.State.Status}}
I0417 11:00:02.567449 23558 addons.go:348] installing /etc/kubernetes/addons/storageclass.yaml
I0417 11:00:02.567460 23558 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
I0417 11:00:02.567505 23558 cli_runner.go:133] Run: docker container inspect -f "'{{(index (index .NetworkSettings.Ports "22/tcp") 0).HostPort}}'" minikube
I0417 11:00:02.612437 23558 api_server.go:51] waiting for apiserver process to appear ...
I0417 11:00:02.612496 23558 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
I0417 11:00:02.614878 23558 sshutil.go:53] new ssh client: &{IP:127.0.0.1 Port:49162 SSHKeyPath:/home/meliwex/.minikube/machines/minikube/id_rsa Username:docker}
I0417 11:00:05.978247 23558 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed '/^ forward . \/etc\/resolv.conf.*/i \ hosts {\n 192.168.49.1 host.minikube.internal\n fallthrough\n }' | sudo /var/lib/minikube/binaries/v1.23.3/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.704480019s)
I0417 11:00:05.978257 23558 ssh_runner.go:235] Completed: sudo pgrep -xnf kube-apiserver.*minikube.*: (3.365751271s)
I0417 11:00:05.978267 23558 api_server.go:71] duration metric: took 6.231784138s to wait for apiserver process to appear ...
I0417 11:00:05.978272 23558 api_server.go:87] waiting for apiserver healthz status ...
I0417 11:00:05.978289 23558 start.go:777] {"host.minikube.internal": 192.168.49.1} host record injected into CoreDNS
I0417 11:00:06.219690 23558 api_server.go:240] Checking apiserver healthz at https://192.168.49.2:8443/healthz ...
I0417 11:00:07.051439 23558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
I0417 11:00:07.052560 23558 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
I0417 11:00:07.436878 23558 api_server.go:266] https://192.168.49.2:8443/healthz returned 200: ok
I0417 11:00:07.440243 23558 api_server.go:140] control plane version: v1.23.3
I0417 11:00:07.440255 23558 api_server.go:130] duration metric: took 1.461980074s to wait for apiserver health ...
I0417 11:00:07.440263 23558 system_pods.go:43] waiting for kube-system pods to appear ...
I0417 11:00:09.106820 23558 system_pods.go:59] 7 kube-system pods found
I0417 11:00:09.893438 23558 system_pods.go:61] "coredns-64897985d-798tz" [5788904b-94e4-4d37-9cb2-ba927c9f4552] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0417 11:00:09.893481 23558 system_pods.go:61] "coredns-64897985d-9rksw" [bc334033-6a36-4e24-8005-1569a10a174a] Pending / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
I0417 11:00:09.893507 23558 system_pods.go:61] "etcd-minikube" [fa78e616-bfbf-43af-9b19-d487904ea68f] Running
I0417 11:00:09.893535 23558 system_pods.go:61] "kube-apiserver-minikube" [a5badf71-0662-49b1-9463-a4c958e6eca8] Running
I0417 11:00:09.893553 23558 system_pods.go:61] "kube-controller-manager-minikube" [f1ee1440-f372-446e-873a-e5f6167e6055] Running
I0417 11:00:09.893574 23558 system_pods.go:61] "kube-proxy-wvv7n" [8bf6a8ee-4b02-4218-a684-30d908c49d63] Pending / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
I0417 11:00:09.893593 23558 system_pods.go:61] "kube-scheduler-minikube" [40f35e9e-dab2-4dd7-a3f3-1aab97d983ed] Running
I0417 11:00:09.893613 23558 system_pods.go:74] duration metric: took 2.453338598s to wait for pod list to return data ...
I0417 11:00:09.893643 23558 kubeadm.go:548] duration metric: took 10.147149575s to wait for : map[apiserver:true system_pods:true] ...
I0417 11:00:09.893697 23558 node_conditions.go:102] verifying NodePressure condition ...
I0417 11:00:10.486805 23558 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.4353409s)
I0417 11:00:11.587407 23558 node_conditions.go:122] node storage ephemeral capacity is 28829732Ki
I0417 11:00:11.587453 23558 node_conditions.go:123] node cpu capacity is 2
I0417 11:00:11.587486 23558 node_conditions.go:105] duration metric: took 1.693777908s to run NodePressure ...
I0417 11:00:11.587513 23558 start.go:213] waiting for startup goroutines ...
I0417 11:00:22.444278 23558 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.23.3/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (15.391674976s)
I0417 11:00:24.870509 23558 out.go:176] 🌟 Enabled addons: default-storageclass, storage-provisioner
I0417 11:00:24.870691 23558 addons.go:417] enableAddons completed in 25.254458542s
I0417 11:01:25.939237 23558 start.go:496] kubectl: 1.23.5, cluster: 1.23.3 (minor skew: 0)
I0417 11:01:26.738526 23558 out.go:176] 🏄 Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default
*
* ==> Docker <==
*
-- Logs begin at Sun 2022-04-17 06:53:26 UTC, end at Sun 2022-04-17 07:42:25 UTC. --
Apr 17 06:53:35 minikube dockerd[225]: time="2022-04-17T06:53:35.891111305Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 17 06:53:35 minikube dockerd[225]: time="2022-04-17T06:53:35.891237502Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 17 06:53:35 minikube dockerd[225]: time="2022-04-17T06:53:35.891417661Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Apr 17 06:53:35 minikube dockerd[225]: time="2022-04-17T06:53:35.891505507Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 17 06:53:35 minikube dockerd[225]: time="2022-04-17T06:53:35.996911553Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 17 06:53:35 minikube dockerd[225]: time="2022-04-17T06:53:35.996941709Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 17 06:53:35 minikube dockerd[225]: time="2022-04-17T06:53:35.996969787Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Apr 17 06:53:35 minikube dockerd[225]: time="2022-04-17T06:53:35.996980839Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 17 06:53:36 minikube dockerd[225]: time="2022-04-17T06:53:36.639949623Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Apr 17 06:53:38 minikube systemd[1]: docker.service: Current command vanished from the unit file, execution of the command list won't be resumed.
Apr 17 06:53:39 minikube dockerd[225]: time="2022-04-17T06:53:39.001568193Z" level=warning msg="Your kernel does not support swap memory limit"
Apr 17 06:53:39 minikube dockerd[225]: time="2022-04-17T06:53:39.001675237Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
Apr 17 06:53:39 minikube dockerd[225]: time="2022-04-17T06:53:39.001702635Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Apr 17 06:53:39 minikube dockerd[225]: time="2022-04-17T06:53:39.001726583Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Apr 17 06:53:39 minikube dockerd[225]: time="2022-04-17T06:53:39.016473281Z" level=info msg="Loading containers: start."
Apr 17 06:53:42 minikube dockerd[225]: time="2022-04-17T06:53:42.966651870Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 17 06:53:43 minikube dockerd[225]: time="2022-04-17T06:53:43.142718153Z" level=info msg="Processing signal 'terminated'"
Apr 17 06:53:45 minikube dockerd[225]: time="2022-04-17T06:53:45.297265689Z" level=info msg="Loading containers: done."
Apr 17 06:53:46 minikube dockerd[225]: time="2022-04-17T06:53:46.361776389Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12
Apr 17 06:53:46 minikube dockerd[225]: time="2022-04-17T06:53:46.363596286Z" level=info msg="Daemon has completed initialization"
Apr 17 06:53:48 minikube dockerd[225]: time="2022-04-17T06:53:48.879701669Z" level=info msg="API listen on /run/docker.sock"
Apr 17 06:53:48 minikube dockerd[225]: time="2022-04-17T06:53:48.882787884Z" level=info msg="stopping event stream following graceful shutdown" error="" module=libcontainerd namespace=moby
Apr 17 06:53:48 minikube dockerd[225]: time="2022-04-17T06:53:48.883361266Z" level=info msg="Daemon shutdown complete"
Apr 17 06:53:48 minikube dockerd[225]: time="2022-04-17T06:53:48.883493355Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
Apr 17 06:53:48 minikube systemd[1]: docker.service: Succeeded.
Apr 17 06:53:48 minikube systemd[1]: Stopped Docker Application Container Engine.
Apr 17 06:53:48 minikube systemd[1]: Starting Docker Application Container Engine...
Apr 17 06:53:48 minikube dockerd[449]: time="2022-04-17T06:53:48.968091904Z" level=info msg="Starting up"
Apr 17 06:53:48 minikube dockerd[449]: time="2022-04-17T06:53:48.985501262Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 17 06:53:48 minikube dockerd[449]: time="2022-04-17T06:53:48.985533801Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 17 06:53:48 minikube dockerd[449]: time="2022-04-17T06:53:48.985564336Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Apr 17 06:53:48 minikube dockerd[449]: time="2022-04-17T06:53:48.985573571Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 17 06:53:48 minikube dockerd[449]: time="2022-04-17T06:53:48.986582753Z" level=info msg="parsed scheme: \"unix\"" module=grpc
Apr 17 06:53:48 minikube dockerd[449]: time="2022-04-17T06:53:48.986700091Z" level=info msg="scheme \"unix\" not registered, fallback to default scheme" module=grpc
Apr 17 06:53:48 minikube dockerd[449]: time="2022-04-17T06:53:48.986823675Z" level=info msg="ccResolverWrapper: sending update to cc: {[{unix:///run/containerd/containerd.sock 0 }] }" module=grpc
Apr 17 06:53:48 minikube dockerd[449]: time="2022-04-17T06:53:48.986899597Z" level=info msg="ClientConn switching balancer to \"pick_first\"" module=grpc
Apr 17 06:53:49 minikube dockerd[449]: time="2022-04-17T06:53:49.626684198Z" level=info msg="[graphdriver] using prior storage driver: overlay2"
Apr 17 06:53:50 minikube dockerd[449]: time="2022-04-17T06:53:50.693762141Z" level=warning msg="Your kernel does not support swap memory limit"
Apr 17 06:53:50 minikube dockerd[449]: time="2022-04-17T06:53:50.693854362Z" level=warning msg="Your kernel does not support CPU realtime scheduler"
Apr 17 06:53:50 minikube dockerd[449]: time="2022-04-17T06:53:50.693879230Z" level=warning msg="Your kernel does not support cgroup blkio weight"
Apr 17 06:53:50 minikube dockerd[449]: time="2022-04-17T06:53:50.693897245Z" level=warning msg="Your kernel does not support cgroup blkio weight_device"
Apr 17 06:53:50 minikube dockerd[449]: time="2022-04-17T06:53:50.695524333Z" level=info msg="Loading containers: start."
Apr 17 06:53:53 minikube dockerd[449]: time="2022-04-17T06:53:53.104448034Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
Apr 17 06:53:54 minikube dockerd[449]: time="2022-04-17T06:53:54.547047342Z" level=info msg="Loading containers: done."
Apr 17 06:53:54 minikube dockerd[449]: time="2022-04-17T06:53:54.887396331Z" level=info msg="Docker daemon" commit=459d0df graphdriver(s)=overlay2 version=20.10.12
Apr 17 06:53:54 minikube dockerd[449]: time="2022-04-17T06:53:54.887622476Z" level=info msg="Daemon has completed initialization"
Apr 17 06:53:55 minikube systemd[1]: Started Docker Application Container Engine.
Apr 17 06:53:55 minikube dockerd[449]: time="2022-04-17T06:53:55.423326584Z" level=info msg="API listen on [::]:2376"
Apr 17 06:53:55 minikube dockerd[449]: time="2022-04-17T06:53:55.439338214Z" level=info msg="API listen on /var/run/docker.sock"
Apr 17 06:55:32 minikube dockerd[449]: time="2022-04-17T06:55:32.123545356Z" level=info msg="ignoring event" container=60c66ec31f55e523d1dd438ea3c0f2cc46188ae9c0606b8b0c3409efd947973e module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 17 06:56:07 minikube dockerd[449]: time="2022-04-17T06:56:07.528965047Z" level=info msg="ignoring event" container=57a40f45f9e6f56db4020cd32b90314bcdded5b6fbceb2419aae7c57fb79d7fa module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 17 06:56:40 minikube dockerd[449]: time="2022-04-17T06:56:40.748778346Z" level=info msg="ignoring event" container=d4f9c95fe1a584bd1cf637d03d14ab0849b4aecea0659804c4b8c376dc394282 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 17 06:57:23 minikube dockerd[449]: time="2022-04-17T06:57:23.043404360Z" level=info msg="ignoring event" container=8c38a276b7661562588243048372ecef274f1e4c9c65c55a8f036b979c1b3bdb module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 17 07:01:01 minikube dockerd[449]: time="2022-04-17T07:01:01.958440173Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Apr 17 07:01:02 minikube dockerd[449]: time="2022-04-17T07:01:02.613939514Z" level=warning msg="Your kernel does not support swap limit capabilities or the cgroup is not mounted. Memory limited without swap."
Apr 17 07:01:50 minikube dockerd[449]: time="2022-04-17T07:01:50.213907033Z" level=info msg="Container failed to exit within 30s of signal 15 - using the force" container=77eca80cb0bcb1e5dd76b1d156ab8818cc8a38e29c2c13b3334884a385be6d5f
Apr 17 07:01:51 minikube dockerd[449]: time="2022-04-17T07:01:51.660731858Z" level=info msg="ignoring event" container=77eca80cb0bcb1e5dd76b1d156ab8818cc8a38e29c2c13b3334884a385be6d5f module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 17 07:01:56 minikube dockerd[449]: time="2022-04-17T07:01:56.087344114Z" level=info msg="ignoring event" container=06bc5c788b108c447a80dbd70b67b262df3aa472776855f0408e8235afed1ce2 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 17 07:02:02 minikube dockerd[449]: time="2022-04-17T07:02:02.014262253Z" level=info msg="ignoring event" container=f1222f61a7c04b1c5386500b5597ec28dd3066712c856a182363bdce11021050 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
Apr 17 07:02:45 minikube dockerd[449]: time="2022-04-17T07:02:45.226421487Z" level=warning msg="reference for unknown type: " digest="sha256:dbc33d7d35d2a9cc5ab402005aa7a0d13be6192f3550c7d42cba8d2d5e3a5d62" remote="k8s.gcr.io/metrics-server/metrics-server@sha256:dbc33d7d35d2a9cc5ab402005aa7a0d13be6192f3550c7d42cba8d2d5e3a5d62"
*
* ==> container status <==
*
CONTAINER       IMAGE                                                                                                             CREATED          STATE    NAME                      ATTEMPT  POD ID
47099365f68c9   nginx@sha256:2275af0f20d71b293916f1958f8497f987b8d8fd8113df54635f2a5915002bf1                                     18 minutes ago   Running  nginx                     0        c8cbc07ea4152
5ce45a2f0c426   k8s.gcr.io/metrics-server/metrics-server@sha256:dbc33d7d35d2a9cc5ab402005aa7a0d13be6192f3550c7d42cba8d2d5e3a5d62  39 minutes ago   Running  metrics-server            0        26eec9c79cc6e
af825b533544b   6e38f40d628db                                                                                                     40 minutes ago   Running  storage-provisioner       1        f2d72af2b0398
c909b2ccb881f   a4ca41631cc7a                                                                                                     41 minutes ago   Running  coredns                   0        7508f17e62e31
f1222f61a7c04   6e38f40d628db                                                                                                     41 minutes ago   Exited   storage-provisioner       0        f2d72af2b0398
58951328fc5ba   9b7cc99821098                                                                                                     42 minutes ago   Running  kube-proxy                0        d3527419979b7
2554115d0c2f6   b07520cd7ab76                                                                                                     44 minutes ago   Running  kube-controller-manager   4        301a08cb20601
8c38a276b7661   b07520cd7ab76                                                                                                     45 minutes ago   Exited   kube-controller-manager   3        301a08cb20601
bbc7672c876ba   99a3486be4f28                                                                                                     47 minutes ago   Running  kube-scheduler            0        13318e314d8ef
fc44a3163370f   f40be0088a83e                                                                                                     47 minutes ago   Running  kube-apiserver            0        fd64c5eb0dbf7
5d2f9672797c3   25f8c7f3da61c                                                                                                     47 minutes ago   Running  etcd                      0        52a7d73117838
*
* ==> coredns [c909b2ccb881] <==
*
.:53
[INFO] plugin/reload: Running configuration MD5 = cec3c60eb1cc4909fd4579a8d79ea031
CoreDNS-1.8.6 linux/amd64, go1.17.1, 13a9191
*
* ==> describe nodes <==
*
Name:               minikube
Roles:              control-plane,master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=minikube
                    kubernetes.io/os=linux
                    minikube.k8s.io/commit=362d5fdc0a3dbee389b3d3f1034e8023e72bd3a7
                    minikube.k8s.io/name=minikube
                    minikube.k8s.io/primary=true
                    minikube.k8s.io/updated_at=2022_04_17T10_58_12_0700
                    minikube.k8s.io/version=v1.25.2
                    node-role.kubernetes.io/control-plane=
                    node-role.kubernetes.io/master=
                    node.kubernetes.io/exclude-from-external-load-balancers=
Annotations:
                    kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sun, 17 Apr 2022 06:55:23 +0000
Taints:
Unschedulable:      false
Lease:
  HolderIdentity:  minikube
  AcquireTime:
  RenewTime:       Sun, 17 Apr 2022 07:43:12 +0000
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sun, 17 Apr 2022 07:40:05 +0000   Sun, 17 Apr 2022 06:55:23 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sun, 17 Apr 2022 07:40:05 +0000   Sun, 17 Apr 2022 06:55:23 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sun, 17 Apr 2022 07:40:05 +0000   Sun, 17 Apr 2022 06:55:23 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            True    Sun, 17 Apr 2022 07:40:05 +0000   Sun, 17 Apr 2022 06:55:35 +0000   KubeletReady                 kubelet is posting ready status
Addresses:
  InternalIP:  192.168.49.2
  Hostname:    minikube
Capacity:
  cpu:                2
  ephemeral-storage:  28829732Ki
  hugepages-2Mi:      0
  memory:             2030724Ki
  pods:               110
Allocatable:
  cpu:                2
  ephemeral-storage:  28829732Ki
  hugepages-2Mi:      0
  memory:             2030724Ki
  pods:               110
System Info:
  Machine ID:                 b6a262faae404a5db719705fd34b5c8b
  System UUID:                adf7598f-60d2-43b0-9ffe-a876466b17b8
  Boot ID:                    972e6616-77fb-41e8-975c-0d570163bd80
  Kernel Version:             5.4.0-107-generic
  OS Image:                   Ubuntu 20.04.2 LTS
  Operating System:           linux
  Architecture:               amd64
  Container Runtime Version:  docker://20.10.12
  Kubelet Version:            v1.23.3
  Kube-Proxy Version:         v1.23.3
PodCIDR:      10.244.0.0/24
PodCIDRs:     10.244.0.0/24
Non-terminated Pods:  (9 in total)
  Namespace    Name                              CPU Requests          CPU Limits        Memory Requests        Memory Limits         Age
  ---------    ----                              ------------          ----------        ---------------        -------------         ---
  default      dpl-test-6fb55dc999-lmxfm         0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)      20m
  kube-system  coredns-64897985d-9rksw           100m (5%!)(MISSING)   0 (0%!)(MISSING)  70Mi (3%!)(MISSING)    170Mi (8%!)(MISSING)  43m
  kube-system  etcd-minikube                     100m (5%!)(MISSING)   0 (0%!)(MISSING)  100Mi (5%!)(MISSING)   0 (0%!)(MISSING)      46m
  kube-system  kube-apiserver-minikube           250m (12%!)(MISSING)  0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)      46m
  kube-system  kube-controller-manager-minikube  200m (10%!)(MISSING)  0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)      47m
  kube-system  kube-proxy-wvv7n                  0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)      43m
  kube-system  kube-scheduler-minikube           100m (5%!)(MISSING)   0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)      46m
  kube-system  metrics-server-6b76bd68b6-6tk7p   100m (5%!)(MISSING)   0 (0%!)(MISSING)  300Mi (15%!)(MISSING)  0 (0%!)(MISSING)      41m
  kube-system  storage-provisioner               0 (0%!)(MISSING)      0 (0%!)(MISSING)  0 (0%!)(MISSING)       0 (0%!)(MISSING)      42m
Allocated resources:
  (Total limits may be over 100 percent, i.e., overcommitted.)
  Resource           Requests               Limits
  --------           --------               ------
  cpu                850m (42%!)(MISSING)   0 (0%!)(MISSING)
  memory             470Mi (23%!)(MISSING)  170Mi (8%!)(MISSING)
  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
Events:
  Type    Reason                   Age  From        Message
  ----    ------                   ---- ----        -------
  Normal  Starting                 41m  kube-proxy
  Normal  Starting                 45m  kubelet     Starting kubelet.
  Normal  NodeHasSufficientMemory  45m  kubelet     Node minikube status is now: NodeHasSufficientMemory
  Normal  NodeHasNoDiskPressure    45m  kubelet     Node minikube status is now: NodeHasNoDiskPressure
  Normal  NodeHasSufficientPID     45m  kubelet     Node minikube status is now: NodeHasSufficientPID
  Normal  NodeAllocatableEnforced  45m  kubelet     Updated Node Allocatable limit across pods
*
* ==> dmesg <==
*
[Apr17 05:07] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
[ +0.600983] platform eisa.0: EISA: Cannot allocate resource for mainboard
[ +0.000062] platform eisa.0: Cannot allocate resource for EISA slot 1
[ +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 2
[ +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 3
[ +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 4
[ +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 5
[ +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 6
[ +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 7
[ +0.000000] platform eisa.0: Cannot allocate resource for EISA slot 8
[ +0.312491] [drm:vmw_host_log [vmwgfx]] *ERROR* Failed to send host log message.
[ +0.001594] [drm:vmw_host_log [vmwgfx]] *ERROR* Failed to send host log message.
[ +10.395958] vboxguest: loading out-of-tree module taints kernel.
[ +0.004384] vgdrvHeartbeatInit: Setting up heartbeat to trigger every 2000 milliseconds
[ +0.000939] vboxguest: Successfully loaded version 6.1.26_Ubuntu r145957
[ +0.000023] vboxguest: misc device minor 58, IRQ 20, I/O port d040, MMIO at 00000000f0400000 (size 0x400000)
[ +10.644412] systemd-journald[389]: File /var/log/journal/169838bba5ba4c7bae01bb108494d44b/system.journal corrupted or uncleanly shut down, renaming and replacing.
[ +27.396649] kauditd_printk_skb: 1 callbacks suppressed
[Apr17 05:08] systemd-journald[389]: File /var/log/journal/169838bba5ba4c7bae01bb108494d44b/user-1000.journal corrupted or uncleanly shut down, renaming and replacing.
[Apr17 05:45] e1000 0000:00:03.0 enp0s3: Reset adapter
*
* ==> etcd [5d2f9672797c] <==
*
{"level":"info","ts":"2022-04-17T07:43:10.155Z","caller":"traceutil/trace.go:171","msg":"trace[916632835] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2312; }","duration":"473.094817ms","start":"2022-04-17T07:43:09.682Z","end":"2022-04-17T07:43:10.155Z","steps":["trace[916632835] 'range keys from in-memory index tree' (duration: 472.712887ms)"],"step_count":1}
{"level":"warn","ts":"2022-04-17T07:43:10.155Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-04-17T07:43:09.682Z","time spent":"473.219614ms","remote":"127.0.0.1:53680","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
{"level":"warn","ts":"2022-04-17T07:43:11.011Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128012398217816131,"retry-timeout":"500ms"}
{"level":"info","ts":"2022-04-17T07:43:11.278Z","caller":"traceutil/trace.go:171","msg":"trace[991483899] linearizableReadLoop","detail":"{readStateIndex:2887; appliedIndex:2887; }","duration":"767.465963ms","start":"2022-04-17T07:43:10.511Z","end":"2022-04-17T07:43:11.278Z","steps":["trace[991483899] 'read index received' (duration: 767.447914ms)","trace[991483899] 'applied index is now lower than readState.Index' (duration: 15.789µs)"],"step_count":2}
{"level":"warn","ts":"2022-04-17T07:43:11.279Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"767.92972ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" count_only:true ","response":"range_response_count:0 size:7"}
{"level":"info","ts":"2022-04-17T07:43:11.279Z","caller":"traceutil/trace.go:171","msg":"trace[1667015029] range","detail":"{range_begin:/registry/clusterrolebindings/; range_end:/registry/clusterrolebindings0; response_count:0; response_revision:2313; }","duration":"768.057022ms","start":"2022-04-17T07:43:10.511Z","end":"2022-04-17T07:43:11.279Z","steps":["trace[1667015029] 'agreement among raft nodes before linearized reading' (duration: 767.760989ms)"],"step_count":1}
{"level":"warn","ts":"2022-04-17T07:43:11.279Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-04-17T07:43:10.511Z","time spent":"768.160494ms","remote":"127.0.0.1:53702","response type":"/etcdserverpb.KV/Range","request count":0,"request size":66,"response count":53,"response size":31,"request content":"key:\"/registry/clusterrolebindings/\" range_end:\"/registry/clusterrolebindings0\" count_only:true "}
{"level":"warn","ts":"2022-04-17T07:43:11.280Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"587.627291ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-04-17T07:43:11.280Z","caller":"traceutil/trace.go:171","msg":"trace[1668387860] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2313; }","duration":"587.76986ms","start":"2022-04-17T07:43:10.692Z","end":"2022-04-17T07:43:11.280Z","steps":["trace[1668387860] 'agreement among raft nodes before linearized reading' (duration: 587.592803ms)"],"step_count":1}
{"level":"warn","ts":"2022-04-17T07:43:11.280Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-04-17T07:43:10.692Z","time spent":"587.847385ms","remote":"127.0.0.1:53680","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
{"level":"info","ts":"2022-04-17T07:43:12.415Z","caller":"traceutil/trace.go:171","msg":"trace[371049000] linearizableReadLoop","detail":"{readStateIndex:2888; appliedIndex:2888; }","duration":"100.454254ms","start":"2022-04-17T07:43:12.314Z","end":"2022-04-17T07:43:12.415Z","steps":["trace[371049000] 'read index received' (duration: 100.353103ms)","trace[371049000] 'applied index is now lower than readState.Index' (duration: 97.364µs)"],"step_count":2}
{"level":"warn","ts":"2022-04-17T07:43:13.310Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"463.875508ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/namespaces/default\" ","response":"range_response_count:1 size:343"}
{"level":"info","ts":"2022-04-17T07:43:13.310Z","caller":"traceutil/trace.go:171","msg":"trace[1325805272] range","detail":"{range_begin:/registry/namespaces/default; range_end:; response_count:1; response_revision:2314; }","duration":"464.077199ms","start":"2022-04-17T07:43:12.846Z","end":"2022-04-17T07:43:13.310Z","steps":["trace[1325805272] 'range keys from in-memory index tree' (duration: 463.56847ms)"],"step_count":1}
{"level":"warn","ts":"2022-04-17T07:43:13.310Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-04-17T07:43:12.846Z","time spent":"464.197973ms","remote":"127.0.0.1:53626","response type":"/etcdserverpb.KV/Range","request count":0,"request size":30,"response count":1,"response size":367,"request content":"key:\"/registry/namespaces/default\" "}
{"level":"warn","ts":"2022-04-17T07:43:13.311Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"996.929442ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1111"}
{"level":"info","ts":"2022-04-17T07:43:13.311Z","caller":"traceutil/trace.go:171","msg":"trace[1815586214] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2314; }","duration":"997.05448ms","start":"2022-04-17T07:43:12.314Z","end":"2022-04-17T07:43:13.311Z","steps":["trace[1815586214] 'agreement among raft nodes before linearized reading' (duration: 101.049485ms)","trace[1815586214] 'range keys from in-memory index tree' (duration: 895.798341ms)"],"step_count":2}
{"level":"warn","ts":"2022-04-17T07:43:13.312Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-04-17T07:43:12.314Z","time spent":"997.162057ms","remote":"127.0.0.1:53628","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1135,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "}
{"level":"warn","ts":"2022-04-17T07:43:13.312Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"617.994753ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-04-17T07:43:13.312Z","caller":"traceutil/trace.go:171","msg":"trace[415871190] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2314; }","duration":"618.092332ms","start":"2022-04-17T07:43:12.694Z","end":"2022-04-17T07:43:13.312Z","steps":["trace[415871190] 'range keys from in-memory index tree' (duration: 617.486325ms)"],"step_count":1}
{"level":"warn","ts":"2022-04-17T07:43:13.312Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-04-17T07:43:12.694Z","time spent":"618.297047ms","remote":"127.0.0.1:53680","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "}
{"level":"warn","ts":"2022-04-17T07:43:14.031Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"449.533212ms","expected-duration":"100ms","prefix":"","request":"header: txn: success:> failure: >>","response":"size:16"}
{"level":"info","ts":"2022-04-17T07:43:14.031Z","caller":"traceutil/trace.go:171","msg":"trace[1232245049] linearizableReadLoop","detail":"{readStateIndex:2891; appliedIndex:2889; }","duration":"595.978592ms","start":"2022-04-17T07:43:13.435Z","end":"2022-04-17T07:43:14.031Z","steps":["trace[1232245049] 'read index received' (duration: 146.138765ms)","trace[1232245049] 'applied index is now lower than readState.Index' (duration: 449.83895ms)"],"step_count":2}
{"level":"info","ts":"2022-04-17T07:43:14.031Z","caller":"traceutil/trace.go:171","msg":"trace[1889037849] transaction","detail":"{read_only:false; response_revision:2315; number_of_response:1; }","duration":"692.941546ms","start":"2022-04-17T07:43:13.338Z","end":"2022-04-17T07:43:14.031Z","steps":["trace[1889037849] 'process raft request' (duration: 243.021544ms)","trace[1889037849] 'compare' (duration: 449.416774ms)"],"step_count":2}
{"level":"warn","ts":"2022-04-17T07:43:14.031Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-04-17T07:43:13.338Z","time spent":"693.041372ms","remote":"127.0.0.1:53600","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":119,"response count":0,"response size":40,"request content":"compare: success:> failure: >"}
{"level":"warn","ts":"2022-04-17T07:43:14.031Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"335.524129ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
{"level":"info","ts":"2022-04-17T07:43:14.032Z","caller":"traceutil/trace.go:171","msg":"trace[898950715] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2316;
}","duration":"335.704738ms","start":"2022-04-17T07:43:13.696Z","end":"2022-04-17T07:43:14.032Z","steps":["trace[898950715] 'agreement among raft nodes before linearized reading' (duration: 335.501658ms)"],"step_count":1} {"level":"warn","ts":"2022-04-17T07:43:14.032Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"249.65287ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/secrets/\" range_end:\"/registry/secrets0\" count_only:true ","response":"range_response_count:0 size:7"} {"level":"info","ts":"2022-04-17T07:43:14.032Z","caller":"traceutil/trace.go:171","msg":"trace[370618478] range","detail":"{range_begin:/registry/secrets/; range_end:/registry/secrets0; response_count:0; response_revision:2316; }","duration":"249.692297ms","start":"2022-04-17T07:43:13.782Z","end":"2022-04-17T07:43:14.032Z","steps":["trace[370618478] 'agreement among raft nodes before linearized reading' (duration: 249.625269ms)"],"step_count":1} {"level":"warn","ts":"2022-04-17T07:43:14.032Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-04-17T07:43:13.696Z","time spent":"335.868718ms","remote":"127.0.0.1:53680","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "} {"level":"warn","ts":"2022-04-17T07:43:14.032Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"596.807399ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:500 ","response":"range_response_count:1 size:4761"} {"level":"info","ts":"2022-04-17T07:43:14.032Z","caller":"traceutil/trace.go:171","msg":"trace[1140796715] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:1; response_revision:2316; 
}","duration":"597.413946ms","start":"2022-04-17T07:43:13.435Z","end":"2022-04-17T07:43:14.032Z","steps":["trace[1140796715] 'agreement among raft nodes before linearized reading' (duration: 596.782655ms)"],"step_count":1} {"level":"warn","ts":"2022-04-17T07:43:14.033Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-04-17T07:43:13.435Z","time spent":"597.859233ms","remote":"127.0.0.1:53630","response type":"/etcdserverpb.KV/Range","request count":0,"request size":43,"response count":1,"response size":4785,"request content":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" limit:500 "} {"level":"info","ts":"2022-04-17T07:43:14.032Z","caller":"traceutil/trace.go:171","msg":"trace[405725540] transaction","detail":"{read_only:false; response_revision:2316; number_of_response:1; }","duration":"686.194999ms","start":"2022-04-17T07:43:13.345Z","end":"2022-04-17T07:43:14.032Z","steps":["trace[405725540] 'process raft request' (duration: 685.294273ms)"],"step_count":1} {"level":"warn","ts":"2022-04-17T07:43:14.033Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-04-17T07:43:13.345Z","time spent":"687.482126ms","remote":"127.0.0.1:53628","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1095,"response count":0,"response size":40,"request content":"compare: success:> failure: >"} {"level":"warn","ts":"2022-04-17T07:43:15.150Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"455.583564ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-04-17T07:43:15.150Z","caller":"traceutil/trace.go:171","msg":"trace[1108721610] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2316; }","duration":"455.803358ms","start":"2022-04-17T07:43:14.694Z","end":"2022-04-17T07:43:15.150Z","steps":["trace[1108721610] 
'range keys from in-memory index tree' (duration: 455.201437ms)"],"step_count":1} {"level":"warn","ts":"2022-04-17T07:43:15.150Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-04-17T07:43:14.694Z","time spent":"455.972548ms","remote":"127.0.0.1:53680","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "} {"level":"warn","ts":"2022-04-17T07:43:15.150Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"690.048033ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" limit:500 ","response":"range_response_count:9 size:42550"} {"level":"info","ts":"2022-04-17T07:43:15.150Z","caller":"traceutil/trace.go:171","msg":"trace[1718073490] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; response_count:9; response_revision:2316; }","duration":"690.220849ms","start":"2022-04-17T07:43:14.460Z","end":"2022-04-17T07:43:15.150Z","steps":["trace[1718073490] 'range keys from in-memory index tree' (duration: 689.711312ms)"],"step_count":1} {"level":"warn","ts":"2022-04-17T07:43:15.150Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-04-17T07:43:14.460Z","time spent":"690.335182ms","remote":"127.0.0.1:53634","response type":"/etcdserverpb.KV/Range","request count":0,"request size":37,"response count":9,"response size":42574,"request content":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" limit:500 "} {"level":"warn","ts":"2022-04-17T07:43:17.201Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, retrying","sent-request-id":8128012398217816158,"retry-timeout":"500ms"} {"level":"warn","ts":"2022-04-17T07:43:17.702Z","caller":"etcdserver/v3_server.go:815","msg":"waiting for ReadIndex response took too long, 
retrying","sent-request-id":8128012398217816158,"retry-timeout":"500ms"} {"level":"warn","ts":"2022-04-17T07:43:17.765Z","caller":"wal/wal.go:802","msg":"slow fdatasync","took":"1.704018526s","expected-duration":"1s"} {"level":"info","ts":"2022-04-17T07:43:17.766Z","caller":"traceutil/trace.go:171","msg":"trace[394103271] linearizableReadLoop","detail":"{readStateIndex:2892; appliedIndex:2892; }","duration":"1.068431242s","start":"2022-04-17T07:43:16.697Z","end":"2022-04-17T07:43:17.766Z","steps":["trace[394103271] 'read index received' (duration: 1.068414459s)","trace[394103271] 'applied index is now lower than readState.Index' (duration: 14.252ยตs)"],"step_count":2} {"level":"warn","ts":"2022-04-17T07:43:17.860Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"252.477355ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/endpointslices/\" range_end:\"/registry/endpointslices0\" count_only:true ","response":"range_response_count:0 size:7"} {"level":"warn","ts":"2022-04-17T07:43:17.860Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"1.1621393s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-04-17T07:43:17.860Z","caller":"traceutil/trace.go:171","msg":"trace[1544171804] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2317; }","duration":"1.162392465s","start":"2022-04-17T07:43:16.697Z","end":"2022-04-17T07:43:17.860Z","steps":["trace[1544171804] 'agreement among raft nodes before linearized reading' (duration: 1.068811682s)","trace[1544171804] 'range keys from in-memory index tree' (duration: 93.276063ms)"],"step_count":2} {"level":"warn","ts":"2022-04-17T07:43:17.860Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-04-17T07:43:16.697Z","time 
spent":"1.16254505s","remote":"127.0.0.1:53680","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "} {"level":"info","ts":"2022-04-17T07:43:17.860Z","caller":"traceutil/trace.go:171","msg":"trace[538740096] range","detail":"{range_begin:/registry/endpointslices/; range_end:/registry/endpointslices0; response_count:0; response_revision:2317; }","duration":"252.708021ms","start":"2022-04-17T07:43:17.607Z","end":"2022-04-17T07:43:17.860Z","steps":["trace[538740096] 'agreement among raft nodes before linearized reading' (duration: 159.2719ms)","trace[538740096] 'count revisions from in-memory index tree' (duration: 93.166698ms)"],"step_count":2} {"level":"warn","ts":"2022-04-17T07:43:17.862Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"572.19477ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-04-17T07:43:17.864Z","caller":"traceutil/trace.go:171","msg":"trace[785550190] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2317; }","duration":"575.392942ms","start":"2022-04-17T07:43:17.288Z","end":"2022-04-17T07:43:17.864Z","steps":["trace[785550190] 'agreement among raft nodes before linearized reading' (duration: 477.708197ms)","trace[785550190] 'range keys from in-memory index tree' (duration: 94.449039ms)"],"step_count":2} {"level":"warn","ts":"2022-04-17T07:43:17.864Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-04-17T07:43:17.288Z","time spent":"575.521682ms","remote":"127.0.0.1:53680","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":29,"request content":"key:\"/registry/health\" "} 
{"level":"info","ts":"2022-04-17T07:43:18.345Z","caller":"traceutil/trace.go:171","msg":"trace[1883822475] linearizableReadLoop","detail":"{readStateIndex:2893; appliedIndex:2893; }","duration":"262.674316ms","start":"2022-04-17T07:43:18.083Z","end":"2022-04-17T07:43:18.345Z","steps":["trace[1883822475] 'read index received' (duration: 262.655648ms)","trace[1883822475] 'applied index is now lower than readState.Index' (duration: 16.35ยตs)"],"step_count":2} {"level":"warn","ts":"2022-04-17T07:43:18.847Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"142.720231ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-04-17T07:43:18.847Z","caller":"traceutil/trace.go:171","msg":"trace[1513514736] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:2317; }","duration":"142.89036ms","start":"2022-04-17T07:43:18.704Z","end":"2022-04-17T07:43:18.847Z","steps":["trace[1513514736] 'range keys from in-memory index tree' (duration: 142.075279ms)"],"step_count":1} {"level":"warn","ts":"2022-04-17T07:43:18.847Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"764.151548ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" ","response":"range_response_count:1 size:1111"} {"level":"info","ts":"2022-04-17T07:43:18.847Z","caller":"traceutil/trace.go:171","msg":"trace[1900311776] range","detail":"{range_begin:/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath; range_end:; response_count:1; response_revision:2317; }","duration":"764.506738ms","start":"2022-04-17T07:43:18.083Z","end":"2022-04-17T07:43:18.847Z","steps":["trace[1900311776] 'agreement among raft nodes before linearized reading' (duration: 262.936185ms)","trace[1900311776] 'range keys from in-memory index 
tree' (duration: 501.162163ms)"],"step_count":2} {"level":"warn","ts":"2022-04-17T07:43:18.847Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2022-04-17T07:43:18.083Z","time spent":"764.835292ms","remote":"127.0.0.1:53628","response type":"/etcdserverpb.KV/Range","request count":0,"request size":67,"response count":1,"response size":1135,"request content":"key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" "} {"level":"warn","ts":"2022-04-17T07:43:18.848Z","caller":"etcdserver/util.go:166","msg":"apply request took too long","took":"400.060521ms","expected-duration":"100ms","prefix":"read-only range ","request":"limit:1 keys_only:true ","response":"range_response_count:0 size:5"} {"level":"info","ts":"2022-04-17T07:43:18.848Z","caller":"traceutil/trace.go:171","msg":"trace[219882649] range","detail":"{range_begin:; range_end:; response_count:0; response_revision:2317; }","duration":"400.145929ms","start":"2022-04-17T07:43:18.448Z","end":"2022-04-17T07:43:18.848Z","steps":["trace[219882649] 'range keys from in-memory index tree' (duration: 399.893685ms)"],"step_count":1} * * ==> kernel <== * 07:43:22 up 2:36, 0 users, load average: 3.96, 2.49, 2.19 Linux minikube 5.4.0-107-generic #121-Ubuntu SMP Thu Mar 24 16:04:27 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux PRETTY_NAME="Ubuntu 20.04.2 LTS" * * ==> kube-apiserver [fc44a3163370] <== * Trace[370029308]: ---"Object stored in database" 582ms (07:42:20.093) Trace[370029308]: [582.746665ms] [582.746665ms] END I0417 07:42:23.406728 1 trace.go:205] Trace[915420874]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.23.3 (linux/amd64) kubernetes/816c97a,audit-id:1392b41f-553c-455e-954f-75fc198f0b54,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (17-Apr-2022 07:42:22.831) (total time: 575ms): Trace[915420874]: ---"About to write a response" 574ms (07:42:23.406) Trace[915420874]: [575.063252ms] [575.063252ms] END I0417 
07:42:24.266051 1 trace.go:205] Trace[2126696152]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (17-Apr-2022 07:42:23.416) (total time: 849ms): Trace[2126696152]: ---"Transaction committed" 841ms (07:42:24.265) Trace[2126696152]: [849.691962ms] [849.691962ms] END I0417 07:42:33.765966 1 trace.go:205] Trace[296027873]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (17-Apr-2022 07:42:32.835) (total time: 929ms): Trace[296027873]: ---"Transaction committed" 927ms (07:42:33.765) Trace[296027873]: [929.881197ms] [929.881197ms] END I0417 07:42:33.766964 1 trace.go:205] Trace[79279867]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:e917b68f-ddbc-4017-9940-1c25b52246f2,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (17-Apr-2022 07:42:33.217) (total time: 549ms): Trace[79279867]: ---"About to write a response" 549ms (07:42:33.766) Trace[79279867]: [549.345946ms] [549.345946ms] END I0417 07:42:43.528902 1 trace.go:205] Trace[1271520123]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (17-Apr-2022 07:42:42.836) (total time: 692ms): Trace[1271520123]: ---"Transaction committed" 690ms (07:42:43.528) Trace[1271520123]: [692.30315ms] [692.30315ms] END I0417 07:42:53.465451 1 trace.go:205] Trace[246263848]: "Get" url:/api/v1/namespaces/default,user-agent:kube-apiserver/v1.23.3 (linux/amd64) kubernetes/816c97a,audit-id:79b0ac01-e270-4707-b769-f7c7172eaf25,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (17-Apr-2022 07:42:52.843) (total time: 622ms): Trace[246263848]: ---"About to write a response" 621ms (07:42:53.465) Trace[246263848]: [622.114846ms] [622.114846ms] END I0417 07:42:54.257653 1 trace.go:205] Trace[1249052786]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (17-Apr-2022 07:42:53.474) (total time: 783ms): Trace[1249052786]: ---"Transaction committed" 777ms (07:42:54.257) Trace[1249052786]: [783.120125ms] 
[783.120125ms] END I0417 07:43:04.171685 1 trace.go:205] Trace[1227026793]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (17-Apr-2022 07:43:02.968) (total time: 1202ms): Trace[1227026793]: ---"Transaction committed" 1195ms (07:43:04.171) Trace[1227026793]: [1.202722462s] [1.202722462s] END I0417 07:43:04.179585 1 trace.go:205] Trace[24965164]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:db9c197d-a132-4b80-ad0a-912e8d1bfbeb,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (17-Apr-2022 07:43:03.077) (total time: 1102ms): Trace[24965164]: ---"About to write a response" 1102ms (07:43:04.179) Trace[24965164]: [1.102344427s] [1.102344427s] END I0417 07:43:13.314871 1 trace.go:205] Trace[1804800116]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:5b916edc-d18b-4008-b74d-617ce11e647f,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (17-Apr-2022 07:43:12.313) (total time: 1001ms): Trace[1804800116]: ---"About to write a response" 1001ms (07:43:13.314) Trace[1804800116]: [1.001435783s] [1.001435783s] END I0417 07:43:14.032816 1 trace.go:205] Trace[711350766]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (17-Apr-2022 07:43:13.329) (total time: 703ms): Trace[711350766]: ---"Transaction committed" 699ms (07:43:14.032) Trace[711350766]: [703.612803ms] [703.612803ms] END I0417 07:43:14.034562 1 trace.go:205] Trace[718028928]: "List etcd3" key:/minions,resourceVersion:,resourceVersionMatch:,limit:500,continue: (17-Apr-2022 07:43:13.433) (total time: 601ms): Trace[718028928]: [601.436399ms] [601.436399ms] END I0417 07:43:14.034931 1 trace.go:205] Trace[712394460]: "List" url:/api/v1/nodes,user-agent:kubectl/v1.23.3 (linux/amd64) 
kubernetes/816c97a,audit-id:27e2c311-7e6d-4511-b9ae-897254be2568,client:127.0.0.1,accept:application/json,protocol:HTTP/2.0 (17-Apr-2022 07:43:13.433) (total time: 601ms): Trace[712394460]: ---"Listing from storage done" 601ms (07:43:14.034) Trace[712394460]: [601.865003ms] [601.865003ms] END I0417 07:43:14.035131 1 trace.go:205] Trace[1567217058]: "GuaranteedUpdate etcd3" type:*core.Endpoints (17-Apr-2022 07:43:13.344) (total time: 690ms): Trace[1567217058]: ---"Transaction committed" 689ms (07:43:14.035) Trace[1567217058]: [690.367983ms] [690.367983ms] END I0417 07:43:14.035252 1 trace.go:205] Trace[1152046235]: "Update" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) kubernetes/$Format,audit-id:269d91e9-7c22-41b3-9bdc-008b7d687c4e,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (17-Apr-2022 07:43:13.344) (total time: 691ms): Trace[1152046235]: ---"Object stored in database" 690ms (07:43:14.035) Trace[1152046235]: [691.19247ms] [691.19247ms] END I0417 07:43:15.156610 1 trace.go:205] Trace[181639169]: "List etcd3" key:/pods,resourceVersion:,resourceVersionMatch:,limit:500,continue: (17-Apr-2022 07:43:14.460) (total time: 696ms): Trace[181639169]: [696.429265ms] [696.429265ms] END I0417 07:43:15.160351 1 trace.go:205] Trace[1402088792]: "List" url:/api/v1/pods,user-agent:kubectl/v1.23.3 (linux/amd64) kubernetes/816c97a,audit-id:c345d0cd-5655-41b2-9d5e-8d96cd5dfd1d,client:127.0.0.1,accept:application/json, */*,protocol:HTTP/2.0 (17-Apr-2022 07:43:14.460) (total time: 700ms): Trace[1402088792]: ---"Listing from storage done" 696ms (07:43:15.156) Trace[1402088792]: [700.226847ms] [700.226847ms] END I0417 07:43:18.850294 1 trace.go:205] Trace[1301428869]: "Get" url:/api/v1/namespaces/kube-system/endpoints/k8s.io-minikube-hostpath,user-agent:storage-provisioner/v0.0.0 (linux/amd64) 
kubernetes/$Format,audit-id:a9206ed5-d257-4c1f-a013-619b4ebfff2c,client:192.168.49.2,accept:application/json, */*,protocol:HTTP/2.0 (17-Apr-2022 07:43:18.081) (total time: 768ms): Trace[1301428869]: ---"About to write a response" 767ms (07:43:18.849) Trace[1301428869]: [768.331925ms] [768.331925ms] END I0417 07:43:24.086179 1 trace.go:205] Trace[1925669405]: "GuaranteedUpdate etcd3" type:*v1.Endpoints (17-Apr-2022 07:43:23.188) (total time: 897ms): Trace[1925669405]: ---"Transaction prepared" 888ms (07:43:24.078) Trace[1925669405]: [897.612008ms] [897.612008ms] END I0417 07:43:25.222822 1 trace.go:205] Trace[1086231175]: "Get" url:/api/v1/namespaces/default/endpoints/kubernetes,user-agent:kube-apiserver/v1.23.3 (linux/amd64) kubernetes/816c97a,audit-id:a67f2d6c-c05f-4ee9-abfe-8d3f19a86fde,client:127.0.0.1,accept:application/vnd.kubernetes.protobuf, */*,protocol:HTTP/2.0 (17-Apr-2022 07:43:24.088) (total time: 1133ms): Trace[1086231175]: ---"About to write a response" 1133ms (07:43:25.222) Trace[1086231175]: [1.133812204s] [1.133812204s] END * * ==> kube-controller-manager [2554115d0c2f] <== * I0417 06:59:43.839564 1 shared_informer.go:247] Caches are synced for namespace I0417 06:59:43.840077 1 shared_informer.go:247] Caches are synced for certificate-csrapproving I0417 06:59:44.011846 1 shared_informer.go:240] Waiting for caches to sync for garbage collector I0417 06:59:44.012068 1 shared_informer.go:247] Caches are synced for service account I0417 06:59:44.013288 1 shared_informer.go:247] Caches are synced for bootstrap_signer I0417 06:59:44.024051 1 range_allocator.go:173] Starting range CIDR allocator I0417 06:59:44.024131 1 shared_informer.go:240] Waiting for caches to sync for cidrallocator I0417 06:59:44.024170 1 shared_informer.go:247] Caches are synced for cidrallocator I0417 06:59:44.306898 1 shared_informer.go:247] Caches are synced for TTL I0417 06:59:44.311755 1 shared_informer.go:247] Caches are synced for crt configmap I0417 06:59:44.315546 1 
shared_informer.go:247] Caches are synced for persistent volume I0417 06:59:44.323716 1 shared_informer.go:247] Caches are synced for disruption I0417 06:59:44.323803 1 disruption.go:371] Sending events to api server. I0417 06:59:44.324299 1 shared_informer.go:247] Caches are synced for endpoint_slice I0417 06:59:44.324518 1 shared_informer.go:247] Caches are synced for endpoint I0417 06:59:44.340159 1 shared_informer.go:247] Caches are synced for HPA I0417 06:59:44.340530 1 shared_informer.go:247] Caches are synced for ReplicationController I0417 06:59:44.491015 1 shared_informer.go:247] Caches are synced for job I0417 06:59:44.491939 1 shared_informer.go:247] Caches are synced for daemon sets I0417 06:59:44.492384 1 shared_informer.go:247] Caches are synced for PVC protection I0417 06:59:44.492681 1 shared_informer.go:247] Caches are synced for ReplicaSet I0417 06:59:44.492949 1 shared_informer.go:247] Caches are synced for GC I0417 06:59:44.493679 1 shared_informer.go:247] Caches are synced for taint I0417 06:59:44.494763 1 shared_informer.go:247] Caches are synced for deployment I0417 06:59:44.491024 1 shared_informer.go:247] Caches are synced for attach detach I0417 06:59:44.566726 1 taint_manager.go:187] "Starting NoExecuteTaintManager" I0417 06:59:44.568994 1 node_lifecycle_controller.go:1397] Initializing eviction metric for zone: W0417 06:59:44.570185 1 node_lifecycle_controller.go:1012] Missing timestamp for Node minikube. Assuming now as a timestamp. I0417 06:59:44.570679 1 node_lifecycle_controller.go:1213] Controller detected that zone is now in state Normal. 
I0417 06:59:44.491062 1 shared_informer.go:247] Caches are synced for ephemeral I0417 06:59:44.491089 1 shared_informer.go:247] Caches are synced for stateful set I0417 06:59:44.753560 1 event.go:294] "Event occurred" object="minikube" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node minikube event: Registered Node minikube in Controller" I0417 06:59:45.006110 1 range_allocator.go:374] Set node minikube PodCIDR to [10.244.0.0/24] I0417 06:59:45.012836 1 shared_informer.go:247] Caches are synced for garbage collector I0417 06:59:45.024636 1 shared_informer.go:247] Caches are synced for resource quota I0417 06:59:45.024722 1 shared_informer.go:247] Caches are synced for garbage collector I0417 06:59:45.024730 1 garbagecollector.go:155] Garbage collector: all resource monitors have synced. Proceeding to collect garbage I0417 06:59:45.135266 1 shared_informer.go:247] Caches are synced for resource quota I0417 06:59:48.309322 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set coredns-64897985d to 2" I0417 06:59:50.379979 1 event.go:294] "Event occurred" object="kube-system/kube-proxy" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-wvv7n" I0417 06:59:52.452607 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-798tz" I0417 06:59:53.535821 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: coredns-64897985d-9rksw" I0417 06:59:57.828857 1 event.go:294] "Event occurred" object="kube-system/coredns" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled down 
replica set coredns-64897985d to 1" I0417 07:00:00.595927 1 event.go:294] "Event occurred" object="kube-system/coredns-64897985d" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: coredns-64897985d-798tz" I0417 07:02:02.913474 1 event.go:294] "Event occurred" object="kube-system/metrics-server" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set metrics-server-6b76bd68b6 to 1" I0417 07:02:05.664232 1 event.go:294] "Event occurred" object="kube-system/metrics-server-6b76bd68b6" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-6b76bd68b6-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" E0417 07:02:06.793361 1 replica_set.go:536] sync "kube-system/metrics-server-6b76bd68b6" failed with pods "metrics-server-6b76bd68b6-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found E0417 07:02:08.861587 1 replica_set.go:536] sync "kube-system/metrics-server-6b76bd68b6" failed with pods "metrics-server-6b76bd68b6-" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount "metrics-server" not found I0417 07:02:08.862309 1 event.go:294] "Event occurred" object="kube-system/metrics-server-6b76bd68b6" kind="ReplicaSet" apiVersion="apps/v1" type="Warning" reason="FailedCreate" message="Error creating: pods \"metrics-server-6b76bd68b6-\" is forbidden: error looking up service account kube-system/metrics-server: serviceaccount \"metrics-server\" not found" I0417 07:02:11.530994 1 event.go:294] "Event occurred" object="kube-system/metrics-server-6b76bd68b6" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: metrics-server-6b76bd68b6-6tk7p" W0417 07:02:15.215607 1 garbagecollector.go:707] 
failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request] E0417 07:02:15.303620 1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request W0417 07:02:45.265268 1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request] E0417 07:02:45.315752 1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request W0417 07:03:15.316485 1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request] E0417 07:03:15.324354 1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request E0417 07:03:45.357563 1 resource_quota_controller.go:413] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: the server is currently unable to handle the request W0417 07:03:45.367574 1 garbagecollector.go:707] failed to discover some groups: map[metrics.k8s.io/v1beta1:the server is currently unable to handle the request] I0417 07:22:48.206744 1 event.go:294] "Event occurred" object="default/dpl-test" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set dpl-test-6fb55dc999 to 1" I0417 07:22:49.159348 1 event.go:294] "Event occurred" object="default/dpl-test-6fb55dc999" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: dpl-test-6fb55dc999-lmxfm" * * ==> kube-controller-manager [8c38a276b766] <== * k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0, 0xdf8475800, 0x0, 0x28, 0xc00006bf40) 
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133 +0x89
k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.Until(0xc0001ab280, 0x0, 0xc000102360)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:90 +0x25
created by k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates.(*DynamicFileCAContent).Run
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apiserver/pkg/server/dynamiccertificates/dynamic_cafile_content.go:163 +0x3c7

goroutine 138 [syscall]:
syscall.Syscall6(0xe8, 0xe, 0xc000e2fbec, 0x7, 0xffffffffffffffff, 0x0, 0x0)
	/usr/local/go/src/syscall/asm_linux_amd64.s:43 +0x5
k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0x0, {0xc000e2fbec, 0x0, 0x0}, 0x0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:77 +0x58
k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc000563ea0)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x7d
k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc00043c500)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x2b0
created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1c7

goroutine 146 [IO wait]:
internal/poll.runtime_pollWait(0x7f8793642f88, 0x72)
	/usr/local/go/src/runtime/netpoll.go:234 +0x89
internal/poll.(*pollDesc).wait(0xc0001ab680, 0xc000d94000, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x32
internal/poll.(*pollDesc).waitRead(...)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc0001ab680, {0xc000d94000, 0x931, 0x931})
	/usr/local/go/src/internal/poll/fd_unix.go:167 +0x25a
net.(*netFD).Read(0xc0001ab680, {0xc000d94000, 0xc000d94047, 0x442})
	/usr/local/go/src/net/fd_posix.go:56 +0x29
net.(*conn).Read(0xc000116b00, {0xc000d94000, 0x6, 0xc00013f7f0})
	/usr/local/go/src/net/net.go:183 +0x45
crypto/tls.(*atLeastReader).Read(0xc000d6a798, {0xc000d94000, 0x0, 0x40a18d})
	/usr/local/go/src/crypto/tls/conn.go:777 +0x3d
bytes.(*Buffer).ReadFrom(0xc00060e5f8, {0x4d46520, 0xc000d6a798})
	/usr/local/go/src/bytes/buffer.go:204 +0x98
crypto/tls.(*Conn).readFromUntil(0xc00060e380, {0x4d4e980, 0xc000116b00}, 0x8ef)
	/usr/local/go/src/crypto/tls/conn.go:799 +0xe5
crypto/tls.(*Conn).readRecordOrCCS(0xc00060e380, 0x0)
	/usr/local/go/src/crypto/tls/conn.go:606 +0x112
crypto/tls.(*Conn).readRecord(...)
	/usr/local/go/src/crypto/tls/conn.go:574
crypto/tls.(*Conn).Read(0xc00060e380, {0xc000db1000, 0x1000, 0x9187e0})
	/usr/local/go/src/crypto/tls/conn.go:1277 +0x16f
bufio.(*Reader).Read(0xc000643c20, {0xc0001b64a0, 0x9, 0x933e42})
	/usr/local/go/src/bufio/bufio.go:227 +0x1b4
io.ReadAtLeast({0x4d46340, 0xc000643c20}, {0xc0001b64a0, 0x9, 0x9}, 0x9)
	/usr/local/go/src/io/io.go:328 +0x9a
io.ReadFull(...)
	/usr/local/go/src/io/io.go:347
k8s.io/kubernetes/vendor/golang.org/x/net/http2.readFrameHeader({0xc0001b64a0, 0x9, 0xc001d63e30}, {0x4d46340, 0xc000643c20})
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:237 +0x6e
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Framer).ReadFrame(0xc0001b6460)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/frame.go:498 +0x95
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*clientConnReadLoop).run(0xc00013ff98)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:2101 +0x130
k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*ClientConn).readLoop(0xc000894c00)
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:1997 +0x6f
created by k8s.io/kubernetes/vendor/golang.org/x/net/http2.(*Transport).newClientConn
	/workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/net/http2/transport.go:725 +0xac5
*
* ==> kube-proxy [58951328fc5b] <==
*
I0417 07:01:29.114013 1 node.go:163] Successfully retrieved node IP: 192.168.49.2
I0417 07:01:29.115879 1 server_others.go:138] "Detected node IP" address="192.168.49.2"
I0417 07:01:29.136075 1 server_others.go:561] "Unknown proxy mode, assuming iptables proxy" proxyMode=""
I0417 07:01:32.548231 1 server_others.go:206] "Using iptables Proxier"
I0417 07:01:32.548265 1 server_others.go:213] "kube-proxy running in dual-stack mode" ipFamily=IPv4
I0417 07:01:32.548276 1 server_others.go:214] "Creating dualStackProxier for iptables"
I0417 07:01:32.548300 1 server_others.go:491] "Detect-local-mode set to ClusterCIDR, but no IPv6 cluster CIDR defined, , defaulting to no-op detect-local for IPv6"
I0417 07:01:32.549153 1 server.go:656] "Version info" version="v1.23.3"
I0417 07:01:32.587370 1 config.go:317]
"Starting service config controller"
I0417 07:01:32.587403 1 shared_informer.go:240] Waiting for caches to sync for service config
I0417 07:01:32.704899 1 config.go:226] "Starting endpoint slice config controller"
I0417 07:01:32.704946 1 shared_informer.go:240] Waiting for caches to sync for endpoint slice config
I0417 07:01:32.787882 1 shared_informer.go:247] Caches are synced for service config
I0417 07:01:32.806081 1 shared_informer.go:247] Caches are synced for endpoint slice config
*
* ==> kube-scheduler [bbc7672c876b] <==
*
E0417 06:55:50.404393 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0417 06:55:51.537587 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0417 06:55:51.537690 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0417 06:55:52.250256 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0417 06:55:52.250381 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0417 06:55:52.272381 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0417 06:55:52.272489 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0417 06:55:53.268250 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0417 06:55:53.268283 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0417 06:55:53.566955 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0417 06:55:53.567252 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0417 06:55:53.833928 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0417 06:55:53.834270 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0417 06:55:54.530775 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0417 06:55:54.530858 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0417 06:55:57.379332 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0417 06:55:57.379383 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0417 06:55:58.246357 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0417 06:55:58.246735 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0417 06:55:58.509427 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0417 06:55:58.509542 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0417 06:56:00.797690 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0417 06:56:00.797802 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0417 06:56:12.834852 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0417 06:56:12.836200 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0417 06:56:23.619748 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
E0417 06:56:23.619796 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
W0417 06:56:24.779525 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
E0417 06:56:24.779613 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
W0417 06:56:27.131648 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
E0417 06:56:27.131797 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
W0417 06:56:28.430648 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
E0417 06:56:28.430828 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
W0417 06:56:28.452996 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
E0417 06:56:28.453031 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
W0417 06:56:28.922978 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0417 06:56:28.923068 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0417 06:56:29.190587 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0417 06:56:29.190627 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
W0417 06:56:30.293987 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
E0417 06:56:30.294080 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
W0417 06:56:31.905312 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
E0417 06:56:31.905519 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
W0417 06:56:32.596479 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
E0417 06:56:32.597407 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
W0417 06:56:39.031094 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
E0417 06:56:39.031426 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
W0417 06:56:42.458293 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
E0417 06:56:42.458428 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1beta1.CSIStorageCapacity: failed to list *v1beta1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
W0417 06:56:42.498813 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
E0417 06:56:42.498903 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
W0417 06:56:51.648763 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
E0417 06:56:51.648866 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
W0417 06:56:58.303890 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
E0417 06:56:58.304058 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
W0417 06:56:59.688459 1 reflector.go:324] k8s.io/client-go/informers/factory.go:134: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
E0417 06:56:59.689164 1 reflector.go:138] k8s.io/client-go/informers/factory.go:134: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
W0417 06:57:05.518320 1 reflector.go:324] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
E0417 06:57:05.518477 1 reflector.go:138] k8s.io/apiserver/pkg/server/dynamiccertificates/configmap_cafile_content.go:205: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
I0417 06:57:52.166656 1 shared_informer.go:247] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
*
* ==> kubelet <==
*
-- Logs begin at Sun 2022-04-17 06:53:26 UTC, end at Sun 2022-04-17 07:43:35 UTC. --
Apr 17 07:00:23 minikube kubelet[2396]: I0417 07:00:23.796224 2396 topology_manager.go:200] "Topology Admit Handler"
Apr 17 07:00:23 minikube kubelet[2396]: I0417 07:00:23.993978 2396 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c7a3a47d-eaee-4a32-84ca-1f678a6da14a-tmp\") pod \"storage-provisioner\" (UID: \"c7a3a47d-eaee-4a32-84ca-1f678a6da14a\") " pod="kube-system/storage-provisioner"
Apr 17 07:00:23 minikube kubelet[2396]: I0417 07:00:23.994193 2396 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76qqb\" (UniqueName: \"kubernetes.io/projected/c7a3a47d-eaee-4a32-84ca-1f678a6da14a-kube-api-access-76qqb\") pod \"storage-provisioner\" (UID: \"c7a3a47d-eaee-4a32-84ca-1f678a6da14a\") " pod="kube-system/storage-provisioner"
Apr 17 07:00:32 minikube kubelet[2396]: E0417 07:00:32.992817 2396 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 58951328fc5ba3a80d4f2c06547ce49054b5c2eb93ad8bcd7c99edfde26d2e41" containerID="58951328fc5ba3a80d4f2c06547ce49054b5c2eb93ad8bcd7c99edfde26d2e41"
Apr 17 07:00:32 minikube kubelet[2396]: E0417 07:00:32.993310 2396 kuberuntime_manager.go:1072] "getPodContainerStatuses for pod failed" err="rpc error: code = Unknown desc = Error: No such container: 58951328fc5ba3a80d4f2c06547ce49054b5c2eb93ad8bcd7c99edfde26d2e41" pod="kube-system/kube-proxy-wvv7n"
Apr 17 07:00:32 minikube kubelet[2396]: E0417 07:00:32.994520 2396 kuberuntime_manager.go:1057] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error: No such container: f2d72af2b03986f4f3b19ce119496316f9aa9be7c7dd6498e34a054e21e99d96" podSandboxID="f2d72af2b03986f4f3b19ce119496316f9aa9be7c7dd6498e34a054e21e99d96" pod="kube-system/storage-provisioner"
Apr 17 07:00:34 minikube kubelet[2396]: E0417 07:00:34.034280 2396
kuberuntime_manager.go:1057] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error: No such container: f2d72af2b03986f4f3b19ce119496316f9aa9be7c7dd6498e34a054e21e99d96" podSandboxID="f2d72af2b03986f4f3b19ce119496316f9aa9be7c7dd6498e34a054e21e99d96" pod="kube-system/storage-provisioner"
Apr 17 07:00:34 minikube kubelet[2396]: E0417 07:00:34.040036 2396 kuberuntime_manager.go:1057] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error: No such container: f2d72af2b03986f4f3b19ce119496316f9aa9be7c7dd6498e34a054e21e99d96" podSandboxID="f2d72af2b03986f4f3b19ce119496316f9aa9be7c7dd6498e34a054e21e99d96" pod="kube-system/storage-provisioner"
Apr 17 07:00:35 minikube kubelet[2396]: I0417 07:00:35.119874 2396 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="f2d72af2b03986f4f3b19ce119496316f9aa9be7c7dd6498e34a054e21e99d96"
Apr 17 07:00:53 minikube kubelet[2396]: E0417 07:00:53.616100 2396 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: f1222f61a7c04b1c5386500b5597ec28dd3066712c856a182363bdce11021050" containerID="f1222f61a7c04b1c5386500b5597ec28dd3066712c856a182363bdce11021050"
Apr 17 07:00:53 minikube kubelet[2396]: E0417 07:00:53.617825 2396 kuberuntime_manager.go:1072] "getPodContainerStatuses for pod failed" err="rpc error: code = Unknown desc = Error: No such container: f1222f61a7c04b1c5386500b5597ec28dd3066712c856a182363bdce11021050" pod="kube-system/storage-provisioner"
Apr 17 07:01:01 minikube kubelet[2396]: I0417 07:01:01.941487 2396 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-9rksw through plugin: invalid network status for"
Apr 17 07:01:01 minikube kubelet[2396]: I0417 07:01:01.999832 2396 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-9rksw through plugin: invalid network status for"
Apr 17 07:01:02 minikube kubelet[2396]: I0417 07:01:02.593627 2396 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-798tz through plugin: invalid network status for"
Apr 17 07:01:03 minikube kubelet[2396]: I0417 07:01:03.021907 2396 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-798tz through plugin: invalid network status for"
Apr 17 07:01:06 minikube kubelet[2396]: I0417 07:01:06.084467 2396 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-9rksw through plugin: invalid network status for"
Apr 17 07:01:06 minikube kubelet[2396]: E0417 07:01:06.092860 2396 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: c909b2ccb881f114011c641c07250be15bbcc16123d741f5a0aa0897bf63aebb" containerID="c909b2ccb881f114011c641c07250be15bbcc16123d741f5a0aa0897bf63aebb"
Apr 17 07:01:06 minikube kubelet[2396]: E0417 07:01:06.093578 2396 kuberuntime_manager.go:1072] "getPodContainerStatuses for pod failed" err="rpc error: code = Unknown desc = Error: No such container: c909b2ccb881f114011c641c07250be15bbcc16123d741f5a0aa0897bf63aebb" pod="kube-system/coredns-64897985d-9rksw"
Apr 17 07:01:06 minikube kubelet[2396]: I0417 07:01:06.098343 2396 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-798tz through plugin: invalid network status for"
Apr 17 07:01:06 minikube kubelet[2396]: E0417 07:01:06.102313 2396 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 77eca80cb0bcb1e5dd76b1d156ab8818cc8a38e29c2c13b3334884a385be6d5f" containerID="77eca80cb0bcb1e5dd76b1d156ab8818cc8a38e29c2c13b3334884a385be6d5f"
Apr 17 07:01:06 minikube kubelet[2396]: E0417 07:01:06.102375 2396 kuberuntime_manager.go:1072] "getPodContainerStatuses for pod failed" err="rpc error: code = Unknown desc = Error: No such container: 77eca80cb0bcb1e5dd76b1d156ab8818cc8a38e29c2c13b3334884a385be6d5f" pod="kube-system/coredns-64897985d-798tz"
Apr 17 07:01:07 minikube kubelet[2396]: I0417 07:01:07.317784 2396 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-9rksw through plugin: invalid network status for"
Apr 17 07:01:09 minikube kubelet[2396]: I0417 07:01:09.039824 2396 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-798tz through plugin: invalid network status for"
Apr 17 07:01:17 minikube kubelet[2396]: I0417 07:01:17.520687 2396 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-9rksw through plugin: invalid network status for"
Apr 17 07:01:17 minikube kubelet[2396]: I0417 07:01:17.533760 2396 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/coredns-64897985d-798tz through plugin: invalid network status for"
Apr 17 07:02:05 minikube kubelet[2396]: I0417 07:02:05.860475 2396 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l2w2p\" (UniqueName: \"kubernetes.io/projected/5788904b-94e4-4d37-9cb2-ba927c9f4552-kube-api-access-l2w2p\") pod \"5788904b-94e4-4d37-9cb2-ba927c9f4552\" (UID: \"5788904b-94e4-4d37-9cb2-ba927c9f4552\") "
Apr 17 07:02:05 minikube kubelet[2396]: I0417 07:02:05.863016 2396 reconciler.go:192] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5788904b-94e4-4d37-9cb2-ba927c9f4552-config-volume\") pod \"5788904b-94e4-4d37-9cb2-ba927c9f4552\" (UID: \"5788904b-94e4-4d37-9cb2-ba927c9f4552\") "
Apr 17 07:02:05 minikube kubelet[2396]: W0417 07:02:05.863914 2396 empty_dir.go:517] Warning: Failed to clear quota on /var/lib/kubelet/pods/5788904b-94e4-4d37-9cb2-ba927c9f4552/volumes/kubernetes.io~configmap/config-volume: clearQuota called, but quotas disabled
Apr 17 07:02:05 minikube kubelet[2396]: I0417 07:02:05.864643 2396 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5788904b-94e4-4d37-9cb2-ba927c9f4552-config-volume" (OuterVolumeSpecName: "config-volume") pod "5788904b-94e4-4d37-9cb2-ba927c9f4552" (UID: "5788904b-94e4-4d37-9cb2-ba927c9f4552"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Apr 17 07:02:05 minikube kubelet[2396]: I0417 07:02:05.876375 2396 scope.go:110] "RemoveContainer" containerID="77eca80cb0bcb1e5dd76b1d156ab8818cc8a38e29c2c13b3334884a385be6d5f"
Apr 17 07:02:05 minikube kubelet[2396]: I0417 07:02:05.969453 2396 reconciler.go:300] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5788904b-94e4-4d37-9cb2-ba927c9f4552-config-volume\") on node \"minikube\" DevicePath \"\""
Apr 17 07:02:06 minikube kubelet[2396]: I0417 07:02:06.235541 2396 operation_generator.go:910] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5788904b-94e4-4d37-9cb2-ba927c9f4552-kube-api-access-l2w2p" (OuterVolumeSpecName: "kube-api-access-l2w2p") pod "5788904b-94e4-4d37-9cb2-ba927c9f4552" (UID: "5788904b-94e4-4d37-9cb2-ba927c9f4552"). InnerVolumeSpecName "kube-api-access-l2w2p". PluginName "kubernetes.io/projected", VolumeGidValue ""
Apr 17 07:02:06 minikube kubelet[2396]: I0417 07:02:06.285790 2396 reconciler.go:300] "Volume detached for volume \"kube-api-access-l2w2p\" (UniqueName: \"kubernetes.io/projected/5788904b-94e4-4d37-9cb2-ba927c9f4552-kube-api-access-l2w2p\") on node \"minikube\" DevicePath \"\""
Apr 17 07:02:06 minikube kubelet[2396]: I0417 07:02:06.915221 2396 scope.go:110] "RemoveContainer" containerID="f1222f61a7c04b1c5386500b5597ec28dd3066712c856a182363bdce11021050"
Apr 17 07:02:11 minikube kubelet[2396]: I0417 07:02:11.639150 2396 kubelet_volumes.go:160] "Cleaned up orphaned pod volumes dir" podUID=5788904b-94e4-4d37-9cb2-ba927c9f4552 path="/var/lib/kubelet/pods/5788904b-94e4-4d37-9cb2-ba927c9f4552/volumes"
Apr 17 07:02:11 minikube kubelet[2396]: E0417 07:02:11.937640 2396 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: af825b533544bbe6f174f0d1a7c26f6ead322e386b5447c4a9c2e46d5fa67b75" containerID="af825b533544bbe6f174f0d1a7c26f6ead322e386b5447c4a9c2e46d5fa67b75"
Apr 17 07:02:11 minikube kubelet[2396]: E0417 07:02:11.937755 2396 kuberuntime_manager.go:1072] "getPodContainerStatuses for pod failed" err="rpc error: code = Unknown desc = Error: No such container: af825b533544bbe6f174f0d1a7c26f6ead322e386b5447c4a9c2e46d5fa67b75" pod="kube-system/storage-provisioner"
Apr 17 07:02:12 minikube kubelet[2396]: I0417 07:02:12.774064 2396 topology_manager.go:200] "Topology Admit Handler"
Apr 17 07:02:12 minikube kubelet[2396]: I0417 07:02:12.947014 2396 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/528cc03f-1519-4967-ba8e-65fcffa7d564-tmp-dir\") pod \"metrics-server-6b76bd68b6-6tk7p\" (UID: \"528cc03f-1519-4967-ba8e-65fcffa7d564\") " pod="kube-system/metrics-server-6b76bd68b6-6tk7p"
Apr 17 07:02:12 minikube kubelet[2396]: I0417 07:02:12.947174 2396 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h4c5w\" (UniqueName: \"kubernetes.io/projected/528cc03f-1519-4967-ba8e-65fcffa7d564-kube-api-access-h4c5w\") pod \"metrics-server-6b76bd68b6-6tk7p\" (UID: \"528cc03f-1519-4967-ba8e-65fcffa7d564\") " pod="kube-system/metrics-server-6b76bd68b6-6tk7p"
Apr 17 07:02:21 minikube kubelet[2396]: E0417 07:02:21.985966 2396 kuberuntime_manager.go:1057] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error: No such container: 26eec9c79cc6e4b54e53fd0ebb97abdca29fd5e0d5b1894246636b2b788d45ef" podSandboxID="26eec9c79cc6e4b54e53fd0ebb97abdca29fd5e0d5b1894246636b2b788d45ef" pod="kube-system/metrics-server-6b76bd68b6-6tk7p"
Apr 17 07:02:24 minikube kubelet[2396]: I0417 07:02:24.113103 2396 pod_container_deletor.go:79] "Container not found in pod's containers" containerID="26eec9c79cc6e4b54e53fd0ebb97abdca29fd5e0d5b1894246636b2b788d45ef"
Apr 17 07:02:39 minikube kubelet[2396]: I0417 07:02:39.182618 2396 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/metrics-server-6b76bd68b6-6tk7p through plugin: invalid network status for"
Apr 17 07:02:39 minikube kubelet[2396]: I0417 07:02:39.603117 2396 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/metrics-server-6b76bd68b6-6tk7p through plugin: invalid network status for"
Apr 17 07:03:30 minikube kubelet[2396]: I0417 07:03:30.942363 2396 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/metrics-server-6b76bd68b6-6tk7p through plugin: invalid network status for"
Apr 17 07:03:30 minikube kubelet[2396]: E0417 07:03:30.952035 2396 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container:
5ce45a2f0c426c3c4e63388d9887e59bf27b35dd2a994ea1bf125710440f5f53" containerID="5ce45a2f0c426c3c4e63388d9887e59bf27b35dd2a994ea1bf125710440f5f53" Apr 17 07:03:30 minikube kubelet[2396]: E0417 07:03:30.952631 2396 kuberuntime_manager.go:1072] "getPodContainerStatuses for pod failed" err="rpc error: code = Unknown desc = Error: No such container: 5ce45a2f0c426c3c4e63388d9887e59bf27b35dd2a994ea1bf125710440f5f53" pod="kube-system/metrics-server-6b76bd68b6-6tk7p" Apr 17 07:03:31 minikube kubelet[2396]: I0417 07:03:31.971598 2396 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/metrics-server-6b76bd68b6-6tk7p through plugin: invalid network status for" Apr 17 07:03:41 minikube kubelet[2396]: I0417 07:03:41.264682 2396 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for kube-system/metrics-server-6b76bd68b6-6tk7p through plugin: invalid network status for" Apr 17 07:22:49 minikube kubelet[2396]: I0417 07:22:49.926584 2396 topology_manager.go:200] "Topology Admit Handler" Apr 17 07:22:50 minikube kubelet[2396]: I0417 07:22:50.061262 2396 reconciler.go:221] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fcq8\" (UniqueName: \"kubernetes.io/projected/28adafc9-1b64-49e1-bd46-29108f218387-kube-api-access-9fcq8\") pod \"dpl-test-6fb55dc999-lmxfm\" (UID: \"28adafc9-1b64-49e1-bd46-29108f218387\") " pod="default/dpl-test-6fb55dc999-lmxfm" Apr 17 07:22:57 minikube kubelet[2396]: E0417 07:22:57.066781 2396 kuberuntime_manager.go:1057] "PodSandboxStatus of sandbox for pod" err="rpc error: code = Unknown desc = Error: No such container: c8cbc07ea4152860866f8db163c7e8dbddd88a55f2952436614e5d94fed83459" podSandboxID="c8cbc07ea4152860866f8db163c7e8dbddd88a55f2952436614e5d94fed83459" pod="default/dpl-test-6fb55dc999-lmxfm" Apr 17 07:22:58 minikube kubelet[2396]: I0417 07:22:58.453748 2396 pod_container_deletor.go:79] "Container not 
found in pod's containers" containerID="c8cbc07ea4152860866f8db163c7e8dbddd88a55f2952436614e5d94fed83459" Apr 17 07:23:09 minikube kubelet[2396]: I0417 07:23:09.096775 2396 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/dpl-test-6fb55dc999-lmxfm through plugin: invalid network status for" Apr 17 07:23:10 minikube kubelet[2396]: I0417 07:23:10.107371 2396 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/dpl-test-6fb55dc999-lmxfm through plugin: invalid network status for" Apr 17 07:24:27 minikube kubelet[2396]: I0417 07:24:27.286476 2396 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/dpl-test-6fb55dc999-lmxfm through plugin: invalid network status for" Apr 17 07:24:27 minikube kubelet[2396]: E0417 07:24:27.289564 2396 remote_runtime.go:572] "ContainerStatus from runtime service failed" err="rpc error: code = Unknown desc = Error: No such container: 47099365f68c933aa73f9840dd22b3d5d99a9300db8b942d56c026c30e6bccf7" containerID="47099365f68c933aa73f9840dd22b3d5d99a9300db8b942d56c026c30e6bccf7" Apr 17 07:24:27 minikube kubelet[2396]: E0417 07:24:27.289615 2396 kuberuntime_manager.go:1072] "getPodContainerStatuses for pod failed" err="rpc error: code = Unknown desc = Error: No such container: 47099365f68c933aa73f9840dd22b3d5d99a9300db8b942d56c026c30e6bccf7" pod="default/dpl-test-6fb55dc999-lmxfm" Apr 17 07:24:28 minikube kubelet[2396]: I0417 07:24:28.315532 2396 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/dpl-test-6fb55dc999-lmxfm through plugin: invalid network status for" Apr 17 07:24:36 minikube kubelet[2396]: I0417 07:24:36.881985 2396 docker_sandbox.go:402] "Failed to read pod IP from plugin/docker" err="Couldn't find network status for default/dpl-test-6fb55dc999-lmxfm through plugin: invalid network status 
for" * * ==> storage-provisioner [af825b533544] <== * I0417 07:02:18.219373 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... I0417 07:02:20.656545 1 storage_provisioner.go:141] Storage provisioner initialized, now starting service! I0417 07:02:20.657012 1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath... I0417 07:02:22.273888 1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath I0417 07:02:22.274165 1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"2a06f756-8945-4cf1-b243-0b81c3bf2378", APIVersion:"v1", ResourceVersion:"672", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' minikube_3e1dfd3d-6e88-4cda-982c-31503eb0f105 became leader I0417 07:02:22.274883 1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_minikube_3e1dfd3d-6e88-4cda-982c-31503eb0f105! I0417 07:02:23.218965 1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_minikube_3e1dfd3d-6e88-4cda-982c-31503eb0f105! * * ==> storage-provisioner [f1222f61a7c0] <== * I0417 07:01:28.735446 1 storage_provisioner.go:116] Initializing the minikube storage provisioner... F0417 07:01:58.892605 1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout