
failed to download kicbase-build anonymously AliYun Mirror #13863

Closed
zhan9san opened this issue Mar 28, 2022 · 5 comments


@zhan9san
Contributor

What Happened?

Failed to download registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5

A minimal, reproducible example

$ docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
Error response from daemon: pull access denied for registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase-builds, repository does not exist or may require 'docker login': denied: requested access to the resource is denied
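The denied pull can also be reproduced without the Docker daemon. As a sketch (assuming `curl` is available, and using the same manifest URL that appears in minikube's log below), the registry's v2 API can be queried anonymously:

```shell
# Sketch: reproduce the mirror failure without docker (assumes curl is installed).
# The v2 registry API answers 401 UNAUTHORIZED for this repository, which is
# exactly what minikube's log reports below.
REGISTRY="registry.cn-hangzhou.aliyuncs.com"
REPO="google_containers/kicbase-builds"
DIGEST="sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5"
URL="https://${REGISTRY}/v2/${REPO}/manifests/${DIGEST}"
echo "${URL}"
# curl -sI "${URL}"   # expect an HTTP 401 response for anonymous access
```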

Minikube Version

Compiled from latest master branch

$ ./out/minikube version
minikube version: v1.25.2
commit: aaf7c5cfaeb86c1c4efe07c27484137e5f5b3c18

Log

~/src/github/minikube$ ./out/minikube start --driver=docker --image-mirror-country=cn --alsologtostderr
I0328 11:06:39.525607 2530021 out.go:297] Setting OutFile to fd 1 ...
I0328 11:06:39.525723 2530021 out.go:349] isatty.IsTerminal(1) = true
I0328 11:06:39.525732 2530021 out.go:310] Setting ErrFile to fd 2...
I0328 11:06:39.525741 2530021 out.go:349] isatty.IsTerminal(2) = true
I0328 11:06:39.525920 2530021 root.go:315] Updating PATH: /home/x/.minikube/bin
I0328 11:06:39.526173 2530021 out.go:304] Setting JSON to false
I0328 11:06:39.546564 2530021 start.go:114] hostinfo: {"hostname":"build-ubuntu-01","uptime":5341000,"bootTime":1643095800,"procs":472,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.4.0-90-generic","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"host","hostId":"679a5d0a-f828-48d9-9ad5-84ac2965fba1"}
I0328 11:06:39.546658 2530021 start.go:124] virtualization: kvm host
I0328 11:06:39.549160 2530021 out.go:176] 😄  minikube v1.25.2 on Ubuntu 20.04
😄  minikube v1.25.2 on Ubuntu 20.04
I0328 11:06:39.549342 2530021 notify.go:193] Checking for updates...
I0328 11:06:39.549748 2530021 config.go:176] Loaded profile config "minikube": Driver=docker, ContainerRuntime=docker, KubernetesVersion=v1.23.3
W0328 11:06:39.549796 2530021 start.go:708] api.Load failed for minikube: filestore "minikube": Docker machine "minikube" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
I0328 11:06:39.549870 2530021 driver.go:346] Setting default libvirt URI to qemu:///system
W0328 11:06:39.549918 2530021 start.go:708] api.Load failed for minikube: filestore "minikube": Docker machine "minikube" does not exist. Use "docker-machine ls" to list machines. Use "docker-machine create" to add a new one.
I0328 11:06:39.587760 2530021 docker.go:132] docker version: linux-20.10.11
I0328 11:06:39.587893 2530021 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0328 11:06:39.651655 2530021 info.go:263] docker info: {ID:KY7Q:4CMM:B3WH:4YEC:GFFV:H4EF:LW2W:GETG:N7VG:FRQM:QING:X2TQ Containers:7 ContainersRunning:3 ContainersPaused:0 ContainersStopped:4 Images:840 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:49 OomKillDisable:true NGoroutines:52 SystemTime:2022-03-28 11:06:39.610481169 +0800 CST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.0-90-generic OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:16523509760 GenericResources:<nil> DockerRootDir:/mnt/docker HTTPProxy: HTTPSProxy: NoProxy: Name:build-ubuntu-01 Labels:[] ExperimentalBuild:false ServerVersion:20.10.11 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: 
Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.3-docker]] Warnings:<nil>}}
I0328 11:06:39.651730 2530021 docker.go:237] overlay module found
I0328 11:06:39.654255 2530021 out.go:176] ✨  Using the docker driver based on existing profile
✨  Using the docker driver based on existing profile
I0328 11:06:39.654312 2530021 start.go:283] selected driver: docker
I0328 11:06:39.654321 2530021 start.go:800] validating driver "docker" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository:registry.cn-hangzhou.aliyuncs.com/google_containers LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/x:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false 
DisableMetrics:false}
I0328 11:06:39.654411 2530021 start.go:811] status for docker: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
I0328 11:06:39.654516 2530021 cli_runner.go:133] Run: docker system info --format "{{json .}}"
I0328 11:06:39.722078 2530021 info.go:263] docker info: {ID:KY7Q:4CMM:B3WH:4YEC:GFFV:H4EF:LW2W:GETG:N7VG:FRQM:QING:X2TQ Containers:7 ContainersRunning:3 ContainersPaused:0 ContainersStopped:4 Images:840 Driver:overlay2 DriverStatus:[[Backing Filesystem extfs] [Supports d_type true] [Native Overlay Diff true] [userxattr false]] SystemStatus:<nil> Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:<nil> Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:false KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6Tables:true Debug:true NFd:49 OomKillDisable:true NGoroutines:52 SystemTime:2022-03-28 11:06:39.677183604 +0800 CST LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:5.4.0-90-generic OperatingSystem:Ubuntu 20.04.3 LTS OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:{AllowNondistributableArtifactsCIDRs:[] AllowNondistributableArtifactsHostnames:[] InsecureRegistryCIDRs:[127.0.0.0/8] IndexConfigs:{DockerIo:{Name:docker.io Mirrors:[] Secure:true Official:true}} Mirrors:[]} NCPU:8 MemTotal:16523509760 GenericResources:<nil> DockerRootDir:/mnt/docker HTTPProxy: HTTPSProxy: NoProxy: Name:build-ubuntu-01 Labels:[] ExperimentalBuild:false ServerVersion:20.10.11 ClusterStore: ClusterAdvertise: Runtimes:{Runc:{Path:runc}} DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:<nil>} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7b11cfaabd73bb80907dd23182b9347b4245eb5d Expected:7b11cfaabd73bb80907dd23182b9347b4245eb5d} RuncCommit:{ID:v1.0.2-0-g52b36a2 Expected:v1.0.2-0-g52b36a2} InitCommit:{ID:de40ad0 Expected:de40ad0} SecurityOptions:[name=apparmor name=seccomp,profile=default] ProductLicense: 
Warnings:[WARNING: No swap limit support] ServerErrors:[] ClientInfo:{Debug:false Plugins:[map[Experimental:true Name:app Path:/usr/libexec/docker/cli-plugins/docker-app SchemaVersion:0.1.0 ShortDescription:Docker App Vendor:Docker Inc. Version:v0.9.1-beta3] map[Name:buildx Path:/usr/libexec/docker/cli-plugins/docker-buildx SchemaVersion:0.1.0 ShortDescription:Build with BuildKit Vendor:Docker Inc. Version:v0.6.3-docker]] Warnings:<nil>}}
I0328 11:06:39.722874 2530021 cni.go:93] Creating CNI manager for ""
I0328 11:06:39.722904 2530021 cni.go:167] CNI unnecessary in this configuration, recommending no CNI
I0328 11:06:39.722911 2530021 start_flags.go:304] config:
{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository:registry.cn-hangzhou.aliyuncs.com/google_containers LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/x:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false}
I0328 11:06:39.724686 2530021 out.go:176] 👍  Starting control plane node minikube in cluster minikube
👍  Starting control plane node minikube in cluster minikube
I0328 11:06:39.724723 2530021 cache.go:120] Beginning downloading kic base image for docker with docker
I0328 11:06:39.725988 2530021 out.go:176] 🚜  Pulling base image ...
🚜  Pulling base image ...
I0328 11:06:39.726063 2530021 image.go:75] Checking for registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
I0328 11:06:39.726157 2530021 profile.go:148] Saving config to /home/x/.minikube/profiles/minikube/config.json ...
I0328 11:06:39.726336 2530021 cache.go:107] acquiring lock: {Name:mk3c425c6217f8d1a456e5a11b85d51b292f12c6 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0328 11:06:39.726357 2530021 cache.go:107] acquiring lock: {Name:mkc651585058b82f089af41bf0beb34f15865e28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0328 11:06:39.726740 2530021 cache.go:107] acquiring lock: {Name:mkbdfdbe9d182ce1c6ceed6e206094c66d0ca915 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0328 11:06:39.726352 2530021 cache.go:107] acquiring lock: {Name:mk291a0758e75f96b3f3175aefa32efb43f0f46e Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0328 11:06:39.726828 2530021 cache.go:107] acquiring lock: {Name:mk74e9a9e0d06566a6544fd96d6d4fd89d5d4575 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0328 11:06:39.726866 2530021 cache.go:107] acquiring lock: {Name:mk24d13450f0375d3620166a0d94579798d83d82 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0328 11:06:39.726884 2530021 cache.go:107] acquiring lock: {Name:mk04df2412e74ffea5aee6620a751ebb21b9f3b7 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0328 11:06:39.726570 2530021 cache.go:107] acquiring lock: {Name:mk56b2c76f3d6c667990bc990931d300e9c72766 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0328 11:06:39.726337 2530021 cache.go:107] acquiring lock: {Name:mkf1db01e736bdb8172d0b71ab26785620c44f84 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0328 11:06:39.726683 2530021 cache.go:107] acquiring lock: {Name:mkbf5ab86bbdceac114c7fa5b2701e93a5084777 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0328 11:06:39.727026 2530021 cache.go:115] /home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.23.3 exists
I0328 11:06:39.727030 2530021 cache.go:115] /home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.6 exists
I0328 11:06:39.727048 2530021 cache.go:115] /home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner_v5 exists
I0328 11:06:39.727051 2530021 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6" -> "/home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.6" took 369.84µs
I0328 11:06:39.727065 2530021 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.6 -> /home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/pause_3.6 succeeded
I0328 11:06:39.727048 2530021 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.3" -> "/home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.23.3" took 719.818µs
I0328 11:06:39.727067 2530021 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5" -> "/home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner_v5" took 720.415µs
I0328 11:06:39.727081 2530021 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.23.3 -> /home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler_v1.23.3 succeeded
I0328 11:06:39.727086 2530021 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner:v5 -> /home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/k8s-minikube/storage-provisioner_v5 succeeded
I0328 11:06:39.727025 2530021 cache.go:115] /home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.5.1-0 exists
I0328 11:06:39.727109 2530021 cache.go:115] /home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/dashboard_v2.3.1 exists
I0328 11:06:39.727114 2530021 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0" -> "/home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.5.1-0" took 761.298µs
I0328 11:06:39.727125 2530021 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/dashboard:v2.3.1" -> "/home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/dashboard_v2.3.1" took 324.922µs
I0328 11:06:39.727132 2530021 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.5.1-0 -> /home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/etcd_3.5.1-0 succeeded
I0328 11:06:39.727138 2530021 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/dashboard:v2.3.1 -> /home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/dashboard_v2.3.1 succeeded
I0328 11:06:39.727155 2530021 cache.go:115] /home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.23.3 exists
I0328 11:06:39.727270 2530021 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.3" -> "/home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.23.3" took 388.849µs
I0328 11:06:39.727323 2530021 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.23.3 -> /home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager_v1.23.3 succeeded
I0328 11:06:39.727172 2530021 cache.go:115] /home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns_v1.8.6 exists
I0328 11:06:39.727422 2530021 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.6" -> "/home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns_v1.8.6" took 684.617µs
I0328 11:06:39.727473 2530021 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns:v1.8.6 -> /home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/coredns/coredns_v1.8.6 succeeded
I0328 11:06:39.727189 2530021 cache.go:115] /home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.23.3 exists
I0328 11:06:39.727562 2530021 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.3" -> "/home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.23.3" took 1.197471ms
I0328 11:06:39.727204 2530021 cache.go:115] /home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.23.3 exists
I0328 11:06:39.727587 2530021 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.23.3 -> /home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver_v1.23.3 succeeded
I0328 11:06:39.727604 2530021 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.3" -> "/home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.23.3" took 1.038145ms
I0328 11:06:39.727620 2530021 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.23.3 -> /home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy_v1.23.3 succeeded
I0328 11:06:39.727223 2530021 cache.go:115] /home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/metrics-scraper_v1.0.7 exists
I0328 11:06:39.727648 2530021 cache.go:96] cache image "registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/metrics-scraper:v1.0.7" -> "/home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/metrics-scraper_v1.0.7" took 1.316505ms
I0328 11:06:39.727672 2530021 cache.go:80] save to tar file registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/metrics-scraper:v1.0.7 -> /home/x/.minikube/cache/images/amd64/registry.cn-hangzhou.aliyuncs.com/google_containers/kubernetesui/metrics-scraper_v1.0.7 succeeded
I0328 11:06:39.727683 2530021 cache.go:87] Successfully saved all images to host disk.
I0328 11:06:39.981973 2530021 cache.go:148] Downloading registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local cache
I0328 11:06:39.982182 2530021 image.go:59] Checking for registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local cache directory
I0328 11:06:39.982218 2530021 image.go:119] Writing registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local cache
I0328 11:06:40.533860 2530021 cache.go:162] Loading registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 from local cache
I0328 11:06:40.534003 2530021 cache.go:172] failed to load registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5, will try remote image if available: tarball: open /home/x/.minikube/cache/kic/amd64/kicbase-builds_v0.0.30-1647797120-13815@sha256_90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5.tar: no such file or directory
I0328 11:06:40.534062 2530021 cache.go:174] Downloading registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local daemon
I0328 11:06:40.534397 2530021 image.go:75] Checking for registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
I0328 11:06:40.799554 2530021 image.go:243] Writing registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local daemon
I0328 11:06:40.992338 2530021 cache.go:182] failed to download registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase-builds:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5, will try fallback image if available: GET https://registry.cn-hangzhou.aliyuncs.com/v2/google_containers/kicbase-builds/manifests/sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5: UNAUTHORIZED: authentication required; [map[Action:pull Class: Name:google_containers/kicbase-builds Type:repository]]
I0328 11:06:40.992422 2530021 image.go:75] Checking for docker.io/kicbase/build:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
I0328 11:06:41.229435 2530021 cache.go:148] Downloading docker.io/kicbase/build:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local cache
I0328 11:06:41.229588 2530021 image.go:59] Checking for docker.io/kicbase/build:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local cache directory
I0328 11:06:41.229604 2530021 image.go:62] Found docker.io/kicbase/build:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local cache directory, skipping pull
I0328 11:06:41.229624 2530021 image.go:103] docker.io/kicbase/build:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 exists in cache, skipping pull
I0328 11:06:41.229656 2530021 cache.go:151] successfully saved docker.io/kicbase/build:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 as a tarball
I0328 11:06:41.229685 2530021 cache.go:162] Loading docker.io/kicbase/build:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 from local cache
I0328 11:06:41.229868 2530021 cache.go:172] failed to load docker.io/kicbase/build:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5, will try remote image if available: tarball: unexpected EOF
I0328 11:06:41.229880 2530021 cache.go:174] Downloading docker.io/kicbase/build:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local daemon
I0328 11:06:41.230022 2530021 image.go:75] Checking for docker.io/kicbase/build:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 in local docker daemon
I0328 11:06:41.467072 2530021 image.go:243] Writing docker.io/kicbase/build:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 to local daemon
    > index.docker.io/kicbase/bui...: 0 B [____________________] ?% ? p/s 2m34s
I0328 11:09:19.331847 2530021 cache.go:177] successfully downloaded docker.io/kicbase/build:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5
W0328 11:09:19.331926 2530021 out.go:241] ❗  minikube was unable to download registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase-builds:v0.0.30-1647797120-13815, but successfully downloaded docker.io/kicbase/build:v0.0.30-1647797120-13815 as a fallback image
❗  minikube was unable to download registry.cn-hangzhou.aliyuncs.com/google_containers/kicbase-builds:v0.0.30-1647797120-13815, but successfully downloaded docker.io/kicbase/build:v0.0.30-1647797120-13815 as a fallback image
I0328 11:09:19.331967 2530021 cache.go:208] Successfully downloaded all kic artifacts
I0328 11:09:19.332046 2530021 start.go:348] acquiring machines lock for minikube: {Name:mk2d25858f831d7fff77efdf4c0b399672e3f35c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
I0328 11:09:19.332237 2530021 start.go:352] acquired machines lock for "minikube" in 144.114µs
I0328 11:09:19.332278 2530021 start.go:90] Provisioning new machine with config: &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:docker.io/kicbase/build:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5 Memory:3900 CPUs:2 DiskSize:20000 VMDriver: Driver:docker HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.23.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository:registry.cn-hangzhou.aliyuncs.com/google_containers LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/x:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false} &{Name: IP: Port:8443 
KubernetesVersion:v1.23.3 ContainerRuntime:docker ControlPlane:true Worker:true}

Attach the log file

log.txt

Operating System

Ubuntu

Driver

Docker

@spowelljr
Member

Hi @zhan9san, thanks for reporting your issue with minikube!

The aliyuncs registry is actually kept up to date by others (not the core minikube maintainers), though we are working towards having control over it. The people maintaining the registry do their best to keep the release images current, but because they download and upload images manually, they only mirror the release images; it would be impossible for them to keep up with all of our build images. Once we have control over the registry, we can hopefully automate pushing build images there too.

Hope that explains the problem. Feel free to respond if you have more questions, or close the issue otherwise. Thanks for using minikube!
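Since minikube already falls back to Docker Hub on its own, one possible workaround (a sketch, not verified in this thread; it assumes the `--base-image` flag available in minikube v1.25.x) is to point minikube at the fallback image explicitly so the AliYun mirror is never consulted for kicbase:

```shell
# Hypothetical workaround: pass the Docker Hub fallback image directly.
# --base-image is an assumption here; check `minikube start --help` first.
IMAGE="docker.io/kicbase/build:v0.0.30-1647797120-13815@sha256:90e8f7ee4065da728c0b80d303827e05ce4421985fe9bd7bdca30a55218347b5"
# minikube start --driver=docker --image-mirror-country=cn --base-image="${IMAGE}"
echo "${IMAGE%%@*}"   # the image reference without its digest pin
```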

@zhan9san
Contributor Author

Hi @spowelljr

Thanks for your attention.

I am not sure I understand kicbase correctly.

kicbase is mentioned in only two places in the minikube docs: one is release-new-kicbase-image and the other is testingkicbaseimage.

Would it be possible to not update the kicbase image in master branch until kicbase is ready to release?

If we do so, the master branch would always work.

Besides, if we want to test a new kicbase image, how about testing it in a feature branch?
Once all checks pass in gcr, we could publish it to the fallback registry (Docker Hub) as well as the mirror registry (AliYun).

@spowelljr
Member

Anytime anyone makes changes to the deploy/kicbase directory, a new kicbase image has to be built for the changes to take effect.

Here are three PRs that modified that directory; in each one you can see the kic version get bumped, as a new kic image was built for each PR:
#13302 (went from kic version v0.0.29 to v0.0.29-1643823806-13302)
#13563 (went from kic version v0.0.29-1643823806-13302 to v0.0.29-1644071658-13563)
#13531 (went from kic version v0.0.29-1644071658-13563 to v0.0.29-1644344181-13531)
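The bumped versions above appear to follow a fixed pattern. As a sketch (assuming the convention is `<base kic version>-<unix build timestamp>-<PR number>`, which matches the tags listed; the convention itself is my reading, not stated in this thread), the components can be split apart in shell:

```shell
# Split a kicbase build tag into its parts (assumed convention:
# <base version>-<unix timestamp>-<PR number>, as in the PRs above).
TAG="v0.0.29-1643823806-13302"
BASE="${TAG%%-*}"     # strip everything after the first dash
REST="${TAG#*-}"      # timestamp and PR number
STAMP="${REST%%-*}"   # unix build timestamp
PR="${REST#*-}"       # PR number the image was built for
echo "${BASE} built-at=${STAMP} pr=${PR}"
```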

And here's an example of minikube-bot commenting about the kic image being updated:
#13531 (comment)

We can't keep all the kicbase PRs unmerged until we release, or else we won't even know if they play nicely with each other. We also want to get as much testing of the changes as possible, so merging them early gives us lots of time to catch anything that might have broken. Once we have control over AliYun I'm sure we can push the images there, but until then that registry is maintained on a best-effort basis.

Hopefully that explains the situation; if you have more questions, let me know.

@zhan9san
Contributor Author

Hi @spowelljr

Thank you very much.

It does make sense to merge the kicbase image early.

> Once we have control over Aliyun I'm sure we can push the images there, but until then that registry is just maintained at a best effort basis.

I understand; it is because you don't yet have control over the AliYun registry.

That explains a lot. I am grateful that you shared this background knowledge about the kicbase image with me.

Feel free to close this ticket.

@spowelljr
Member

Glad that helped!
