
Getting an error when running minikube start --vm-driver=none #15639

Closed
sktechnologiesadl opened this issue Jan 13, 2023 · 15 comments
Labels
co/none-driver · kind/support · lifecycle/rotten

Comments

@sktechnologiesadl

What Happened?

Hi team,
I am getting the error below:

[root@ip-172-31-32-247 cri-dockerd]# minikube start --vm-driver=none

  • minikube v1.28.0 on Amazon 2 (xen/amd64)
  • Using the none driver based on existing profile
  • Starting control plane node minikube in cluster minikube
  • Restarting existing none bare metal machine for "minikube" ...
  • OS release is Amazon Linux 2

X Exiting due to RUNTIME_ENABLE: Temporary Error: sudo crictl version: exit status 1
stdout:

stderr:
sudo: crictl: command not found

╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.      │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯

[root@ip-172-31-32-247 cri-dockerd]#
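Since the failing command is sudo crictl version, the first things to check are whether crictl is installed at all and whether it is on the PATH that sudo uses (on RHEL-family systems sudo resets PATH to the secure_path from /etc/sudoers). A minimal check, assuming the stock sudoers layout:

    which crictl                                          # where the binary lives, if anywhere
    sudo which crictl || true                             # what sudo's restricted PATH can see
    sudo grep -r secure_path /etc/sudoers /etc/sudoers.d  # the PATH sudo actually enforces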

Attach the log file


Operating System

Other

Driver

None

@sktechnologiesadl
Copy link
Author

[root@ip-172-31-32-247 ~]# minikube logs --file=logs.txt
[root@ip-172-31-32-247 ~]# ls
cri-dockerd go installer_linux logs.txt minikube-linux-amd64
[root@ip-172-31-32-247 ~]# wc -l logs.txt
120 logs.txt
[root@ip-172-31-32-247 ~]# cat logs.txt
*

==> Audit <==

|---------|------------------|----------|------|---------|---------------------|----------|
| Command |       Args       | Profile  | User | Version |     Start Time      | End Time |
|---------|------------------|----------|------|---------|---------------------|----------|
| start   | --vm-driver=none | minikube | root | v1.28.0 | 13 Jan 23 14:47 UTC |          |
| start   | --vm-driver=none | minikube | root | v1.28.0 | 13 Jan 23 14:53 UTC |          |
| start   | --vm-driver=none | minikube | root | v1.28.0 | 13 Jan 23 14:58 UTC |          |
|---------|------------------|----------|------|---------|---------------------|----------|

==> Last Start <==

Log file created at: 2023/01/13 14:58:17
Running on machine: ip-172-31-32-247
Binary: Built with gc go1.19.2 for linux/amd64
Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
    I0113 14:58:17.249041 8442 out.go:296] Setting OutFile to fd 1 ...
    I0113 14:58:17.249200 8442 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
    I0113 14:58:17.249205 8442 out.go:309] Setting ErrFile to fd 2...
    I0113 14:58:17.249211 8442 out.go:343] TERM=xterm,COLORTERM=, which probably does not support color
    I0113 14:58:17.249321 8442 root.go:334] Updating PATH: /root/.minikube/bin
    W0113 14:58:17.249441 8442 root.go:311] Error reading config file at /root/.minikube/config/config.json: open /root/.minikube/config/config.json: no such file or directory
    I0113 14:58:17.249644 8442 out.go:303] Setting JSON to false
    I0113 14:58:17.250361 8442 start.go:116] hostinfo: {"hostname":"ip-172-31-32-247.ap-south-1.compute.internal","uptime":985,"bootTime":1673620913,"procs":109,"os":"linux","platform":"amazon","platformFamily":"rhel","platformVersion":"2","kernelVersion":"5.10.157-139.675.amzn2.x86_64","kernelArch":"x86_64","virtualizationSystem":"xen","virtualizationRole":"guest","hostId":"ec2c09f1-115b-27de-c721-075d9bb93ae2"}
    I0113 14:58:17.250420 8442 start.go:126] virtualization: xen guest
    I0113 14:58:17.252616 8442 out.go:177] * minikube v1.28.0 on Amazon 2 (xen/amd64)
    W0113 14:58:17.253670 8442 preload.go:295] Failed to list preload files: open /root/.minikube/cache/preloaded-tarball: no such file or directory
    I0113 14:58:17.253863 8442 notify.go:220] Checking for updates...
    I0113 14:58:17.254125 8442 config.go:180] Loaded profile config "minikube": Driver=none, ContainerRuntime=docker, KubernetesVersion=v1.25.3
    I0113 14:58:17.254417 8442 exec_runner.go:51] Run: systemctl --version
    I0113 14:58:17.256203 8442 driver.go:365] Setting default libvirt URI to qemu:///system
    I0113 14:58:17.257453 8442 out.go:177] * Using the none driver based on existing profile
    I0113 14:58:17.258495 8442 start.go:282] selected driver: none
    I0113 14:58:17.258510 8442 start.go:808] validating driver "none" against &{Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.31.32.247 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/root:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
    I0113 14:58:17.258601 8442 start.go:819] status for none: {Installed:true Healthy:true Running:false NeedsImprovement:false Error: Reason: Fix: Doc: Version:}
    I0113 14:58:17.259050 8442 cni.go:95] Creating CNI manager for ""
    I0113 14:58:17.259058 8442 cni.go:149] Driver none used, CNI unnecessary in this configuration, recommending no CNI
    I0113 14:58:17.259066 8442 start_flags.go:317] config:
    {Name:minikube KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase:v0.0.36@sha256:8debc1b6a335075c5f99bfbf131b4f5566f68c6500dc5991817832e55fcc9456 Memory:2200 CPUs:2 DiskSize:20000 VMDriver: Driver:none HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:0 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.25.3 ClusterName:minikube Namespace:default APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI: NodeIP: NodePort:8443 NodeName:} Nodes:[{Name:m01 IP:172.31.32.247 Port:8443 KubernetesVersion:v1.25.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop: ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/root:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath:/opt/socket_vmnet/bin/socket_vmnet_client SocketVMnetPath:/var/run/socket_vmnet}
    I0113 14:58:17.260540 8442 out.go:177] * Starting control plane node minikube in cluster minikube
    I0113 14:58:17.261609 8442 profile.go:148] Saving config to /root/.minikube/profiles/minikube/config.json ...
    I0113 14:58:17.261769 8442 cache.go:208] Successfully downloaded all kic artifacts
    I0113 14:58:17.261792 8442 start.go:364] acquiring machines lock for minikube: {Name:mkc8ab01ad3ea83211c505c81a7ee49a8e3ecb89 Clock:{} Delay:500ms Timeout:13m0s Cancel:}
    I0113 14:58:17.261973 8442 start.go:368] acquired machines lock for "minikube" in 158.429µs
    I0113 14:58:17.261993 8442 start.go:96] Skipping create...Using existing machine configuration
    I0113 14:58:17.262000 8442 fix.go:55] fixHost starting: m01
    W0113 14:58:17.262157 8442 none.go:130] unable to get port: "minikube" does not appear in /root/.kube/config
    I0113 14:58:17.262166 8442 api_server.go:165] Checking apiserver status ...
    I0113 14:58:17.262190 8442 exec_runner.go:51] Run: sudo pgrep -xnf kube-apiserver.minikube.
    W0113 14:58:17.275557 8442 api_server.go:169] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.minikube.: exit status 1
    stdout:

stderr:
I0113 14:58:17.275592 8442 exec_runner.go:51] Run: sudo systemctl is-active --quiet service kubelet
I0113 14:58:17.287009 8442 fix.go:103] recreateIfNeeded on minikube: state=Stopped err=
W0113 14:58:17.287024 8442 fix.go:129] unexpected machine state, will restart:
I0113 14:58:17.288970 8442 out.go:177] * Restarting existing none bare metal machine for "minikube" ...
I0113 14:58:17.291040 8442 profile.go:148] Saving config to /root/.minikube/profiles/minikube/config.json ...
I0113 14:58:17.291153 8442 start.go:300] post-start starting for "minikube" (driver="none")
I0113 14:58:17.291194 8442 start.go:328] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
I0113 14:58:17.291229 8442 exec_runner.go:51] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
I0113 14:58:17.301387 8442 main.go:134] libmachine: Couldn't set key CPE_NAME, no corresponding struct field found
I0113 14:58:17.303444 8442 out.go:177] * OS release is Amazon Linux 2
I0113 14:58:17.304589 8442 filesync.go:126] Scanning /root/.minikube/addons for local assets ...
I0113 14:58:17.304627 8442 filesync.go:126] Scanning /root/.minikube/files for local assets ...
I0113 14:58:17.304644 8442 start.go:303] post-start completed in 13.48133ms
I0113 14:58:17.304652 8442 fix.go:57] fixHost completed within 42.653183ms
I0113 14:58:17.304659 8442 start.go:83] releasing machines lock for "minikube", held for 42.675966ms
I0113 14:58:17.305087 8442 exec_runner.go:51] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
image-endpoint: unix:///var/run/cri-dockerd.sock
" | sudo tee /etc/crictl.yaml"
I0113 14:58:17.305133 8442 exec_runner.go:51] Run: curl -sS -m 2 https://registry.k8s.io/
I0113 14:58:17.333914 8442 exec_runner.go:51] Run: sudo systemctl unmask docker.service
I0113 14:58:17.427787 8442 exec_runner.go:51] Run: sudo systemctl enable docker.socket
I0113 14:58:17.518529 8442 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0113 14:58:17.599315 8442 exec_runner.go:51] Run: sudo systemctl restart docker
I0113 14:58:17.850104 8442 exec_runner.go:51] Run: sudo systemctl enable cri-docker.socket
I0113 14:58:17.940739 8442 exec_runner.go:51] Run: sudo systemctl daemon-reload
I0113 14:58:18.039746 8442 exec_runner.go:51] Run: sudo systemctl start cri-docker.socket
I0113 14:58:18.051810 8442 start.go:451] Will wait 60s for socket path /var/run/cri-dockerd.sock
I0113 14:58:18.051857 8442 exec_runner.go:51] Run: stat /var/run/cri-dockerd.sock
I0113 14:58:18.053203 8442 start.go:472] Will wait 60s for crictl version
I0113 14:58:18.053242 8442 exec_runner.go:51] Run: sudo crictl version
I0113 14:58:18.059562 8442 retry.go:31] will retry after 11.04660288s: Temporary Error: sudo crictl version: exit status 1
stdout:

stderr:
sudo: crictl: command not found
I0113 14:58:29.106435 8442 exec_runner.go:51] Run: sudo crictl version
I0113 14:58:29.113652 8442 retry.go:31] will retry after 21.607636321s: Temporary Error: sudo crictl version: exit status 1
stdout:

stderr:
sudo: crictl: command not found
I0113 14:58:50.721579 8442 exec_runner.go:51] Run: sudo crictl version
I0113 14:58:50.728783 8442 retry.go:31] will retry after 26.202601198s: Temporary Error: sudo crictl version: exit status 1
stdout:

stderr:
sudo: crictl: command not found
I0113 14:59:16.931671 8442 exec_runner.go:51] Run: sudo crictl version
I0113 14:59:16.940407 8442 out.go:177]
W0113 14:59:16.941589 8442 out.go:239] X Exiting due to RUNTIME_ENABLE: Temporary Error: sudo crictl version: exit status 1
stdout:

stderr:
sudo: crictl: command not found

W0113 14:59:16.941618 8442 out.go:239] *
W0113 14:59:16.944929 8442 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
│                                                                                             │
│    * If the above advice does not help, please let us know:                                 │
│      https://github.com/kubernetes/minikube/issues/new/choose                               │
│                                                                                             │
│    * Please run minikube logs --file=logs.txt and attach logs.txt to the GitHub issue.      │
│                                                                                             │
╰─────────────────────────────────────────────────────────────────────────────────────────────╯
I0113 14:59:16.946014 8442 out.go:177]

[root@ip-172-31-32-247 ~]#

@afbjorklund
Collaborator

Did you forget to install cri-tools, perhaps?

https://minikube.sigs.k8s.io/docs/drivers/none/
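For reference, a minimal install sketch along the lines of those docs; the version below is illustrative (match it to your Kubernetes minor version), and /usr/bin is chosen deliberately because it is on sudo's default secure_path:

    VERSION="v1.25.0"   # example only; match your Kubernetes minor version
    curl -LO "https://github.com/kubernetes-sigs/cri-tools/releases/download/${VERSION}/crictl-${VERSION}-linux-amd64.tar.gz"
    sudo tar zxvf "crictl-${VERSION}-linux-amd64.tar.gz" -C /usr/bin
    sudo crictl --version   # verify the binary now resolves under sudo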

@afbjorklund added the co/none-driver and kind/support labels on Jan 13, 2023
@sktechnologiesadl
Author

I have installed it:

[root@ip-172-31-32-247 ~]# which crictl
/usr/local/bin/crictl
[root@ip-172-31-32-247 ~]# systemctl status cri-docker
● cri-docker.service - CRI Interface for Docker Application Container Engine
   Loaded: loaded (/etc/systemd/system/cri-docker.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2023-01-13 16:03:24 UTC; 7min ago
     Docs: https://docs.mirantis.com
 Main PID: 3710 (cri-dockerd)
   CGroup: /system.slice/cri-docker.service
           └─3710 /usr/local/bin/cri-dockerd --container-runtime-endpoint fd://

Jan 13 16:03:24 ip-172-31-32-247.ap-south-1.compute.internal cri-dockerd[3710]: time="2023-01-13T16:03:24Z" level=info msg="Star...0s"
Jan 13 16:03:24 ip-172-31-32-247.ap-south-1.compute.internal cri-dockerd[3710]: time="2023-01-13T16:03:24Z" level=info msg="Hair...ne"
Jan 13 16:03:24 ip-172-31-32-247.ap-south-1.compute.internal cri-dockerd[3710]: time="2023-01-13T16:03:24Z" level=info msg="Load...ni"
Jan 13 16:03:24 ip-172-31-32-247.ap-south-1.compute.internal cri-dockerd[3710]: time="2023-01-13T16:03:24Z" level=info msg="Dock...ni"
Jan 13 16:03:24 ip-172-31-32-247.ap-south-1.compute.internal cri-dockerd[3710]: time="2023-01-13T16:03:24Z" level=info msg="Dock...ver
Jan 13 16:03:24 ip-172-31-32-247.ap-south-1.compute.internal cri-dockerd[3710]: time="2023-01-13T16:03:24Z" level=info msg="Sett...fs"
Jan 13 16:03:24 ip-172-31-32-247.ap-south-1.compute.internal systemd[1]: Started CRI Interface for Docker Application Container...ine.
Jan 13 16:03:24 ip-172-31-32-247.ap-south-1.compute.internal cri-dockerd[3710]: time="2023-01-13T16:03:24Z" level=info msg="Dock...,}"
Jan 13 16:03:24 ip-172-31-32-247.ap-south-1.compute.internal cri-dockerd[3710]: time="2023-01-13T16:03:24Z" level=info msg="Star...e."
Jan 13 16:03:24 ip-172-31-32-247.ap-south-1.compute.internal cri-dockerd[3710]: time="2023-01-13T16:03:24Z" level=info msg="Star...nd"
Hint: Some lines were ellipsized, use -l to show in full.
[root@ip-172-31-32-247 ~]#

@afbjorklund
Collaborator

I don't think that /usr/local/bin is in the default sudo path, so you might have to install it to /usr/bin
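On RHEL-family distros such as Amazon Linux 2, sudo replaces PATH with the secure_path value from /etc/sudoers, which normally stops at /usr/bin. Two possible fixes, sketched under that assumption (the drop-in file name is arbitrary):

    # Option 1: put the binary on sudo's existing secure_path
    sudo ln -s /usr/local/bin/crictl /usr/bin/crictl

    # Option 2: extend secure_path via a sudoers drop-in
    echo 'Defaults secure_path = /sbin:/bin:/usr/sbin:/usr/bin:/usr/local/bin' | sudo tee /etc/sudoers.d/crictl-path
    sudo visudo -c          # sanity-check the sudoers syntax before relying on it
    sudo crictl version     # should no longer fail with "command not found"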


@sktechnologiesadl
Author

Awesome! It works:

[root@ip-172-31-32-247 ~]# minikube start --vm-driver=none

  • minikube v1.28.0 on Amazon 2 (xen/amd64)
  • Using the none driver based on existing profile
  • Starting control plane node minikube in cluster minikube
  • Restarting existing none bare metal machine for "minikube" ...
  • OS release is Amazon Linux 2
    kubectl.sha256: 64 B / 64 B [-------------------------] 100.00% ? p/s 0s
    kubelet.sha256: 64 B / 64 B [-------------------------] 100.00% ? p/s 0s
    kubeadm.sha256: 64 B / 64 B [-------------------------] 100.00% ? p/s 0s
    kubectl: 42.93 MiB / 42.93 MiB [------------] 100.00% 52.05 MiB p/s 1.0s
    kubeadm: 41.77 MiB / 41.77 MiB [------------] 100.00% 45.24 MiB p/s 1.1s
    kubelet: 108.95 MiB / 108.95 MiB [----------] 100.00% 66.32 MiB p/s 1.8s
  • Generating certificates and keys ...
  • Booting up control plane ...
  • Configuring RBAC rules ...
  • Configuring local host environment ...

! The 'none' driver is designed for experts who need to integrate with an existing VM
! kubectl and minikube configuration will be stored in /root
! To use kubectl or minikube commands as your own user, you may need to relocate them. For example, to overwrite your own settings, run:

  • sudo mv /root/.kube /root/.minikube $HOME
  • sudo chown -R $USER $HOME/.kube $HOME/.minikube
  • This can also be done automatically by setting the env var CHANGE_MINIKUBE_NONE_USER=true

  • Verifying Kubernetes components...
    • Using image gcr.io/k8s-minikube/storage-provisioner:v5
  • Enabled addons: default-storageclass, storage-provisioner
  • Done! kubectl is now configured to use "minikube" cluster and "default" namespace by default

[root@ip-172-31-32-247 ~]#

@sktechnologiesadl
Author

But my control plane is not ready:

[root@ip-172-31-32-247 ~]# kubectl get nodes
NAME                                           STATUS     ROLES           AGE     VERSION
ip-172-31-32-247.ap-south-1.compute.internal   NotReady   control-plane   2m49s   v1.25.3
[root@ip-172-31-32-247 ~]#
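A node that reports NotReady right after start is usually the kubelet still converging or a networking/CNI problem; the node's Conditions section gives the actual reason. Some generic checks, using the node name shown above:

    kubectl describe node ip-172-31-32-247.ap-south-1.compute.internal   # Conditions explain NotReady
    kubectl get pods -n kube-system                                      # coredns, kube-proxy, etc.
    sudo journalctl -u kubelet --since "10 minutes ago" | tail -n 50     # recent kubelet errors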

@afbjorklund
Collaborator

afbjorklund commented Jan 13, 2023

We added a workaround for EL and its legacy sudoers setup a while back, but it seems there has been a regression.

@sktechnologiesadl
Author

Can you help with that? How can I overcome the issue?

@sktechnologiesadl
Author

Would it be possible for you to take control of my system?

@sktechnologiesadl
Author

[ec2-user@ip-172-31-32-247 ~]$ kubectl get nodes
NAME       STATUS   ROLES           AGE     VERSION
minikube   Ready    control-plane   4m40s   v1.25.3
[ec2-user@ip-172-31-32-247 ~]$
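For completeness: the start output above also mentions CHANGE_MINIKUBE_NONE_USER, which with the none driver makes minikube hand /root/.kube and /root/.minikube over to the invoking user automatically. A sketch of that flow, assuming the ec2-user account shown here and a sudoers policy that allows preserving the environment:

    export CHANGE_MINIKUBE_NONE_USER=true
    sudo -E minikube start --vm-driver=none   # -E keeps the env var across sudo
    kubectl get nodes                         # kubeconfig now belongs to ec2-user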

@k8s-triage-robot

The Kubernetes project currently lacks enough contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle stale
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle stale

@k8s-ci-robot added the lifecycle/stale label on Apr 13, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues.

This bot triages un-triaged issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Mark this issue as fresh with /remove-lifecycle rotten
  • Close this issue with /close
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/lifecycle rotten

@k8s-ci-robot added the lifecycle/rotten label and removed the lifecycle/stale label on May 13, 2023
@k8s-triage-robot

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

@k8s-ci-robot
Contributor

@k8s-triage-robot: Closing this issue, marking it as "Not Planned".

In response to this:

The Kubernetes project currently lacks enough active contributors to adequately respond to all issues and PRs.

This bot triages issues according to the following rules:

  • After 90d of inactivity, lifecycle/stale is applied
  • After 30d of inactivity since lifecycle/stale was applied, lifecycle/rotten is applied
  • After 30d of inactivity since lifecycle/rotten was applied, the issue is closed

You can:

  • Reopen this issue with /reopen
  • Mark this issue as fresh with /remove-lifecycle rotten
  • Offer to help out with Issue Triage

Please send feedback to sig-contributor-experience at kubernetes/community.

/close not-planned

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@k8s-ci-robot closed this as not planned on Jun 12, 2023