Clear k3s docker containers after stop/uninstall #1469
I am not affiliated with the project, but I expect that the reason containers do not get stopped when k3s stops is to prevent kubelet crashes from killing containers. AFAIK this is aligned with upstream behaviour. Putting a flag on a stop command does sound reasonable, though.
@benfairless I agree. My problem is that k3s's built-in containers (traefik, local-path-provisioner, metrics-server, etc.) are created and accumulate after every run. In my opinion, if the containers are already created, there's no need to deploy them again after restarting k3s.
Just saw this: on a DigitalOcean droplet, I stopped the k3s service and the containers are still there.
I became aware of this because it drained too many resources on my laptop after k3s-uninstall.sh. Thank you so much, @Lohann, for your work making this reproducible, comprehensible, and possible to work around!
k3s should NOT stop containers when stopping the service. This would break the ability to nondisruptively restart k3s for upgrades. The k3s-killall.sh script is available if you want to kill all the pods after k3s is down; if you want to stop things gracefully you can use

@Lohann The pods should not all be duplicated every time you restart k3s. Are you running rootless by any chance? Can you provide
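The distinction described above can be sketched as two command sequences. This is an illustration under the assumptions of a default curl-script install (systemd unit named `k3s.service`, `k3s-killall.sh` on the PATH), not project documentation:

```shell
# Nondisruptive restart, e.g. for an upgrade:
# pods and their containers keep running while the service is down.
systemctl stop k3s.service
# ... replace the k3s binary here ...
systemctl start k3s.service

# Full teardown: only after k3s-killall.sh are the pods actually killed.
systemctl stop k3s.service
k3s-killall.sh
```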
(k3s stable, installed via the curl'ed script)
This repository uses a bot to automatically label issues which have not had any activity (commit/comment/label) for 180 days. This helps us manage the community issues better. If the issue is still relevant, please add a comment to the issue so the bot can remove the label and we know it is still valid. If it is no longer relevant (or possibly fixed in the latest release), the bot will automatically close the issue in 14 days. Thank you for your contributions.
Still a thing.
@brandond The standalone server on which my k3s cluster runs has rebooted. Usually the service restarts without error, but this time it did not, and all my docker containers are stopped. What should I do? Should I restart
This is unrelated to the topic above; I suggest you open your own thread.
How on earth is this not resolved after such a long time?
Yes. K3s only cleans up after the things it installs itself, such as the bundled containerd and the pods running in it. If you use an external container runtime (either via --container-runtime-endpoint or --docker), you need to clean the pods out of that runtime yourself. The same is true of other components that can be disabled and replaced with your own selection: for example, if you disable flannel and install a different CNI, that CNI may create files or directories that our uninstall script will not remove.
Version:
Describe the bug
K3s doesn't stop Docker containers after running `k3s-killall.sh`, and doesn't remove the containers after `k3s-uninstall.sh`.
To Reproduce
1. Start the server with Docker as the container runtime: `k3s server --docker` or `curl -sfL https://get.k3s.io | sh -s - --docker`
2. List the containers k3s created: `docker ps -a --filter "name=k8s_"`
3. Stop k3s with `k3s-killall.sh` or `systemctl stop k3s.service`
4. The containers are still there: `docker ps -a --filter "name=k8s_"`
5. Uninstall k3s with `k3s-uninstall.sh`
6. The containers are still there: `docker ps -a --filter "name=k8s_"`
Expected behavior
After `systemctl stop k3s.service` or `k3s-killall.sh`, the containers should be stopped.
After `systemctl start k3s.service`, the existing containers should be started again rather than re-created.
After `k3s-uninstall.sh`, the containers should be removed.
If this is the expected behavior, maybe a flag could be provided to the scripts:
`k3s-killall.sh --stop-containers`
`k3s-uninstall.sh --remove-containers`
Workaround
I use the following command to delete the k3s containers:

```shell
docker stop $(docker ps -a -q --filter "name=k8s_") | xargs docker rm
```
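A slightly more defensive variant (my own suggestion, not from the thread) avoids invoking `docker stop`/`docker rm` at all when the filter matches nothing, via GNU `xargs -r`:

```shell
# Hypothetical variant of the workaround above: stop, then remove, every
# container whose name carries the "k8s_" prefix that kubelet assigns.
# `xargs -r` (a GNU extension) skips the command entirely on empty input,
# so nothing runs when no matching containers exist.
docker ps -aq --filter "name=k8s_" | xargs -r docker stop
docker ps -aq --filter "name=k8s_" | xargs -r docker rm

# The -r behaviour itself can be checked without Docker:
printf '' | xargs -r echo "would remove:"          # prints nothing
printf 'abc123\n' | xargs -r echo "would remove:"  # prints "would remove: abc123"
```

Without `-r`, an empty match list would make `docker rm` run with no arguments and exit with a usage error.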