When updating cluster_dns, k3s-agent (on worker nodes) needs to be restarted #353
Comments
hetzner-k3s re-runs the installation scripts with updated settings on both masters and workers, so if a setting changed on the master nodes is not propagated to the workers (provided that, as in this case, the setting is only available for masters), then it's more of an issue for the k3s repository IMO. Can you open one there?
Ah, okay! I was assuming that. I'm going to dig a bit deeper to make sure I understand what scripts are called, so that I can open a relevant issue on the k3s repo. Thanks!
Yep, I run the official scripts and they then determine if something has changed. If nothing has changed, they don't restart k3s. So in this case it's likely something that needs to be addressed there.
Closing since it's more likely an issue with k3s itself :) |
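To sanity-check what the comments above describe, you can inspect the `--cluster-dns` flag in the k3s systemd unit on a control-plane node. The sketch below is self-contained for illustration only: the heredoc fakes a unit file with an assumed flag layout, whereas on a real node you would grep `/etc/systemd/system/k3s.service` directly.

```shell
# Illustrative sketch: the heredoc fakes a k3s unit file so the extraction
# is self-contained; on a real control-plane node, grep
# /etc/systemd/system/k3s.service instead. The flag layout is an assumption.
unit=$(mktemp)
cat > "$unit" <<'EOF'
ExecStart=/usr/local/bin/k3s \
    server \
    --cluster-dns=10.43.0.10 \
    --disable=traefik
EOF

# Extract the value of the --cluster-dns flag.
dns=$(grep -o -- '--cluster-dns=[0-9.]*' "$unit" | cut -d'=' -f2)
echo "cluster DNS: $dns"
rm -f "$unit"
```

Comparing this value before and after re-running `hetzner-k3s create` shows whether the unit file was actually updated.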
When I was investigating #351, I noticed that after changing `cluster_dns` and re-running `hetzner-k3s create`, the cluster DNS address was updated on the control plane nodes, but not on the worker nodes.

On the control plane nodes, the cluster DNS address is provided through a k3s flag (`--cluster-dns=...`) in `/etc/systemd/system/k3s.service`, and it looks like when we re-run `hetzner-k3s create`, that unit file gets correctly updated, reloaded, and restarted. Yay!

However, on the worker nodes, the cluster DNS address is not specified anywhere. I suppose that `k3s-agent` obtains that parameter from the control plane somehow.

When we update `cluster_dns` and re-run `hetzner-k3s create`, it does not restart `k3s-agent` on the worker nodes, and as a result, the worker nodes still use the old DNS address when creating new pods.

If I run `systemctl restart k3s-agent.service` manually on the worker nodes, they seem to pick up the new DNS address, and new pods get the new DNS address.

I don't know if something should be changed in the code, or if it should simply be documented; but I wanted to report the issue so you could decide what is best for the project.
Thank you!