talosctl health extra flags #7967
For some context, I am currently using a bash script to wait until the kubelet becomes healthy on my nodes:

```bash
while true; do
  output=$(talosctl dmesg -n "$NODE_IP" 2>&1)
  if echo "$output" | grep -Fq "service[kubelet](Running): Health check successful"; then
    echo ""
    echo "Kubelet is healthy on node $NODE_IP!"
    break
  else
    printf "."
    sleep 1
  fi
done
```

But I feel like there should be a more elegant way to handle this, since it's not an uncommon scenario to disable the CNI.
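The polling loop above can be factored into a reusable helper. This is just a sketch: `wait_for` and `kubelet_healthy` are illustrative names, not part of talosctl, and `NODE_IP` is a placeholder.

```shell
#!/usr/bin/env bash
# Sketch of a generic polling helper: retry a command until it succeeds
# or a timeout (in seconds) expires. Illustrative only, not a talosctl feature.
wait_for() {
  local timeout=$1; shift
  local start=$SECONDS
  until "$@" >/dev/null 2>&1; do
    if (( SECONDS - start >= timeout )); then
      echo "timed out after ${timeout}s waiting for: $*" >&2
      return 1
    fi
    printf '.'
    sleep 1
  done
  echo
}

# Hypothetical check built from the dmesg-grep approach above; NODE_IP is a placeholder.
kubelet_healthy() {
  talosctl dmesg -n "$NODE_IP" 2>&1 |
    grep -Fq 'service[kubelet](Running): Health check successful'
}

# Usage: wait_for 300 kubelet_healthy
```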
That would be amazing. I have exactly the same problem with the CNI, especially in Terraform.
This issue is stale because it has been open 180 days with no activity. Remove the stale label or comment, or this will be closed in 7 days.
This issue was closed because it has been stalled for 7 days with no activity.
Feature Request
Ability to specify whether or not to wait for nodes to be ready.
Description
When deploying Talos, I saw that a lot of people are disabling the CNI and opting to manually install one later on, mainly when doing GitOps.
Currently, `talosctl health` checks the health of the cluster end-to-end, i.e. both Talos and Kubernetes. I think there should be a flag, something like `talosctl health --kubernetes=false`, which would validate the health up to and including the kubelet, i.e. without checking whether the nodes are in a `Ready` state, since without a CNI they will never reach that state. This makes it a bit harder to automate installs like bootstrap -> wait -> apply CNI, for example.
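If the proposed flag existed, the bootstrap -> wait -> apply CNI automation could look something like this. Note that `--kubernetes=false` is the flag suggested in this issue, not an existing talosctl option, and the CNI manifest path is a placeholder:

```shell
# Hypothetical flow: --kubernetes=false is the flag proposed in this issue,
# not something talosctl currently supports.
talosctl bootstrap -n "$NODE_IP"

# Wait for Talos and the kubelet only, skipping the Kubernetes Node Ready check:
talosctl health -n "$NODE_IP" --kubernetes=false

# With the kubelet up, install the CNI (manifest path is a placeholder):
kubectl apply -f cni-manifest.yaml
```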