[Feature] Node local DNS #1492
Comments
Any update on this? We deployed nodelocaldns and it made a massive difference for us (#1326 (comment))
Good to know, we are designing it as we speak
@palma21 one thing you may want to take back to your folks: the currently proposed nodelocal daemonset does not tolerate enough taints, which can inadvertently leave it unable to schedule on nodes that carry custom taints. I used tolerations that resolved that issue for us; see the sketch below.
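The original tolerations snippet was not captured here; a minimal sketch of the common pattern for node-critical daemonsets (the same tolerations the upstream node-local-dns manifest uses) would be:

```yaml
# Sketch, not the commenter's exact snippet: tolerate every NoSchedule/NoExecute
# taint so the daemonset lands on all nodes, including ones with custom taints.
tolerations:
  - key: CriticalAddonsOnly
    operator: Exists
  - effect: NoSchedule
    operator: Exists
  - effect: NoExecute
    operator: Exists
```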
Any updates on this feature? What is the ETA?
It's currently on the committed items for this semester and under design review. We will have a better idea of the concrete ETA by the end of the month.
Hopefully this is high on the priority list; it would be amazing to have. I get DNS latency bursts constantly.
Any update on the ETA? If there is a rough timeline now, we may prefer to wait a bit longer rather than invest engineering time into something that would become obsolete soon after.
@bergerx this will make you happy :) https://github.com/curtdept/aks_nodelocaldns/blob/main/nodelocaldns.yaml
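For readers unfamiliar with the approach, here is a heavily condensed sketch of the upstream node-local-dns daemonset pattern that manifests like the one linked above are typically based on. This is not the linked file verbatim; the image tag is an assumption, and 169.254.20.10 is the conventional link-local listen address:

```yaml
# Condensed sketch of a node-local DNS cache daemonset (assumptions noted above).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-local-dns
  namespace: kube-system
spec:
  selector:
    matchLabels:
      k8s-app: node-local-dns
  template:
    metadata:
      labels:
        k8s-app: node-local-dns
    spec:
      hostNetwork: true    # the cache binds a link-local IP on the node itself
      dnsPolicy: Default   # resolve via the node, not via the cache we are running
      priorityClassName: system-node-critical
      tolerations:
        - operator: Exists # schedule on every node, tainted or not
      containers:
        - name: node-cache
          image: registry.k8s.io/dns/k8s-dns-node-cache:1.22.28  # tag assumed
          args:
            - -localip
            - "169.254.20.10,<kube-dns-service-IP>"  # link-local IP + cluster DNS service IP
            - -conf
            - /etc/Corefile
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]  # needed to set up the local dummy interface and iptables rules
```

Pods keep resolving against the cluster DNS service IP as usual; the cache intercepts those queries on the node, avoiding the cross-node hops and UDP conntrack races behind the latency bursts discussed above.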
My AKS cluster is actually running with kubenet; the DNS service IP is something like 172.17.80.10, but I can't figure out what is
@MiyKh the
@PSanetra Thanks for the clarification. The node-local DNS is deployed into the kube-system namespace, but it is deleted by the Azure sync system. How can we force AKS to skip deletion of this component? EDIT: This can be done by removing the labels:
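The exact labels were not captured in the comment; assuming the standard kube-addon-manager behavior, these are the labels whose presence makes the managed sync loop reconcile (and potentially delete) resources in kube-system:

```yaml
# Assumed, based on standard kube-addon-manager semantics: dropping these
# labels opts the object out of the managed reconciliation loop.
metadata:
  labels:
    addonmanager.kubernetes.io/mode: Reconcile
    kubernetes.io/cluster-service: "true"
```

On a live object this could be done in place (daemonset name assumed) with `kubectl -n kube-system label daemonset node-local-dns addonmanager.kubernetes.io/mode- kubernetes.io/cluster-service-`.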
@MiyKh interesting. We still have those labels set on the daemonset, but it is not getting deleted. We are running AKS 1.17.11.
Same here on 1.19
I see the same behavior as this issue: #1435
What AKS version?
This is a 1.18.8 cluster with kubenet networking. Removing the labels was the solution, but I guess I would still need to redeploy it after a cluster upgrade.
Any news on an ETA?
@4c74356b41 @djsly Upgrading a cluster still using bridge mode will make it use transparent mode instead.
@joaguas thanks! I asked a question about querying the status: wondering how to check whether our clusters have consumed the update yet, since we did perform a few updates.
you can just do
OK, so the CNI transparent mode is baked into the image, good to know. I will try to find out which base image contains the fix then. Thanks @4c74356b41
Hi @djsly, an easy way would be to get a shell on one of the nodes and check either the interfaces or the route tables. You can also check the route table (
Thanks for the tip @4c74356b41
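A sketch of that kind of node-level check (not from the thread; the `azv` interface prefix is an assumption): bridge mode creates an `azure0` Linux bridge, while transparent mode wires pods through per-pod veth interfaces with host routes, so either signal distinguishes the two:

```sh
# From a shell on the node: assumed heuristics for Azure CNI mode.
# Bridge mode: an azure0 bridge device exists.
ip link show azure0 2>/dev/null && echo "azure0 present: bridge mode" \
  || echo "no azure0 bridge: likely transparent mode"

# Transparent mode: per-pod host routes via veth interfaces (name prefix assumed).
ip route | grep -E 'dev (azv|veth)'
```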
This issue has been automatically marked as stale because it has not had any activity for 60 days. It will be closed if no further activity occurs within 15 days of this comment.
This issue will now be closed because it hasn't had any activity for 15 days after going stale. @palma21 feel free to comment again within the next 7 days to reopen, or open a new issue after that time if you still have a question/issue or suggestion.