bind kube-proxy to 127.0.0.1 #209
Conversation
modules/cluster/variables.tf
Outdated
```hcl
variable "support_ipv6" {
  type        = bool
  default     = false
  description = "Support requilred for IPV6."
```
```diff
-  description = "Support requilred for IPV6."
+  description = "Support required for IPV6."
```
modules/cluster/addons.tf
Outdated
```diff
-  { aws_region = data.aws_region.current.name },
+  {
+    aws_region   = data.aws_region.current.name
+    bind_address = var.support_ipv6 ? "0.0.0.0" : "127.0.0.1"
```
I am not sure the support_ipv6 == true part of this branch is appropriate. As I understand the comment here aws/amazon-vpc-cni-k8s#1078 (comment), the idea is that we can bypass some detection for v4 vs v6 if we specify 127.0.0.1.
Since I don't think we are using the IPv6 stack at all anywhere, I am not sure how to test the behaviour. I think it would be fine to hardcode this, and make a note in the README that we don't currently support IPv6.
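Hardcoding might look something like this in addons.tf. This is only a sketch: apart from the `aws_region` and `bind_address` names visible in the diff above, the `templatefile` call, the template path, and the local name are assumptions about how this module renders the manifest.

```hcl
# modules/cluster/addons.tf (sketch, assuming the kube-proxy manifest is
# rendered with templatefile; the template path is hypothetical)
locals {
  kube_proxy_manifest = templatefile(
    "${path.module}/addons/kube-proxy.yaml",
    {
      aws_region   = data.aws_region.current.name
      bind_address = "127.0.0.1" # hardcoded while we don't support IPv6
    }
  )
}
```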
modules/cluster/addons.tf
Outdated
```diff
-  { aws_region = data.aws_region.current.name },
+  {
+    aws_region   = data.aws_region.current.name
+    bind_address = var.support_ipv6 ? "0.0.0.0" : "127.0.0.1"
```
❗ I don't think this is a great fix, since we are allowing any host to connect under IPv4 as well. It seems kube-proxy doesn't handle this configuration very gracefully!
Regardless, since we do not use IPv6 at the moment and we are not supporting it (i.e., we haven't tested it at all), can we simply drop IPv6 for now?
```diff
-      bind_address = var.support_ipv6 ? "0.0.0.0" : "127.0.0.1"
+      bind_address = "127.0.0.1"
```
Having read what is going on here kubernetes/kubernetes#91725, I think we are OK to hardcode 127.0.0.1. It seems that the comments about incompatibility with an IPv6 network only apply to a single-stack IPv6 setup. On an AWS VPC it is not possible to configure a single-stack IPv6 network; you can only configure v4 single-stack or dual-stack.
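For context, the setting under discussion is `bindAddress` in kube-proxy's `KubeProxyConfiguration`. A minimal fragment with the hardcoded loopback address might look like this (a sketch; all other configuration fields are elided):

```yaml
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
# 127.0.0.1 lets kube-proxy skip node-IP detection (see the discussion in
# kubernetes/kubernetes#91725); 0.0.0.0 would only be needed for a
# single-stack IPv6 cluster, which AWS VPC does not offer.
bindAddress: 127.0.0.1
```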
Force-pushed c3a05cf to 8eab9af
Force-pushed 8eab9af to d4e37c2
@errm @mtpereira ♻️ please 🙏
LGTM
The EKS addon for kube-proxy introduced regressions of #124 and #209. We will apply the recommended overrides from aws/containers-roadmap#657 and aws/amazon-vpc-cni-k8s#1702 in the manifests until the EKS addon applies these by default or allows you to override config in the addon.
Following the advice from aws/amazon-vpc-cni-k8s#1078 (comment), this fix should stop us hitting the bug around unstable aws-node startup.
I've made the bindAddress configurable based on IPv4/IPv6 usage, as binding to 127.0.0.1 will not work for IPv6.