IPv6 masquerade NAT rules missing in dual-stack installation #4683
Comments
@manuelbuil would you mind taking a look at this?
Yes, this is a known behavior in flannel dual-stack. I guess the person who implemented dual-stack thought people using IPv6 would not want SNAT. Could you open a similar issue in flannel referring to this one? This feature should be implemented there.
I will say that NAT with IPv6 is somewhat unusual; I think the expectation is usually that there are enough addresses available that you can have a unique address for everything and avoid NATing traffic entirely.
I guess it makes sense why they would implement it like this, but for the sake of consistency an option to enable SNAT for IPv6 would be nice.
While IPv6 NAT is uncommon on "normal" setups, I don't think using ULA IPv6 addresses (in other words, private addresses) will be that uncommon inside k3s clusters. While every pod having a global address is lovely, it does require a suitable IPv6 subnet to be routed/delegated to at least the master node, which I wouldn't necessarily expect to be that common, especially for more edge users of IPv6 (and I'm not sure how the cloud providers do IPv6 subnet delegation, to be honest). So making things just work seems sensible :)
Today I hit this issue too, if my investigation was correct. Sadly I did not have access to an IPv6 neighbor host to fully confirm with tcpdump that this is exactly the same thing. Setup:
What does work?
What does not work?
Advantage if this is allowed: the node itself has a public IPv6 address but the pod does not. The pod also doesn't need one, as it is just a ddclient pod that updates my AAAA record via DynDNS.
Reproduced the issue in k3s v1.23.1+k3s1
Ping an IPv4 address from within the testing pod
Verify on the target host using tcpdump that the pod address is NATed
Ping an IPv6 address from within the testing pod
Validated fix on k3s version v1.23.2-rc1+k3s1: ping IPv6
Verify on the target host using tcpdump that the pod IPv6 address is NATed
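A rough sketch of those verification steps, assuming a test pod named testpod, a placeholder external host at 192.0.2.10 / 2001:db8::10 listening on eth0, and a pod image that ships iputils ping (all of these are assumptions, adjust to your environment):

```
# Ping the external host's IPv4 address from inside the test pod
kubectl exec testpod -- ping -c 3 192.0.2.10

# On the external host: the source should be the node's IPv4 address, i.e. the pod address was NATed
sudo tcpdump -ni eth0 icmp

# Ping the external host's IPv6 address from inside the test pod
kubectl exec testpod -- ping -6 -c 3 2001:db8::10

# On the external host: check whether the source is the node's IPv6 address or the pod's (un-NATed) one
sudo tcpdump -ni eth0 icmp6
```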
This did the trick on my setup:
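The exact commands aren't quoted here; a minimal sketch of an equivalent rule, assuming the pod IPv6 CIDR 2001:cafe:42::/56 from the dual-stack docs example (adjust to your --cluster-cidr) and ip6tables as the frontend:

```
# Masquerade pod IPv6 traffic leaving the cluster, mirroring the IPv4 masquerade rule
sudo ip6tables -t nat -A POSTROUTING -s 2001:cafe:42::/56 ! -d 2001:cafe:42::/56 -j MASQUERADE
```

Keep in mind that a rule added by hand like this is not persisted across reboots unless you save and restore it yourself.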
Environmental Info:
K3s Version:
Node(s) CPU architecture, OS, and Version:
Linux me-k3sv6 4.18.0-348.2.1.el8_5.x86_64 #1 SMP Mon Nov 15 20:49:28 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
Rocky Linux 8.5 with a single network adapter that has both an IPv4 and an IPv6 address assigned
I have verified the same behavior on Debian 11 too
Cluster Configuration:
Single node installation without any additional agents.
Describe the bug:
When creating a cluster using the dual-stack options described in the docs, any outgoing IPv6 pod traffic exits the node with the pod's IPv6 address instead of the address of the node's interface.
It looks like the NAT rules in nftables that are responsible for translating the pod addresses are only created for IPv4 addresses and not for IPv6.
Replicating the existing IPv4 rules for IPv6 by hand fixes the issue.
I'm not sure if this is a bug or by design, but I would expect outgoing traffic to work the same for IPv4 and IPv6 by default.
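One way to see the asymmetry (a sketch, assuming the iptables/ip6tables frontends that ship with Rocky Linux 8.5 and Debian 11):

```
# IPv4: flannel's masquerade rules for the pod CIDR show up here
sudo iptables -t nat -S POSTROUTING

# IPv6: no equivalent masquerade rules for the pod IPv6 CIDR are created
sudo ip6tables -t nat -S POSTROUTING
```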
Steps To Reproduce:
Expected behavior:
Pod IPv6 addresses being NATed like the IPv4 addresses
Actual behavior:
Only the IPv4 address is NATed, while the IPv6 address is left as is
Additional context / logs:
Relevant firewall IPv4 and IPv6 rules before adding the rules by hand: