Dual stack capability in 1.21 #3158

Closed
davidnuzik opened this issue Apr 7, 2021 · 1 comment

davidnuzik commented Apr 7, 2021

Full dual-stack support will be an ongoing process. Various CNIs have limited or no support for dual-stack yet.

Our goal should be to support the upstream flags for IPv6, such as IP ranges and CIDRs. Review https://kubernetes.io/docs/concepts/services-networking/dual-stack/ -- you'll notice that the feature is currently in alpha and is moving to beta in 1.21. We must support these flags so that we can take the first step toward supporting dual stack.

There are a few places where we have a single net.IPNet and will need to convert over to []net.IPNet. Examples:

ServiceCIDR net.IPNet
ServiceNodePortRange utilnet.PortRange
ClusterCIDR net.IPNet
ClusterDNS net.IP

ClusterIPRange *net.IPNet
ServiceIPRange *net.IPNet

We will need to comb through and find all the places where we assume a single CIDR block and update them all to accept a list of blocks (see the sketch below).

It's important to emphasize that this is just the first phase of fully supporting dual stack. This work should allow for IPv6 CIDR blocks -- CNI, load balancer, etc. are a different matter.

@rancher-max

Validated using v1.21.0+k3s1 that we are no longer hard-blocked from deploying dual stack:

  • Confirmed expected error: level=fatal msg="flannel CNI and network policy enforcement are not compatible with dual-stack operation; server must be restarted with --flannel-backend=none --disable-network-policy and an alternative CNI plugin deployed"
  • Specifying dual CIDRs works as expected on the node:
  spec:
    podCIDR: 192.168.0.0/24
    podCIDRs:
    - 192.168.0.0/24
    - xxxx:aaaa:bbb::/64
    providerID: k3s://ip-192-168-29-133
    taints:
    - effect: NoSchedule
      key: node.kubernetes.io/not-ready
  status:
    addresses:
    - address: 192.168.29.133
      type: InternalIP
    - address: xxxx:aaaa:bbb:cccc:1234:dddd:eeee:1234
      type: InternalIP
    - address: ip-192-168-29-133
      type: Hostname
  • Note the cluster still doesn't come up because there is no CNI; a dual-stack-capable CNI must be deployed afterwards to get the cluster up and running:
# kubectl get nodes,pods -A -o wide
NAME                     STATUS     ROLES                  AGE     VERSION        INTERNAL-IP      EXTERNAL-IP   OS-IMAGE             KERNEL-VERSION   CONTAINER-RUNTIME
node/ip-192-168-29-133   NotReady   control-plane,master   7m27s   v1.21.0+k3s1   192.168.29.133   <none>        Ubuntu 20.04.2 LTS   5.4.0-1045-aws   containerd://1.4.4-k3s1
NAMESPACE     NAME                                          READY   STATUS    RESTARTS   AGE     IP       NODE     NOMINATED NODE   READINESS GATES
kube-system   pod/helm-install-traefik-crd-h59t9            0/1     Pending   0          7m17s   <none>   <none>   <none>           <none>
kube-system   pod/helm-install-traefik-hx7jx                0/1     Pending   0          7m17s   <none>   <none>   <none>           <none>
kube-system   pod/metrics-server-86cbb8457f-nmgdx           0/1     Pending   0          7m17s   <none>   <none>   <none>           <none>
kube-system   pod/local-path-provisioner-5ff76fc89d-5msm6   0/1     Pending   0          7m17s   <none>   <none>   <none>           <none>
kube-system   pod/coredns-7448499f4d-jdd8n                  0/1     Pending   0          7m17s   <none>   <none>   <none>           <none>

# journalctl -eu k3s -f
"Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized"
