Support dual stack IPv4 & IPv6 networking #6859

Merged (6 commits, Feb 5, 2021). Changes from all commits.
5 changes: 5 additions & 0 deletions .gitlab-ci/vagrant.yml
@@ -38,6 +38,11 @@ molecule_tests:
after_script:
- chronic ./tests/scripts/testcases_cleanup.sh

vagrant_ubuntu18-calico-dual-stack:
stage: deploy-part2
extends: .vagrant
when: on_success

vagrant_ubuntu18-flannel:
stage: deploy-part2
extends: .vagrant
1 change: 1 addition & 0 deletions README.md
@@ -160,6 +160,7 @@ Note: The list of available docker version is 18.09, 19.03 and 20.10. The recomm
- **Ansible v2.9.x, Jinja 2.11+ and python-netaddr are installed on the machine that will run Ansible commands; Ansible 2.10.x is not supported for now**
- The target servers must have **access to the Internet** in order to pull docker images. Otherwise, additional configuration is required (See [Offline Environment](docs/offline-environment.md))
- The target servers are configured to allow **IPv4 forwarding**.
- If using IPv6 for pods and services, the target servers are configured to allow **IPv6 forwarding**.
- The **firewalls are not managed**: you'll need to implement your own rules as you're used to.
In order to avoid any issues during deployment, you should disable your firewall.
- If kubespray is run from a non-root user account, the correct privilege escalation method
14 changes: 13 additions & 1 deletion Vagrantfile
@@ -49,6 +49,7 @@ $vm_cpus ||= 2
$shared_folders ||= {}
$forwarded_ports ||= {}
$subnet ||= "172.18.8"
$subnet_ipv6 ||= "fd3c:b398:0698:0756"
$os ||= "ubuntu1804"
$network_plugin ||= "flannel"
# Setting multi_networking to true will install Multus: https://github.com/intel/multus-cni
@@ -194,11 +195,22 @@ Vagrant.configure("2") do |config|
end

ip = "#{$subnet}.#{i+100}"
node.vm.network :private_network, ip: ip
node.vm.network :private_network, ip: ip,
:libvirt__guest_ipv6 => 'yes',
:libvirt__ipv6_address => "#{$subnet_ipv6}::#{i+100}",
:libvirt__ipv6_prefix => "64",
:libvirt__forward_mode => "none",
:libvirt__dhcp_enabled => false

# Disable swap for each vm
node.vm.provision "shell", inline: "swapoff -a"

# ubuntu1804 and ubuntu2004 have IPv6 explicitly disabled. This undoes that.
if ["ubuntu1804", "ubuntu2004"].include? $os
miff2000 marked this conversation as resolved.
Show resolved Hide resolved
node.vm.provision "shell", inline: "rm -f /etc/modprobe.d/local.conf"
node.vm.provision "shell", inline: "sed -i '/net.ipv6.conf.all.disable_ipv6/d' /etc/sysctl.d/99-sysctl.conf /etc/sysctl.conf"
end

# Disable firewalld on oraclelinux/redhat vms
if ["oraclelinux","oraclelinux8","rhel7","rhel8"].include? $os
node.vm.provision "shell", inline: "systemctl stop firewalld; systemctl disable firewalld"
7 changes: 4 additions & 3 deletions docs/calico.md
@@ -58,13 +58,14 @@ To re-define you need to edit the inventory and add a group variable `calico_net
calico_network_backend: none
```

### Optional : Define the default pool CIDR
### Optional : Define the default pool CIDRs

By default, `kube_pods_subnet` is used as the IP range CIDR for the default IP Pool.
In some cases you may want to add several pools and not have them considered by Kubernetes as external (which means that they must be within or equal to the range defined in `kube_pods_subnet`), it starts with the default IP Pool of which IP range CIDR can by defined in group_vars (k8s-cluster/k8s-net-calico.yml):
By default, `kube_pods_subnet` is used as the IP range CIDR for the default IP Pool, and `kube_pods_subnet_ipv6` for IPv6.
In some cases you may want to add several pools and not have them considered by Kubernetes as external (which means that they must be within or equal to the range defined in `kube_pods_subnet` and `kube_pods_subnet_ipv6`). It starts with the default IP Pools, whose IP range CIDRs can be defined in group_vars (k8s-cluster/k8s-net-calico.yml):

```ShellSession
calico_pool_cidr: 10.233.64.0/20
calico_pool_cidr_ipv6: fd85:ee78:d8a6:8607::1:0000/112
```
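
A slightly fuller sketch combining both pool overrides with the IPv6 block size knob this PR introduces (`calico_pool_blocksize_ipv6` defaults to 116 in the Calico role defaults; the CIDR values are the examples above):

```yaml
# group_vars/k8s-cluster/k8s-net-calico.yml (sketch)
calico_pool_cidr: 10.233.64.0/20
calico_pool_cidr_ipv6: fd85:ee78:d8a6:8607::1:0000/112
calico_pool_blocksize_ipv6: 116   # per-node block size for the IPv6 pool
```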

### Optional : BGP Peering with border routers
8 changes: 8 additions & 0 deletions docs/vars.md
@@ -62,6 +62,10 @@ following default cluster parameters:
raise an assertion in playbooks if the `kubelet_max_pods` var also isn't adjusted accordingly
(assertion not applicable to calico which doesn't use this as a hard limit, see
[Calico IP block sizes](https://docs.projectcalico.org/reference/resources/ippool#block-sizes)).
* *enable_dual_stack_networks* - Setting this to true will provision both IPv4 and IPv6 networking for pods and services.
* *kube_service_addresses_ipv6* - Subnet for cluster IPv6 IPs (default is ``fd85:ee78:d8a6:8607::1000/116``). Must not overlap with ``kube_pods_subnet_ipv6``.
* *kube_pods_subnet_ipv6* - Subnet for Pod IPv6 IPs (default is ``fd85:ee78:d8a6:8607::1:0000/112``). Must not overlap with ``kube_service_addresses_ipv6``.
* *kube_network_node_prefix_ipv6* - Subnet size allocated per node for pod IPv6 IPs. The remaining bits in ``kube_pods_subnet_ipv6`` dictate how many kube-nodes can be in the cluster.
* *skydns_server* - Cluster IP for DNS (default is 10.233.0.3)
* *skydns_server_secondary* - Secondary Cluster IP for CoreDNS used with coredns_dual deployment (default is 10.233.0.4)
* *enable_coredns_k8s_external* - If enabled, it configures the [k8s_external plugin](https://coredns.io/plugins/k8s_external/)
@@ -87,6 +91,10 @@ Note, if cloud providers have any use of the ``10.233.0.0/16``, like instances'
private addresses, make sure to pick other values for ``kube_service_addresses``
and ``kube_pods_subnet``, for example from ``172.18.0.0/16``.

## Enabling Dual Stack (IPv4 + IPv6) networking

If *enable_dual_stack_networks* is set to ``true``, Dual Stack networking will be enabled in the cluster. This will use the default IPv4 and IPv6 subnets specified in the defaults file in the ``kubespray-defaults`` role, unless overridden. The default config gives you room for up to 256 nodes with 254 pods per node, and up to 4096 services.
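
The sizing follows from the default prefixes; a worked check:

```yaml
# kube_pods_subnet_ipv6 (/112) carved into /120 per-node blocks:
#   2^(120 - 112) = 256 nodes
# each /120 node block: 2^(128 - 120) = 256 addresses, ~254 usable pod IPs
# kube_service_addresses_ipv6 (/116): 2^(128 - 116) = 4096 service IPs
```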

## DNS variables

By default, hosts are set up with 8.8.8.8 as an upstream DNS server and all
2 changes: 2 additions & 0 deletions facts.yml
@@ -14,6 +14,8 @@
loop:
- ansible_distribution_major_version
- ansible_default_ipv4
- ansible_default_ipv6
- ansible_all_ipv4_addresses
- ansible_all_ipv6_addresses
- ansible_memtotal_mb
- ansible_swaptotal_mb
19 changes: 19 additions & 0 deletions inventory/sample/group_vars/k8s-cluster/k8s-cluster.yml
@@ -94,6 +94,25 @@ kube_pods_subnet: 10.233.64.0/18
# - kubelet_max_pods: 110
kube_network_node_prefix: 24

# Configure Dual Stack networking (i.e. both IPv4 and IPv6)
enable_dual_stack_networks: false

# Kubernetes internal network for IPv6 services, unused block of space.
# This is only used if enable_dual_stack_networks is set to true
# This provides 4096 IPv6 IPs
kube_service_addresses_ipv6: fd85:ee78:d8a6:8607::1000/116

# Internal network. When used, it will assign IPv6 addresses from this range to individual pods.
# This network must not already be in your network infrastructure!
# This is only used if enable_dual_stack_networks is set to true.
# This provides room for 256 nodes with 254 pods per node.
kube_pods_subnet_ipv6: fd85:ee78:d8a6:8607::1:0000/112

# IPv6 subnet size allocated to each node for pods.
# This is only used if enable_dual_stack_networks is set to true
# This provides room for 254 pods per node.
kube_network_node_prefix_ipv6: 120

# The port the API Server will be listening on.
kube_apiserver_ip: "{{ kube_service_addresses|ipaddr('net')|ipaddr(1)|ipaddr('address') }}"
kube_apiserver_port: 6443 # (https)
3 changes: 3 additions & 0 deletions inventory/sample/group_vars/k8s-cluster/k8s-net-calico.yml
@@ -20,6 +20,9 @@
# add default ippool CIDR (must be inside kube_pods_subnet, defaults to kube_pods_subnet otherwise)
# calico_pool_cidr: 1.2.3.4/5

# Add default IPV6 IPPool CIDR. Must be inside kube_pods_subnet_ipv6. Defaults to kube_pods_subnet_ipv6 if not set.
# calico_pool_cidr_ipv6: fd85:ee78:d8a6:8607::1:0000/112

# Global as_num (/calico/bgp/v1/global/as_num)
# global_as_num: "64512"

@@ -88,8 +88,14 @@ dns:
imageTag: {{ coredns_image_tag }}
networking:
dnsDomain: {{ dns_domain }}
serviceSubnet: {{ kube_service_addresses }}
podSubnet: {{ kube_pods_subnet }}
serviceSubnet: "{{ kube_service_addresses }}{{ ',' + kube_service_addresses_ipv6 if enable_dual_stack_networks }}"
podSubnet: "{{ kube_pods_subnet }}{{ ',' + kube_pods_subnet_ipv6 if enable_dual_stack_networks }}"
{% if kube_feature_gates %}
featureGates:
{% for feature in kube_feature_gates %}
{{ feature|replace("=", ": ") }}
{% endfor %}
{% endif %}
kubernetesVersion: {{ kube_version }}
{% if kubeadm_config_api_fqdn is defined %}
controlPlaneEndpoint: {{ kubeadm_config_api_fqdn }}:{{ loadbalancer_apiserver.port | default(kube_apiserver_port) }}
@@ -127,6 +133,7 @@ apiServer:
etcd-servers-overrides: "/events#{{ etcd_events_access_addresses_semicolon }}"
{% endif %}
service-node-port-range: {{ kube_apiserver_node_port_range }}
service-cluster-ip-range: "{{ kube_service_addresses }}{{ ',' + kube_service_addresses_ipv6 if enable_dual_stack_networks }}"
kubelet-preferred-address-types: "{{ kubelet_preferred_address_types }}"
profiling: "{{ kube_profiling }}"
request-timeout: "{{ kube_apiserver_request_timeout }}"
@@ -262,7 +269,14 @@ controllerManager:
extraArgs:
node-monitor-grace-period: {{ kube_controller_node_monitor_grace_period }}
node-monitor-period: {{ kube_controller_node_monitor_period }}
cluster-cidr: "{{ kube_pods_subnet }}{{ ',' + kube_pods_subnet_ipv6 if enable_dual_stack_networks }}"
service-cluster-ip-range: "{{ kube_service_addresses }}{{ ',' + kube_service_addresses_ipv6 if enable_dual_stack_networks }}"
{% if enable_dual_stack_networks %}
node-cidr-mask-size-ipv4: "{{ kube_network_node_prefix }}"
node-cidr-mask-size-ipv6: "{{ kube_network_node_prefix_ipv6 }}"
{% else %}
node-cidr-mask-size: "{{ kube_network_node_prefix }}"
{% endif %}
profiling: "{{ kube_profiling }}"
terminated-pod-gc-threshold: "{{ kube_controller_terminated_pod_gc_threshold }}"
bind-address: {{ kube_controller_manager_bind_address }}
@@ -349,7 +363,7 @@ clientConnection:
contentType: {{ kube_proxy_client_content_type }}
kubeconfig: {{ kube_proxy_client_kubeconfig }}
qps: {{ kube_proxy_client_qps }}
clusterCIDR: {{ kube_pods_subnet }}
clusterCIDR: "{{ kube_pods_subnet }}{{ ',' + kube_pods_subnet_ipv6 if enable_dual_stack_networks }}"
configSyncPeriod: {{ kube_proxy_config_sync_period }}
conntrack:
maxPerCore: {{ kube_proxy_conntrack_max_per_core }}
@@ -381,9 +395,9 @@ portRange: {{ kube_proxy_port_range }}
udpIdleTimeout: {{ kube_proxy_udp_idle_timeout }}
{% if kube_feature_gates %}
featureGates:
{% for feature in kube_feature_gates %}
{% for feature in kube_feature_gates %}
{{ feature|replace("=", ": ") }}
{% endfor %}
{% endfor %}
{% endif %}
{# DNS settings for kubelet #}
{% if enable_nodelocaldns %}
Expand All @@ -404,3 +418,9 @@ clusterDNS:
{% for dns_address in kubelet_cluster_dns %}
- {{ dns_address }}
{% endfor %}
{% if kube_feature_gates %}
featureGates:
{% for feature in kube_feature_gates %}
{{ feature|replace("=", ": ") }}
{% endfor %}
{% endif %}
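
Taken together, with `enable_dual_stack_networks: true` and the stock subnets, the relevant fragments of the rendered kubeadm configuration would come out roughly as follows (a sketch; `10.233.0.0/18` and `10.233.64.0/18` are kubespray's stock IPv4 defaults, and `cluster.local` the default `dns_domain`):

```yaml
networking:
  dnsDomain: cluster.local
  serviceSubnet: "10.233.0.0/18,fd85:ee78:d8a6:8607::1000/116"
  podSubnet: "10.233.64.0/18,fd85:ee78:d8a6:8607::1:0000/112"
featureGates:
  IPv6DualStack: true
```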
7 changes: 7 additions & 0 deletions roles/kubernetes/preinstall/tasks/0040-set_facts.yml
@@ -176,3 +176,10 @@
set_fact:
kubelet_flexvolumes_plugins_dir: /var/lib/kubelet/volumeplugins
when: not usr.stat.writeable

- name: Ensure IPv6DualStack featureGate is set when enable_dual_stack_networks is true
set_fact:
kube_feature_gates: "{{ kube_feature_gates + [ 'IPv6DualStack=true' ] }}"
when:
- enable_dual_stack_networks
- not 'IPv6DualStack=true' in kube_feature_gates
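
With the default empty `kube_feature_gates`, this leaves the list as `['IPv6DualStack=true']`, which the `feature|replace("=", ": ")` loops in the kubeadm template render as:

```yaml
featureGates:
  IPv6DualStack: true
```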
@@ -62,6 +62,15 @@
state: present
reload: yes

- name: Enable ipv6 forwarding
sysctl:
sysctl_file: "{{ sysctl_file_path }}"
name: net.ipv6.conf.all.forwarding
value: 1
state: present
reload: yes
when: enable_dual_stack_networks | bool

- name: Ensure kube-bench parameters are set
sysctl:
sysctl_file: /etc/sysctl.d/bridge-nf-call.conf
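
A quick way to confirm the IPv6 forwarding toggle above took effect on a target host (a hypothetical read-only ad-hoc task, not part of this PR):

```yaml
# Sketch: verify IPv6 forwarding is active (assumed check, not in the PR)
- name: Check net.ipv6.conf.all.forwarding
  command: sysctl -n net.ipv6.conf.all.forwarding
  register: ipv6_fwd
  changed_when: false
  failed_when: ipv6_fwd.stdout != "1"
```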
19 changes: 19 additions & 0 deletions roles/kubespray-defaults/defaults/main.yaml
@@ -181,6 +181,25 @@ kube_pods_subnet: 10.233.64.0/18
# - kubelet_max_pods: 110
kube_network_node_prefix: 24

# Configure Dual Stack networking (i.e. both IPv4 and IPv6)
enable_dual_stack_networks: false

# Kubernetes internal network for IPv6 services, unused block of space.
# This is only used if enable_dual_stack_networks is set to true
# This provides 4096 IPv6 IPs
kube_service_addresses_ipv6: fd85:ee78:d8a6:8607::1000/116

# Internal network. When used, it will assign IPv6 addresses from this range to individual pods.
# This network must not already be in your network infrastructure!
# This is only used if enable_dual_stack_networks is set to true.
# This provides room for 256 nodes with 254 pods per node.
kube_pods_subnet_ipv6: fd85:ee78:d8a6:8607::1:0000/112

# IPv6 subnet size allocated to each node for pods.
# This is only used if enable_dual_stack_networks is set to true
# This provides room for 254 pods per node.
kube_network_node_prefix_ipv6: 120

# The virtual cluster IP, real host IPs and ports the API Server will be
# listening on.
# NOTE: loadbalancer_apiserver_localhost somewhat alters the final API endpoint
Expand Down
4 changes: 4 additions & 0 deletions roles/network_plugin/calico/defaults/main.yml
@@ -12,6 +12,10 @@ ipip_mode: "{{ 'Always' if ipip else 'Never' }}" # change to "CrossSubnet" if y
calico_ipip_mode: "{{ ipip_mode }}"
calico_vxlan_mode: 'Never'

calico_ipip_mode_ipv6: Never
calico_vxlan_mode_ipv6: Never
calico_pool_blocksize_ipv6: 116

calico_cert_dir: /etc/calico/certs

# Global as_num (/calico/bgp/v1/global/as_num)
Expand Down
46 changes: 46 additions & 0 deletions roles/network_plugin/calico/tasks/install.yml
@@ -108,6 +108,31 @@
- 'calico_conf.stdout == "0"'
- calico_pool_cidr is defined

- name: Calico | Check if calico IPv6 network pool has already been configured
# noqa 306 - grep will exit 1 if no match found
shell: >
{{ bin_dir }}/calicoctl.sh get ippool | grep -w "{{ calico_pool_cidr_ipv6 | default(kube_pods_subnet_ipv6) }}" | wc -l
args:
executable: /bin/bash
register: calico_conf_ipv6
retries: 4
until: calico_conf_ipv6.rc == 0
delay: "{{ retry_stagger | random + 3 }}"
changed_when: false
when:
- inventory_hostname == groups['kube-master'][0]
- enable_dual_stack_networks

- name: Calico | Ensure that calico_pool_cidr_ipv6 is within kube_pods_subnet_ipv6 when defined
assert:
that: "[calico_pool_cidr_ipv6] | ipaddr(kube_pods_subnet_ipv6) | length == 1"
msg: "{{ calico_pool_cidr_ipv6 }} is not within or equal to {{ kube_pods_subnet_ipv6 }}"
when:
- inventory_hostname == groups['kube-master'][0]
- calico_conf_ipv6.stdout is defined and calico_conf_ipv6.stdout == "0"
- calico_pool_cidr_ipv6 is defined
- enable_dual_stack_networks

- name: Calico | Create calico manifests for kdd
template:
src: "{{ item.file }}.j2"
@@ -156,6 +181,27 @@
- inventory_hostname == groups['kube-master'][0]
- 'calico_conf.stdout == "0"'

- name: Calico | Configure calico ipv6 network pool (version >= v3.3.0)
command:
cmd: "{{ bin_dir }}/calicoctl.sh apply -f -"
stdin: >
{ "kind": "IPPool",
"apiVersion": "projectcalico.org/v3",
"metadata": {
"name": "{{ calico_pool_name }}-ipv6",
},
"spec": {
"blockSize": {{ calico_pool_blocksize_ipv6 | default(kube_network_node_prefix_ipv6) }},
"cidr": "{{ calico_pool_cidr_ipv6 | default(kube_pods_subnet_ipv6) }}",
"ipipMode": "{{ calico_ipip_mode_ipv6 }}",
"vxlanMode": "{{ calico_vxlan_mode_ipv6 }}",
"natOutgoing": {{ nat_outgoing_ipv6|default(false) and not peer_with_router_ipv6|default(false) }} }}
when:
- inventory_hostname == groups['kube-master'][0]
- calico_conf_ipv6.stdout is defined and calico_conf_ipv6.stdout == "0"
- calico_version is version("v3.3.0", ">=")
- enable_dual_stack_networks | bool
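
For reference, assuming the stock `calico_pool_name` of `default-pool` and the defaults above, the JSON piped to `calicoctl.sh apply` corresponds to this IPPool (sketch):

```yaml
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-pool-ipv6
spec:
  blockSize: 116              # calico_pool_blocksize_ipv6 default
  cidr: fd85:ee78:d8a6:8607::1:0000/112
  ipipMode: Never
  vxlanMode: Never
  natOutgoing: false
```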

- name: Populate Service External IPs
set_fact:
_service_external_ips: "{{ _service_external_ips|default([]) + [ {'cidr': item} ] }}"
7 changes: 5 additions & 2 deletions roles/network_plugin/calico/templates/calico-node.yml.j2
@@ -200,9 +200,8 @@ spec:
{% endif %}
- name: CALICO_IPV4POOL_IPIP
value: "{{ calico_ipv4pool_ipip }}"
# Disable IPv6 on Kubernetes.
- name: FELIX_IPV6SUPPORT
value: "false"
value: "{{ enable_dual_stack_networks | default(false) }}"
# Set Felix logging to "info"
- name: FELIX_LOGSEVERITYSCREEN
value: "{{ calico_loglevel }}"
Expand Down Expand Up @@ -239,6 +238,10 @@ spec:
- name: IP
value: "autodetect"
{% endif %}
{% if enable_dual_stack_networks %}
- name: IP6
value: autodetect
Contributor: Do we need IP6_AUTODETECTION_METHOD (see 5 lines before)?

Contributor Author (@miff2000, Jan 7, 2021): What would you want it to be set to? Here are the options. It's presently using the default of first-found.

Contributor: first-found is far from ideal as soon as you have multiple interfaces and want to use CrossSubnet options; see 7d7739e. Is there an IPv6 equivalent of status.hostIP?

Contributor Author: Not that I could see...

Maybe adding a variable like calico_ip6_can_reach_address: with a default value like google.com? That way it can be overridden for the user's scenario
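
For illustration, such an override could slot into the calico-node env as below. `IP6_AUTODETECTION_METHOD` and its `can-reach=` form are documented calico-node settings; `calico_ip6_can_reach_address` is only the variable proposed in this thread, not something the PR adds:

```yaml
# Hypothetical: calico_ip6_can_reach_address comes from this thread, not the PR
- name: IP6_AUTODETECTION_METHOD
  value: "can-reach={{ calico_ip6_can_reach_address | default('www.google.com') }}"
```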

Contributor: I'm wondering, do we really need both IP and IP6? If you run dual stack, making tunnels over IPv4 makes more sense (less overhead), so is IP6 used for dual stack? On bare metal I usually have the K8S traffic on an isolated VLAN. This is fine with me for a first version; I'll think more about it when I try dual stack in the future.

Contributor (@champtar, Jan 7, 2021): I was not thinking of having IPv4-only hosts, but thinking about Calico not needing to know anything about the actual IPv6 config of the host. Thanks for the BGP case; in that case Calico does indeed need the IPv6 info. As you have a dual stack cluster handy, do we have any fields with the host IPv6 in the pod status?

Contributor Author: The only IPv6 IPs seem to be in podIPs:

```yaml
status:
  conditions:
  - lastProbeTime: null
    lastTransitionTime: "2020-12-14T10:31:57Z"
    status: "True"
    type: Initialized
  - lastProbeTime: null
    lastTransitionTime: "2020-12-14T10:31:59Z"
    status: "True"
    type: Ready
  - lastProbeTime: null
    lastTransitionTime: "2020-12-14T10:31:59Z"
    status: "True"
    type: ContainersReady
  - lastProbeTime: null
    lastTransitionTime: "2020-12-14T10:31:57Z"
    status: "True"
    type: PodScheduled
  containerStatuses:
  - containerID: docker://5a7c2d65e4d1b56a0668dbf16daee4126b00c7bbd2cc757859afe388af0652ab
    image: myregistry.com/tenant/app:image-v0.1.0
    imageID: docker-pullable://myregistry.com/tenant/app@sha256:ec6cffc50892bcf8eaa5957c7adb4040b0f9977a41a93c2a8c384ef94b45ebbf
    lastState: {}
    name: app
    ready: true
    restartCount: 0
    started: true
    state:
      running:
        startedAt: "2020-12-14T10:31:58Z"
  hostIP: 172.28.239.105
  phase: Running
  podIP: 10.233.104.85
  podIPs:
  - ip: 10.233.104.85
  - ip: fd52:5b5b:b0ab:f430::1:bf
  qosClass: BestEffort
  startTime: "2020-12-14T10:31:57Z"

```

Contributor Author (@miff2000, Jan 8, 2021): Shall we get this out of the door, and then make some iterative changes to it? It's functional and fits most use cases as it is, and it would be good to get something into master for people to tweak and improve upon 🙂

Contributor: Yes, maybe just wait for the 2.15 release (soon), then we can merge.

Contributor Author (@miff2000, Jan 16, 2021): Looks like 2.15 went out on Thursday. Is this able to be merged now, do you think?

{% endif %}
{% if calico_use_default_route_src_ipaddr|default(false) %}
- name: FELIX_DEVICEROUTESOURCEADDRESS
valueFrom:
4 changes: 4 additions & 0 deletions roles/network_plugin/calico/templates/cni-calico.conflist.j2
@@ -30,6 +30,10 @@
{% else %}
"ipam": {
"type": "calico-ipam",
{% if enable_dual_stack_networks %}
"assign_ipv6": "true",
"ipv6_pools": ["{{ calico_pool_cidr_ipv6 | default(kube_pods_subnet_ipv6) }}"],
{% endif %}
"assign_ipv4": "true",
"ipv4_pools": ["{{ calico_pool_cidr | default(kube_pods_subnet) }}"]
},
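
Rendered with dual stack enabled and the default pools, the ipam stanza would come out roughly as (sketch):

```yaml
"ipam": {
  "type": "calico-ipam",
  "assign_ipv6": "true",
  "ipv6_pools": ["fd85:ee78:d8a6:8607::1:0000/112"],
  "assign_ipv4": "true",
  "ipv4_pools": ["10.233.64.0/18"]
}
```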
7 changes: 7 additions & 0 deletions tests/files/vagrant_ubuntu18-calico-dual-stack.rb
@@ -0,0 +1,7 @@
# For CI we are not worried about data persistence across reboot
$libvirt_volume_cache = "unsafe"

# Checking for box update can trigger API rate limiting
# https://www.vagrantup.com/docs/vagrant-cloud/request-limits.html
$box_check_update = false
$network_plugin = "calico"
8 changes: 8 additions & 0 deletions tests/files/vagrant_ubuntu18-calico-dual-stack.yml
@@ -0,0 +1,8 @@
---
# Kubespray settings

kube_network_plugin: calico
enable_dual_stack_networks: true

deploy_netchecker: true
Contributor: Is netchecker checking dual stack?

Contributor Author: I'm not familiar with netchecker. I can drop it if you don't think it needs to be there?

Contributor: I want to keep it; just wondering if it checks dual stack, or if there is a setting to make it check, or if maybe we need an IPv4 and an IPv6 deployment. Maybe someone else knows; this is an open question, not a blocker to merge.

dns_min_replicas: 1