diff --git a/content/en/docs/setup/production-environment/tools/kubeadm/create-dual-stack-cluster-kubeadm.md b/content/en/docs/setup/production-environment/tools/kubeadm/create-dual-stack-cluster-kubeadm.md
deleted file mode 100644
index 704fe8d499f98..0000000000000
--- a/content/en/docs/setup/production-environment/tools/kubeadm/create-dual-stack-cluster-kubeadm.md
+++ /dev/null
@@ -1,524 +0,0 @@
----
-reviewers:
-- sig-cluster-lifecycle
-title: Creating a dual-stack cluster with kubeadm
-content_type: task
-weight: 30
----
-
-
-
-Using `kubeadm`, you can create a minimum viable Kubernetes cluster that conforms to best practices. In fact, you can use `kubeadm` to set up a cluster that will pass the [Kubernetes Conformance tests](https://kubernetes.io/blog/2017/10/software-conformance-certification).
-`kubeadm` also supports other cluster
-lifecycle functions, such as [bootstrap tokens](/docs/reference/access-authn-authz/bootstrap-tokens/) and cluster upgrades.
-
-The `kubeadm` tool is good if you need:
-
-- A simple way for you to try out Kubernetes, possibly for the first time.
-- A way for existing users to automate setting up a cluster and test their application.
-- A building block in other ecosystem and/or installer tools with a larger
- scope.
-
-You can install and use `kubeadm` on various machines: your laptop, a set
-of cloud servers, a Raspberry Pi, and more. Whether you're deploying into the
-cloud or on-premises, you can integrate `kubeadm` into provisioning systems such
-as Ansible or Terraform.
-
-
-
-## {{% heading "prerequisites" %}}
-
-
-To follow this guide, you need:
-
-- One or more machines running a deb/rpm-compatible Linux OS; for example: Ubuntu or CentOS.
-- 2 GiB or more of RAM per machine--any less leaves little room for your
- apps.
-- At least 2 CPUs on the machine that you use as a control-plane node.
-- Full network connectivity among all machines in the cluster. You can use either a
- public or a private network.
-
-
-You also need to use a version of `kubeadm` that can deploy the version
-of Kubernetes that you want to use in your new cluster.
-
-[Kubernetes' version and version skew support policy](/docs/setup/release/version-skew-policy/#supported-versions) applies to `kubeadm` as well as to Kubernetes overall.
-Check that policy to learn about what versions of Kubernetes and `kubeadm`
-are supported. This page is written for Kubernetes {{< param "version" >}}.
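-
-To check the version of `kubeadm` that you have installed (and therefore which Kubernetes
-versions it can deploy), you can run:
-
-```bash
-kubeadm version -o short
-```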
-
-The `kubeadm` tool's overall feature state is General Availability (GA). Some sub-features are
-still under active development. The implementation of creating the cluster may change
-slightly as the tool evolves, but the overall implementation should be pretty stable.
-
-{{< note >}}
-Any commands under `kubeadm alpha` are, by definition, supported on an alpha level.
-{{< /note >}}
-
-
-
-
-
-## Objectives
-
-* Install a single control-plane Kubernetes cluster
-* Install a Pod network on the cluster so that your Pods can
- talk to each other
-
-## Instructions
-
-### Installing kubeadm on your hosts
-
-See ["Installing kubeadm"](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/).
-
-{{< note >}}
-If you have already installed kubeadm, run `apt-get update &&
-apt-get upgrade` or `yum update` to get the latest version of kubeadm.
-
-When you upgrade, the kubelet restarts every few seconds as it waits in a crashloop for
-kubeadm to tell it what to do. This crashloop is expected and normal.
-After you initialize your control-plane, the kubelet runs normally.
-{{< /note >}}
-
-### Initializing your control-plane node
-
-The control-plane node is the machine where the control plane components run, including
-{{< glossary_tooltip term_id="etcd" >}} (the cluster database) and the
-{{< glossary_tooltip text="API Server" term_id="kube-apiserver" >}}
-(which the {{< glossary_tooltip text="kubectl" term_id="kubectl" >}} command line tool
-communicates with).
-
-1. (Recommended) If you have plans to upgrade this single control-plane `kubeadm` cluster
-to high availability, you should specify the `--control-plane-endpoint` argument to set the shared endpoint
-for all control-plane nodes. Such an endpoint can be either a DNS name or an IP address of a load-balancer.
-1. Choose a Pod network add-on, and verify whether it requires any arguments to
-be passed to `kubeadm init`. Depending on which
-third-party provider you choose, you might need to set the `--pod-network-cidr` to
-a provider-specific value. See [Installing a Pod network add-on](#pod-network).
-1. (Optional) Since version 1.14, `kubeadm` tries to detect the container runtime on Linux
-by using a list of well-known domain socket paths. To use a different container runtime, or
-if more than one runtime is installed on the provisioned node, specify the `--cri-socket`
-argument to `kubeadm init`. See [Installing runtime](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-runtime).
-1. (Optional) Unless otherwise specified, `kubeadm` uses the network interface associated
-with the default gateway to set the advertise address for this particular control-plane node's API server.
-To use a different network interface, specify the `--apiserver-advertise-address=` argument
-to `kubeadm init`. To deploy an IPv6 Kubernetes cluster using IPv6 addressing, you
-must specify an IPv6 address, for example `--apiserver-advertise-address=fd00::101`.
-1. (Optional) Run `kubeadm config images pull` prior to `kubeadm init` to verify
-connectivity to the gcr.io container image registry.
-
-To initialize the control-plane node run:
-
-```bash
-kubeadm init
-```
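-
-As a minimal sketch of how the options above can combine, this invocation sets a shared
-endpoint, a Pod network CIDR, an explicit CRI socket, and an advertise address; all four
-values are illustrative placeholders, not requirements:
-
-```bash
-kubeadm init \
-  --control-plane-endpoint=cluster-endpoint \
-  --pod-network-cidr=10.244.0.0/16 \
-  --cri-socket=/var/run/containerd/containerd.sock \
-  --apiserver-advertise-address=192.168.0.102
-```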
-
-### Considerations about apiserver-advertise-address and ControlPlaneEndpoint
-
-While `--apiserver-advertise-address` can be used to set the advertise address for this particular
-control-plane node's API server, `--control-plane-endpoint` can be used to set the shared endpoint
-for all control-plane nodes.
-
-`--control-plane-endpoint` allows both IP addresses and DNS names that can map to IP addresses.
-Please contact your network administrator to evaluate possible solutions with respect to such mapping.
-
-Here is an example mapping:
-
-```
-192.168.0.102 cluster-endpoint
-```
-
-Where `192.168.0.102` is the IP address of this node and `cluster-endpoint` is a custom DNS name that maps to this IP.
-This will allow you to pass `--control-plane-endpoint=cluster-endpoint` to `kubeadm init` and pass the same DNS name to
-`kubeadm join`. Later you can modify `cluster-endpoint` to point to the address of your load-balancer in a
-high availability scenario.
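-
-A minimal sketch, assuming you manage name resolution with `/etc/hosts` on every node
-(the IP address and name below come from the example mapping above):
-
-```bash
-# Append the mapping on each node in the cluster
-echo "192.168.0.102 cluster-endpoint" | sudo tee -a /etc/hosts
-
-# Initialize the control plane using the shared endpoint
-kubeadm init --control-plane-endpoint=cluster-endpoint
-```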
-
-Turning a single control plane cluster created without `--control-plane-endpoint` into a highly available cluster
-is not supported by kubeadm.
-
-### More information
-
-For more information about `kubeadm init` arguments, see the [kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm/).
-
-To configure `kubeadm init` with a configuration file see [Using kubeadm init with a configuration file](/docs/reference/setup-tools/kubeadm/kubeadm-init/#config-file).
-
-To customize control plane components, including optionally assigning IPv6 addresses to the liveness probes of control plane components and the etcd server, provide extra arguments to each component as documented in [custom arguments](/docs/setup/production-environment/tools/kubeadm/control-plane-flags/).
-
-To run `kubeadm init` again, you must first [tear down the cluster](#tear-down).
-
-If you join a node with a different architecture to your cluster, make sure that your deployed DaemonSets
-have container image support for this architecture.
-
-`kubeadm init` first runs a series of prechecks to ensure that the machine
-is ready to run Kubernetes. These prechecks expose warnings and exit on errors. `kubeadm init`
-then downloads and installs the cluster control plane components. This may take several minutes.
-After it finishes you should see:
-
-```none
-Your Kubernetes control-plane has initialized successfully!
-
-To start using your cluster, you need to run the following as a regular user:
-
- mkdir -p $HOME/.kube
- sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
- sudo chown $(id -u):$(id -g) $HOME/.kube/config
-
-You should now deploy a Pod network to the cluster.
-Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
- /docs/concepts/cluster-administration/addons/
-
-You can now join any number of machines by running the following on each node
-as root:
-
-  kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
-```
-
-To make kubectl work for your non-root user, run these commands, which are
-also part of the `kubeadm init` output:
-
-```bash
-mkdir -p $HOME/.kube
-sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
-sudo chown $(id -u):$(id -g) $HOME/.kube/config
-```
-
-Alternatively, if you are the `root` user, you can run:
-
-```bash
-export KUBECONFIG=/etc/kubernetes/admin.conf
-```
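-
-Either way, you can verify that kubectl can reach the new control plane:
-
-```bash
-kubectl get nodes
-```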
-
-Make a record of the `kubeadm join` command that `kubeadm init` outputs. You
-need this command to [join nodes to your cluster](#join-nodes).
-
-The token is used for mutual authentication between the control-plane node and the joining
-nodes. The token included here is secret. Keep it safe, because anyone with this
-token can add authenticated nodes to your cluster. These tokens can be listed,
-created, and deleted with the `kubeadm token` command. See the
-[kubeadm reference guide](/docs/reference/setup-tools/kubeadm/kubeadm-token/).
-
-### Installing a Pod network add-on {#pod-network}
-
-{{< caution >}}
-This section contains important information about networking setup and
-deployment order.
-Read all of this advice carefully before proceeding.
-
-**You must deploy a
-{{< glossary_tooltip text="Container Network Interface" term_id="cni" >}}
-(CNI) based Pod network add-on so that your Pods can communicate with each other.
-Cluster DNS (CoreDNS) will not start up before a network is installed.**
-
-- Take care that your Pod network does not overlap with any of the host
-  networks: you are likely to see problems if there is any overlap.
- (If you find a collision between your network plugin's preferred Pod
- network and some of your host networks, you should think of a suitable
- CIDR block to use instead, then use that during `kubeadm init` with
- `--pod-network-cidr` and as a replacement in your network plugin's YAML).
-
-- By default, `kubeadm` sets up your cluster to use and enforce use of
- [RBAC](/docs/reference/access-authn-authz/rbac/) (role based access
- control).
- Make sure that your Pod network plugin supports RBAC, and so do any manifests
- that you use to deploy it.
-
-- If you want to use IPv6--either dual-stack, or single-stack IPv6 only
- networking--for your cluster, make sure that your Pod network plugin
- supports IPv6.
- IPv6 support was added to CNI in [v0.6.0](https://github.com/containernetworking/cni/releases/tag/v0.6.0).
-
-{{< /caution >}}
-
-{{< note >}}
-Currently Calico is the only CNI plugin that the kubeadm project performs e2e tests against.
-If you find an issue related to a CNI plugin you should log a ticket in its respective issue
-tracker instead of the kubeadm or kubernetes issue trackers.
-{{< /note >}}
-
-Several external projects provide Kubernetes Pod networks using CNI, some of which also
-support [Network Policy](/docs/concepts/services-networking/network-policies/).
-
-See a list of add-ons that implement the
-[Kubernetes networking model](/docs/concepts/cluster-administration/networking/#how-to-implement-the-kubernetes-networking-model).
-
-You can install a Pod network add-on with the following command on the
-control-plane node or a node that has the kubeconfig credentials:
-
-```bash
-kubectl apply -f <add-on.yaml>
-```
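-
-For example, to install Calico (the manifest URL below is illustrative; check the
-provider's own documentation for the current location and version):
-
-```bash
-kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml
-```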
-
-You can install only one Pod network per cluster.
-
-Once a Pod network has been installed, you can confirm that it is working by
-checking that the CoreDNS Pod is `Running` in the output of `kubectl get pods --all-namespaces`.
-And once the CoreDNS Pod is up and running, you can continue by joining your nodes.
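-
-One way to check, assuming the default kubeadm deployment where the CoreDNS Pods carry
-the `k8s-app=kube-dns` label:
-
-```bash
-kubectl get pods --all-namespaces
-
-# Or watch only the CoreDNS Pods until they reach Running
-kubectl get pods -n kube-system -l k8s-app=kube-dns --watch
-```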
-
-If your network is not working or CoreDNS is not in the `Running` state, check out the
-[troubleshooting guide](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/)
-for `kubeadm`.
-
-### Control plane node isolation
-
-By default, your cluster will not schedule Pods on the control-plane node for security
-reasons. If you want to be able to schedule Pods on the control-plane node, for example for a
-single-machine Kubernetes cluster for development, run:
-
-```bash
-kubectl taint nodes --all node-role.kubernetes.io/master-
-```
-
-With output looking something like:
-
-```
-node "test-01" untainted
-taint "node-role.kubernetes.io/master:" not found
-taint "node-role.kubernetes.io/master:" not found
-```
-
-This will remove the `node-role.kubernetes.io/master` taint from any nodes that
-have it, including the control-plane node, meaning that the scheduler will then be able
-to schedule Pods everywhere.
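-
-To confirm that the taint is gone, you can inspect the node descriptions (a simple check
-using grep):
-
-```bash
-kubectl describe nodes | grep -i taints
-```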
-
-### Joining your nodes {#join-nodes}
-
-The nodes are where your workloads (containers, Pods, and so on) run. To add new nodes to your cluster do the following for each machine:
-
-* SSH to the machine
-* Become root (e.g. `sudo su -`)
-* Run the command that was output by `kubeadm init`. For example:
-
-```bash
-kubeadm join --token <token> <control-plane-host>:<control-plane-port> --discovery-token-ca-cert-hash sha256:<hash>
-```
-
-If you do not have the token, you can get it by running the following command on the control-plane node:
-
-```bash
-kubeadm token list
-```
-
-The output is similar to this:
-
-```console
-TOKEN                    TTL  EXPIRES              USAGES           DESCRIPTION            EXTRA GROUPS
-8ewj1p.9r9hcjoqgajrj4gi  23h  2018-06-12T02:51:28Z authentication,  The default bootstrap  system:
-                                                   signing          token generated by     bootstrappers:
-                                                                    'kubeadm init'.        kubeadm:
-                                                                                           default-node-token
-```
-
-By default, tokens expire after 24 hours. If you are joining a node to the cluster after the current token has expired,
-you can create a new token by running the following command on the control-plane node:
-
-```bash
-kubeadm token create
-```
-
-The output is similar to this:
-
-```console
-5didvk.d09sbcov8ph2amjw
-```
-
-If you don't have the value of `--discovery-token-ca-cert-hash`, you can get it by running the following command chain on the control-plane node:
-
-```bash
-openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
- openssl dgst -sha256 -hex | sed 's/^.* //'
-```
-
-The output is similar to:
-
-```console
-8cb2de97839780a412b93877f8507ad6c94f73add17d5d7058e91741c9d5ec78
-```
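-
-Alternatively, recent versions of kubeadm can print a complete join command, token and
-hash included:
-
-```bash
-kubeadm token create --print-join-command
-```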
-
-{{< note >}}
-To specify an IPv6 tuple for `<control-plane-host>:<control-plane-port>`, the IPv6 address must be enclosed in square brackets, for example: `[fd00::101]:2073`.
-{{< /note >}}
-
-The output should look something like:
-
-```
-[preflight] Running pre-flight checks
-
-... (log output of join workflow) ...
-
-Node join complete:
-* Certificate signing request sent to control-plane and response
- received.
-* Kubelet informed of new secure connection details.
-
-Run 'kubectl get nodes' on control-plane to see this machine join.
-```
-
-A few seconds later, you should notice this node in the output from `kubectl get
-nodes` when run on the control-plane node.
-
-### (Optional) Controlling your cluster from machines other than the control-plane node
-
-To get kubectl on some other computer (for example, a laptop) to talk to your
-cluster, you need to copy the administrator kubeconfig file from your control-plane node
-to your workstation like this:
-
-```bash
-scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
-kubectl --kubeconfig ./admin.conf get nodes
-```
-
-{{< note >}}
-The example above assumes SSH access is enabled for root. If that is not the
-case, you can copy the `admin.conf` file to be accessible by some other user
-and `scp` using that other user instead.
-
-The `admin.conf` file gives the user _superuser_ privileges over the cluster.
-This file should be used sparingly. For normal users, it's recommended to
-generate a unique credential to which you grant privileges. You can do
-this with the `kubeadm alpha kubeconfig user --client-name <CN>`
-command. That command will print out a KubeConfig file to STDOUT which you
-should save to a file and distribute to your user. After that, grant
-privileges by using `kubectl create (cluster)rolebinding`.
-{{< /note >}}
-
-### (Optional) Proxying API Server to localhost
-
-If you want to connect to the API Server from outside the cluster you can use
-`kubectl proxy`:
-
-```bash
-scp root@<control-plane-host>:/etc/kubernetes/admin.conf .
-kubectl --kubeconfig ./admin.conf proxy
-```
-
-You can now access the API Server locally at `http://localhost:8001/api/v1`.
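-
-For example, to list the Pods in the `default` namespace through the proxy:
-
-```bash
-curl http://localhost:8001/api/v1/namespaces/default/pods
-```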
-
-## Clean up {#tear-down}
-
-If you used disposable servers for testing your cluster, you can
-switch those off and do no further clean up. You can use
-`kubectl config delete-cluster` to delete your local references to the
-cluster.
-
-However, if you want to deprovision your cluster more cleanly, you should
-first [drain the node](/docs/reference/generated/kubectl/kubectl-commands#drain)
-and make sure that the node is empty, then deconfigure the node.
-
-### Remove the node
-
-Talking to the control-plane node with the appropriate credentials, run:
-
-```bash
-kubectl drain <node name> --delete-local-data --force --ignore-daemonsets
-```
-
-Before removing the node, reset the state installed by `kubeadm`:
-
-```bash
-kubeadm reset
-```
-
-The reset process does not reset or clean up iptables rules or IPVS tables. If you wish to reset iptables, you must do so manually:
-
-```bash
-iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
-```
-
-If you want to reset the IPVS tables, you must run the following command:
-
-```bash
-ipvsadm -C
-```
-
-Now remove the node:
-
-```bash
-kubectl delete node <node name>
-```
-
-If you wish to start over, run `kubeadm init` or `kubeadm join` with the
-appropriate arguments.
-
-### Clean up the control plane
-
-You can use `kubeadm reset` on the control plane host to trigger a best-effort
-clean up.
-
-See the [`kubeadm reset`](/docs/reference/setup-tools/kubeadm/kubeadm-reset/)
-reference documentation for more information about this subcommand and its
-options.
-
-
-
-
-
-## What's next {#whats-next}
-
-* Verify that your cluster is running properly with [Sonobuoy](https://github.com/heptio/sonobuoy)
-* See [Upgrading kubeadm clusters](/docs/tasks/administer-cluster/kubeadm/kubeadm-upgrade/)
- for details about upgrading your cluster using `kubeadm`.
-* Learn about advanced `kubeadm` usage in the [kubeadm reference documentation](/docs/reference/setup-tools/kubeadm/kubeadm)
-* Learn more about Kubernetes [concepts](/docs/concepts/) and [`kubectl`](/docs/reference/kubectl/overview/).
-* See the [Cluster Networking](/docs/concepts/cluster-administration/networking/) page for a bigger list
- of Pod network add-ons.
-* See the [list of add-ons](/docs/concepts/cluster-administration/addons/) to
- explore other add-ons, including tools for logging, monitoring, network policy, visualization &
- control of your Kubernetes cluster.
-* Configure how your cluster handles logs for cluster events and from
- applications running in Pods.
- See [Logging Architecture](/docs/concepts/cluster-administration/logging/) for
- an overview of what is involved.
-
-### Feedback {#feedback}
-
-* For bugs, visit the [kubeadm GitHub issue tracker](https://github.com/kubernetes/kubeadm/issues)
-* For support, visit the
- [#kubeadm](https://kubernetes.slack.com/messages/kubeadm/) Slack channel
-* General SIG Cluster Lifecycle development Slack channel:
- [#sig-cluster-lifecycle](https://kubernetes.slack.com/messages/sig-cluster-lifecycle/)
-* SIG Cluster Lifecycle [SIG information](https://github.com/kubernetes/community/tree/master/sig-cluster-lifecycle#readme)
-* SIG Cluster Lifecycle mailing list:
- [kubernetes-sig-cluster-lifecycle](https://groups.google.com/forum/#!forum/kubernetes-sig-cluster-lifecycle)
-
-## Version skew policy {#version-skew-policy}
-
-The `kubeadm` tool of version v{{< skew latestVersion >}} may deploy clusters with a control plane of version v{{< skew latestVersion >}} or v{{< skew prevMinorVersion >}}.
-`kubeadm` v{{< skew latestVersion >}} can also upgrade an existing kubeadm-created cluster of version v{{< skew prevMinorVersion >}}.
-
-Because we can't see into the future, kubeadm CLI v{{< skew latestVersion >}} may or may not be able to deploy v{{< skew nextMinorVersion >}} clusters.
-
-These resources provide more information on supported version skew between kubelets and the control plane, and other Kubernetes components:
-
-* Kubernetes [version and version-skew policy](/docs/setup/release/version-skew-policy/)
-* Kubeadm-specific [installation guide](/docs/setup/production-environment/tools/kubeadm/install-kubeadm/#installing-kubeadm-kubelet-and-kubectl)
-
-## Limitations {#limitations}
-
-### Cluster resilience {#resilience}
-
-The cluster created here has a single control-plane node, with a single etcd database
-running on it. This means that if the control-plane node fails, your cluster may lose
-data and may need to be recreated from scratch.
-
-Workarounds:
-
-* Regularly [back up etcd](https://coreos.com/etcd/docs/latest/admin_guide.html); see the
-  sketch after this list. The etcd data directory configured by kubeadm is at `/var/lib/etcd`
-  on the control-plane node.
-
-* Use multiple control-plane nodes. You can read
- [Options for Highly Available topology](/docs/setup/production-environment/tools/kubeadm/ha-topology/) to pick a cluster
- topology that provides [high-availability](/docs/setup/production-environment/tools/kubeadm/high-availability/).
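-
-A minimal backup sketch, assuming the stacked etcd that kubeadm sets up by default and
-the standard certificate paths under `/etc/kubernetes/pki/etcd` (the snapshot path is an
-arbitrary writable location):
-
-```bash
-sudo ETCDCTL_API=3 etcdctl snapshot save /var/backups/etcd-snapshot.db \
-  --endpoints=https://127.0.0.1:2379 \
-  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
-  --cert=/etc/kubernetes/pki/etcd/server.crt \
-  --key=/etc/kubernetes/pki/etcd/server.key
-```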
-
-### Platform compatibility {#multi-platform}
-
-kubeadm deb/rpm packages and binaries are built for amd64, arm (32-bit), arm64, ppc64le, and s390x
-following the [multi-platform
-proposal](https://github.com/kubernetes/community/blob/master/contributors/design-proposals/multi-platform.md).
-
-Multi-platform container images for the control plane and addons have also been supported since v1.12.
-
-Only some of the network providers offer solutions for all platforms. Please consult the list of
-network providers above or the documentation from each provider to figure out whether the provider
-supports your chosen platform.
-
-## Troubleshooting {#troubleshooting}
-
-If you are running into difficulties with kubeadm, please consult our [troubleshooting docs](/docs/setup/production-environment/tools/kubeadm/troubleshooting-kubeadm/).