diff --git a/content/en/docs/tasks/administer-cluster/dns-custom-nameservers.md b/content/en/docs/tasks/administer-cluster/dns-custom-nameservers.md index d16ff5f5d2add..76447cb67840b 100644 --- a/content/en/docs/tasks/administer-cluster/dns-custom-nameservers.md +++ b/content/en/docs/tasks/administer-cluster/dns-custom-nameservers.md @@ -28,19 +28,28 @@ DNS is a built-in Kubernetes service launched automatically using the addon manager [cluster add-on](http://releases.k8s.io/{{< param "githubbranch" >}}/cluster/addons/README.md). -As of Kubernetes v1.12, CoreDNS is the recommended DNS Server, replacing kube-dns. However, kube-dns may still be installed by -default with certain Kubernetes installer tools. Refer to the documentation provided by your installer to know which DNS server is installed by default. +The running DNS Pod holds 3 containers: +- "`kubedns`": watches the Kubernetes master for changes + in Services and Endpoints, and maintains in-memory lookup structures to serve + DNS requests. +- "`dnsmasq`": adds DNS caching to improve performance. +- "`sidecar`": provides a single health check endpoint + to perform healthchecks for `dnsmasq` and `kubedns`. -The CoreDNS Deployment is exposed as a Kubernetes Service with a static IP. -Both the CoreDNS and kube-dns Service are named `kube-dns` in the `metadata.name` field. This is done so that there is greater interoperability with workloads that relied on the legacy `kube-dns` Service name to resolve addresses internal to the cluster. It abstracts away the implementation detail of which DNS provider is running behind that common endpoint. -The kubelet passes DNS to each container with the `--cluster-dns=` flag. +The DNS Pod is exposed as a Kubernetes Service with a static IP. +The kubelet passes DNS to each container with the `--cluster-dns=` +flag. DNS names also need domains. You configure the local domain in the kubelet with the flag `--cluster-domain=`. 
-The DNS server supports forward lookups (A records), port lookups (SRV records), reverse IP address lookups (PTR records), -and more. For more information see [DNS for Services and Pods] (/docs/concepts/services-networking/dns-pod-service/). +The Kubernetes cluster DNS server is based on the +[SkyDNS](https://github.com/skynetservices/skydns) library. It supports forward +lookups (A records), service lookups (SRV records), and reverse IP address +lookups (PTR records). + +## Inheriting DNS from the node When running a Pod, kubelet prepends the cluster DNS server and searches paths to the node's DNS settings. If the node is able to resolve DNS names @@ -52,130 +61,7 @@ use the kubelet's `--resolv-conf` flag. Set this flag to "" to prevent Pods fro inheriting DNS. Set it to a valid file path to specify a file other than `/etc/resolv.conf` for DNS inheritance. -## CoreDNS - -CoreDNS is a general-purpose authoritative DNS server that can serve as cluster DNS, complying with the [dns specifications] -(https://github.com/kubernetes/dns/blob/master/docs/specification.md). - -### CoreDNS ConfigMap options - -CoreDNS is a DNS server that is modular and pluggable, and each plugin adds new functionality to CoreDNS. -This can be configured by maintaining a [Corefile](https://coredns.io/2017/07/23/corefile-explained/), which is the CoreDNS -configuration file. A cluster administrator can modify the ConfigMap for the CoreDNS Corefile to change how service discovery works. - -In Kubernetes, CoreDNS is installed with the following default Corefile configuration. - -```yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: coredns - namespace: kube-system -Corefile: | - .:53 { - errors - health - kubernetes cluster.local in-addr.arpa ip6.arpa { - pods insecure - upstream - fallthrough in-addr.arpa ip6.arpa - } - prometheus :9153 - proxy . 
/etc/resolv.conf - cache 30 - loop - reload - loadbalance - } -``` -The Corefile configuration includes the following [plugins](https://coredns.io/plugins/) of CoreDNS: - -* [errors](https://coredns.io/plugins/errors/): Errors are logged to stdout. -* [health](https://coredns.io/plugins/health/): Health of CoreDNS is reported to http://localhost:8080/health. -* [kubernetes](https://coredns.io/plugins/kubernetes/): CoreDNS will reply to DNS queries based on IP of the services and pods of Kubernetes. You can find more details [here](https://coredns.io/plugins/kubernetes/). - -> The `pods insecure` option is provided for backward compatibility with kube-dns. You can use the `pod verified` option, which returns an A record only if there exists a pod in same namespace with matching IP. The `pods disabled` option can be used if you don't use pod records. - -> `Upstream` is used for resolving services that point to external hosts (External Services). - -* [prometheus](https://coredns.io/plugins/prometheus/): Metrics of CoreDNS are available at http://localhost:9153/metrics in [Prometheus](https://prometheus.io/) format. -* [proxy](https://coredns.io/plugins/proxy/): Any queries that are not within the cluster domain of Kubernetes will be forwarded to predefined resolvers (/etc/resolv.conf). -* [cache](https://coredns.io/plugins/cache/): This enables a frontend cache. -* [loop](https://coredns.io/plugins/loop/): Detects simple forwarding loops and halts the CoreDNS process if a loop is found. -* [reload](https://coredns.io/plugins/reload): Allows automatic reload of a changed Corefile. -* [loadbalance](https://coredns.io/plugins/loadbalance): This is a round-robin DNS loadbalancer by randomizing the order of A, AAAA, and MX records in the answer. - -We can modify the default behavior by modifying this configmap. 
- -### Configuration of Stub-domain and upstream nameserver using CoreDNS - -CoreDNS has the ability to configure stubdomains and upstream nameservers using the [proxy plugin](https://coredns.io/plugins/proxy/). - -#### Example -If a cluster operator has a [Consul](https://www.consul.io/) domain server located at 10.150.0.1, and all Consul names have the suffix .consul.local. To configure it in CoreDNS, the cluster administrator creates the following stanza in the CoreDNS ConfigMap. - -``` -consul.local:53 { - errors - cache 30 - proxy . 10.150.0.1 - } -``` - -To explicitly force all non-cluster DNS lookups to go through a specific nameserver at 172.16.0.1, point the `proxy` and `upstream` to the nameserver instead of `/etc/resolv.conf` - -``` -proxy . 172.16.0.1 -``` -``` -upstream 172.16.0.1 -``` - -So, the final ConfigMap along with the default `Corefile` configuration will look like: - -```yaml -apiVersion: v1 -kind: ConfigMap -metadata: - name: coredns - namespace: kube-system -Corefile: | - .:53 { - errors - health - kubernetes cluster.local in-addr.arpa ip6.arpa { - pods insecure - upstream 172.16.0.1 - fallthrough in-addr.arpa ip6.arpa - } - prometheus :9153 - proxy . 172.16.0.1 - cache 30 - loop - reload - loadbalance - } - consul.local:53 { - errors - cache 30 - proxy . 10.150.0.1 - } -``` -In Kubernetes version 1.10 and later, kubeadm supports automatic translation of the CoreDNS ConfigMap from the kube-dns ConfigMap. - -## Kube-dns - -Kube-dns is now available as a optional DNS server since CoreDNS is now the default. -The running DNS Pod holds 3 containers: - -- "`kubedns`": watches the Kubernetes master for changes - in Services and Endpoints, and maintains in-memory lookup structures to serve - DNS requests. -- "`dnsmasq`": adds DNS caching to improve performance. -- "`sidecar`": provides a single health check endpoint - to perform healthchecks for `dnsmasq` and `kubedns`. 
- -### Configure stub-domain and upstream DNS servers +## Configure stub-domain and upstream DNS servers Cluster administrators can specify custom stub domains and upstream nameservers by providing a ConfigMap for kube-dns (`kube-system:kube-dns`). @@ -216,7 +102,7 @@ details about the configuration option format. {{% capture discussion %}} -#### Effects on Pods +### Effects on Pods Custom upstream nameservers and stub domains do not affect Pods with a `dnsPolicy` set to "`Default`" or "`None`". @@ -250,7 +136,7 @@ DNS queries are routed according to the following flow: ![DNS lookup flow](/docs/tasks/administer-cluster/dns-custom-nameservers/dns.png) -### ConfigMap options +## ConfigMap options Options for the kube-dns `kube-system:kube-dns` ConfigMap: @@ -259,9 +145,9 @@ Options for the kube-dns `kube-system:kube-dns` ConfigMap: | `stubDomains` (optional) | A JSON map using a DNS suffix key such as “acme.local”, and a value consisting of a JSON array of DNS IPs. | The target nameserver can itself be a Kubernetes Service. For instance, you can run your own copy of dnsmasq to export custom DNS names into the ClusterDNS namespace. | | `upstreamNameservers` (optional) | A JSON array of DNS IPs. | If specified, the values replace the nameservers taken by default from the node’s `/etc/resolv.conf`. Limits: a maximum of three upstream nameservers can be specified. | -#### Examples +### Examples -##### Example: Stub domain +#### Example: Stub domain In this example, the user has a Consul DNS service discovery system they want to integrate with kube-dns. The consul domain server is located at 10.150.0.1, and @@ -283,7 +169,7 @@ Note that the cluster administrator does not want to override the node’s upstream nameservers, so they did not specify the optional `upstreamNameservers` field. 
-##### Example: Upstream nameserver
+#### Example: Upstream nameserver
 
 In this example the cluster administrator wants to explicitly force all
 non-cluster DNS lookups to go through their own nameserver at 172.16.0.1.
@@ -303,9 +189,17 @@ data:
 
 {{% /capture %}}
 
-## CoreDNS configuration equivalent to kube-dns
+## Configuring CoreDNS {#config-coredns}
+
+You can configure [CoreDNS](https://coredns.io/) to provide service discovery for your cluster.
+
+CoreDNS is available as an option in Kubernetes starting with version 1.9.
+It is currently a [GA feature](https://github.com/kubernetes/community/blob/master/keps/sig-network/0010-20180314-coredns-GA-proposal.md) and is on course to be [the default](https://github.com/kubernetes/community/blob/master/keps/sig-network/0012-20180518-coredns-default-proposal.md), replacing kube-dns.
+
+
+## CoreDNS ConfigMap options
 
-CoreDNS supports all the functionalities and more that is provided by kube-dns.
+CoreDNS chains plugins together, and is configured through a Corefile maintained in a ConfigMap. CoreDNS supports all of the functionality of kube-dns, and more.
 A ConfigMap created for kube-dns to support `StubDomains`and `upstreamNameservers` translates to the `proxy` plugin in CoreDNS.
 Similarly, the `Federation` plugin translates to the `federation` plugin in CoreDNS.
@@ -382,8 +276,8 @@ In Kubernetes version 1.10 and later, kubeadm supports automatic translation of
 
 ## Migration to CoreDNS
 
+A number of tools support the installation of CoreDNS instead of kube-dns.
 To migrate from kube-dns to CoreDNS, [a detailed blog](https://coredns.io/2018/05/21/migration-from-kube-dns-to-coredns/) is available to help users adapt CoreDNS in place of kube-dns.
-A cluster administrator can also migrate using [the deploy script](https://github.com/coredns/deployment/blob/master/kubernetes/deploy.sh), which will also help you translate the kube-dns configmap to the equivalent CoreDNS one.
 ## What's next
 - [Debugging DNS Resolution](/docs/tasks/administer-cluster/dns-debugging-resolution/).
diff --git a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md
index 1a9b7bf7e2289..23061cbd74dfd 100644
--- a/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md
+++ b/content/en/docs/tasks/administer-cluster/dns-debugging-resolution.md
@@ -13,7 +13,7 @@ This page provides hints on diagnosing DNS problems.
 {{% capture prerequisites %}}
 * {{< include "task-tutorial-prereqs.md" >}} {{< version-check >}}
 * Kubernetes version 1.6 and above.
-* The cluster must be configured to use the `coredns` (or `kube-dns`) addons.
+* The cluster must be configured to use the `kube-dns` addon.
 {{% /capture %}}
 
 {{% capture steps %}}
@@ -68,7 +68,7 @@ nameserver 10.0.0.10
 options ndots:5
 ```
 
-Errors such as the following indicate a problem with the coredns/kube-dns add-on or
+Errors such as the following indicate a problem with the kube-dns add-on or
 associated Services:
 
 ```
@@ -93,17 +93,6 @@ nslookup: can't resolve 'kubernetes.default'
 
 Use the `kubectl get pods` command to verify that the DNS pod is running.
 
-For CoreDNS:
-```shell
-kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
-NAME                       READY     STATUS    RESTARTS   AGE
-...
-coredns-7b96bf9f76-5hsxb   1/1       Running   0          1h
-coredns-7b96bf9f76-mvmmt   1/1       Running   0          1h
-...
-```
-
-Or for kube-dns:
 ```shell
 kubectl get pods --namespace=kube-system -l k8s-app=kube-dns
 NAME                       READY     STATUS    RESTARTS   AGE
@@ -118,26 +107,8 @@ have to deploy it manually.
 
 ### Check for Errors in the DNS pod
 
-Use `kubectl logs` command to see logs for the DNS containers.
+Use the `kubectl logs` command to see logs for the DNS daemons.
-For CoreDNS:
-```shell
-for p in $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name); do kubectl logs --namespace=kube-system $p; done
-```
-
-Here is an example of a healthy CoreDNS log:
-
-```
-.:53
-2018/08/15 14:37:17 [INFO] CoreDNS-1.2.2
-2018/08/15 14:37:17 [INFO] linux/amd64, go1.10.3, 2e322f6
-CoreDNS-1.2.2
-linux/amd64, go1.10.3, 2e322f6
-2018/08/15 14:37:17 [INFO] plugin/reload: Running configuration MD5 = 24e6c59e83ce706f07bcc82c31b1ea1c
-```
-
-
-For kube-dns, there are 3 sets of logs:
 ```shell
 kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c kubedns
 
@@ -146,8 +117,8 @@ kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system
 
 kubectl logs --namespace=kube-system $(kubectl get pods --namespace=kube-system -l k8s-app=kube-dns -o name | head -1) -c sidecar
 ```
 
-See if there are any suspicious error messages in the logs. In kube-dns, a '`W`', '`E`' or '`F`' at the beginning
-of a line represents a Warning, Error or Failure. Please search for entries that have these
+See if there are any suspicious logs. A letter '`W`', '`E`', or '`F`' at the beginning
+of a line represents a Warning, Error, or Failure. Please search for entries that have these
 as the logging level and use
 [kubernetes issues](https://github.com/kubernetes/kubernetes/issues)
 to report unexpected errors.
@@ -164,8 +135,6 @@ kube-dns   ClusterIP   10.0.0.10   53/UDP,53/TCP   1h
 ...
 ```
-
-Note that the service name will be "kube-dns" for both CoreDNS and kube-dns deployments.
 
 If you have created the service or in the case it should be created by default but it does not appear, see
 [debugging services](/docs/tasks/debug-application-cluster/debug-service/) for
@@ -189,83 +158,20 @@ For additional Kubernetes DNS examples, see the
 [cluster-dns examples](https://github.com/kubernetes/examples/tree/master/staging/cluster-dns)
 in the Kubernetes GitHub repository.
 
-
-### Are DNS queries being received/processed?
- -You can verify if queries are being received by CoreDNS by adding the `log` plugin to the CoreDNS configuration (aka Corefile). -The CoreDNS Corefile is held in a ConfigMap named `coredns`. To edit it, use the command ... - -``` -kubectl -n kube-system edit configmap coredns -``` - -Then add `log` in the Corefile section per the example below. - -``` -apiVersion: v1 -kind: ConfigMap -metadata: - name: coredns - namespace: kube-system -data: - Corefile: | - .:53 { - log - errors - health - kubernetes cluster.local in-addr.arpa ip6.arpa { - pods insecure - upstream - fallthrough in-addr.arpa ip6.arpa - } - prometheus :9153 - proxy . /etc/resolv.conf - cache 30 - loop - reload - loadbalance - } - -``` - -After saving the changes, it may take up to minute or two for Kubernetes to propagate these changes to the CoreDNS pods. - -Next, make some queries and view the logs per the sections above in this document. If CoreDNS pods are receiving the queries, you should see them in the logs. - -Here is an example of a query in the log. - -``` -.:53 -2018/08/15 14:37:15 [INFO] CoreDNS-1.2.0 -2018/08/15 14:37:15 [INFO] linux/amd64, go1.10.3, 2e322f6 -CoreDNS-1.2.0 -linux/amd64, go1.10.3, 2e322f6 -2018/09/07 15:29:04 [INFO] plugin/reload: Running configuration MD5 = 162475cdf272d8aa601e6fe67a6ad42f -2018/09/07 15:29:04 [INFO] Reloading complete -172.17.0.18:41675 - [07/Sep/2018:15:29:11 +0000] 59925 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd,ra 106 0.000066649s - -``` - ## Known issues -Some Linux distributions (e.g. Ubuntu), use a local DNS resolver by default (systemd-resolved). -Systemd-resolved moves and replaces `/etc/resolv.conf` with a stub file that can cause a fatal forwarding -loop when resolving names in upstream servers. This can be fixed manually by using kubelet's `--resolv-conf` flag -to point to the correct `resolv.conf` (With `systemd-resolved`, this is `/run/systemd/resolve/resolv.conf`). 
-kubeadm 1.11 automatically detects `systemd-resolved`, and adjusts the kubelet flags accordingly. - -Kubernetes installs do not configure the nodes' `resolv.conf` files to use the -cluster DNS by default, because that process is inherently distribution-specific. +Kubernetes installs do not configure the nodes' resolv.conf files to use the +cluster DNS by default, because that process is inherently distro-specific. This should probably be implemented eventually. Linux's libc is impossibly stuck ([see this bug from 2005](https://bugzilla.redhat.com/show_bug.cgi?id=168253)) with limits of just -3 DNS `nameserver` records and 6 DNS `search` records. Kubernetes needs to -consume 1 `nameserver` record and 3 `search` records. This means that if a +3 DNS `nameserver` records and 6 DNS `search` records. Kubernetes needs to +consume 1 `nameserver` record and 3 `search` records. This means that if a local installation already uses 3 `nameserver`s or uses more than 3 `search`es, -some of those settings will be lost. As a partial workaround, the node can run +some of those settings will be lost. As a partial workaround, the node can run `dnsmasq` which will provide more `nameserver` entries, but not more `search` -entries. You can also use kubelet's `--resolv-conf` flag. +entries. You can also use kubelet's `--resolv-conf` flag. If you are using Alpine version 3.3 or earlier as your base image, DNS may not work properly owing to a known issue with Alpine. diff --git a/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md b/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md index c28f4c1001c99..afdb829455a3c 100644 --- a/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md +++ b/content/en/docs/tasks/administer-cluster/dns-horizontal-autoscaling.md @@ -36,10 +36,10 @@ The output is similar to this: NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE ... - dns-autoscaler 1 1 1 1 ... + kube-dns-autoscaler 1 1 1 1 ... ... 
-If you see "dns-autoscaler" in the output, DNS horizontal autoscaling is +If you see "kube-dns-autoscaler" in the output, DNS horizontal autoscaling is already enabled, and you can skip to [Tuning autoscaling parameters](#tuning-autoscaling-parameters). @@ -53,13 +53,10 @@ The output is similar to this: NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE ... - coredns 2 2 2 2 ... + kube-dns 1 1 1 1 ... ... - -In Kubernetes versions earlier than 1.12, the DNS Deployment was called "kube-dns". - -In Kubernetes versions earlier than 1.5 DNS was implemented using a +In Kubernetes versions earlier than 1.5 DNS is implemented using a ReplicationController instead of a Deployment. So if you don't see kube-dns, or a similar name, in the preceding output, list the ReplicationControllers in your cluster in the kube-system namespace: @@ -80,7 +77,7 @@ If you have a DNS Deployment, your scale target is: Deployment/ where is the name of your DNS Deployment. For example, if -your DNS Deployment name is coredns, your scale target is Deployment/coredns. +your DNS Deployment name is kube-dns, your scale target is Deployment/kube-dns. If you have a DNS ReplicationController, your scale target is: @@ -114,7 +111,7 @@ DNS horizontal autoscaling is now enabled. ## Tuning autoscaling parameters -Verify that the dns-autoscaler ConfigMap exists: +Verify that the kube-dns-autoscaler ConfigMap exists: kubectl get configmap --namespace=kube-system @@ -122,12 +119,12 @@ The output is similar to this: NAME DATA AGE ... - dns-autoscaler 1 ... + kube-dns-autoscaler 1 ... ... Modify the data in the ConfigMap: - kubectl edit configmap dns-autoscaler --namespace=kube-system + kubectl edit configmap kube-dns-autoscaler --namespace=kube-system Look for this line: @@ -154,15 +151,15 @@ There are other supported scaling patterns. For details, see There are a few options for turning DNS horizontal autoscaling. Which option to use depends on different conditions. 
-### Option 1: Scale down the dns-autoscaler deployment to 0 replicas +### Option 1: Scale down the kube-dns-autoscaler deployment to 0 replicas This option works for all situations. Enter this command: - kubectl scale deployment --replicas=0 dns-autoscaler --namespace=kube-system + kubectl scale deployment --replicas=0 kube-dns-autoscaler --namespace=kube-system The output is: - deployment.extensions/dns-autoscaler scaled + deployment.extensions/kube-dns-autoscaler scaled Verify that the replica count is zero: @@ -172,33 +169,33 @@ The output displays 0 in the DESIRED and CURRENT columns: NAME DESIRED CURRENT UP-TO-DATE AVAILABLE AGE ... - dns-autoscaler 0 0 0 0 ... + kube-dns-autoscaler 0 0 0 0 ... ... -### Option 2: Delete the dns-autoscaler deployment +### Option 2: Delete the kube-dns-autoscaler deployment -This option works if dns-autoscaler is under your own control, which means +This option works if kube-dns-autoscaler is under your own control, which means no one will re-create it: - kubectl delete deployment dns-autoscaler --namespace=kube-system + kubectl delete deployment kube-dns-autoscaler --namespace=kube-system The output is: - deployment.extensions "dns-autoscaler" deleted + deployment.extensions "kube-dns-autoscaler" deleted -### Option 3: Delete the dns-autoscaler manifest file from the master node +### Option 3: Delete the kube-dns-autoscaler manifest file from the master node -This option works if dns-autoscaler is under control of the +This option works if kube-dns-autoscaler is under control of the [Addon Manager](https://git.k8s.io/kubernetes/cluster/addons/README.md)'s control, and you have write access to the master node. Sign in to the master node and delete the corresponding manifest file. 
-The common path for this dns-autoscaler is: +The common path for this kube-dns-autoscaler is: /etc/kubernetes/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml After the manifest file is deleted, the Addon Manager will delete the -dns-autoscaler Deployment. +kube-dns-autoscaler Deployment. {{% /capture %}} diff --git a/content/en/examples/admin/dns/dns-horizontal-autoscaler.yaml b/content/en/examples/admin/dns/dns-horizontal-autoscaler.yaml index 5e6d55a6b280a..3c7eb40ffe2e1 100644 --- a/content/en/examples/admin/dns/dns-horizontal-autoscaler.yaml +++ b/content/en/examples/admin/dns/dns-horizontal-autoscaler.yaml @@ -1,18 +1,18 @@ apiVersion: apps/v1 kind: Deployment metadata: - name: dns-autoscaler + name: kube-dns-autoscaler namespace: kube-system labels: - k8s-app: dns-autoscaler + k8s-app: kube-dns-autoscaler spec: selector: matchLabels: - k8s-app: dns-autoscaler + k8s-app: kube-dns-autoscaler template: metadata: labels: - k8s-app: dns-autoscaler + k8s-app: kube-dns-autoscaler spec: containers: - name: autoscaler @@ -24,7 +24,7 @@ spec: command: - /cluster-proportional-autoscaler - --namespace=kube-system - - --configmap=dns-autoscaler + - --configmap=kube-dns-autoscaler - --target= # When cluster is using large nodes(with more cores), "coresPerReplica" should dominate. # If using small nodes, "nodesPerReplica" should dominate.