Merge pull request #193 from cncf/lastpassing-develop
WIP v0.6.1-prerelease tests, example CNFs, UX updates
denverwilliams authored May 5, 2020
2 parents 975581a + c1ce11c commit fda375e
Showing 53 changed files with 2,198 additions and 31 deletions.
25 changes: 12 additions & 13 deletions .travis.yml
@@ -1,7 +1,7 @@
language: crystal
language: minimal

crystal:
- 'latest'
# crystal:
# - 'latest'

services:
- docker
@@ -11,12 +11,18 @@ jobs:
- stage: K8s
before_script:
# Download and install go
- wget https://dl.google.com/go/go1.12.linux-amd64.tar.gz
- tar -xvf go1.12.linux-amd64.tar.gz
- wget https://dl.google.com/go/go1.13.linux-amd64.tar.gz
- tar -xvf go1.13.linux-amd64.tar.gz
- sudo mv go /usr/local
- export GOROOT=/usr/local/go
- export GOPATH=$HOME/go
- export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
# Download and install Crystal
- sudo apt update && sudo apt install -y libevent-dev
- wget https://github.com/crystal-lang/crystal/releases/download/0.33.0/crystal-0.33.0-1-linux-x86_64.tar.gz
- tar -xvf crystal-*.tar.gz
- export PATH=$(pwd)/crystal-0.33.0-1/bin:$PATH
- crystal version
# Download and install kubectl
- curl -LO https://storage.googleapis.com/kubernetes-release/release/$(curl -s https://storage.googleapis.com/kubernetes-release/release/stable.txt)/bin/linux/amd64/kubectl && chmod +x kubectl && sudo mv kubectl /usr/local/bin/
# Download and install KinD
@@ -25,15 +31,8 @@ jobs:
# This is useful in cases when Go toolchain isn't available or you prefer running stable version
# Binaries for KinD are available on GitHub Releases: https://github.com/kubernetes-sigs/kind/releases
# - curl -Lo kind https://github.com/kubernetes-sigs/kind/releases/download/0.0.1/kind-linux-amd64 && chmod +x kind && sudo mv kind /usr/local/bin/

# Create a new Kubernetes cluster using KinD
- kind create cluster

# Set KUBECONFIG environment variable
- export KUBECONFIG="$(kind get kubeconfig-path)"
script:
- shards install
- crystal spec -v
- crystal build src/cnf-conformance.cr
- ./cnf-conformance sample_coredns
- ./cnf-conformance configuration_file_setup
- ./cnf-conformance liveness verbose
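The updated `before_script` layers three toolchains onto `PATH` (Go, then Crystal on top). A minimal sketch of how those `export` lines compose, using the directory names from the diff; no toolchains are actually downloaded here:

```shell
# Sketch of the PATH layering in the updated .travis.yml.
# GOROOT/GOPATH match the values set in the diff; the Crystal
# directory mirrors the extracted crystal-0.33.0-1 tarball.
export GOROOT=/usr/local/go
export GOPATH=$HOME/go
export PATH=$GOPATH/bin:$GOROOT/bin:$PATH
export PATH=$(pwd)/crystal-0.33.0-1/bin:$PATH
# Crystal's bin dir now wins lookups, then Go's, then the rest.
echo "$PATH" | cut -d: -f1
```

Because each `export` prepends, the last toolchain added (Crystal) takes precedence, which is why `crystal version` in the diff can verify the freshly extracted binary rather than any system-wide install.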
8 changes: 8 additions & 0 deletions points.yml
@@ -3,6 +3,12 @@
tags:
pass: 5
fail: -1

- name: image_size_large
tags: microservice, dynamic
- name: reasonable_startup_time
tags: microservice, dynamic

- name: cni_spec
tags: compatibility, dynamic
- name: api_snoop_alpha
@@ -66,6 +72,8 @@
- name: openmetric_compatible
tags: observability, dynamic

- name: helm_deploy
tags: installability, dynamic
- name: install_script_helm
tags: installability, static
- name: helm_chart_valid
1 change: 1 addition & 0 deletions sample-cnfs/sample-coredns-cnf/cnf-conformance.yml
@@ -6,6 +6,7 @@ install_script:
release_name: coredns
deployment_name: coredns-coredns
application_deployment_names: [coredns-coredns]
docker_repository: coredns/coredns
helm_repository:
name: stable
repo_url: https://kubernetes-charts.storage.googleapis.com
1 change: 1 addition & 0 deletions sample-cnfs/sample-generic-cnf/cnf-conformance.yml
@@ -5,6 +5,7 @@ install_script: cnfs/coredns/Makefile
release_name: coredns
deployment_name: coredns-coredns
application_deployment_names: [coredns-coredns]
docker_repository: coredns/coredns
helm_repository:
name: stable
repo_url: https://kubernetes-charts.storage.googleapis.com
39 changes: 39 additions & 0 deletions sample-cnfs/sample-large-cnf/README.md
@@ -0,0 +1,39 @@
# Set up Sample CoreDNS CNF
./sample-cnfs/sample-coredns-cnf/readme.md
# Prerequisites
### Install helm
```
curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
chmod 700 get_helm.sh
./get_helm.sh
```
### Optional: Use a helm version manager
https://github.com/yuya-takeyama/helmenv
Check out helmenv into any path (here it is ${HOME}/.helmenv)
```
$ git clone https://github.com/yuya-takeyama/helmenv.git ~/.helmenv
```
Add ~/.helmenv/bin to your $PATH any way you like
```
$ echo 'export PATH="$HOME/.helmenv/bin:$PATH"' >> ~/.bash_profile
```
```
helmenv versions
helmenv install <version 3.1?>
```

### CoreDNS installation
```
helm install coredns stable/coredns
```
### Pull down the helm chart code, untar it, and put it in the cnfs/coredns directory
```
helm pull stable/coredns --untar --untardir cnfs
```
### Example cnf-conformance config file for sample-core-dns-cnf
In ./cnfs/sample-core-dns-cnf/cnf-conformance.yml
```
---
container_names: [coredns-coredns]
```
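The config-file step above can be reproduced end to end. A hypothetical sketch, using the `cnfs/sample-core-dns-cnf` path and file body given in the README:

```shell
# Create the example cnf-conformance config the README describes.
mkdir -p cnfs/sample-core-dns-cnf
cat > cnfs/sample-core-dns-cnf/cnf-conformance.yml <<'EOF'
---
container_names: [coredns-coredns]
EOF
cat cnfs/sample-core-dns-cnf/cnf-conformance.yml
```

The container name listed must match the name the chart gives its container (`coredns-coredns` for the stable chart), since the conformance tool looks containers up by that name.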
22 changes: 22 additions & 0 deletions sample-cnfs/sample-large-cnf/chart/.helmignore
@@ -0,0 +1,22 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
OWNERS
23 changes: 23 additions & 0 deletions sample-cnfs/sample-large-cnf/chart/Chart.yaml
@@ -0,0 +1,23 @@
apiVersion: v1
appVersion: 1.6.7
description: CoreDNS is a DNS server that chains plugins and provides Kubernetes DNS Services
home: https://coredns.io
icon: https://coredns.io/images/CoreDNS_Colour_Horizontal.png
keywords:
- coredns
- dns
- kubedns
maintainers:
- email: hello@acale.ph
name: Acaleph
- email: shashidhara.huawei@gmail.com
name: shashidharatd
- email: andor44@gmail.com
name: andor44
- email: manuel@rueg.eu
name: mrueg
name: coredns
sources:
- https://github.com/coredns/coredns
version: 1.10.0
138 changes: 138 additions & 0 deletions sample-cnfs/sample-large-cnf/chart/README.md
@@ -0,0 +1,138 @@
# CoreDNS

[CoreDNS](https://coredns.io/) is a DNS server that chains plugins and provides DNS Services

# TL;DR;

```console
$ helm install --name coredns --namespace=kube-system stable/coredns
```

## Introduction

This chart bootstraps a [CoreDNS](https://github.com/coredns/coredns) deployment on a [Kubernetes](http://kubernetes.io) cluster using the [Helm](https://helm.sh) package manager. This chart provides DNS Services and can be deployed in multiple configurations to support the scenarios listed below:

- CoreDNS as a cluster dns service and a drop-in replacement for Kube/SkyDNS. This is the default mode and CoreDNS is deployed as cluster-service in kube-system namespace. This mode is chosen by setting `isClusterService` to true.
- CoreDNS as an external dns service. In this mode CoreDNS is deployed as any kubernetes app in a user-specified namespace. The CoreDNS service can be exposed outside the cluster by using either the NodePort or LoadBalancer type of service. This mode is chosen by setting `isClusterService` to false.
- CoreDNS as an external dns provider for kubernetes federation. This is a sub-case of 'external dns service' which uses the etcd plugin as the CoreDNS backend. This deployment mode has a dependency on the `etcd-operator` chart, which needs to be pre-installed.

## Prerequisites

- Kubernetes 1.10 or later

## Installing the Chart

The chart can be installed as follows:

```console
$ helm install --name coredns --namespace=kube-system stable/coredns
```

The command deploys CoreDNS on the Kubernetes cluster in the default configuration. The [configuration](#configuration) section lists various ways to override default configuration during deployment.

> **Tip**: List all releases using `helm list`

## Uninstalling the Chart

To uninstall/delete the `coredns` deployment:

```console
$ helm delete coredns
```

The command removes all the Kubernetes components associated with the chart and deletes the release.

## Configuration

| Parameter | Description | Default |
|:----------------------------------------|:--------------------------------------------------------------------------------------|:------------------------------------------------------------|
| `image.repository` | The image repository to pull from | coredns/coredns |
| `image.tag` | The image tag to pull from | `v1.6.7` |
| `image.pullPolicy` | Image pull policy | IfNotPresent |
| `replicaCount` | Number of replicas | 1 |
| `resources.limits.cpu` | Container maximum CPU | `100m` |
| `resources.limits.memory` | Container maximum memory | `128Mi` |
| `resources.requests.cpu` | Container requested CPU | `100m` |
| `resources.requests.memory` | Container requested memory | `128Mi` |
| `serviceType` | Kubernetes Service type | `ClusterIP` |
| `prometheus.monitor.enabled` | Set this to `true` to create ServiceMonitor for Prometheus operator | `false` |
| `prometheus.monitor.additionalLabels` | Additional labels that can be used so ServiceMonitor will be discovered by Prometheus | {} |
| `prometheus.monitor.namespace` | Selector to select which namespaces the Endpoints objects are discovered from. | `""` |
| `service.clusterIP` | IP address to assign to service | `""` |
| `service.loadBalancerIP` | IP address to assign to load balancer (if supported) | `""` |
| `service.externalTrafficPolicy` | Enable client source IP preservation | `[]` |
| `service.annotations` | Annotations to add to service | `{prometheus.io/scrape: "true", prometheus.io/port: "9153"}`|
| `serviceAccount.create` | If true, create & use serviceAccount | false |
| `serviceAccount.name` | If not set & create is true, use template fullname | |
| `rbac.create` | If true, create & use RBAC resources | true |
| `rbac.pspEnable` | Specifies whether a PodSecurityPolicy should be created. | `false` |
| `isClusterService` | Specifies whether chart should be deployed as cluster-service or normal k8s app. | true |
| `priorityClassName` | Name of Priority Class to assign pods | `""` |
| `servers` | Configuration for CoreDNS and plugins | See values.yml |
| `affinity` | Affinity settings for pod assignment | {} |
| `nodeSelector` | Node labels for pod assignment | {} |
| `tolerations` | Tolerations for pod assignment | [] |
| `zoneFiles` | Configure custom Zone files | [] |
| `extraSecrets` | Optional array of secrets to mount inside the CoreDNS container | [] |
| `customLabels` | Optional labels for Deployment(s), Pod, Service, ServiceMonitor objects | {} |
| `podDisruptionBudget` | Optional PodDisruptionBudget | {} |
| `autoscaler.enabled` | Optionally enable a cluster-proportional-autoscaler for CoreDNS | `false` |
| `autoscaler.coresPerReplica` | Number of cores in the cluster per CoreDNS replica | `256` |
| `autoscaler.nodesPerReplica` | Number of nodes in the cluster per CoreDNS replica | `16` |
| `autoscaler.image.repository` | The image repository to pull autoscaler from | k8s.gcr.io/cluster-proportional-autoscaler-amd64 |
| `autoscaler.image.tag` | The image tag to pull autoscaler from | `1.7.1` |
| `autoscaler.image.pullPolicy` | Image pull policy for the autoscaler | IfNotPresent |
| `autoscaler.priorityClassName` | Optional priority class for the autoscaler pod. `priorityClassName` used if not set. | `""` |
| `autoscaler.affinity` | Affinity settings for pod assignment for autoscaler | {} |
| `autoscaler.nodeSelector` | Node labels for pod assignment for autoscaler | {} |
| `autoscaler.tolerations` | Tolerations for pod assignment for autoscaler | [] |
| `autoscaler.resources.limits.cpu` | Container maximum CPU for cluster-proportional-autoscaler | `20m` |
| `autoscaler.resources.limits.memory` | Container maximum memory for cluster-proportional-autoscaler | `10Mi` |
| `autoscaler.resources.requests.cpu` | Container requested CPU for cluster-proportional-autoscaler | `20m` |
| `autoscaler.resources.requests.memory` | Container requested memory for cluster-proportional-autoscaler | `10Mi` |
| `autoscaler.configmap.annotations` | Annotations to add to autoscaler config map. For example to stop CI renaming them | {} |

See `values.yaml` for configuration notes. Specify each parameter using the `--set key=value[,key=value]` argument to `helm install`. For example,

```console
$ helm install --name coredns \
--set rbac.create=false \
stable/coredns
```

The above command disables automatic creation of RBAC rules.

Alternatively, a YAML file that specifies the values for the above parameters can be provided while installing the chart. For example,

```console
$ helm install --name coredns -f values.yaml stable/coredns
```

> **Tip**: You can use the default [values.yaml](values.yaml)

## Caveats

The chart will automatically determine which protocols to listen on based on
the protocols you define in your zones. This means that you could potentially
use both "TCP" and "UDP" on a single port.
Some cloud environments like "GCE" or "Azure container service" cannot
create external load balancers with both "TCP" and "UDP" protocols, so
when deploying CoreDNS with `serviceType="LoadBalancer"` on such cloud
environments, make sure you do not attempt to use both protocols at the same
time.

## Autoscaling

By setting `autoscaler.enabled = true` a
[cluster-proportional-autoscaler](https://github.com/kubernetes-incubator/cluster-proportional-autoscaler)
will be deployed. This defaults to one coredns replica for every 256 cores, or
every 16 nodes, in the cluster. These can be changed with `autoscaler.coresPerReplica`
and `autoscaler.nodesPerReplica`. When the cluster uses large nodes (with more
cores), `coresPerReplica` should dominate; with small nodes,
`nodesPerReplica` should dominate.

This also creates a ServiceAccount, ClusterRole, and ClusterRoleBinding for
the autoscaler deployment.

`replicaCount` is ignored if this is enabled.
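The sizing rule above can be worked through by hand. A sketch assuming the autoscaler's documented linear mode, where the replica count is the larger of the two ratios, rounded up, using the chart defaults of 256 cores and 16 nodes per replica:

```shell
# replicas = max(ceil(cores/coresPerReplica), ceil(nodes/nodesPerReplica))
cores=640
nodes=20
cores_per_replica=256
nodes_per_replica=16
# Integer ceiling division: (a + b - 1) / b
by_cores=$(( (cores + cores_per_replica - 1) / cores_per_replica ))
by_nodes=$(( (nodes + nodes_per_replica - 1) / nodes_per_replica ))
if [ "$by_cores" -gt "$by_nodes" ]; then replicas=$by_cores; else replicas=$by_nodes; fi
echo "replicas=$replicas"   # 640 cores -> 3, 20 nodes -> 2, so 3 replicas
```

With large nodes the cores term produces the bigger number (as here), while with many small nodes the nodes term wins, which is the "dominate" behavior described above.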
30 changes: 30 additions & 0 deletions sample-cnfs/sample-large-cnf/chart/templates/NOTES.txt
@@ -0,0 +1,30 @@
{{- if .Values.isClusterService }}
CoreDNS is now running in the cluster as a cluster-service.
{{- else }}
CoreDNS is now running in the cluster.
It can be accessed using the below endpoint
{{- if contains "NodePort" .Values.serviceType }}
export NODE_PORT=$(kubectl get --namespace {{ .Release.Namespace }} -o jsonpath="{.spec.ports[0].nodePort}" services {{ template "coredns.fullname" . }})
export NODE_IP=$(kubectl get nodes --namespace {{ .Release.Namespace }} -o jsonpath="{.items[0].status.addresses[0].address}")
echo "$NODE_IP:$NODE_PORT"
{{- else if contains "LoadBalancer" .Values.serviceType }}
NOTE: It may take a few minutes for the LoadBalancer IP to be available.
You can watch the status by running 'kubectl get svc -w {{ template "coredns.fullname" . }}'

export SERVICE_IP=$(kubectl get svc --namespace {{ .Release.Namespace }} {{ template "coredns.fullname" . }} -o jsonpath='{.status.loadBalancer.ingress[0].ip}')
echo $SERVICE_IP
{{- else if contains "ClusterIP" .Values.serviceType }}
"{{ template "coredns.fullname" . }}.{{ .Release.Namespace }}.svc.cluster.local"
from within the cluster
{{- end }}
{{- end }}

It can be tested with the following:

1. Launch a Pod with DNS tools:

kubectl run -it --rm --restart=Never --image=infoblox/dnstools:latest dnstools

2. Query the DNS server:

/ # host kubernetes