# DefectDojo on Kubernetes
DefectDojo Kubernetes utilizes [Helm](https://helm.sh/), a
package manager for Kubernetes. Helm Charts help you define, install, and
upgrade even the most complex Kubernetes application.
For development purposes,
[minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/)
and [Helm](https://helm.sh/) can be installed locally by following
this [guide](https://helm.sh/docs/using_helm/#installing-helm).
## Supported Kubernetes Versions
The tests cover deployment on the latest [Kubernetes version](https://kubernetes.io/releases/) and the oldest supported [version from AWS](https://docs.aws.amazon.com/eks/latest/userguide/kubernetes-versions.html#available-versions). The assumption is that versions in between do not have significant differences. The currently tested versions can be looked up in the [GitHub k8s workflow](https://github.com/DefectDojo/django-DefectDojo/blob/master/.github/workflows/k8s-tests.yml).
## Helm chart
Starting with version 1.14.0, a Helm chart is pushed onto the `helm-charts` branch during the release process. Don't look for a chart museum; we're leveraging the "raw" capabilities of GitHub at this time.
To use it, you can add our repo.
```
$ helm repo add helm-charts 'https://raw.githubusercontent.com/DefectDojo/django-DefectDojo/helm-charts'
"helm-charts" has been added to your repositories
$ helm repo update
```
You should now be able to see the chart.
```
$ helm search repo defectdojo
NAME CHART VERSION APP VERSION DESCRIPTION
helm-charts/defectdojo 1.5.1 1.14.0-dev A Helm chart for Kubernetes to install DefectDojo
```
## Kubernetes Local Quickstart
Requirements:
1. Helm installed locally
2. Minikube installed locally
3. Latest cloned copy of DefectDojo
```zsh
git clone https://github.com/DefectDojo/django-DefectDojo
cd django-DefectDojo
minikube start
minikube addons enable ingress
```
Helm >= v3
```zsh
helm repo add bitnami https://charts.bitnami.com/bitnami
helm repo update
```
Then pull the dependent charts:
```zsh
helm dependency update ./helm/defectdojo
```
Now, install the Helm chart into minikube.
If you have set up an ingress controller:
```zsh
DJANGO_INGRESS_ENABLED=true
```
else:
```zsh
DJANGO_INGRESS_ENABLED=false
```
If you have configured TLS:
```zsh
DJANGO_INGRESS_ACTIVATE_TLS=true
```
else:
```zsh
DJANGO_INGRESS_ACTIVATE_TLS=false
```
Warning: Use the `createSecret*=true` flags only upon first install. For re-installs, see `§Re-install the chart`
Helm >= v3:
```zsh
helm install \
defectdojo \
./helm/defectdojo \
--set django.ingress.enabled=${DJANGO_INGRESS_ENABLED} \
--set django.ingress.activateTLS=${DJANGO_INGRESS_ACTIVATE_TLS} \
--set createSecret=true \
--set createRedisSecret=true \
--set createPostgresqlSecret=true
```
It usually takes up to a minute for the services to start up; the status of the
containers can be viewed by running ```minikube dashboard```.
Note: If the containers are not cached locally, the services will only start once
the images have been pulled.
To be able to access DefectDojo, set up an ingress or access the service
directly by running the following command:
```zsh
kubectl port-forward --namespace=default \
service/defectdojo-django 8080:80
```
As you set your host value to defectdojo.default.minikube.local, make sure that
it resolves to the localhost IP address, e.g. by adding the following two lines
to /etc/hosts:
```zsh
::1 defectdojo.default.minikube.local
127.0.0.1 defectdojo.default.minikube.local
```
To find out the password, run the following command:
```zsh
echo "DefectDojo admin password: $(kubectl \
get secret defectdojo \
--namespace=default \
--output jsonpath='{.data.DD_ADMIN_PASSWORD}' \
| base64 --decode)"
```
To access DefectDojo, go to <http://defectdojo.default.minikube.local:8080>.
Log in with username admin and the password from the previous command.
### Minikube with locally built containers
If testing containers locally, set the imagePullPolicy to `Never`,
which ensures containers are not pulled from Docker Hub.
Use the same commands as before but add:
```zsh
--set imagePullPolicy=Never
```
### Installing from a private registry
If you have stored your images in a private registry, you can install the DefectDojo chart from it (Helm 3):
- First, create a secret named "defectdojoregistrykey" with credentials that can pull from the registry: see https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
- Then install the chart with the same commands as before, adding:
```zsh
--set repositoryPrefix=<myregistry.com/path> \
--set imagePullSecrets=defectdojoregistrykey
```
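The registry secret from the first step can be created with the standard `kubectl create secret docker-registry` command; the server and credentials below are placeholders to substitute with your own:

```zsh
# Placeholder registry coordinates -- replace with your own values.
kubectl create secret docker-registry defectdojoregistrykey \
  --docker-server=myregistry.com \
  --docker-username=<username> \
  --docker-password=<password>
```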
### Build Images Locally
```zsh
# Build images
docker build -t defectdojo/defectdojo-django -f Dockerfile.django .
docker build -t defectdojo/defectdojo-nginx -f Dockerfile.nginx .
```
```zsh
# Build images behind proxy
docker build --build-arg http_proxy=http://myproxy.com:8080 --build-arg https_proxy=http://myproxy.com:8080 -t defectdojo/defectdojo-django -f Dockerfile.django .
docker build --build-arg http_proxy=http://myproxy.com:8080 --build-arg https_proxy=http://myproxy.com:8080 -t defectdojo/defectdojo-nginx -f Dockerfile.nginx .
```
### Debug uWSGI with ptvsd
You can set breakpoints in code that is handled by uWSGI. The feature is meant to be used when you run locally on minikube, and mimics [what can be done with docker-compose](DOCKER.md#run-with-docker-compose-in-development-mode-with-ptvsd-remote-debug).
The port is currently hard-coded to 3000.
* In `values.yaml`, ensure the value for `enable_ptvsd` is set to `true` (the default is `false`). Make sure the change is taken into account in your deployment.
* Have `DD_DEBUG` set to `True`.
* Port forward port 3000 to the pod, such as `kubectl port-forward defectdojo-django-7886f49466-7cwm7 3000`.
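Since the pod name contains a random hash, a label selector can be used instead of a hard-coded name; this sketch assumes the Django pods carry the `defectdojo.org/component=django` label used elsewhere in this document:

```zsh
# Forward the debug port (3000) to the first matching Django pod
kubectl port-forward $(kubectl get pod \
  --selector=defectdojo.org/component=django \
  -o jsonpath="{.items[0].metadata.name}") 3000
```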
### Upgrade the chart
If you want to change the Kubernetes configuration or use an updated Docker image (e.g. a new DefectDojo release), upgrade the application:
```
kubectl delete job defectdojo-initializer
helm upgrade defectdojo ./helm/defectdojo/ \
--set django.ingress.enabled=${DJANGO_INGRESS_ENABLED} \
--set django.ingress.activateTLS=${DJANGO_INGRESS_ACTIVATE_TLS}
```
### Re-install the chart
If you run into issues, or in any other situation where you need to re-install the chart, you can do so and re-use the same secrets.
**Note: with PostgreSQL you'll keep the same database (more information below).**
```zsh
# helm 3
helm uninstall defectdojo
helm install \
defectdojo \
./helm/defectdojo \
--set django.ingress.enabled=${DJANGO_INGRESS_ENABLED} \
--set django.ingress.activateTLS=${DJANGO_INGRESS_ACTIVATE_TLS}
```
## Kubernetes Production
When running DefectDojo in production, make sure you understand the full setup and always have a backup.
### Encryption to Kubernetes
To enable TLS locally, install a TLS certificate into your Kubernetes cluster.
For development purposes, you can create your own certificate authority as
described [here](https://github.com/hendrikhalkow/k8s-docs/blob/master/tls.md).
```zsh
# https://kubernetes.io/docs/concepts/services-networking/ingress/#tls
# Create a TLS secret called minikube-tls as mentioned above, e.g.
K8S_NAMESPACE="default"
TLS_CERT_DOMAIN="${K8S_NAMESPACE}.minikube.local"
kubectl --namespace "${K8S_NAMESPACE}" create secret tls defectdojo-tls \
--key <(openssl rsa \
-in "${CA_DIR}/private/${TLS_CERT_DOMAIN}.key.pem" \
-passin "pass:${TLS_CERT_PASSWORD}") \
--cert <(cat \
"${CA_DIR}/certs/${TLS_CERT_DOMAIN}.cert.pem" \
"${CA_DIR}/chain.pem")
```
### Encryption in Kubernetes and End-to-End Encryption
With the TLS certificate from your Kubernetes cluster, all traffic to your cluster is encrypted, but the traffic inside your cluster is still unencrypted.
If you want to encrypt the traffic to the nginx server, you can use the options `--set nginx.tls.enabled=true` and `--set nginx.tls.generateCertificate=true` to generate a self-signed certificate and use the HTTPS config. Adding your own pre-generated certificate is generally possible but not implemented in the Helm chart yet.
Be aware that the traffic to the database and the Celery broker is unencrypted at the moment.
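Applied to an existing release, the two options above can be set in a single upgrade; this is a sketch assuming the release and chart paths from the earlier install steps:

```zsh
helm upgrade defectdojo ./helm/defectdojo \
  --set nginx.tls.enabled=true \
  --set nginx.tls.generateCertificate=true
```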
### Media persistent volume
By default, the DefectDojo Helm installation does not provide persistent storage for media (images dynamically uploaded by users). It uses emptyDir, which is ephemeral by nature and does not support multiple replicas of the Django pods, so it should not be used in production.
To persist the media storage, use a backend that supports ReadWriteMany access, such as S3, NFS, or GlusterFS, and configure:
```yaml
mediaPersistentVolume:
  enabled: true
  # any name
  name: media
  # could be emptyDir (not for production) or pvc
  type: pvc
  # There are two options to create the pvc:
  # 1) to have the chart create it for you, set
  #    django.mediaPersistentVolume.persistentVolumeClaim.create to true and
  #    leave django.mediaPersistentVolume.persistentVolumeClaim.name empty
  # 2) to create the pvc outside the chart, pass its name via
  #    django.mediaPersistentVolume.persistentVolumeClaim.name and ensure
  #    django.mediaPersistentVolume.persistentVolumeClaim.create is set to false
  persistentVolumeClaim:
    create: true
    name:
    size: 5Gi
    accessModes:
      - ReadWriteMany
    storageClassName:
```
In the example above, the media content is preserved via a `pvc` (`PersistentVolumeClaim`) Kubernetes resource, and the PVC is created conditionally when the user wants the chart to create it (in this case the PVC name `defectdojo-media` is inherited from the template file used to deploy it). By default the volume type is `emptyDir`, which does not require a PVC; when the type is set to `pvc`, a Kubernetes PersistentVolumeClaim is needed, and this is where `django.mediaPersistentVolume.persistentVolumeClaim.name` comes into play.
The accessMode is set to ReadWriteMany by default to accommodate more than one replica. Ensure your storage supports ReadWriteMany before setting this option; otherwise set accessMode to ReadWriteOnce.
NOTE: the PersistentVolume needs to be prepared in advance, before the Helm install/upgrade is triggered.
For more detail on how to create a proper PVC, see this [example](https://github.com/DefectDojo/Community-Contribs/tree/master/persistent-media)
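If you create the PVC outside the chart (option 2 above), a minimal manifest might look like the following; the claim name and storage class are placeholders to adapt to your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  # placeholder name -- pass this via
  # django.mediaPersistentVolume.persistentVolumeClaim.name
  name: defectdojo-media
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  # placeholder -- must be a class whose backend supports ReadWriteMany
  storageClassName: <your-rwx-storage-class>
```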
### Installation
```zsh
# Install Helm chart. Choose a host name that matches the certificate above
helm install \
defectdojo \
./helm/defectdojo \
--namespace="${K8S_NAMESPACE}" \
--set host="defectdojo.${TLS_CERT_DOMAIN}" \
--set django.ingress.secretName="minikube-tls" \
--set createSecret=true \
--set createRedisSecret=true \
--set createPostgresqlSecret=true
# For high availability deploy multiple instances of Django, Celery and Redis
helm install \
defectdojo \
./helm/defectdojo \
--namespace="${K8S_NAMESPACE}" \
--set host="defectdojo.${TLS_CERT_DOMAIN}" \
--set django.ingress.secretName="minikube-tls" \
--set django.replicas=3 \
--set celery.replicas=3 \
--set redis.replicas=3 \
--set createSecret=true \
--set createRedisSecret=true \
--set createPostgresqlSecret=true
# Run highly available PostgreSQL cluster
# for production environment.
helm install \
defectdojo \
./helm/defectdojo \
--namespace="${K8S_NAMESPACE}" \
--set host="defectdojo.${TLS_CERT_DOMAIN}" \
--set django.replicas=3 \
--set celery.replicas=3 \
--set redis.replicas=3 \
--set django.ingress.secretName="minikube-tls" \
--set database=postgresql \
--set postgresql.enabled=true \
--set postgresql.replication.enabled=true \
--set postgresql.replication.slaveReplicas=3 \
--set createSecret=true \
--set createRedisSecret=true \
--set createPostgresqlSecret=true
# Note: If you ran `helm install defectdojo` before, you will get an error
# message like `Error: release defectdojo failed: secrets "defectdojo" already
# exists`. This is because the secret is kept across installations.
# To prevent recreating the secret, add `--set createSecret=false` to your
# command.
# Run test.
helm test defectdojo
# Navigate to <https://defectdojo.default.minikube.local>.
```
### Prometheus metrics
It's possible to enable the Nginx Prometheus exporter by setting `--set monitoring.enabled=true` and `--set monitoring.prometheus.enabled=true`. This adds the Nginx exporter sidecar and the standard Prometheus pod annotations to the Django deployment.
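Applied to an existing release, enabling both options is a one-line upgrade; this sketch assumes the release name and chart path from the install steps above:

```zsh
helm upgrade defectdojo ./helm/defectdojo \
  --set monitoring.enabled=true \
  --set monitoring.prometheus.enabled=true
```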
## Useful stuff
### Setting your own domain
The `site_url` in values.yaml controls which domain is configured in Django, and also which links the Celery workers will put into Jira tickets, for example.
Set this to `https://<yourdomain>` in values.yaml
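As a values.yaml fragment, with an example domain to substitute with your own:

```yaml
# values.yaml -- example domain, replace with your own
site_url: https://defectdojo.example.com
```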
### Multiple Hostnames
Django requires a list of all hostnames that are valid for requests.
You can add additional hostnames via helm or values file as an array.
This helps if you have a local service submitting reports to DefectDojo using
the namespace name (say defectdojo.scans) instead of the TLD name used in a browser.
In your helm install simply pass them as a defined array, for example:
`--set "alternativeHosts={defectdojo.default,localhost,defectdojo.example.com}"`
This will also work with shell inserted variables:
` --set "alternativeHosts={defectdojo.${TLS_CERT_DOMAIN},localhost}"`
You will still need to set a host value as well.
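Putting both together, an install might pass the host and the alternative hostnames at the same time; the domain values here are placeholders:

```zsh
helm install defectdojo ./helm/defectdojo \
  --set host=defectdojo.example.com \
  --set "alternativeHosts={defectdojo.default,localhost}"
```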
### Using an existing redis setup with redis-sentinel
If you want to use a redis-sentinel setup as the Celery broker, you will need to set the following.
1. Set redis.scheme to "sentinel" in values.yaml
2. Set two additional extraEnv vars specifying the sentinel master name and port in values.yaml
```yaml
celery:
broker: "redis"
redis:
redisServer: "PutYourRedisSentinelAddress"
scheme: "sentinel"
extraEnv:
- name: DD_CELERY_BROKER_TRANSPORT_OPTIONS
value: '{"master_name": "mymaster"}'
- name: 'DD_CELERY_BROKER_PORT'
value: "26379"
```
### kubectl commands
```zsh
# View logs of a specific pod
kubectl logs $(kubectl get pod --selector=defectdojo.org/component=${POD} \
-o jsonpath="{.items[0].metadata.name}") -f
# Open a shell in a specific pod
kubectl exec -it $(kubectl get pod --selector=defectdojo.org/component=${POD} \
-o jsonpath="{.items[0].metadata.name}") -- /bin/bash
# Or:
kubectl exec -it defectdojo-django-<xxx-xxx> -c uwsgi -- /bin/sh
# Open a Python shell in a specific pod
kubectl exec -it $(kubectl get pod --selector=defectdojo.org/component=${POD} \
-o jsonpath="{.items[0].metadata.name}") -- python manage.py shell
```
### Clean up Kubernetes
Helm >= v3
```
helm uninstall defectdojo
```
To remove persistent objects not removed by `helm uninstall` (this will delete the database):
```
kubectl delete secrets defectdojo defectdojo-redis-specific defectdojo-postgresql-specific
kubectl delete serviceAccount defectdojo
kubectl delete pvc data-defectdojo-redis-0 data-defectdojo-postgresql-0
```