wallarm-ingress

wallarm-ingress is the distribution of the Wallarm Node based on the community-supported NGINX Ingress controller.

Wallarm Ingress Controller allows you to use Wallarm Application Security Platform to protect web services that are running in the Kubernetes cluster.

To use it, add the kubernetes.io/ingress.class: nginx annotation to your Ingress resources.
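For example, a minimal Ingress resource routed through this controller might look like the following sketch (the host, service name, and port are placeholders):

```yaml
apiVersion: networking.k8s.io/v1beta1  # extensions/v1beta1 on older clusters
kind: Ingress
metadata:
  name: example-ingress
  annotations:
    # Route this Ingress through the wallarm-ingress controller
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: example.com           # placeholder hostname
      http:
        paths:
          - path: /
            backend:
              serviceName: example-service   # placeholder backend service
              servicePort: 80
```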

TL;DR

https://docs.wallarm.com/admin-en/installation-kubernetes-en/

Introduction

This chart bootstraps a wallarm-ingress deployment on a Kubernetes cluster using the Helm package manager.

Prerequisites

  • Kubernetes 1.6+

Configuration

The following table lists the configurable parameters of the wallarm-ingress chart and their default values.

Parameter Description Default
controller.name name of the controller component controller
controller.image.repository controller container image repository quay.io/kubernetes-ingress-controller/nginx-ingress-controller
controller.image.tag controller container image tag 0.26.1
controller.image.pullPolicy controller container image pull policy IfNotPresent
controller.image.runAsUser User ID of the controller process. The value depends on the Linux distribution used inside the container image; by default the Debian value is used. 33
controller.containerPort.http The port that the controller container listens on for http connections. 80
controller.containerPort.https The port that the controller container listens on for https connections. 443
controller.config nginx ConfigMap entries none
controller.hostNetwork If the nginx deployment / daemonset should run on the host's network namespace. Do not set this when controller.service.externalIPs is set and kube-proxy is used as there will be a port-conflict for port 80 false
controller.defaultBackendService default 404 backend service; needed only if defaultBackend.enabled = false ""
controller.dnsPolicy If using hostNetwork=true, change to ClusterFirstWithHostNet. See pod's dns policy for details ClusterFirst
controller.reportNodeInternalIp If using hostNetwork=true, setting reportNodeInternalIp=true passes the flag report-node-internal-ip-address to nginx-ingress. This sets the status of all Ingress objects to the internal IP address of all nodes running the NGINX Ingress controller. false
controller.electionID election ID to use for the status update ingress-controller-leader
controller.extraEnvs any additional environment variables to set in the pods {}
controller.extraContainers Sidecar containers to add to the controller pod. See LemonLDAP::NG controller as example {}
controller.extraVolumeMounts Additional volumeMounts to the controller main container {}
controller.extraVolumes Additional volumes to the controller pod {}
controller.extraInitContainers Containers, which are run before the app containers are started []
controller.ingressClass name of the ingress class to route through this controller nginx
controller.scope.enabled limit the scope of the ingress controller false (watch all namespaces)
controller.scope.namespace namespace to watch for ingress "" (use the release namespace)
controller.extraArgs Additional controller container arguments {}
controller.kind install as Deployment, DaemonSet or Both Deployment
controller.autoscaling.enabled If true, creates Horizontal Pod Autoscaler false
controller.autoscaling.minReplicas If autoscaling enabled, this field sets minimum replica count 2
controller.autoscaling.maxReplicas If autoscaling enabled, this field sets maximum replica count 11
controller.autoscaling.targetCPUUtilizationPercentage Target CPU utilization percentage to scale "50"
controller.autoscaling.targetMemoryUtilizationPercentage Target memory utilization percentage to scale "50"
controller.daemonset.useHostPort If controller.kind is DaemonSet, this will enable hostPort for TCP/80 and TCP/443 false
controller.daemonset.hostPorts.http If controller.daemonset.useHostPort is true and this is non-empty, it sets the hostPort "80"
controller.daemonset.hostPorts.https If controller.daemonset.useHostPort is true and this is non-empty, it sets the hostPort "443"
controller.tolerations node taints to tolerate (requires Kubernetes >=1.6) []
controller.affinity node/pod affinities (requires Kubernetes >=1.6) {}
controller.terminationGracePeriodSeconds how many seconds to wait before terminating a pod 60
controller.minReadySeconds how many seconds a pod needs to be ready before killing the next, during update 0
controller.nodeSelector node labels for pod assignment {}
controller.podAnnotations annotations to be added to pods {}
controller.podLabels labels to add to the pod container metadata {}
controller.podSecurityContext Security context policies to add to the controller pod {}
controller.replicaCount desired number of controller pods 1
controller.minAvailable minimum number of available controller pods for PodDisruptionBudget 1
controller.resources controller pod resource requests & limits {}
controller.priorityClassName controller priorityClassName nil
controller.lifecycle controller pod lifecycle hooks {}
controller.service.annotations annotations for controller service {}
controller.service.labels labels for controller service {}
controller.publishService.enabled if true, the controller will set the endpoint records on the ingress objects to reflect those on the service false
controller.publishService.pathOverride override of the default publish-service name ""
controller.service.enabled if disabled, no service will be created. This is especially useful when controller.kind is set to DaemonSet and controller.daemonset.useHostPort is true true
controller.service.clusterIP internal controller cluster service IP (set to "-" to pass an empty value) nil
controller.service.omitClusterIP (Deprecated) To omit the clusterIP from the controller service false
controller.service.externalIPs controller service external IP addresses. Do not set this when controller.hostNetwork is set to true and kube-proxy is used as there will be a port-conflict for port 80 []
controller.service.externalTrafficPolicy If controller.service.type is NodePort or LoadBalancer, set this to Local to enable source IP preservation "Cluster"
controller.service.healthCheckNodePort If controller.service.type is NodePort or LoadBalancer and controller.service.externalTrafficPolicy is set to Local, set this to the managed health-check port the kube-proxy will expose. If blank, a random port in the NodePort range will be assigned ""
controller.service.loadBalancerIP IP address to assign to load balancer (if supported) ""
controller.service.loadBalancerSourceRanges list of IP CIDRs allowed access to load balancer (if supported) []
controller.service.enableHttp if port 80 should be opened for service true
controller.service.enableHttps if port 443 should be opened for service true
controller.service.targetPorts.http Sets the targetPort that maps to the Ingress' port 80 80
controller.service.targetPorts.https Sets the targetPort that maps to the Ingress' port 443 443
controller.service.ports.http Sets service http port 80
controller.service.ports.https Sets service https port 443
controller.service.type type of controller service to create LoadBalancer
controller.service.nodePorts.http If controller.service.type is either NodePort or LoadBalancer and this is non-empty, it sets the nodePort that maps to the Ingress' port 80 ""
controller.service.nodePorts.https If controller.service.type is either NodePort or LoadBalancer and this is non-empty, it sets the nodePort that maps to the Ingress' port 443 ""
controller.service.nodePorts.tcp Sets the nodePort for an entry referenced by its key from tcp {}
controller.service.nodePorts.udp Sets the nodePort for an entry referenced by its key from udp {}
controller.livenessProbe.initialDelaySeconds Delay before liveness probe is initiated 10
controller.livenessProbe.periodSeconds How often to perform the probe 10
controller.livenessProbe.timeoutSeconds When the probe times out 5
controller.livenessProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed. 1
controller.livenessProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded. 3
controller.livenessProbe.port The port number that the liveness probe will listen on. 10254
controller.readinessProbe.initialDelaySeconds Delay before readiness probe is initiated 10
controller.readinessProbe.periodSeconds How often to perform the probe 10
controller.readinessProbe.timeoutSeconds When the probe times out 1
controller.readinessProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed. 1
controller.readinessProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded. 3
controller.readinessProbe.port The port number that the readiness probe will listen on. 10254
controller.metrics.enabled if true, enable Prometheus metrics false
controller.stats.service.omitClusterIP To omit the clusterIP from the stats service false
controller.metrics.service.annotations annotations for Prometheus metrics service {}
controller.metrics.service.clusterIP cluster IP address to assign to service (set to "-" to pass an empty value) nil
controller.metrics.service.omitClusterIP (Deprecated) To omit the clusterIP from the metrics service false
controller.metrics.service.externalIPs Prometheus metrics service external IP addresses []
controller.metrics.service.labels labels for metrics service {}
controller.metrics.service.loadBalancerIP IP address to assign to load balancer (if supported) ""
controller.metrics.service.loadBalancerSourceRanges list of IP CIDRs allowed access to load balancer (if supported) []
controller.metrics.service.servicePort Prometheus metrics service port 9913
controller.metrics.service.type type of Prometheus metrics service to create ClusterIP
controller.metrics.serviceMonitor.enabled Set this to true to create ServiceMonitor for Prometheus operator false
controller.metrics.serviceMonitor.additionalLabels Additional labels that can be used so ServiceMonitor will be discovered by Prometheus {}
controller.metrics.serviceMonitor.honorLabels honorLabels chooses the metric's labels on collisions with target labels. false
controller.metrics.serviceMonitor.namespace namespace where servicemonitor resource should be created the same namespace as nginx ingress
controller.metrics.serviceMonitor.scrapeInterval interval between Prometheus scraping 30s
controller.metrics.prometheusRule.enabled Set this to true to create prometheusRules for Prometheus operator false
controller.metrics.prometheusRule.additionalLabels Additional labels that can be used so prometheusRules will be discovered by Prometheus {}
controller.metrics.prometheusRule.namespace namespace where prometheusRules resource should be created the same namespace as nginx ingress
controller.metrics.prometheusRule.rules Prometheus rules in YAML format; check values for an example []
controller.admissionWebhooks.enabled Create Ingress admission webhooks. Validating webhook will check the ingress syntax. false
controller.admissionWebhooks.failurePolicy Failure policy for admission webhooks Fail
controller.admissionWebhooks.port Admission webhook port 8080
controller.admissionWebhooks.service.annotations Annotations for admission webhook service {}
controller.admissionWebhooks.service.omitClusterIP (Deprecated) To omit the clusterIP from the admission webhook service false
controller.admissionWebhooks.service.clusterIP cluster IP address to assign to admission webhook service (set to "-" to pass an empty value) nil
controller.admissionWebhooks.service.externalIPs Admission webhook service external IP addresses []
controller.admissionWebhooks.service.loadBalancerIP IP address to assign to load balancer (if supported) ""
controller.admissionWebhooks.service.loadBalancerSourceRanges List of IP CIDRs allowed access to load balancer (if supported) []
controller.admissionWebhooks.service.servicePort Admission webhook service port 443
controller.admissionWebhooks.service.type Type of admission webhook service to create ClusterIP
controller.admissionWebhooks.patch.enabled If true, uses pre- and post-install hooks to generate a CA and certificate for the admission webhooks, and patches the created webhooks with the CA true
controller.admissionWebhooks.patch.image.repository Repository to use for the webhook integration jobs jettech/kube-webhook-certgen
controller.admissionWebhooks.patch.image.tag Tag to use for the webhook integration jobs v1.0.0
controller.admissionWebhooks.patch.image.pullPolicy Image pull policy for the webhook integration jobs IfNotPresent
controller.admissionWebhooks.patch.priorityClassName Priority class for the webhook integration jobs ""
controller.admissionWebhooks.patch.podAnnotations Annotations for the webhook job pods {}
controller.admissionWebhooks.patch.nodeSelector Node selector for running admission hook patch jobs {}
controller.customTemplate.configMapName configMap containing a custom nginx template ""
controller.customTemplate.configMapKey configMap key containing the nginx template ""
controller.addHeaders configMap key:value pairs containing custom headers added before sending response to the client {}
controller.proxySetHeaders configMap key:value pairs containing custom headers added before sending request to the backends {}
controller.headers DEPRECATED, Use controller.proxySetHeaders instead. {}
controller.updateStrategy allows setting of RollingUpdate strategy {}
controller.wallarm.enabled if true, enable Wallarm protection false
controller.wallarm.apiHost Address of Wallarm API service "api.wallarm.com"
controller.wallarm.token Cluster Node token to authorize controller in the Wallarm Cloud ""
controller.wallarm.tarantool.service.annotations annotations to be added to the postanalytics service {}
controller.wallarm.tarantool.replicaCount desired number of postanalytics service pods 1
controller.wallarm.tarantool.arena Amount of memory allocated for postanalytics service "0.2"
controller.wallarm.tarantool.livenessProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded. 3
controller.wallarm.tarantool.livenessProbe.initialDelaySeconds Delay before liveness probe is initiated 10
controller.wallarm.tarantool.livenessProbe.periodSeconds How often to perform the probe 10
controller.wallarm.tarantool.livenessProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed. 1
controller.wallarm.tarantool.livenessProbe.timeoutSeconds When the probe times out 1
controller.wallarm.tarantool.resources postanalytics service pod resource requests & limits {}
controller.wallarm.metrics.enabled if true, enable Prometheus metrics (controller.metrics.enabled must be true as well) false
controller.wallarm.metrics.service.annotations annotations for Prometheus metrics service {"prometheus.io/scrape": "true", "prometheus.io/path": "/wallarm-metrics", "prometheus.io/port": "18080"}
controller.wallarm.metrics.clusterIP internal controller cluster service IP ""
controller.wallarm.metrics.externalIP controller service external IP addresses. Do not set this when controller.hostNetwork is set to true and kube-proxy is used as there will be a port-conflict for port 80 []
controller.wallarm.metrics.loadBalancerIP IP address to assign to load balancer (if supported) ""
controller.wallarm.metrics.loadBalancerSourceRanges list of IP CIDRs allowed access to load balancer (if supported) []
controller.wallarm.metrics.servicePort Prometheus metrics service port 9913
controller.wallarm.metrics.type type of Prometheus metrics service to create ClusterIP
controller.wallarm.collectd.resources collectd container resource requests & limits {}
controller.wallarm.synccloud.resources synccloud container resource requests & limits {}
defaultBackend.enabled If false, controller.defaultBackendService must be provided true
controller.configMapNamespace The nginx-configmap namespace name ""
controller.tcp.configMapNamespace The tcp-services-configmap namespace name ""
controller.udp.configMapNamespace The udp-services-configmap namespace name ""
defaultBackend.name name of the default backend component default-backend
defaultBackend.image.repository default backend container image repository k8s.gcr.io/defaultbackend-amd64
defaultBackend.image.tag default backend container image tag 1.5
defaultBackend.image.pullPolicy default backend container image pull policy IfNotPresent
defaultBackend.image.runAsUser User ID of the default backend process. The value depends on the Linux distribution used inside the container image; by default the nobody user is used. 65534
defaultBackend.extraArgs Additional default backend container arguments {}
defaultBackend.extraEnvs any additional environment variables to set in the defaultBackend pods []
defaultBackend.port HTTP port number 8080
defaultBackend.livenessProbe.initialDelaySeconds Delay before liveness probe is initiated 30
defaultBackend.livenessProbe.periodSeconds How often to perform the probe 10
defaultBackend.livenessProbe.timeoutSeconds When the probe times out 5
defaultBackend.livenessProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed. 1
defaultBackend.livenessProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded. 3
defaultBackend.readinessProbe.initialDelaySeconds Delay before readiness probe is initiated 0
defaultBackend.readinessProbe.periodSeconds How often to perform the probe 5
defaultBackend.readinessProbe.timeoutSeconds When the probe times out 5
defaultBackend.readinessProbe.successThreshold Minimum consecutive successes for the probe to be considered successful after having failed. 1
defaultBackend.readinessProbe.failureThreshold Minimum consecutive failures for the probe to be considered failed after having succeeded. 6
defaultBackend.tolerations node taints to tolerate (requires Kubernetes >=1.6) []
defaultBackend.affinity node/pod affinities (requires Kubernetes >=1.6) {}
defaultBackend.nodeSelector node labels for pod assignment {}
defaultBackend.podAnnotations annotations to be added to pods {}
defaultBackend.podLabels labels to add to the pod container metadata {}
defaultBackend.replicaCount desired number of default backend pods 1
defaultBackend.minAvailable minimum number of available default backend pods for PodDisruptionBudget 1
defaultBackend.resources default backend pod resource requests & limits {}
defaultBackend.priorityClassName default backend priorityClassName nil
defaultBackend.podSecurityContext Security context policies to add to the default backend {}
defaultBackend.service.annotations annotations for default backend service {}
defaultBackend.service.clusterIP internal default backend cluster service IP (set to "-" to pass an empty value) nil
defaultBackend.service.omitClusterIP (Deprecated) To omit the clusterIP from the default backend service false
defaultBackend.service.externalIPs default backend service external IP addresses []
defaultBackend.service.loadBalancerIP IP address to assign to load balancer (if supported) ""
defaultBackend.service.loadBalancerSourceRanges list of IP CIDRs allowed access to load balancer (if supported) []
defaultBackend.service.type type of default backend service to create ClusterIP
imagePullSecrets name of Secret resource containing private registry credentials nil
rbac.create if true, create & use RBAC resources true
podSecurityPolicy.enabled if true, create & use Pod Security Policy resources false
serviceAccount.create if true, create a service account for the controller true
serviceAccount.name The name of the controller service account to use. If not set and create is true, a name is generated using the fullname template. ``
serviceAccount.backend.create if true, create a backend service account. Only useful if you need a pod security policy to run the backend. true
serviceAccount.backend.name The name of the backend service account to use. If not set and create is true, a name is generated using the fullname template. Only useful if you need a pod security policy to run the backend. ``
revisionHistoryLimit The number of old revisions to retain to allow rollback 10
tcp TCP service key:value pairs. The value is evaluated as a template. {}
udp UDP service key:value pairs. The value is evaluated as a template. {}
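As a sketch, the tcp and udp maps take entries of the form exposed-port: "namespace/service:service-port". For example (both services below are hypothetical):

```yaml
tcp:
  # Expose a PostgreSQL service on TCP/5432 through the controller
  5432: "default/example-postgres:5432"
udp:
  # Expose a DNS service on UDP/53
  53: "kube-system/example-dns:53"
```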

Usage example

Helm v2

$ helm install wallarm/wallarm-ingress --name my-release \
    --set controller.wallarm.enabled=true

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

$ helm install wallarm/wallarm-ingress --name my-release -f values.yaml

Helm v3

$ helm install my-release wallarm/wallarm-ingress \
    --set controller.wallarm.enabled=true

Alternatively, a YAML file that specifies the values for the parameters can be provided while installing the chart. For example,

$ helm install my-release wallarm/wallarm-ingress -f values.yaml
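A minimal values.yaml enabling Wallarm protection might look like the following sketch (the token is a placeholder for your real Cluster Node token):

```yaml
controller:
  wallarm:
    enabled: true
    apiHost: "api.wallarm.com"
    token: "<your-wallarm-node-token>"   # placeholder, obtain from the Wallarm Cloud
```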

A useful trick to debug ingress issues is to increase the log level as described here:

$ helm install wallarm/wallarm-ingress --set controller.extraArgs.v=2

Tip: You can use the default values.yaml

PodDisruptionBudget

Note that the PodDisruptionBudget resource is only created if replicaCount is greater than one; otherwise it would make it impossible to evacuate a node. See gh issue #7127 for more info.

Prometheus Metrics

The Nginx ingress controller can export Prometheus metrics.

Helm v2

$ helm install wallarm/wallarm-ingress --name my-release \
    --set controller.stats.enabled=true \
    --set controller.metrics.enabled=true \
    --set controller.wallarm.metrics.enabled=true

Helm v3

$ helm install my-release wallarm/wallarm-ingress \
    --set controller.stats.enabled=true \
    --set controller.metrics.enabled=true \
    --set controller.wallarm.metrics.enabled=true

You can add Prometheus annotations to the metrics service using controller.metrics.service.annotations. Alternatively, if you use the Prometheus Operator, you can enable ServiceMonitor creation using controller.metrics.serviceMonitor.enabled.
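For example, either of the following values.yaml fragments could be used, depending on whether you scrape via annotations or via the Prometheus Operator (the additional label is a placeholder for whatever selector your Prometheus instance uses):

```yaml
controller:
  metrics:
    enabled: true
    service:
      annotations:
        prometheus.io/scrape: "true"
        prometheus.io/port: "9913"
    # or, with the Prometheus Operator:
    serviceMonitor:
      enabled: true
      additionalLabels:
        release: prometheus   # placeholder label matching your Prometheus selector
```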

nginx-ingress nginx_status page/stats server

Previous versions of this chart had a controller.stats.* configuration block, which is now obsolete due to the following changes in nginx ingress controller:

  • in 0.16.1, the vts (virtual host traffic status) dashboard was removed
  • in 0.23.0, the status page at port 18080 is now a unix socket webserver only available at localhost. You can use curl --unix-socket /tmp/nginx-status-server.sock http://localhost/nginx_status inside the controller container to access it locally, or use the snippet from nginx-ingress changelog to re-enable the http server

ExternalDNS Service configuration

Add an ExternalDNS annotation to the LoadBalancer service:

controller:
  service:
    annotations:
      external-dns.alpha.kubernetes.io/hostname: kubernetes-example.com.

AWS L7 ELB with SSL Termination

Annotate the controller as shown in the nginx-ingress l7 patch:

controller:
  service:
    targetPorts:
      http: http
      https: http
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-ssl-cert: arn:aws:acm:XX-XXXX-X:XXXXXXXXX:certificate/XXXXXXXX-XXXX-XXXX-XXXX-XXXXXXXXXX
      service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "http"
      service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "https"
      service.beta.kubernetes.io/aws-load-balancer-connection-idle-timeout: '3600'

AWS route53-mapper

To configure the LoadBalancer service with the route53-mapper addon, add the domainName annotation and dns label:

controller:
  service:
    labels:
      dns: "route53"
    annotations:
      domainName: "kubernetes-example.com"

Ingress Admission Webhooks

With nginx-ingress-controller version 0.25+, the nginx ingress controller pod exposes an endpoint that integrates with the validatingwebhookconfiguration Kubernetes feature to prevent invalid Ingress resources from being added to the cluster.

Note that the admission webhooks in nginx-ingress-controller 0.25.* work only with Kubernetes 1.14+; version 0.26 fixes this issue.

Helm error when upgrading: spec.clusterIP: Invalid value: ""

If you are upgrading this chart from a version between 0.31.0 and 1.2.2 then you may get an error like this:

Error: UPGRADE FAILED: Service "?????-controller" is invalid: spec.clusterIP: Invalid value: "": field is immutable

Details of how and why are in this issue, but to resolve it you can set xxxx.service.omitClusterIP to true, where xxxx is the service referenced in the error.

As of version 1.26.0 of this chart, simply not providing any clusterIP value avoids the invalid: spec.clusterIP: Invalid value: "": field is immutable error, since clusterIP: "" is no longer rendered.
