Application Controller provides a simplified way of deploying and updating single-container applications to a homelab Kubernetes cluster.
I'm making this for personal use. Please feel free to fork & experiment, but do expect breaking changes.
Application Controller automatically manages the following Kubernetes resources for an `Application`:
- Deployment
- ServiceAccount
- Service
- IngressRoute
- PodMonitor
- RoleBinding
- ClusterRoleBinding
It automatically updates the Deployment whenever a new version of the container image becomes available.
Example resource:

```yaml
apiVersion: yarotsky.me/v1alpha1
kind: Application
metadata:
  name: application-sample
spec:
  image:
    repository: "git.home.yarotsky.me/vlad/dashboard"
    versionStrategy: "SemVer"
    semver:
      constraint: "^0"
  ports:
    - name: "http"
      containerPort: 8080
    - name: "metrics"
      containerPort: 8192
  ingress:
    host: "dashboard.home.yarotsky.me"
  metrics:
    enabled: true
```
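Applying the manifest is all it takes; the controller reconciles everything else. For example (the file name is hypothetical):

```sh
kubectl apply -f application-sample.yaml
kubectl get application application-sample
```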
Expanded example:

```yaml
apiVersion: yarotsky.me/v1alpha1
kind: Application
metadata:
  name: application-sample
spec:
  image:
    repository: "git.home.yarotsky.me/vlad/dashboard"
    versionStrategy: "SemVer"
    semver:
      constraint: "^0"
    # See https://pkg.go.dev/github.com/robfig/cron#hdr-CRON_Expression_Format.
    # The default can be set via `--default-update-schedule`.
    updateSchedule: "@every 5m"
  command: ["/bin/dashboard"]
  args: ["start-server"]
  env:
    - name: "FOO"
      value: "bar"
    - name: "QUX"
      valueFrom:
        secretKeyRef:
          name: "my-secret"
          key: "my-secret-key"
  ports:
    - name: "http"
      containerPort: 8080
    - name: "metrics"
      containerPort: 8192
    - name: "listen-udp"
      protocol: "UDP"
      containerPort: 22000
  ingress:
    # Ingress annotations can be set via `--ingress-annotations`.
    host: "dashboard.home.yarotsky.me"
    portName: "http" # defaults to `"web"` or `"http"` if present in `.spec.ports`
    auth:
      enabled: true # Enables the authentication proxy for this IngressRoute.
  loadBalancer:
    host: "udp.home.yarotsky.me"
    portNames: ["listen-udp"]
  metrics:
    enabled: true
    portName: "metrics" # defaults to `"metrics"` or `"prometheus"` if present in `.spec.ports`
    path: "/metrics" # defaults to `"/metrics"`
  resources:
    requests:
      cpu: "100m"
    limits:
      cpu: "250m"
  probe:
    httpGet:
      path: "/healthz"
  securityContext:
    runAsUser: 1000
    runAsGroup: 1000
  volumes:
    - name: "my-volume"
      volumeSource:
        persistentVolumeClaim:
          claimName: "my-pvc"
      mountPath: "/data"
  roles:
    - apiGroup: "rbac.authorization.k8s.io"
      kind: "ClusterRole"
      name: "my-cluster-role"
      scope: "Cluster"
    - apiGroup: "rbac.authorization.k8s.io"
      kind: "ClusterRole"
      name: "my-cluster-role2"
      scope: "Namespace"
    - apiGroup: "rbac.authorization.k8s.io"
      kind: "Role"
      name: "my-role"
  cronJobs:
    - name: "daily-job"
      schedule: "@daily"
      command: ["/bin/the-daily-thing"]
```
The controller expects the following to be available in the cluster:
- Traefik ingress controller
  - The Traefik LoadBalancer Service needs to have a DNS record, which is used to create CNAME records for Application ingresses.
- ExternalDNS
Additional configuration for k3s:

```yaml
apiVersion: helm.cattle.io/v1
kind: HelmChartConfig
metadata:
  name: traefik
  namespace: kube-system
spec:
  valuesContent: |-
    providers:
      kubernetesCRD:
        allowCrossNamespace: true
    service:
      annotations:
        "external-dns.alpha.kubernetes.io/hostname": "my.traefik.ingress.example.com"
```
Traefik Middlewares:

```yaml
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: oauth2-signin
  namespace: kube-system
spec:
  errors:
    query: /oauth2/sign_in
    service:
      name: oauth2-proxy
      namespace: kube-system
      port: http
    status:
      - "401"
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: oauth2-forward
  namespace: kube-system
spec:
  forwardAuth:
    address: https://auth.home.example.com/oauth2/auth
    trustForwardHeader: true
    authResponseHeaders:
      - "X-Auth-Request-Email"
      - "X-Auth-Request-Groups"
      - "X-Auth-Request-Preferred-Username"
      - "X-Auth-Request-User"
---
apiVersion: traefik.containo.us/v1alpha1
kind: Middleware
metadata:
  name: forward-auth
  namespace: kube-system
spec:
  chain:
    middlewares:
      - name: oauth2-signin
        namespace: kube-system
      - name: oauth2-forward
        namespace: kube-system
```
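With `auth.enabled: true`, the Application's IngressRoute is expected to route through the `forward-auth` chain above. A hand-written equivalent would look roughly like this (an illustrative sketch, not the controller's exact output; the cross-namespace middleware reference is why `allowCrossNamespace` is enabled above):

```yaml
apiVersion: traefik.containo.us/v1alpha1
kind: IngressRoute
metadata:
  name: application-sample
spec:
  entryPoints: ["websecure"] # assumes the default k3s Traefik entry points
  routes:
    - kind: Rule
      match: Host(`dashboard.home.yarotsky.me`)
      middlewares:
        - name: forward-auth
          namespace: kube-system
      services:
        - name: application-sample
          port: http
```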
Next, install OAuth2 Proxy with the following configuration (shown as environment variables):

```yaml
OAUTH2_PROXY_HTTP_ADDRESS: "0.0.0.0:4180"
OAUTH2_PROXY_COOKIE_DOMAINS: ".example.com"
OAUTH2_PROXY_WHITELIST_DOMAINS: ".example.com"
OAUTH2_PROXY_PROVIDER: "oidc"
OAUTH2_PROXY_CLIENT_ID: "oauth2-proxy"
OAUTH2_PROXY_CLIENT_SECRET: "<OIDC provider client secret>"
OAUTH2_PROXY_EMAIL_DOMAINS: "*"
OAUTH2_PROXY_OIDC_ISSUER_URL: "<OIDC provider discovery/issuer URL>"
OAUTH2_PROXY_REDIRECT_URL: "<redirect URL>"
OAUTH2_PROXY_COOKIE_CSRF_PER_REQUEST: "true"
OAUTH2_PROXY_COOKIE_CSRF_EXPIRE: "5m"
OAUTH2_PROXY_REVERSE_PROXY: "true"
OAUTH2_PROXY_SET_XAUTHREQUEST: "true"
# Disables the mandatory verified-email requirement; only do this if you know what you're doing.
# Ref: https://joeeey.com/blog/selfhosting-sso-with-traefik-oauth2-proxy-part-2/
OAUTH2_PROXY_INSECURE_OIDC_ALLOW_UNVERIFIED_EMAIL: "true"
OAUTH2_PROXY_OIDC_EMAIL_CLAIM: "sub"
# Needed for multiple-subdomain support.
# Ref: https://github.com/oauth2-proxy/oauth2-proxy/issues/1297#issuecomment-1564124675
OAUTH2_PROXY_FOOTER: "<script>(function(){var rd=document.getElementsByName('rd');for(var i=0;i<rd.length;i++)rd[i].value=window.location.toString().split('/oauth2')[0]})()</script>"
OAUTH2_PROXY_COOKIE_SECRET: "<32-byte cookie secret>"
```
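The cookie secret must be a URL-safe base64 string; one common way (from the OAuth2 Proxy docs) to generate a 32-byte secret:

```sh
openssl rand -base64 32 | tr -- '+/' '-_'
```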
The following flags should be supplied to the application controller:

```sh
--traefik-cname-target=my.traefik.ingress.example.com \
--traefik-auth-path-prefix=/oauth2/ \
--traefik-auth-service-name=kube-system/oauth2-proxy \
--traefik-auth-service-port-name=http \
--traefik-auth-middleware-name=kube-system/forward-auth
```
Container image registry authentication is handled via Kubernetes Secrets; see [Specifying imagePullSecrets on a Pod](https://kubernetes.io/docs/concepts/containers/images/#specifying-imagepullsecrets-on-a-pod). Use `--image-pull-secret` to supply secret names (the flag can be used multiple times).
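For example, a pull secret for the registry can be created with `kubectl` and then referenced by name (`regcred` is a hypothetical name):

```sh
kubectl create secret docker-registry regcred \
  --docker-server=git.home.yarotsky.me \
  --docker-username=<username> \
  --docker-password=<password>
```

The controller would then be started with `--image-pull-secret=regcred`.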
See https://book.kubebuilder.io/reference/metrics-reference for the standard controller metrics. In addition, the following metrics are instrumented:

| Name | Description | Tags |
|---|---|---|
| `image_registry_calls_total` | Number of calls to a Container Image Registry | `registry`, `repository`, `success`, `method` |
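On the metrics endpoint this appears as a standard Prometheus counter; the sample below is purely illustrative (all label values, including the `method` name, are hypothetical):

```
# TYPE image_registry_calls_total counter
image_registry_calls_total{registry="git.home.yarotsky.me",repository="vlad/dashboard",success="true",method="ListTags"} 42
```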
TODO:
- Automatically update docker images of Applications (digest).
- Unhardcode the reference to the image pull secret (accept many via configuration).
- Allow configuration of default Ingress annotations via controller config.
- Gracefully manage updates:
  - To Container/Service/Ingress configurations
  - To Roles
- Expose meaningful status on the `Application` CR.
- Expose Events on the `Application` CR (e.g. when we're updating the image).
- Expose the current image in status.
- Garbage-collect `ClusterRoleBinding` objects (they cannot be auto-removed via an ownership relationship).
- Automatically update docker images of Applications (semver).
- Add `app` short name.
- Ensure we don't hammer the image registry on errors (requeue reconciliation with an increased interval); solved via caching image refs.
- Support different update schedules:
  - Allow Applications to pick a particular update schedule
  - Allow choosing a default one
- Update README.
- Add Prometheus metrics.
- Prune the Ingress and Service (LoadBalancer) when `ingress` or `loadBalancer` are removed.
- Allow specifying a mixture of container SecurityContext and PodSecurityContext (rather, allow setting `fsGroup` at the pod template level).
- Validating admission webhook? Or at least write tests to make sure we have nice error messages.
- Enforce the 15-character maximum on port names.
You'll need a Kubernetes cluster to run against. You can use KIND to get a local cluster for testing, or run against a remote cluster.
Note: your controller will automatically use the current context in your kubeconfig file (i.e. whatever cluster `kubectl cluster-info` shows).
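For example, with KIND installed, a throwaway local cluster can be created with:

```sh
kind create cluster
```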
- Install Instances of Custom Resources:

```sh
kubectl apply -k config/samples/
```

- Build and push your image to the location specified by `IMG`:

```sh
make docker-build docker-push IMG=<some-registry>/application-controller:tag
```

- Deploy the controller to the cluster with the image specified by `IMG`:

```sh
make deploy IMG=<some-registry>/application-controller:tag
```
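If the deploy succeeded, the manager Pod should be running (the namespace name assumes kubebuilder's `<project>-system` default):

```sh
kubectl get pods -n application-controller-system
```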
To delete the CRDs from the cluster:

```sh
make uninstall
```

Undeploy the controller from the cluster:

```sh
make undeploy
```
This project aims to follow the Kubernetes Operator pattern.
It uses Controllers, which provide a reconcile function responsible for synchronizing resources until the desired state is reached on the cluster.
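For orientation, a controller-runtime reconciler has roughly this shape (a minimal sketch, not this project's actual code; the real reconciler also handles image version lookups, RBAC objects, and so on):

```go
package controller

import (
	"context"

	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// ApplicationReconciler reconciles Application objects.
type ApplicationReconciler struct {
	client.Client
}

// Reconcile compares the state declared by an Application with the actual
// cluster state, and creates or patches managed resources to converge them.
func (r *ApplicationReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	// 1. Fetch the Application named by req.NamespacedName.
	// 2. Build the desired Deployment, Service, IngressRoute, etc.
	// 3. Create or patch each resource, setting owner references for GC.
	// Returning a non-nil error (or Result{Requeue: true}) re-queues the item.
	return ctrl.Result{}, nil
}
```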
- Install the CRDs into the cluster:

```sh
make install
```

- Run your controller (this will run in the foreground, so switch to a new terminal if you want to leave it running):

```sh
make run
```

NOTE: You can also run this in one step by running: `make install run`

If you are editing the API definitions, generate the manifests such as CRs or CRDs using:

```sh
make manifests
```
NOTE: Run `make --help` for more information on all potential `make` targets.
More information can be found via the Kubebuilder Documentation: https://book.kubebuilder.io/introduction.html
Copyright 2023.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.