This repository has been archived by the owner on Mar 22, 2024. It is now read-only.

Adding quick-start example #476

Open · wants to merge 5 commits into base: main
66 changes: 66 additions & 0 deletions examples/quick-start/README.md
@@ -0,0 +1,66 @@
In this introduction to SPIRE on Kubernetes you will learn how to:

* Deploy SPIRE and SPIFFE components with Helm in a non-production configuration suitable for testing purposes
* Configure a registration entry for a workload
* Fetch an X509-SVID over the SPIFFE Workload API
* Find resources for more complex installations

The steps in this guide have been tested on these versions:
- Kubernetes: 1.26
- Helm Chart: 0.10.1
- App: 1.7.0

{{< info >}}
If you are using Minikube to run this tutorial, you should specify some special flags as described [here](#considerations-when-using-minikube).
{{< /info >}}
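For reference, a hedged sketch of what those Minikube flags typically look like in the upstream SPIRE quickstart; the exact flags and values depend on your Minikube and Kubernetes versions and are assumptions here, so verify them against the section linked above:

```bash
# Assumed flags, adapted from the upstream SPIRE quickstart; verify
# against the "Considerations when using Minikube" section.
$ minikube start \
    --extra-config=apiserver.service-account-signing-key-file=/var/lib/minikube/certs/sa.key \
    --extra-config=apiserver.service-account-key-file=/var/lib/minikube/certs/sa.pub \
    --extra-config=apiserver.service-account-issuer=api
```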

# Obtain the Required Files

This guide requires a number of **.yaml** files. To obtain them, clone **https://github.com/spiffe/spire-tutorials** and use the **.yaml** files from the **spire-tutorials/k8s/quickstart-helm** subdirectory, as shown below. Remember to run all kubectl commands in the directory in which those files reside.
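A minimal sketch of those steps, using the repository URL and path from the paragraph above:

```bash
$ git clone https://github.com/spiffe/spire-tutorials.git
$ cd spire-tutorials/k8s/quickstart-helm
```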
> **Reviewer comment (Contributor):** I think this isn't true anymore now that this lives in the helm-charts repo?

Set up a Kubernetes environment on a provider of your choice, or use Minikube. Point the kubectl command at that environment, as shown below.
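A quick way to confirm kubectl is talking to the intended cluster, using standard kubectl commands; the context name below is a hypothetical placeholder:

```bash
# List available contexts and select the one for your cluster
$ kubectl config get-contexts
$ kubectl config use-context my-cluster   # hypothetical context name
# Confirm connectivity
$ kubectl cluster-info
```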

# Install with Helm
```bash
$ helm repo add spiffe https://spiffe.github.io/helm-charts/
$ helm repo update
$ helm -n spire install spire spiffe/spire -f values.yaml --create-namespace
```
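You can confirm the release deployed successfully with a standard Helm command:

```bash
$ helm list -n spire
```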
# Verify
## Verify Namespace
Run the following command and verify that *spire* is listed in the output:

```bash
$ kubectl get namespaces
```
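The output should include the **spire** namespace; a hedged example of what it typically looks like (ages and the other namespaces will vary with your cluster):

```bash
NAME          STATUS   AGE
default       Active   10m
kube-system   Active   10m
spire         Active   1m
```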
## Verify Statefulset
The chart creates a statefulset called **spire-server** in the **spire** namespace and starts up a **spire-server** pod, as demonstrated in the output of the following commands:

```bash
$ kubectl get statefulset --namespace spire
```
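Typical output looks similar to the following; the exact values will vary:

```bash
NAME           READY   AGE
spire-server   1/1     86m
```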
@@ -107,26 +72,8 @@ $ kubectl get services --namespace spire
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
spire-server NodePort 10.107.205.29 <none> 8081:30337/TCP 88m

## Verify Agent
The chart creates a daemonset called **spire-agent** in the **spire** namespace and starts up a **spire-agent** pod alongside **spire-server**, as demonstrated in the output of the following commands:

```bash
$ kubectl get daemonset --namespace spire
```

As a daemonset, you'll see as many **spire-agent** pods as you have nodes.
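You can list the pods and the nodes they are scheduled on with a standard kubectl query:

```bash
$ kubectl get pods --namespace spire -o wide
```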

# Register Workloads

In order to enable SPIRE to perform workload attestation -- the process by which the agent identifies a workload before issuing it an identity -- you must register the workload in the server. The registration entry tells SPIRE how to identify the workload and which SPIFFE ID to give it.

1. Create a new registration entry for the node, specifying the SPIFFE ID to allocate to the node:
> **Note:** change `-selector k8s_sat:cluster:demo-cluster` to match your cluster name

```shell
$ kubectl exec -n spire spire-server-0 -- \
```
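The command above is cut off in this diff. A hedged sketch of what the complete node registration command typically looks like, following the pattern in the upstream SPIRE quickstart; the SPIFFE ID and the `agent_ns`/`agent_sa` selectors are assumptions and must match your deployment:

```shell
$ kubectl exec -n spire spire-server-0 -- \
    /opt/spire/bin/spire-server entry create \
    -spiffeID spiffe://example.org/ns/spire/sa/spire-agent \
    -selector k8s_sat:cluster:demo-cluster \
    -selector k8s_sat:agent_ns:spire \
    -selector k8s_sat:agent_sa:spire-agent \
    -node
```

The `-node` flag marks this as a node registration entry, and the `k8s_sat` selectors tie it to agents attested with the Kubernetes service account token attestor.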
> **Reviewer comment (Contributor), on lines +54 to +60:** The controller-manager provided with the chart will automatically do this. It's the recommended way.
In this section, you configure a workload container to access SPIRE. Specifically, you are configuring the workload container to access the Workload API UNIX domain socket.

The **client-deployment.yaml** file configures a no-op client container using the same **ghcr.io/spiffe/spire-agent** image used for the agent. Running the binary with the `api watch` arguments makes it behave as an ordinary workload that polls the Workload API, not as a second SPIRE Agent. Examine the `volumeMounts` and `volumes` configuration stanzas to see how the UNIX domain socket `spire-agent.sock` is bound in.

> **Reviewer comment (Contributor):** Maybe add a comment that running spire-agent with the `api watch` parameters makes it function as a normal workload and not as the SPIRE Agent itself. Otherwise we might confuse the user into thinking they need their own spire-agent in each workload.

You can test that the agent socket is accessible from an application container by issuing commands like the sketch below:
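A hedged sketch of that test, based on the upstream SPIRE quickstart; the `app=client` label and the socket path come from **client-deployment.yaml** below, and the pod is assumed to be in the default namespace:

```bash
# Open a shell in the client pod (label app=client is set in client-deployment.yaml)
$ kubectl exec -it \
    $(kubectl get pods -o=jsonpath='{.items[0].metadata.name}' -l app=client) \
    -- /bin/sh
# From inside the container, fetch an X509-SVID over the Workload API
/opt/spire/bin/spire-agent api fetch \
    -socketPath /run/spire/agent-sockets/spire-agent.sock
```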
32 changes: 32 additions & 0 deletions examples/quick-start/client-deployment.yaml
@@ -0,0 +1,32 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
  labels:
    app: client
spec:
  selector:
    matchLabels:
      app: client
  template:
    metadata:
      labels:
        app: client
    spec:
      hostPID: true
      hostNetwork: true
      dnsPolicy: ClusterFirstWithHostNet
      containers:
        - name: client
          image: ghcr.io/spiffe/spire-agent:1.7.0
          # "api watch" runs the binary as a plain Workload API client,
          # not as a SPIRE Agent.
          command: ["/opt/spire/bin/spire-agent"]
          args: ["api", "watch", "-socketPath", "/run/spire/agent-sockets/spire-agent.sock"]
          volumeMounts:
            # Mount the agent's Workload API socket directory read-only.
            - name: spire-agent-socket
              mountPath: /run/spire/agent-sockets
              readOnly: true
      volumes:
        - name: spire-agent-socket
          hostPath:
            path: /run/spire/agent-sockets
            type: Directory
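To deploy the client, apply the file with standard kubectl usage:

```bash
$ kubectl apply -f client-deployment.yaml
```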
54 changes: 54 additions & 0 deletions examples/quick-start/values.yaml
@@ -0,0 +1,54 @@
# You can enable config/features that affect all services here.
global:
  k8s:
    # -- This is the value of your cluster's `kubeadm init --service-dns-domain` flag
    clusterDomain: cluster.local
  spire:
    # -- The name of the Kubernetes cluster
    clusterName: demo-cluster
    # -- The trust domain to be used for the SPIFFE identifiers
    trustDomain: example.org
    # -- Set the JWT issuer
    jwtIssuer: oidc-discovery.example.org
    # -- Override all instances of bundleConfigMap
    bundleConfigMap: ""

    image:
      # -- Override all SPIRE image registries at once
      registry: ""

# telemetry:
#   prometheus:
#     enabled: true
#     podMonitor:
#       enabled: true
#       # -- Allows installing the PodMonitor in a different namespace
#       # than the one the SPIRE components are installed into.
#       namespace: "kube-prometheus-system"
#       labels: {}

# subcharts
spire-server:
  # -- Enables deployment of SPIRE Server
  enabled: true
  nameOverride: server

  controllerManager:
    # -- Enables deployment of the Controller Manager
    enabled: true

spire-agent:
  # -- Enables deployment of SPIRE Agent(s)
  enabled: true
  nameOverride: agent

spiffe-csi-driver:
  # -- Enables deployment of the SPIFFE CSI driver
  enabled: true

spiffe-oidc-discovery-provider:
  # -- Enables deployment of the OIDC discovery provider
  enabled: false

tornjak-frontend:
  # -- Enables deployment of the Tornjak frontend/UI (not for production)
  enabled: false
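To see the full set of values the chart accepts before customizing this file, you can inspect the chart's defaults with standard Helm usage:

```bash
$ helm show values spiffe/spire
```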