Switch to use fileca in e2e tests #309

Merged 2 commits on Jan 7, 2022
30 changes: 28 additions & 2 deletions .github/workflows/verify-k8s.yml
@@ -131,6 +131,18 @@ jobs:
# local registry, even when pushing $REGISTRY_NAME:$REGISTRY_PORT/some/image
sudo echo "127.0.0.1 $REGISTRY_NAME" | sudo tee -a /etc/hosts

- name: Generate temporary CA files
run: |
openssl req -x509 \
-newkey ed25519 \
-sha256 \
-keyout "${{ github.run_id }}-key.pem" \
-out "${{ github.run_id }}-cert.pem" \
-subj "/CN=ed25519" \
-days 36500 \
-addext basicConstraints=critical,CA:TRUE,pathlen:1 \
-passout pass:"${{ github.run_id }}"
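
One way to sanity-check this step locally is to run the same command with placeholder file names and then inspect the resulting certificate. This is a sketch, not part of the workflow; `demo-key.pem`, `demo-cert.pem`, and `demo-pass` are stand-ins for the `${{ github.run_id }}`-derived names:

```shell
# Mirror the workflow's CA generation with throwaway names, then confirm
# the certificate really carries the requested CA basic constraints.
openssl req -x509 \
  -newkey ed25519 \
  -keyout demo-key.pem \
  -out demo-cert.pem \
  -subj "/CN=ed25519" \
  -days 36500 \
  -addext basicConstraints=critical,CA:TRUE,pathlen:1 \
  -passout pass:demo-pass 2>/dev/null

# The printed extension should include: CA:TRUE, pathlen:1
openssl x509 -in demo-cert.pem -noout -text | grep -A1 'Basic Constraints'
```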

- name: Deploy fulcio-dev
run: |
# Reduce the resource requests of Fulcio
@@ -173,12 +185,26 @@ jobs:
server.yaml: |-
host: 0.0.0.0
port: 5555
ca: ephemeralca
gcp_private_ca_version: v1
ca: fileca
fileca-cert: /etc/fulcio-secret/cert.pem
fileca-key: /etc/fulcio-secret/key.pem
fileca-key-passwd: "${{ github.run_id }}"
ct-log-url: ""
log_type: prod
EOF

# Create secret needed to use fileca
cat <<EOF > config/fulcio-secret.yaml
apiVersion: v1
kind: Secret
metadata:
name: fulcio-secret
namespace: fulcio-dev
data:
cert.pem: $(cat ${{ github.run_id }}-cert.pem | base64 -w 0)
key.pem: $(cat ${{ github.run_id }}-key.pem | base64 -w 0)
EOF
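
The base64 values written into the secret can be spot-checked by decoding them back out of the manifest. A sketch with throwaway files and names (`-w 0` assumes GNU coreutils, as on the Ubuntu runner):

```shell
# Render a Secret manifest the same way the workflow does, using dummy
# file contents, then decode one field to confirm the round trip is lossless.
printf 'demo cert bytes' > demo-cert-input.pem
printf 'demo key bytes'  > demo-key-input.pem
cat <<EOF > fulcio-secret-demo.yaml
apiVersion: v1
kind: Secret
metadata:
  name: fulcio-secret
  namespace: fulcio-dev
data:
  cert.pem: $(base64 -w 0 < demo-cert-input.pem)
  key.pem: $(base64 -w 0 < demo-key-input.pem)
EOF

# Decode the cert.pem field back out of the manifest.
grep 'cert.pem:' fulcio-secret-demo.yaml | awk '{print $2}' | base64 -d
```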

kubectl create ns fulcio-dev

ko apply -Bf config/
7 changes: 7 additions & 0 deletions config/deployment.yaml
@@ -52,6 +52,9 @@ spec:
volumeMounts:
- name: fulcio-config
mountPath: /etc/fulcio-config
- name: fulcio-secret
mountPath: /etc/fulcio-secret
readOnly: true
Comment on lines +55 to +57
Member

I believe this will cause production to break, but I believe you can make it optional (and then it won't)

Contributor Author

Does the optional field added in volumes section below accomplish this?

Contributor Author

I think so. Tried adding optional in another branch and got yelled at:

error: error validating "STDIN": error validating data: ValidationError(Deployment.spec.template.spec.containers[0].volumeMounts[1]): unknown field "optional" in

- name: oidc-info
mountPath: /var/run/fulcio
resources:
@@ -62,6 +65,10 @@ spec:
- name: fulcio-config
configMap:
name: fulcio-config
- name: fulcio-secret
secret:
secretName: fulcio-secret
optional: true
Contributor

That should make it so that it doesn't block starting it. The one interesting bit is what happens if you do want to specify the secret and it is not there yet but you'd like it to actually block. IIRC though, we watch that mount point so I think it will pick it up once it becomes available. @nsmith5 check me out :)

Contributor Author

Looks like the --fileca-watch flag defaults to true, so I think that's correct @vaikas

Contributor

fileca will never start without a valid key pair. We never want to boot up the server if we can't serve requests, so I wrote it to try to load an initial key pair and bail hard if that doesn't work (ref https://github.com/sigstore/fulcio/blob/main/pkg/ca/fileca/fileca.go#L48)

I don't think I know what Kubernetes would do in this case. My hope would be a pod crashloop until the secret mounts, but I don't know the optional spec well enough to know that is the case. If a pod from this deployment is scheduled while the secret isn't available, does that mean the pod will never get that secret volume mounted, and only pods scheduled later, after the secret exists, will get it? Or does it mean that Kubernetes will try to mount it even into an existing pod once the secret does exist?

Contributor

The watch behaviour is also pretty paranoid about putting the server in a state where it can't serve requests: it looks for updates, but if, for instance, the cert doesn't match the key, or the key can't be decrypted with the password, it will just keep using the existing certs and ignore the updates from the filesystem.
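
That cert-must-match-key check can be approximated with openssl. A sketch under the assumption that comparing the certificate's public key against the one derived from the decrypted private key is the relevant test; this is not fulcio's actual code, and the file names and passphrase are placeholders:

```shell
# Generate an encrypted ed25519 key pair plus self-signed cert, then verify
# that the certificate's public key matches the one derived from the
# decrypted private key -- roughly the validation described above.
PASS=demo-pass
openssl req -x509 -newkey ed25519 \
  -keyout watch-key.pem -out watch-cert.pem \
  -subj "/CN=demo" -days 1 -passout pass:"$PASS" 2>/dev/null

cert_pub=$(openssl x509 -in watch-cert.pem -noout -pubkey)
key_pub=$(openssl pkey -in watch-key.pem -passin pass:"$PASS" -pubout)
[ "$cert_pub" = "$key_pub" ] && echo "key pair matches"
```

If the passphrase were wrong or the cert came from a different key, the comparison would fail, which is the situation the watcher ignores.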

Contributor Author

Ran Matt's suggestion, and things appear to work as we expect:

  1. Create a Deployment with an optional secret volume and an image that crash-loops if the secret isn't there:
cat << EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: boop-deployment
  labels:
    app: boop
spec:
  replicas: 1
  selector:
    matchLabels:
      app: boop
  template:
    metadata:
      labels:
        app: boop
    spec:
      containers:
      - name: boop
        image: bash
        command:
        - "bash"
        - "-c"
        - "[[ -f /boop/bop.txt ]] || (echo bad && exit 1) && (echo good && while true; do sleep 1; done)"
        volumeMounts:
        - name: boop-secret
          mountPath: /boop
          readOnly: true
      volumes:
      - name: boop-secret
        secret:
          secretName: boop-secret
          optional: true
EOF
$ k get pods
NAME                               READY   STATUS             RESTARTS   AGE
boop-deployment-5c48fd856b-tv9lz   0/1     CrashLoopBackOff   2          33s
  2. Create the secret and see if it stops crash-looping:
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: boop-secret
data:
  bop.txt: $(echo boop | base64)
EOF
$ k get pods
NAME                               READY   STATUS    RESTARTS   AGE
boop-deployment-5c48fd856b-tv9lz   1/1     Running   3          49s

yay?

Member

@vaikas @nsmith5 Did you have anything else on this before we merge?

Contributor

Nope looks good from me! Good work @jdolitsky :D

Contributor

Thanks for checking in with Dr. Empirical 😆

Contributor @vaikas · Jan 7, 2022

No. I think it might behoove us to file an issue in k8s to document the behaviour, in case they reserve the right to change it, or at least to make sure that other folks don't have to ponder what the behaviour is. But I'm happy with the change for sure.

- name: oidc-info
projected:
sources: