Podtato-head is a prototypical cloud-native application built to colorfully demonstrate delivery scenarios using many different tools and services. It is intended to help application delivery support teams test and decide which of these to use.
The app comprises a set of microservices in `podtato-head-microservices` and a set of examples demonstrating how to deliver them in `delivery`. The services are defined with as little additional logic as possible to enable you to focus on the delivery mechanisms themselves.
Find the following set of delivery scenarios in the `delivery` directory. Each example scenario delivers the same end result: an API service which communicates with other API services and returns HTML composed of all their responses.
Each delivery scenario includes a walkthrough (`README.md`) describing how to a) install required supporting infrastructure; b) deliver podtato-head using the infrastructure; and c) test that podtato-head is operating as expected.

Each delivery scenario also includes a test (`test.sh`) which automates the steps described in the walkthrough. You can pause a test run after the tests complete and before teardown by setting the env var `WAIT_FOR_DELETE=1`, as in `WAIT_FOR_DELETE=1 ./delivery/flux/test.sh`. This lets you examine what the README and scripts do.
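For example, to run the Flux scenario test and inspect the deployed resources before teardown (the `kubectl` query here is just illustrative):

```bash
# run the flux scenario test, pausing before teardown
WAIT_FOR_DELETE=1 ./delivery/flux/test.sh

# in another terminal, inspect what the test deployed
kubectl get pods,services -A
```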
"Single" deployment means the action effects the state of the resources only once at the time of invocation. "GitOps" deployments mean the action checks the desired state periodically and reconciles it as needed.
The following scenarios deploy the multi-service app:
- Single deployment via Kubectl
- Single deployment via Helm
- Single deployment via Kustomize
- Single deployment via Ketch
- GitOps deployment via Flux
- Helm-based operator deployment
The following scenarios deploy the single-server app:
- Single deployment via Kapp
- GitOps deployment via ArgoCD
- Canary deployment via Argo Rollouts
- Multi-stage delivery via Keptn
- Air-gapped deployment via CNAB with Porter
- GitOps deployment via KubeVela
- GitOps deployment via Gimlet CLI
Here's how to extend podtato-head for your own purposes or to contribute to the shared repo.
podtato-head's services themselves are written in Go; entry points are in `podtato-head-microservices/cmd`. The entry point to the app is defined in `cmd/entry` and a base for each of the app's downstream services is defined in `cmd/parts`. HTTP handlers and other shared functionality are defined in `podtato-head-microservices/pkg`.
To run local tests on the Go code, run `make podtato-head-verify`.

Build an image for each part - entry, hat, each arm and each leg - with `make build-microservices-images`.
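A typical local iteration, assuming Go and `make` are available, looks like this:

```bash
# run local tests on the Go code
make podtato-head-verify

# build an image for each part (entry, hat, arms, legs)
make build-microservices-images
```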
NOTE: To apply capabilities like image scans and signatures, install the required binaries first by running `[sudo] make install-requirements`.
To test the built images you'll need to push them to a registry so that Kubernetes can find them. `make push-microservices-images` can do this for GitHub's container registry if you are authorized to push to the target repo (as described next).
To push to your own fork of the podtato-head repo:
- Fork podtato-head if you haven't already
- Create a personal access token (PAT) with `write:packages` permissions and copy it
- Set and export env vars `GITHUB_USER` to your GitHub username and `GITHUB_TOKEN` to the PAT, for example as follows:

```bash
export GITHUB_USER=joshgav
export GITHUB_TOKEN=goobledygook
```
NOTE: You can also put env vars in the `.env` file in the repo's root; be sure not to include those updates in commits.
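If you use the `.env` route, a minimal sketch, assuming the file holds plain `KEY=value` pairs (placeholder values, not real credentials):

```bash
# .env (repo root) - do not commit these values
GITHUB_USER=<your-github-username>
GITHUB_TOKEN=<your-personal-access-token>
```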
To test the built images as running services in a cluster, run `make test-microservices`. This spins up a cluster using kind and deploys the services using the `kubectl` delivery scenario test. These tests also rely on your `GITHUB_USER` and `GITHUB_TOKEN` env vars if you're using your own fork.
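Putting it together, a full push-and-test pass against your fork might look as follows (env vars set as above):

```bash
# push built images to GitHub's container registry
make push-microservices-images

# spin up a kind cluster and run the kubectl delivery scenario test
make test-microservices
```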
All delivery scenarios are expected to run on any functional Kubernetes cluster with cluster-admin access. That is, if you can run `kubectl get pods -n kube-system`, you should be able to run any of the tests.
If you don't have a local Kubernetes cluster for tests, kind is one to consider.
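If you go with kind, creating a throwaway cluster is a one-liner (the cluster name is arbitrary):

```bash
# create a local test cluster and verify access
kind create cluster --name podtato-test
kubectl get pods -n kube-system
```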
NOTE: If you use a cluster without support for LoadBalancer-type services, which is typical for test clusters like kind, you may need to replace attributes which default to `LoadBalancer` with `NodePort` or `ClusterIP`.
For example:

```bash
# update type property in `service` resources
find delivery -type f -name "*.yaml" -print0 | xargs -0 sed -i 's/type: LoadBalancer/type: NodePort/g'

# update custom serviceType property in Helm values file
find delivery -type f -name "*.yaml" -print0 | xargs -0 sed -i 's/serviceType: LoadBalancer/serviceType: NodePort/g'
```
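NOTE: BSD/macOS `sed` requires an explicit (here empty) backup suffix with `-i`; use `sed -i ''` in the commands above instead of `sed -i`.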
See CONTRIBUTING.md.
See LICENSE.