---
permalink: /guides/working-with-pipelines/
layout: guide-markdown
title: Build and deploy applications with pipelines
duration: 30 minutes
releasedate: 2020-02-19
description: Explore how to use Pipelines with Application Stacks
tags:
guide-category: pipelines
---
Kabanero uses pipelines to illustrate a continuous integration and continuous delivery (CI/CD) workflow. Kabanero provides a set of default tasks and pipelines that can be associated with application stacks. These pipelines validate that the application stack is active, build the application stack, publish the image to a container registry, scan the published image, and then deploy the application to the Kubernetes cluster. You can also create your own tasks and pipelines and customize the pre-built pipelines and tasks. All tasks and pipelines are activated by Kabanero's standard Kubernetes operator.
To learn more about pipelines and creating new tasks, see the pipeline tutorial.
The default Kabanero tasks and pipelines are provided in the Kabanero pipelines repository. Details of some of the primary pipelines and tasks are described below.
- This file is the primary pipeline that showcases all the tasks supplied in the Kabanero repo. It validates that the application stack is active, builds the application stack, publishes the application image to the container registry, runs a security scan of the image, and conditionally deploys the application. When the pipeline runs via a webhook, it leverages the triggers functionality to deploy the application only when a pull request is merged in the git repo. Other actions that trigger a pipeline run validate, build, push, and scan the image.
- This task validates that the stack is allowed to build and deploy on the cluster. It checks the digest of the stack image specified in the `.appsody-config.yaml` file of the project and validates that it matches the digest of the stack image that is active on the cluster. If the digests do not match, the pipeline fails and none of the other steps execute.
- This file builds a container image from the artifacts in the git-source repository by using `appsody build`. The `appsody build` command leverages the Buildah options. After the image is built, it is published to the configured container registry. The build-push task also generates the `app-deploy.yaml` file that is used by the `deploy-task`. If a copy of the `app-deploy.yaml` file already exists in the source repository, it is merged with the new one generated by this step.
- The `deploy-task` uses the `app-deploy.yaml` file to deploy the application to the cluster by using the application deployment operator. By default, the pipelines run and deploy the application in the `kabanero` namespace. If you want to deploy the application in a different namespace, update the `app-deploy.yaml` file to point to that namespace.
- The `image-scan-task` initiates a container scan of the image published by the `build-push-task` by using OpenSCAP. The results of the scan are published in the logs of the task.
For more tasks and pipelines, see the kabanero-pipelines repo.
The pipelines can be associated with an application stack in the Kabanero custom resource definition (CRD). This is an example CRD:
```yaml
apiVersion: kabanero.io/v1alpha1
kind: Kabanero
metadata:
  name: kabanero
spec:
  version: "0.6.0"
  stacks:
    repositories:
    - name: central
      https:
        url: https://github.com/kabanero-io/collections/releases/download/0.5.0/kabanero-index.yaml
      pipelines:
      - id: default
        sha256: 14d59b7ebae113c18fb815c2ccfd8a846c5fbf91d926ae92e0017ca5caf67c95
        https:
          url: https://github.com/kabanero-io/kabanero-pipelines/releases/download/0.6.0/default-kabanero-pipelines.tar.gz
```
When the Kabanero operator activates the CRD, it associates the pipelines in the pipelines archive with each of the stacks in the stack hub. The default pipelines are intended to work with all the stacks in the stack hub in the previous example. All of the pipeline-related resources (such as the tasks, trigger bindings, and pipelines) prefix the resource name with the keyword `StackId`. When the operator activates these resources, it replaces the keyword with the name of the stack it is activating.
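As a sketch of this naming convention (the task name and API version here are illustrative, not taken from the pipelines repo), a resource might be declared along these lines:

```yaml
# Hypothetical Tekton task excerpt. When the operator activates this
# resource for, say, the java-microprofile stack, the "StackId" keyword
# in the name is replaced, yielding "java-microprofile-build-push-task".
apiVersion: tekton.dev/v1alpha1
kind: Task
metadata:
  name: StackId-build-push-task
```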
The default tasks and pipelines can be updated by forking the Kabanero pipelines repo and editing the files under `pipelines/incubator`. The easiest way to generate the archive for use by the Kabanero CRD is to run the `package.sh` script. The script generates the archive file with the necessary pipeline artifacts and a `manifest.yaml` file that describes the contents of the archive. Copy the `package.sh` file to the root directory of your pipelines project and run it. It generates the pipelines archive file under `ci/assets`.
Alternatively, you can run the Travis build against a release of your pipelines repo, which also generates the archive file with a `manifest.yaml` file and attaches it to your release.
If you are publishing your application stack images to any registry other than Docker Hub, you can specify your custom registry when you initialize a stack by using the `--stack-registry` option of the `appsody init` command. Specifying a custom registry updates the stack name in the `.appsody-config.yaml` file to include the registry information, which is consumed by the pipeline.
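For illustration, a `.appsody-config.yaml` file that points at a custom registry might look like the following (the registry host, organization, stack name, and tag are placeholder values, not defaults from the docs):

```yaml
# Placeholder values: substitute your own registry, org, stack, and tag.
project-name: my-project
stack: my-registry.example.com/kabanero/java-microprofile:0.2
```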
Alternatively, you can use a config map to configure the custom registry from which your pipelines pull the container images.

- After you clone the `kabanero-pipelines` repository, find the `stack-image-registry-map.yaml` config map template file. Add your container registry URL to this file in place of the `default-stack-image-registry-url` statement.

    ```shell
    cd kabanero-pipelines/pipelines/sample-helper-files/
    vi stack-image-registry-map.yaml
    ```
- If your custom application stack image is stored in an internal OpenShift registry, the service account that is associated with the pipelines must be configured to allow the pipelines to pull from the internal registry without configuring a secret. If your custom application stack is stored in a container registry with an external route, follow these steps to set up a Kubernetes secret:

    - Find the `default-stack-image-registry-secret.yaml` template file in the cloned kabanero-pipelines repo (`kabanero-pipelines/pipelines/sample-helper-files/`) and update it with the username and token password for the container registry URL you specified previously.
    - Create a Base64 format version of the username and password for the external route container registry URL.

        ```shell
        echo -n <your-registry-username> | base64
        echo -n <your-registry-password> | base64
        ```

    - Update the `default-stack-image-registry-secret.yaml` file with the Base64 formatted username and password.

        ```shell
        vi default-stack-image-registry-secret.yaml
        ```

    - Apply the `default-stack-image-registry-secret.yaml` file to the cluster.

        ```shell
        oc apply -f default-stack-image-registry-secret.yaml
        ```
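As a concrete illustration of the Base64 encoding step above, with placeholder credentials rather than real ones:

```shell
# Encode placeholder credentials, then decode one to confirm the
# round trip. Substitute your own registry username and password.
user_b64=$(echo -n 'myuser' | base64)
pass_b64=$(echo -n 'mypassword' | base64)
echo "$user_b64"                    # bXl1c2Vy
echo "$user_b64" | base64 --decode  # myuser
```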
- Apply the config map file, which sets your container registry.

    ```shell
    oc apply -f stack-image-registry-map.yaml
    ```

NOTE: If a value is specified in both the config map and in the `.appsody-config.yaml` file and they are different, the config map takes precedence.
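For orientation, the edited config map might look like the following. The data key mirrors the `default-stack-image-registry-url` placeholder mentioned above, but treat the exact structure as illustrative and use the template file in the repo as the source of truth:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: stack-image-registry-map
  namespace: kabanero
data:
  # Replace the placeholder value with your own registry URL.
  default-stack-image-registry-url: my-registry.example.com
```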
Explore how to use pipelines to build and manage application stacks.
- Kabanero foundation must be installed on a supported Kubernetes deployment.
- A pipelines dashboard is installed by default with Kabanero's Kubernetes operator. To find the pipelines dashboard URL, log in to your cluster and run the `oc get routes` command, or find it in the Kabanero landing page.
- A persistent volume must be configured. See the following section for details.
- Secrets must be created for the git repo (if it is private) and the image repository.
Follow these steps:

- Set up a persistent volume to run pipelines.

    Pipelines require a configured volume that the framework uses to share data across tasks. The pipeline run creates a persistent volume claim (PVC) that requests 5 GB of persistent volume.

    - Static persistent volumes: If you are not running your cluster on a public cloud, you can set up a static persistent volume by using NFS. For an example of static persistent volume provisioning, see Static persistent volumes.
    - Dynamic volume provisioning: If you run your cluster on a public cloud, you can set up a dynamic persistent volume by using your cloud provider's default storage class. For an example of dynamic persistent volume provisioning, see Dynamic volume provisioning.
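As a minimal sketch of a static volume that would satisfy the 5 GB claim (the volume name, NFS server, and export path are placeholders, not values from the Kabanero docs):

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: kabanero-pipelines-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    server: nfs.example.com   # placeholder NFS server
    path: /exports/pipelines  # placeholder export path
```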
- Create secrets.

    Git secrets must be created in the `kabanero` namespace and associated with the service account that runs the pipelines. To configure secrets by using the pipelines dashboard, see Create secrets. Alternatively, you can configure secrets in the Kubernetes console or set them up by using the Kubernetes CLI.
You can use the pipelines dashboard webhook extension to drive pipelines that automatically build and deploy an application whenever you update the code in your Git repo. Events such as commits or pull requests can be set up to automatically trigger pipeline runs.
If you are developing a new pipeline and want to test it in a tight loop, you might want to use a script or manually drive the pipeline.
- Log in to your cluster. For example:

    ```shell
    oc login <master node IP>:8443
    ```

- Clone the pipelines repo:

    ```shell
    git clone https://github.com/kabanero-io/kabanero-pipelines
    ```

- Run the following script with the appropriate parameters:

    ```shell
    cd ./pipelines/sample-helper-files
    ./manual-pipeline-run-script.sh -r [git repo of the Appsody project] -i [docker registry path of the image to be created] -c [application stack name for which to run the pipeline]
    ```

    - The following example is configured to use the Docker Hub container registry:

        ```shell
        ./manual-pipeline-run-script.sh -r https://github.com/mygitid/appsody-test-project -i index.docker.io/mydockerid/my-java-microprofile-image -c java-microprofile
        ```

    - The following example is configured to use the local OpenShift container registry:

        ```shell
        ./manual-pipeline-run-script.sh -r https://github.com/mygitid/appsody-test-project -i docker-registry.default.svc:5000/kabanero/my-java-microprofile-image -c java-microprofile
        ```
-
Follow these steps to run a pipeline directly from the command line:

- Log in to your cluster. For example:

    ```shell
    oc login <master node IP>:8443
    ```

- Clone the pipelines repo:

    ```shell
    git clone https://github.com/kabanero-io/kabanero-pipelines
    cd kabanero-pipelines
    ```

- Create pipeline resources.

    Use the `pipeline-resource-template.yaml` file to create the `PipelineResources`. The `pipeline-resource-template.yaml` file is provided in the `/pipelines/sample-helper-files` directory of the pipelines repo. Update the docker-image URL. You can use the sample GitHub repo or update it to point to your own GitHub repo.

- After you update the file, apply it as shown in the following example:

    ```shell
    oc apply -f <stack-name>-pipeline-resources.yaml
    ```
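For illustration, the applied file might define a git resource and a docker-image resource along these lines (the resource names, repo URL, and image path are placeholders; the template file in the repo is the authoritative starting point):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: java-microprofile-pipeline-resource-git
spec:
  type: git
  params:
  - name: url
    value: https://github.com/mygitid/appsody-test-project
---
apiVersion: tekton.dev/v1alpha1
kind: PipelineResource
metadata:
  name: java-microprofile-pipeline-resource-docker-image
spec:
  type: image
  params:
  - name: url
    value: index.docker.io/mydockerid/my-java-microprofile-image
```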
The installations that activate the featured application stacks also activate the tasks and pipelines. If you are creating a new task or pipeline, activate it manually, as shown in the following example.

```shell
oc apply -f <task.yaml>
oc apply -f <pipeline.yaml>
```
A sample `manual-pipeline-run-template.yaml` file is provided in the `/pipelines/sample-helper-files` directory. Rename the template file to a name of your choice (for example, `pipeline-run.yaml`), and update the file to replace `application-stack-name` with the name of your application stack. After you update the file, run it as shown in the following example.

```shell
oc apply -f <application-stack-name>-pipeline-run.yaml
```
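A hypothetical sketch of what such a pipeline-run file might contain for the java-microprofile stack (the pipeline name and resource references here are illustrative guesses, not the contents of the actual template):

```yaml
apiVersion: tekton.dev/v1alpha1
kind: PipelineRun
metadata:
  name: java-microprofile-manual-pipeline-run  # hypothetical name
  namespace: kabanero
spec:
  pipelineRef:
    name: java-microprofile-build-deploy-pl   # hypothetical pipeline name
  resources:
  - name: git-source
    resourceRef:
      name: java-microprofile-pipeline-resource-git
  - name: docker-image
    resourceRef:
      name: java-microprofile-pipeline-resource-docker-image
```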
You can check the status of the pipeline run from the Kubernetes console, the command line, or the pipelines dashboard.

- Log in to the pipelines dashboard and click **Pipeline runs** in the sidebar menu.
- Find your pipeline run in the list and click it to check the status and find logs. You can see logs and status for each step and task.
Enter the following commands in the terminal:

```shell
oc get pipelineruns
oc -n kabanero describe pipelinerun.tekton.dev/<pipeline-run-name>
```

You can also see pods for the pipeline runs, against which you can run `oc describe` and `oc logs` to get more details.
If the pipeline run was successful, you can see a Docker image in your Docker registry and a pod that is running your application.
To find solutions for common issues and troubleshoot problems with pipelines, see the Pipelines Troubleshooting Guide.