kubernetes-packaging-applications-helm.txt
Course Overview
Course Overview
Hi, everyone. My name is Philippe, and welcome to my course, Packaging Applications with Helm for Kubernetes. If you need to install applications in Kubernetes and are looking for the right tool to do it, you are in the right place. Helm makes it much easier to install applications in Kubernetes and manage versions, so it's definitely worth learning. This course teaches you how to use Helm version 3, with a lot of comprehensive diagrams and a focus on understanding the basic concepts. Moreover, you can practice with a set of 11 labs with source code, where you'll learn to use Helm step by step to build a Helm chart and customize it with templates, and, finally, to install a production-ready release of a real GuestBook application, or even a WordPress blog. Some of the major topics we will cover include building a Helm chart, customizing a chart with Helm templates, managing dependencies and versions, and using Helm repositories. By the end of this course, you will be able to build your own Helm chart and install any application release in Kubernetes with Helm. Before beginning the course, you should be familiar with basic concepts related to Kubernetes, but you don't have to be an expert. In fact, you can also learn Kubernetes best practices by first learning Helm and using existing stable Helm charts. I hope you'll join me on this journey to learn Helm with the Packaging Applications with Helm for Kubernetes course, at Pluralsight.
Course Introduction
Introducing the Course
Hello. My name is Philippe. I have been working as a freelance DevOps engineer for over 20 years. I am a big fan of Helm, and I use it in all my Kubernetes projects. I am a certified Kubernetes application developer, and I will be your coach for this Helm course. Let me introduce the course. In real life, when you have a lot of files, you try to be organized, and you classify them in archives. In the IT world, you do the same. You try to pack your resources and organize them with package managers. Most of the time, Kubernetes applications contain many Kubernetes objects: pods, services, ingresses, persistent volumes. So a tool is needed to package the definitions of those resources. That tool is Helm. Helm makes it much easier to install applications in Kubernetes and manage application dependencies and versions. As you follow this course, you might wonder what Helm is and how it works internally, what a Helm chart is, and how charts depend on each other. Finally, you might also have some questions about Helm repositories, and, of course, you would like to know how to install your applications in Kubernetes with Helm. This course answers all these questions. In the Discovering Helm module, we answer the first two questions: what Helm is and how it works. In the following module, you'll learn to build your first chart step by step from scratch. Then, you'll learn how to customize a chart by writing Helm templates. We go into detail in that module so that you can write a chart just like a Helm expert would. Then, you'll learn how to manage dependencies between charts and deal with versions. This is indeed one of the main features of package managers. Finally, you'll learn how to reuse existing charts from the Helm stable repository. To complement the theoretical modules, you are invited to practice with the 11 labs provided. In these labs, you'll use Helm step by step, from scratch to a completed task. First, you'll install a Kubernetes environment along with Helm.
Then, you'll build your first Helm chart and a more complex umbrella chart. Of course, you will install an application release with Helm in your Kubernetes cluster and update it with new revisions. In the following modules, you'll customize your chart with Helm templates so that it's reusable. Then, you'll learn to manage dependencies with your chart. And finally, you'll pack it, publish it to a repository, and learn how to work with existing charts from the stable Helm repository. The sample application you are going to install in Kubernetes with Helm is a guestbook for events app, where an event's participants can leave messages with feedback about the event. The application has a common architecture that you can find nowadays in many projects: a single-page application for the front end built with Angular, a JavaScript back-end API running on Node.js, and a NoSQL database hosted on MongoDB. You can find all the source code and resources needed for the labs in my GitHub repository. The set of labs follows a logical story, which starts with Kubernetes YAML files and ends with a customizable chart ready for production, depending on the stable MongoDB chart. I recommend that you complete all the labs. But if you want to skip some, you can also run each lab independently, as I provide begin files and final files when needed. To successfully complete this course, you should know the basic Kubernetes concepts: what a pod is, what a service is. However, you are not required to be a Kubernetes expert. In fact, you can even learn Kubernetes by learning Helm. In doing so, you'll learn Kubernetes best practices right away. For example, when installing an application with Helm stable charts, you use secrets for your passwords, readiness and liveness probes to control container restarts, an ingress to expose your services, and many other good Kubernetes design practices you might miss if you start using Kubernetes without Helm stable charts.
Hiding the complexity of application installation is one of the main advantages of using a package manager. No programming skills are required, unless you want to modify the sample application. Finally, of course, you should be familiar with a Unix shell. Although the Helm client can also be used on Windows, I recommend using it with a Linux or macOS shell. The goal of this course is to ensure that each and every attendee has a great experience. If you are a project manager, you can learn to understand Helm concepts by watching this course. I will try to teach the main concepts with explicit illustrations so that you understand the main ideas of Helm. If you are more dev than ops, you'll learn to build and customize a chart, both in the course and in the labs. And if you are more ops than dev, you'll learn how to install applications in Kubernetes with Helm and manage dependencies and versions, again both in the course and in the labs. And now let's get to the heart of the subject, but a small note before we start. In this course, we are talking about Helm 3. Thanks to the Pluralsight team, the course is already up to date. Keep in mind that the architecture of Helm 2 is different, and the Helm 2 command line and chart structure might differ a little bit. But Helm 2 charts should be compatible with Helm 3. So by learning Helm 3, you will be ready for any Helm project. But in case you're working on a project with Helm 2, I will mention the differences between the two versions in each module.
Discovering Helm
Why Helm?
Hi, this is Philippe. In this module, we are going to discover this awesome tool named Helm. First of all, we'll find good reasons why you should use Helm. Then we'll define what Helm is and go over the main concepts, including charts, templates, and repositories. And, finally, I'll show you the big picture of how Helm works. If you are taking this course, you may have already installed, or you might be planning to install, some applications in Kubernetes. Here is the default way to do it. First, you build your application into a container. Then, you wrap that container in a pod, and you run that pod in a Kubernetes cluster. But that's not enough. You'll need more Kubernetes objects for your application. As you might know, the pod's IP can change, and the pod can be replicated. So if you want to access your application, you need a service that is going to load balance external traffic to your pods. If you want to expose that service to the world, you can use an ingress. It's a reverse proxy that maps URLs to your application using the service's definition. You might also need other objects, like a ConfigMap to store configuration parameters or a secret to store some passwords. And if your application is stateful, you'll need even more Kubernetes objects to define a volume. As you can see, that's really a lot of objects just to install a single application. But how do you install those objects? The entry point to a Kubernetes cluster is the Kubernetes API. You can access this API either directly with a REST client, with a higher-level client such as the Go client, or with the command line tool kubectl. To install an application with kubectl, you first have to define the descriptions of your Kubernetes objects in YAML files, and then install them with kubectl create commands. Usually, there is one file for each object. So far, so good. But that method has some limitations. With kubectl, you don't install the application as an atomic set of Kubernetes objects.
Rather, you deploy each object separately. However, these objects may depend on each other, and the order in which you install them is usually also important. So we would like to group these related objects in a package and install that package as one single entity. Let's now imagine that you have a second version of your application. That version has a new pod that relies on the ConfigMap and secrets. Once again, with kubectl, the Kubernetes objects in each version are independent. You don't have the concept of an application version. What if you want to roll back from this application version to that version? You can't do that easily with kubectl, unless you keep track of your installation history and roll back each object by hand. This can be hard work. Helm gives an answer to those limitations. And even more, it also allows you to customize deployments for different releases and manage dependencies between applications.
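To make the contrast concrete, here is a hedged sketch of the two workflows. The file names, chart path, and release name (myapp) are illustrative placeholders, not files from the course repository:

```shell
# Without Helm: one kubectl command per object, run in dependency order by hand.
kubectl create -f configmap.yaml
kubectl create -f secret.yaml      # must be created before the pod that uses it
kubectl create -f pod.yaml
kubectl create -f service.yaml
kubectl create -f ingress.yaml

# With Helm: the same objects packaged as a chart and installed as one release...
helm install myapp ./myapp-chart
# ...upgraded as one unit...
helm upgrade myapp ./myapp-chart
# ...and rolled back as one unit, using the stored revision history.
helm rollback myapp 1
```

The per-object commands must be repeated, in the right order, for every install, upgrade, and rollback; the Helm commands operate on the whole application at once.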
Demo: Working without Helm
To convince you how useful Helm is, let me show you how hard it is to install and upgrade an application with kubectl. The Globomantics company has just released the first version of the application, Guestbook for Events. Their DevOps team wants to install it right away in Kubernetes, by hand, with kubectl. Later in the course, they will learn how to install it with Helm. Here is the application. It's a simple guestbook for events, where participants can leave a message about the event. The first version of the application is just a front end that stores messages locally. The Globomantics DevOps just have to install a pod, a service, and an ingress. If you already have a Kubernetes environment, you can do the lab. You will find all the necessary files in my GitHub repository at the following location. If you don't have a Kubernetes environment, don't worry. We are going to install one together in the next module, and you can do the lab after completing that module. Now, we can check that the pod is already running and test this first version of the application. But a new version has already been released by the dev team. It's more realistic, with a back-end API and a database. The Globomantics DevOps still want to install it with kubectl. Let me show you all the hard work that it will take. First, we install the ConfigMap containing the back-end API URL, and we update the front end to a new version. The service and the ingress are the same as in version 1. Then we install the back end, which consists of a secret with the database URI, a pod containing the application, and a service to access the pod. Note that the order is important. We have to create the secret before the pod. Finally, we install the database: the secret that contains the credentials, the persistent volume and persistent volume claim that are needed for persistence, and, of course, a pod and its service. Okay, the DevOps team did it. The new version of the application is up and running. You can also do it.
The needed resources are in my GitHub repository. Again, if you don't have a Kubernetes environment yet, don't worry. Wait a little bit, and we'll install one in the next module. We can check that everything is running in the Kubernetes dashboard, and we can test the new version of the application. And now guess what? The project manager asks them to roll back. All those kubectl commands can quickly become boring and error-prone. The Globomantics DevOps understand they need a tool for packaging and versioning their app. That tool is Helm.
What Is Helm?
But tell me, what is Helm? Helm is a package manager for Kubernetes. Whatever IT field you work in, you have probably already worked with a package manager. When you have to deal with a lot of resources, source code, or binary files in computer science, you usually use packages. And when you have to deal with a lot of packages which have dependencies, you need a package manager. Whether you have a system or development background, you have probably already used such tools. In the system world, you use apt to install Debian binaries or applications, or yum to install RPM packages. If you are a Java developer, you use Maven to build and deploy your artifacts. As a JavaScript developer, you use npm to install your node modules, or pip if you develop in Python. In the Kubernetes world, you have the equivalent tools. The packages are called charts. They are bundles of Kubernetes resource definitions, the YAML files, and Helm is the package manager that manages those charts. To continue with the analogy, in short, we can summarize Helm with the following comparison. If you want to install a MySQL database on a Debian Linux box, you run apt install mysql. It will install all dependencies, needed libraries, and the database itself. And it can also be used to update your database software in the future. When you want to install MySQL in a Kubernetes cluster, you can similarly run helm install mysql stable/mysql. And all needed pieces of software are going to be installed in your Kubernetes cluster so that you get your database up and running. Later, you can update your database instance with the helm upgrade command. As you can see, it's quite similar. If you take a broad point of view, it's not surprising. Kubernetes can be seen as an operating system for a cluster of machines. It completely abstracts the infrastructure, so any useful technology for an operating system, such as a package manager, can be replicated for it. How does Helm work?
Instead of using a kubectl command for each Kubernetes object, we embed the Kubernetes object definitions in a package called a chart. That chart is then passed to Helm, and Helm connects to the Kubernetes API to create the Kubernetes objects that make up your application release. The Helm library uses the Kubernetes client to communicate with the Kubernetes API. So it uses the REST Kubernetes API and its security layer, as any other Kubernetes client would do. This is true in Helm 3. Helm 2's architecture is different, and we'll talk about it later on. So with Helm, you install your application as an entity defined by your chart, and not as a set of independent Kubernetes objects. The chart is the definition of your application, and the release is an instance of that chart. But where does Helm store the release configuration and history? Helm stores release manifests inside Kubernetes as secrets. If you are curious, we'll look at them in one of the next modules. This provides a kind of persistence and history for all the different releases installed with Helm. It's centralized in the cluster, and it's stored in the same namespace as your application. So if you or someone else uses the Helm client somewhere else, you will have access to the configuration of the previously installed release. You might have another question in mind. What if I modify the Kubernetes objects with a tool other than Helm? Helm 3 gives a great answer to that question. Helm 3 compares the three manifests, the old chart, the new chart, and the live state, and it creates a patch that merges the updates as best as possible. I'll give you two examples. First, imagine that you have installed a release of a chart. Then, someone else updates a ConfigMap with a kubectl command, represented by the small part in green here. Now imagine that you decide to install a new version of the chart.
By comparing the old chart, the new chart, and the live state, Helm is able to deduce that it should keep both the manual changes and the new chart updates, as long as they don't conflict. The result is a running instance combining both updates. That's what is called a three-way merge patch. This is very useful, for example, if you're working with Helm at the same time as other tools that inject Kubernetes objects themselves, like logging software or service mesh software. A second example. Imagine you update your chart, then a third party changes the configuration with kubectl. What happens if you do a rollback? Are those changes lost? Again, Helm 3 compares the three states and applies a patch update with a nice merge of both updates. Let's now talk about namespaces. In Kubernetes, you can group resources in virtual clusters called namespaces. By default, Helm installs Kubernetes objects in the default Kubernetes namespace. But if you specify it, it can install objects in other namespaces. In Helm 3, as I mentioned before, the configuration of your release is stored in the same namespace as your release. In this course, we will use the default namespace. But just keep in mind that you can use Helm with different namespaces, just like you do with kubectl. What about Helm 2? There is a big difference in the Helm architecture between Helm 2 and Helm 3. Helm 2 consists of two components: a client-side command line tool, helm, and a server-side component called Tiller. The Helm command line app communicates with Tiller using the gRPC protocol. The Tiller component runs inside a pod in your Kubernetes cluster and calls the Kubernetes HTTP API just like any other client. That Tiller component manages your releases and stores the Helm charts and installation history in ConfigMaps, by default in the kube-system namespace. But as you can imagine, the Tiller component needs a lot of rights to create, delete, and update Kubernetes objects.
For that reason, a Helm 2 installation had to be secured both in the cluster, by restricting Tiller's rights with a service account, and by encrypting the gRPC communication. Helm 3's architecture is simpler. In Helm 3, there is no more Tiller and no more gRPC communication. The Helm library simply uses the Kubernetes client to communicate with the Kubernetes API. So it uses the REST Kubernetes API and its security layer as any other Kubernetes client would do. And Helm 3 stores the release manifests inside the Kubernetes namespace as secrets. This was a short introduction, so don't worry if you find it a little bit abstract. We will go more into the practical Helm world in the next modules. In the module Building Helm Charts, you'll learn how to create a chart and how to use Helm to install an application in Kubernetes. In the module Customizing Helm Templates, you'll learn how to customize those charts so that you can reuse them in many cases. And, finally, in the last two modules, we'll see how to manage dependencies between charts and how to store or retrieve them from repositories. But first things first, let's install a local Kubernetes cluster with the latest Helm version for the demo.
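As an aside, you can peek at these release records yourself. This is a hedged sketch, assuming Helm 3's default Secret storage backend and a placeholder release name, myapp:

```shell
# Helm 3 stores each release revision as a Secret of type helm.sh/release.v1,
# labeled with owner=helm, in the release's namespace.
kubectl get secrets -l owner=helm

# The same history, viewed through Helm itself ("myapp" is a placeholder name)
helm history myapp
```

Because these records live in the cluster rather than on your workstation, anyone pointing a Helm client at the same namespace sees the same release history.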
Installing a Local Kubernetes Cluster with Helm
Discovering Lab Environment
Hi. This is Philippe. In this module, we are going to install this awesome tool named Helm. We need a Kubernetes environment to run Helm and run our demo. Here is the environment for our labs. Our Minikube Kubernetes cluster with one node runs in Docker. On the Kubernetes client side, we have two main tools, the minikube command line tool to start and manage the Kubernetes cluster and the kubectl command line tool to install and manage the Kubernetes objects. On the client side, we will, of course, also install Helm to package and manage our application in the Kubernetes cluster, and we will configure Helm to use the official Helm stable charts repository.
Installing Kubernetes and Kubectl
Let's install a local Kubernetes cluster with the minikube and kubectl command lines. If you already have such an environment on your computer or on a cloud provider, you can use the one you have and move on to the next section to install Helm directly. But if you don't have a Kubernetes cluster, I'll show you how to install one in this demo. So first, make sure you have Docker version 18.09 or higher installed on your host. If not, go to the Docker website and install it. Next, download minikube. Choose the link corresponding to your platform on the minikube site and download it. Install it to your local binaries directory and check its version. Then, start it with the minikube start command. By default, it should use the Docker driver. It can take a long time because it has to download images, start containers, and configure your Kubernetes cluster. This is indeed a Kubernetes cluster running in Docker. Then, install kubectl. If you don't have it already installed, download it, make it executable, and move it to your bin folder. Check that the client and server versions of Kubernetes are compatible. You can also check that the cluster is running with the minikube status command. Next, we add ingress support because we need it to access the demo. And finally, we configure the name resolution for the demo by first getting the cluster node's IP with the minikube ip command and then resolving the two domain names to that IP in the hosts file, one for the front end and one for the back-end API. A small note for macOS users. As the ingress is not exposed on the minikube IP, you'll have to run minikube tunnel and map the localhost IP to the two domain names in your hosts file.
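The steps above can be sketched as follows for Linux amd64. The download URLs and the Docker driver flag come from the minikube and Kubernetes documentation; the two demo hostnames are assumptions, so use whatever names your ingress rules actually declare:

```shell
# Download and install minikube, then check its version
curl -LO https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
sudo install minikube-linux-amd64 /usr/local/bin/minikube
minikube version

# Start a single-node cluster using the Docker driver
minikube start --driver=docker

# Download and install kubectl, then compare client and server versions
curl -LO "https://dl.k8s.io/release/$(curl -Ls https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl && sudo mv kubectl /usr/local/bin/
kubectl version --short
minikube status

# Enable the ingress addon and map the demo hostnames to the node IP
minikube addons enable ingress
echo "$(minikube ip) frontend.minikube.local backend.minikube.local" | sudo tee -a /etc/hosts
```

On macOS with the Docker driver, as noted above, run minikube tunnel and point the hostnames at 127.0.0.1 instead of the minikube IP.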
Installing Helm
Now that you have a Kubernetes cluster with ingress support and the kubectl command line tool, let's install Helm. First, we will install the Helm command line tool, and then we will configure it to use the official Helm stable charts repository. This is required because, since Helm version 3, there is no repository configured by default. Installing Helm is quite straightforward. Go to the Helm website, helm.sh, and search for the installation documentation. You can choose the installation file specific to your platform or simply install the binary. I recommend that you install the binary, as in the Docker and Kubernetes world, most tools are Go binaries. They are very lightweight and include all dependencies, so they run right out of the box. Copy the link to the Helm binary corresponding to your platform and download it. When it's downloaded, extract it. You should have a README and one Helm binary file. Copy the Helm binary file to the bin folder. And now if we run helm version --short, we can see that the Helm client is installed. But you might be wondering, in which Kubernetes cluster will Helm install the packages? Well, Helm is, in fact, using the same configuration as the kubectl command line, and kubectl config view shows that we only have one Kubernetes cluster, called minikube. And of course, the current-context refers to that unique cluster. So, Helm is going to install the packages to that minikube cluster. Now, Helm is installed. But by default, Helm 3 is not configured to use any repository. So, if you want to install existing packages, you have to add at least one repository containing some charts. Let's add the official Helm charts repository with the helm repo add command. We will examine the Helm repository commands in more detail later. As a small preview, let's jump ahead a bit into the course and install a MySQL demo in our cluster with the helm install command, using the stable/mysql chart from the official Helm repository. That's it.
We already have a MySQL database server running in our Kubernetes cluster. Great, isn't it? Don't worry if it went too fast. We are going to learn that stuff in detail in the following modules.
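The whole sequence can be sketched like this. The Helm version in the URL is an example; the stable repository has since moved to charts.helm.sh/stable, and demo-mysql is a placeholder release name:

```shell
# Download the Helm 3 binary, extract it, and put it on the PATH
curl -LO https://get.helm.sh/helm-v3.0.0-linux-amd64.tar.gz
tar -zxvf helm-v3.0.0-linux-amd64.tar.gz
sudo mv linux-amd64/helm /usr/local/bin/helm
helm version --short

# Helm 3 starts with no repositories configured; add the official stable repo
helm repo add stable https://charts.helm.sh/stable
helm repo update

# Install a MySQL release from the stable chart and list the releases
helm install demo-mysql stable/mysql
helm ls
```

Note that no Helm-specific cluster setup was needed: Helm reuses the kubeconfig that kubectl already uses.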
Cleaning Helm
Everything is installed. That's nice, but how can we uninstall and clean a Helm installation? There are several components to delete: the sample release we have installed, the Helm release's configuration stored in secrets, and, optionally, the Helm binary itself and its local configuration and cache files. So, let's see how we can clean what we did in the previous lab. As you can see, there are some Kubernetes objects installed in the cluster, including a pod, a service, and a deployment. And there are also some secrets in which Helm stores the configuration history. We could, of course, delete those Kubernetes resources by hand with kubectl delete commands, but it's not advisable to do so. Instead, we'll go a little bit further in the course and use the helm uninstall command. Now we can see that the pod is terminating and the release's configuration is no longer stored in secrets in the cluster. With this demo, we have shown that Helm is not very intrusive. It just stores some configuration secrets in the namespace of your application and deletes them when you uninstall it. Note that this is true only for Helm 3. Helm 2 was more intrusive, with a server-side component called Tiller. Also, note that Helm stores some configuration and cache data on the client side. If you are curious, you can find their location with the helm env command. To achieve a complete cleaning, you could also delete those directories and the helm binary.
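A sketch of that clean-up, assuming the placeholder release name demo-mysql and a binary installed at /usr/local/bin/helm:

```shell
# Remove the sample release; Helm also deletes its release-record secrets
helm uninstall demo-mysql

# helm env prints the client-side locations, including
# HELM_CACHE_HOME, HELM_CONFIG_HOME, and HELM_DATA_HOME
helm env

# For a complete clean-up, remove those directories and the binary itself
eval "$(helm env)"
rm -rf "$HELM_CACHE_HOME" "$HELM_CONFIG_HOME" "$HELM_DATA_HOME"
sudo rm /usr/local/bin/helm
```

The eval trick works because helm env prints its values as shell variable assignments; double-check the paths before running the rm commands.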
A Word About Helm 2
Before going further, let's have a word about Helm 2. As I said in the introduction, this course is about Helm 3, but many projects still use Helm 2, so it's worth knowing a little bit about how it can be installed. First, we'll see how to install Helm 2, and then how to configure Tiller's security. As we saw previously, Helm 2 is composed of two components: a command line tool, Helm, and a server-side component, Tiller, running in a pod in your Kubernetes cluster. If you want to install Helm 2, you have to download it and run the helm init command. That helm init command automatically installs Tiller in your default Kubernetes cluster. But that's not enough, because Tiller runs inside your Kubernetes cluster as a pod. That pod runs with some privileges, the privileges of a service account. By default, Tiller runs under the kube-system namespace's default service account, and that service account has the cluster-admin role. In other words, Tiller has all the rights to the whole Kubernetes cluster. If you are running in a dev or a secured and trusted environment, that's not an issue. But if you want to go into production with Helm 2, you'll have to restrict Tiller's rights by creating and configuring a Tiller service account. Moreover, because the communication between Helm 2 and Tiller is non-encrypted gRPC, you have to secure it with SSL certificates and keys on both sides. So, Helm 2 installation is not that complicated, but configuring Tiller's security needs some work and has to be done with care. With Helm 3, it's much easier, because we rely on the default Kubernetes client and its security layer. In Helm 3, there is no more Tiller, so no more Tiller security issues.
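For reference, a dedicated Tiller service account (Helm 2 only) is typically declared like this. This is a hedged sketch: it still binds cluster-admin for brevity, whereas a production setup would bind a narrower role:

```yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: tiller
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: tiller
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin   # restrict this to a narrower role in production
subjects:
- kind: ServiceAccount
  name: tiller
  namespace: kube-system
```

Tiller is then deployed with helm init --service-account tiller, and the gRPC channel is encrypted with the --tiller-tls family of flags documented for Helm 2.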
Summary
In this module, you learned how to install a Kubernetes environment along with Helm. You then learned how to clean everything up in the last section. And finally, we briefly discussed the Helm 2 installation, in which the security configuration for Tiller is a little bit trickier. In an earlier demo, I installed the guestbook application with kubectl and raw Kubernetes YAML files. Now that your environment is running, you can do the labs if you want. But in the next module, you are going to learn how to pack those YAML files in a Helm chart and install the same guestbook application with Helm. That's more interesting.
Building Helm Charts
Helm Chart Structure
Hi, this is Philippe. In this module, we're going to learn about Helm charts. We'll learn how to build a Helm chart and how to install a release of that chart. I mentioned in the introduction that a Helm chart is a package. As with any package, it's always interesting to open it and see what it contains. Here is the chart structure. We'll first have a preview of it and then go into more detail in the later modules. The chart is a folder that can also be compressed as an archive. By convention, the folder has the name of the chart. The chart properties are stored in a Chart.yaml file. In it you can find the chart name, chart version, and other metadata. We'll look at this file later. The chart folder has a templates subfolder. That templates subfolder contains your Kubernetes object definition files, so your YAML files. Why is that folder called templates, then? Well, it's rarely raw YAML files that are inside. Instead, there are customizable templates with placeholders that are replaced by values, sometimes using helper functions. We'll learn about that templating feature in detail in the next module, called Customizing Helm Charts. If your chart has subcharts or depends on external charts, you can either add them as archives in the charts subfolder or reference them as dependencies in the Chart.yaml file or in the requirements.yaml file. But note that the requirements.yaml file is only there for Helm 2 compatibility. It's still supported in Helm 3, but the recommended way to do it in Helm 3 is to add the dependencies in the Chart.yaml file. We'll examine this in more detail in the module named Managing Dependencies. How can you document a chart? The chart can be documented in a README markdown file. The LICENSE file, which is optional, of course contains the license of the chart.
And if you want to display some information to the user after your chart is installed or updated, for example, some useful information such as what to do next, the URL and port numbers of your services, or a quick how-to, this can be added in the NOTES.txt file. Finally, another component that could be considered part of the documentation is the values.schema.json file, which defines the structure of the values.yaml file. We'll talk about it in the next module. So that's the preview of the full chart structure: the Chart.yaml file with the metadata, what is related to the templates with YAML files shown in purple, what is related to the dependencies in orange, and what is related to the documentation in green. To be complete, let me mention two additional folders: the tests subfolder, which contains pod definitions used for testing, and the crds folder, which is used to create Kubernetes custom resource definitions. They are treated separately from other Kubernetes objects because they are installed before other Kubernetes objects and are subject to some limitations. Let's now go deeper into the Chart.yaml file. This file contains the name of the chart and an optional description. You can also add some keywords that are useful when searching for the chart in a repository. What is the type property for? As you will see in the next module, a chart can contain helper files that hold some logic functions that help to build a chart but do not create any Kubernetes artifacts. Sometimes you may want to have a chart that exclusively contains such abstract functions. In other words, a chart that would be a library of functions, functions that could be shared or reused but not used to create release artifacts on their own. In that case, you can tag your chart as a library with the type attribute. This is a new feature in Helm 3, and to be honest, we don't find a lot of library charts yet. Most of the time, you will tag your chart not as a library but as an application.
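Putting those pieces together, a complete chart folder looks roughly like this (the guestbook name comes from the course; your chart folder takes the name of your own chart):

```
guestbook/                  # the chart folder, named after the chart
  Chart.yaml                # chart metadata: name, versions, type, dependencies
  values.yaml               # default values injected into the templates
  values.schema.json        # optional schema describing values.yaml
  charts/                   # subcharts or vendored chart archives
  crds/                     # custom resource definitions, installed first
  templates/                # Kubernetes object templates (your YAML files)
    NOTES.txt               # message displayed after install or upgrade
    tests/                  # pod definitions used for testing
  README.md                 # chart documentation
  LICENSE                   # optional license file
```

Only Chart.yaml and the templates folder are needed for the simplest charts; the rest is added as the chart grows.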
There are also several properties related to versions. Be sure not to confuse the following. First, the apiVersion. This is the version of the Helm API, v2 for Helm 3, v1 if you are still using Helm 2. Be careful because it's not very intuitive. There is a shift between the apiVersion number and the Helm version number: v2 is for Helm 3. Next, the appVersion. This is the version of the application you plan to install with Helm. It can be any version number or string. Last but not least, the version is the version of the chart. It has to follow the Semantic Versioning 2.0 specification, with a patch, minor, and major number. Note that the appVersion and chart version are not related. You could have a new appVersion if your app changes but keep the same chart version because the chart structure and templates remain the same. Or you could have the opposite, the same application version but a new chart version because the chart files changed. Finally, the chart.yaml file also contains the dependencies configuration. We'll look at this in the module named Managing Dependencies. Now that we have an overview of chart structure, let's create our first chart.
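As a sketch, the version fields described here might sit together in a Chart.yaml like this (the names and numbers are illustrative, not the course's actual lab files):

```yaml
# Chart.yaml (illustrative sketch)
apiVersion: v2        # Helm API version: v2 means Helm 3, v1 means Helm 2
name: guestbook
description: A chart for the Guestbook application
type: application     # or "library" for a chart that only contains shared functions
appVersion: "1.0"     # version of the application being installed (any string)
version: 1.1.0        # version of the chart itself, following SemVer 2.0
```

Note how appVersion and version can evolve independently, as explained above.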
Demo: Building a Helm Chart
Globomantics DevOps want to learn Helm, so this time they install the application with Helm. They structure the YAML files into a Helm chart and use Helm to install a new release. Remember, the first version of the application was just a front end, storing messages locally. Globomantics DevOps need to install a pod, a service, and an ingress. All files needed for the demo can be found in my GitHub repository. First, they create a directory for the chart. Inside that directory, they add a Chart.yaml file. The apiVersion is v2 because they are using Helm 3. The name of the chart is simply guestbook, and it's the first version of the chart, so they set the version number to 1.0. As this is the first version of the guestbook for events application, the app version is 1.0, and the description refers to guestbook 1.0. Then, the DevOps create a templates directory inside the chart and copy the file definitions of the Kubernetes objects to that templates directory. If we look at those YAML files, we see that they are standard Kubernetes YAML file definitions, one for the pod, one for the service, and one for the ingress. That's it. Globomantics DevOps have created their first chart, one of the simplest charts you can make. They are ready to install that chart, but first we need to understand the concepts related to Helm releases.
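The chart built in this demo would have roughly this layout (file names are illustrative; the actual lab files are in the author's GitHub repository):

```
guestbook/                  # chart folder, named after the chart
  Chart.yaml                # apiVersion: v2, name: guestbook, version, appVersion
  templates/
    frontend.yaml           # the pod definition
    frontend-service.yaml   # the service definition
    ingress.yaml            # the ingress definition
```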
Defining Helm Concepts
In this section, we're going to define some important Helm concepts. The chart is the definition of our application. When the chart is installed in the Kubernetes cluster by Helm, we say that a release is running, so the chart is the definition of the application and the release is an instance of the chart running in the Kubernetes cluster. Usually we install one release of a chart, but in some cases you might need to install multiple releases of the same chart. For example, a dev and test release of the same application on different clusters, or two releases of a database in the same Kubernetes cluster. If you want to install two releases of the same chart on the same Kubernetes cluster, it's possible, but your Kubernetes objects must not conflict. For example, the name of each release's service must be different and the exposed ports should not be the same. This is why the official charts from the Helm repository are highly customizable. The names of the objects, for example, are all based on the release name. We'll see how to do that later in the course. If you made some change in your application and want to install it, you don't have to install a new release. Instead, you can update an existing release and make a new revision of that release. This is another important concept in Helm, the release revision. This is not considered a new release; it's a new revision of the same release. Don't confuse the release revision with the chart version that we saw previously in the Chart.yaml file. The chart version refers to a change in the chart's file structure, meaning a change in the application definition. For example, if there are new Kubernetes objects like a service account and a persistent volume, the chart structure changes, so the chart version should also change. 
On the other hand, a release revision refers to a change in the running instance of that chart, either because the chart itself changed and the release was updated, or simply because the chart did not change, but the same chart version is installed with different values. Now that you know the Helm architecture and concepts, and before using Helm to install our first chart, let's list the main commands we need in the next demo. Helm install installs a chart as a release. Helm upgrade upgrades a release to a new revision. Helm rollback rolls back a release to a previous revision. For example, if you find a bug and want to go back to the previous revision. Helm history lists the revision history of a release. Helm status displays the status of a release, which objects are installed, and their running status. Helm get shows the details of a release: manifest and current values. Helm uninstall uninstalls a release from the Kubernetes cluster. Note that in Helm 2 we used helm delete instead of helm uninstall. And finally, helm list lists all release names with some basic information. There are also some other commands more relevant to the next modules that we'll see later. If you are used to Helm 2, note that there are some small differences compared to the Helm 3 commands. First, when you install a chart, the name was, by default, auto‑generated in Helm 2. If you wanted a custom name in Helm 2, you had to specify it with ‑‑name. Conversely, if you want to generate a release name in Helm 3, you have to set it with the ‑‑generate‑name parameter. The Helm 2 helm delete command has been renamed helm uninstall, and by default it now purges the Helm history in the cluster. In Helm 2, we had to add ‑‑purge to get the same result. If you want to keep the Helm history in Helm 3, you have to use the ‑‑keep‑history parameter. A final difference in the helm commands is the helm get command. In Helm 2, it could be directly followed by the release name and would get all the information about the release. 
In Helm 3, you have to write helm get all to have the same behavior, or you can be more precise and get only the manifest, the values, or other things like notes and hooks.
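The Helm 3 commands reviewed above can be summarized like this (demo‑guestbook and guestbook are the release and chart names used in the upcoming demo):

```shell
helm install demo-guestbook guestbook   # install chart "guestbook" as release "demo-guestbook"
helm upgrade demo-guestbook guestbook   # upgrade the release to a new revision
helm rollback demo-guestbook 1          # roll back the release to revision 1
helm history demo-guestbook             # list the revision history of the release
helm status demo-guestbook              # show the status of the release
helm get all demo-guestbook             # manifest, values, notes, and hooks (Helm 3 syntax)
helm uninstall demo-guestbook           # remove the release (purges history by default)
helm list                               # list all installed releases
```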
Demo: Installing a Helm Chart
Now that they have all the Helm concepts and commands in mind, Globomantics DevOps will install the first version of guestbook for events, but this time with Helm. Then, they will upgrade, roll back, and delete the release. Installing the application with Helm is not a hard job. They go one directory above the chart folder and run helm install followed by the name of the release, demo‑guestbook, and the name of the chart, guestbook. That's it. One command, one line. Helm reads the chart and asks the Kubernetes API to create a release. And soon, the application is running. Globomantics DevOps can check this with the kubectl get pod command or with some Helm commands. Helm list gives the names of the installed releases. The name of this first release is demo‑guestbook, and they can get the release manifests with helm get manifest followed by the name of the release. It has a service and, as part of the deployment, a pod and an ingress. Finally, they can enjoy testing their first application installed with Helm. This is the guestbook for the Concert For Climate 1.0. Now a minor change occurred, and the Globomantics dev team built a new version, Guestbook 1.1. Let's see how DevOps install this new version. They open the chart file, change the appVersion to 1.1, and update the description. But they do not change the chart version because the chart is the same. In the pod definition, they change the image version to frontend:1.1. And to upgrade the release, DevOps choose to run helm upgrade demo‑guestbook guestbook, the name of the chart. And soon, the 1.1 version of the Globomantics guestbook for the Concert For Climate is running. DevOps can use kubectl to check that the new image is used. They can see that this is the second revision of the demo‑guestbook release. Here's the new version. If you refresh the browser, the version number has changed. Great. Now, there is a bug in this new version, and the Globomantics manager asks to roll back. This is quite easy with Helm. 
DevOps run helm rollback, the name of the release, and the revision number. In this case, they want to roll back to the first one. To get a history of all the changes, they can run helm history with the name of the release, and we can see that this is already the third revision of our release, one install, one upgrade, and one rollback to revision 1. Finally, if they have to delete the release, they could run helm uninstall name of the release, which will delete all Kubernetes objects and Helm release configuration from the Kubernetes cluster. As usual, if you want to do this lab, you can find the resources in my GitHub repository.
Demo: Building an Umbrella Helm Chart
Now remember, there was a more advanced version of the application, version 2.0. In this demo, we learn how to build a more advanced chart called an umbrella chart for that version. That new version has a front end, a back end API, and a database, so many more Kubernetes objects. Globomantics DevOps now want to install them with Helm. Let's first create the chart. If you want to follow this demo, all the files are in my GitHub repository. Globomantics DevOps have to do exactly the same job as they did for the front end. They have to create a chart for the front end, one for the back end, and one for the database. Then, those three charts are embedded into a wrapping guestbook chart. This can be done by moving them into the charts subdirectory. That kind of chart is commonly named an umbrella chart. You can do the demo yourself as self‑learning or watch the following recording. First, DevOps create a guestbook directory and, inside that directory, a new Chart.yaml file. As it is a new version of the application, the app version changes. And it's also a major change for the chart itself, so the major number of the version of the chart also changes. Inside the charts directory, they create a front end subchart. It has its own definition, referring to version 2 of the front end because the front end application changed. And as we already had a chart for the front end, the chart version also changes, to 1.1.0. They just copy the YAML files related to the front end to the templates directory and also the ingress definition. They do exactly the same for the back end. The back end is the first version as an application and as a chart. The back end files are copied, including a pod, a secret, and an ingress. And finally, a chart is created for the database. The app version here is aligned with the version of the MongoDB image used, and it is the first version of the chart. 
It contains a templates directory where all the MongoDB YAML files are copied, one for the pod, one to expose it as a service, and one for the secret containing the password. It also has two more files, a persistent volume and a persistent volume claim, to define the storage. Here is a quick review of the new chart structure. The main umbrella chart, guestbook, with its chart definition, contains three subcharts, one for the front end, one for the back end, and one for the database. All these subcharts contain their respective Chart.yaml file and some Kubernetes object definition YAML files. Now back to the root folder, as they are ready to install the new version of the application. First, they run helm list ‑‑short to see which releases are running. And now Globomantics DevOps are very excited because they can install the new version of the guestbook with just one command line, helm upgrade with the name of the release followed by the name of the chart. They can check that everything is up and running with kubectl or look at all the manifests of the installed Kubernetes objects with helm get manifest and the name of the release. We can see the secrets, the persistent volumes, the services, and if we go down, the deployments and the ingress. All Kubernetes objects are there, installed and running. Globomantics DevOps are happy. They test this version 2 of the guestbook for the Concert For Climate.
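The umbrella chart structure described in this demo would look roughly like this (a sketch; exact file names come from the author's GitHub repository):

```
guestbook/                    # umbrella chart
  Chart.yaml                  # new major chart version, appVersion 2.0
  charts/                     # subcharts live here
    frontend/
      Chart.yaml              # chart version 1.1.0, frontend app version 2.0
      templates/              # pod, service, ingress
    backend/
      Chart.yaml              # first version of the backend chart and app
      templates/              # pod, service, secret, ingress
    database/
      Chart.yaml              # appVersion aligned with the MongoDB image version
      templates/              # pod, service, secret, persistent volume, PVC
```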
Summary
In this module, you started by learning the chart structure. Then, you built your first chart for the guestbook application. We then defined some important Helm concepts, release, revision, and chart version and reviewed some Helm commands. With this knowledge, we were able to install, upgrade, roll back, and delete applications in Kubernetes with Helm. In the last module, we installed the guestbook application with kubectl and Kubernetes YAML files. In this module, you learned how to pack YAML files in a Helm chart, build an umbrella chart, and install the guestbook application with Helm. But we just copied the raw YAML files without any changes. All values are hard‑coded. In the next module, we'll build some templates with values that can be replaced and functions that add some logic. With these, your charts will not be hard‑coded any more, and they can be reused for other projects.
Customizing Charts with Helm Templates
Why Helm Templates?
Hi, this is Philippe. In this module, we learn how to customize Helm charts with Helm templates. First, we'll explain why we need Helm templates. Then we'll discover how the Helm template engine works, what it is based on, and when it runs. Later, we'll go through a couple of sections, some about the Helm template values and others about the Helm template logic. So first, let's find two good reasons why we need Helm templates. Remember what we did to release a new version of the application in the previous module? Well, it was not state‑of‑the‑art work for DevOps. We edited the frontend.yaml file by hand and changed the hard‑coded image version from 1.0 to 1.1. What do you think about this? Personally, I hate hard‑coding values that are supposed to be changed. Instead, they should be externalized and automatically replaced when we call the helm install command. That's exactly what using a Helm template aims for. Here is a second reason why we need Helm templates. Remember that we should be able to install two releases of the same chart on different clusters, on the same cluster, or even in the same namespace. But in the same namespace, the name of all the Kubernetes resources must be unique. So, if we want to install two releases of the same application, we need a way to generate unique names for each of the Kubernetes objects. The solution is to generate the names of the Kubernetes objects based on the Helm release name. For example, here are two Kubernetes service definitions for the front end. The service name is prefixed with the name of the release, one for our dev release, and one for our test release. To generate those names based on the release name, we need a tool. That tool is the Helm template engine. Of course, you may say, yes, but I can just install the releases in different namespaces or on different clusters instead. 
You are right, but the goal of a good Helm chart is to make it completely configurable and make sure it can be installed in any case without any name conflicts, even if it's installed as two different releases in the same namespace.
What Is Helm Template Engine?
Helm templates are processed by a template engine. You may have already used other template engines in IT projects. If you're from the system world, you often use a directive to inject environment variables' values into your shell scripts. This can be considered a kind of template, even if there is no rendering. As a web developer, you often use directives to display data in HTML pages. Solutions are available in many languages, including PHP, JSP, ASP, and the Express view engine. And finally, as a lazy but smart developer, you may have used a code generator, like Velocity from Apache or the JavaScript Yeoman code generator, or, more recently, Go templates. The principle is always the same. You insert directives in your code. The directives are distinguished from the rest of the code with some characters, by convention, percent signs or curly braces. And those directives are replaced by values or execute some code when they are processed by the template engine. For example, here's how the Go template engine works. In a template, you place some directives between curly braces. Just for the record, that convention is called the mustache syntax because if you rotate it by 90 degrees, you get a mustache. When the template engine runs, those directives execute code or are replaced by values set in objects. Here, the .name directive is replaced by the value of the name property of the object, which is myservice. The result is a manifest where the directive has been replaced by the value of the name property. If you are interested in using Go templates, you can have a look at the Go template documentation or see some examples in my GitHub repository in the Go‑Template project. The Helm template engine is actually based on the Go template engine. The difference is that the values used to replace the directives can come from different sources. 
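Since Helm's engine is built on Go's text/template package, the .name example above can be reproduced in a few lines of plain Go. This is a sketch of the mechanism, not Helm code; the render helper is my own illustration:

```go
package main

import (
	"bytes"
	"fmt"
	"text/template"
)

// render executes a tiny Go template against a data map, mimicking
// what a template engine does: directives between curly braces are
// replaced by values from the data object.
func render(tmplText string, data map[string]string) string {
	tmpl := template.Must(template.New("demo").Parse(tmplText))
	var out bytes.Buffer
	if err := tmpl.Execute(&out, data); err != nil {
		panic(err)
	}
	return out.String()
}

func main() {
	// The {{ .name }} directive is replaced by the "name" property, myservice.
	fmt.Println(render("name: {{ .name }}", map[string]string{"name": "myservice"}))
}
```

Running this prints `name: myservice`, the same substitution the slide describes.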
Some values are defined in a values.yaml file, and some are predefined data that are, for example, in the chart definition or part of the release runtime metadata. We'll look at all those values in more detail later on. The Helm template engine also provides additional functions, some from the Helm project itself and some that are part of the Sprig project. Except for those add‑ons, the Helm template engine works the same as the Go template engine. In fact, it is the Go template engine. Not surprising if you know that Helm is written in Go. And the process is the same. The template contains directives, and those directives are executed or replaced by values to generate a manifest. But where and when does the Helm template engine run? It runs on the client side. When you launch the helm install or helm upgrade command, before sending the file definitions to the Kubernetes API, Helm first processes your templates with the template engine, which executes the directives or replaces them with values to create a manifest. Then Helm sends the result to the Kubernetes API. Note that this is also true for Helm 2. Even though Helm 2 has a Tiller server‑side component, the execution of the Helm template also happens on the client side, so the template remains on the client side. It's not stored in the Helm secrets on the server side. That means that you have to version or back up your template files somehow with a versioning system like Git, for example. Helm doesn't store a history of the templates on the server side. It only stores a history of the processed templates, so the manifests, in some secrets in the Kubernetes server. But for information and debugging purposes, Helm also stores the values that have been used to generate the manifest so that you can check the current values for a given release. Just for your information, let me show you where Helm hides the data. The manifest files and values are stored in Kubernetes secrets as a Base64‑encoded gzip archive. 
You can try to decode it by hand. Let's do it for fun. The Helm release name is test‑demo. If we look at the secrets in our Kubernetes cluster, we can see one secret that has been created by Helm for the test‑demo release. And with this tricky long command, which gets the data of the secret, decodes it twice from Base64, and unzips it, you can get the content. Those are manifest files encoded in JSON and also their values. This is the hard way, and it's just for instructive purposes. In practice, if you want to get the values or the manifest of your release, you can use the helm get manifest or helm get values command. So we learned that a Helm template is executed when the chart is installed, but is there any way to test our templates before installing the chart? Yes, there are two ways, a static one that can run offline without a Kubernetes cluster, helm template followed by the name of the chart, and a dynamic one, helm install with two options, ‑‑dry‑run and ‑‑debug, which makes some requests to the Kubernetes API like a normal installation, but asks it to not actually commit any changes. It's called a dry run. And the debug flag allows you to see the result of the template engine execution in the console. There are some differences between the two. The static method works locally and does not contact the Kubernetes API, so it has fewer features, such as generating release names and some runtime checks. I would suggest using the static method in the first stages of your development and the dynamic one later when you want to test in more detail against the real cluster. Note that the dynamic method's debug parameter outputs to standard error, so you have to redirect as shown on this slide.
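A reconstruction of that "tricky long command," assuming the release is named test‑demo and this is its first revision (the exact secret name depends on your release name and revision number):

```shell
# Helm 3 stores release data in a secret named sh.helm.release.v1.<release>.v<revision>.
# Get the data, decode it twice from Base64, and unzip it:
kubectl get secret sh.helm.release.v1.test-demo.v1 \
  -o jsonpath='{.data.release}' | base64 -d | base64 -d | gunzip

# The supported way to read the same information:
helm get manifest test-demo
helm get values test-demo

# And the two ways to test templates before installing:
helm template guestbook                                        # static, offline
helm install demo guestbook --dry-run --debug 2>&1 | less      # dynamic; debug goes to stderr
```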
Playing with Helm Template Data
So, the template contains directives that are replaced by values or that execute code. First, let's focus on the values. Which data are available in the Helm template? Values for templates can be supplied in different ways. They can be defined in the values.yaml file located at the root of the chart directory or in any other YAML file, but in that case, you have to set it with the ‑f parameter. Finally, you can also set custom values on the command line with ‑‑set name=value. Note that when the user sets a custom value, that value overrides the values defined in the chart's values file. Those values are organized in a nested way, and you can access them with .Values; dot refers to the root, and Values to the values data. Then add .property to access a child, add .subProperty for a grandchild, and so on. You can also set the value of a child property directly by separating parent and child properties with a dot. For example, here we set the name property of the service property. Note that values can also contain arrays or objects. Here, the multiple labels are part of an array, an array of maps. The key of the first and unique element of each map is name. So you can set the value of that element with the following syntax: setting the first element, then the map key. When there is structured data in IT, there is usually a way to define that structure. That's called a schema: database schema, XML schema, and so on. Here, we would also like to define the structure of the Helm values. There is a way to do this. Every YAML file can be written in a JSON format. For example, the YAML file on the previous slide can be written as this JSON file. And for this JSON file, there is a way to define the structure, which is the JSON schema. You can find the full specification of JSON Schema at the following address. Here is an example of a JSON schema that defines the structure of our JSON file. 
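The nesting and override mechanics described here can be sketched like this (property names are illustrative):

```yaml
# values.yaml -- nested values
service:
  name: myservice
  port: 80
  labels:
    - name: frontend      # an array of maps, each with a "name" key

# In a template:   {{ .Values.service.name }}        accesses a child property
# From the CLI:    --set service.name=other          overrides the values file
# Array element:   --set service.labels[0].name=web  first element, then the map key
```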
As you can see, there is a service object that contains three required properties, type, name, and port. If we look further down in the schema file, there is also one non‑required property named labels. That labels property is an array of objects that have a property called name. That schema has to be stored at the root of your chart in a file named values.schema.json. What is it useful for? The advantage of this schema is that it allows Helm to validate the values.yaml file. The validation occurs each time you call helm install, helm upgrade, or helm template. Helm validates the structure and the types, and it also validates the required values. For example, if you remove the port property from the values file and run helm template, you get the following error message: service: port is required, because that port property is defined as required in the JSON schema. Another example, if you put a string for the port number, you also get an error message, Invalid type. Expected: integer, because according to the schema, the port value type must be an integer. Note that the schema feature is only supported since Helm version 3. As said in the beginning of the module, data can come from other sources than the values file. They can come from the chart file. Note that in this case we access the data with .Chart and not .Values. And also note that the first letter of the chart's property is in uppercase in the template. Or they can come from the release's runtime data and be accessed with .Release. There you can get the release name, revision number, and other useful data. You can also get data about the Kubernetes cluster with .Capabilities. It can be useful if you want your Helm chart to behave differently depending on the Kubernetes version. You can also include the content of files in your template with the .Files object. Note that the file path is relative to the root of your chart and that the files cannot be located in the templates directory. 
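A values.schema.json matching the description above might look like this sketch (property names follow the service example; the exact course file may differ):

```json
{
  "$schema": "https://json-schema.org/draft-07/schema#",
  "type": "object",
  "properties": {
    "service": {
      "type": "object",
      "required": ["type", "name", "port"],
      "properties": {
        "type": { "type": "string" },
        "name": { "type": "string" },
        "port": { "type": "integer" },
        "labels": {
          "type": "array",
          "items": {
            "type": "object",
            "properties": { "name": { "type": "string" } }
          }
        }
      }
    }
  }
}
```

With this file at the chart root, removing port or setting it to a string makes helm template fail with the error messages quoted above.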
And finally, you can access some data about the template itself, such as its name. Here is a simple Helm template example using different data sources. That template has several directives, two directives to replace data from the Values, service.type and service.port, and two directives to build the service name from the release name and the chart name by using the .Release and .Chart built‑in objects. So that, if we have two releases of the same chart, the service name is going to be different. Note that the label is also based on the chart name, and the selector matches that label. The manifest should be the following file, based on the values that are in the values.yaml file and the name of the release and the name of the chart coming from built‑in objects. In all the following slides, the same color convention will be used: the template in orange, the values in blue, and the manifest, which is the output of the Helm template engine, in green. That output is usually a Kubernetes object definition that is also called a manifest. Now, what about the values in the case of an umbrella chart? As a reminder, an umbrella chart is a parent chart containing sub‑charts. Note that we also have parent and sub‑charts when charts depend on each other. We'll see that in more detail in the next module. Keep in mind that every sub‑chart can be used as a standalone chart or as a sub‑chart. So each sub‑chart contains its own values.yaml file, which contains the default values for that chart. The parent chart also has a values.yaml file with its own properties, but it can override the values from a child chart under a property that has the name of that chart. Here, for example, we'll override the MongoDB username and password properties of the back‑end chart. The way to do this is by adding a back‑end property in the parent chart, and nested in that back‑end property, we redefine the MongoDB secret property of the child chart. 
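A sketch of the service template just described, combining built‑in objects and custom values (labels and selectors are illustrative):

```yaml
# templates/service.yaml (sketch)
apiVersion: v1
kind: Service
metadata:
  name: {{ .Release.Name }}-{{ .Chart.Name }}   # unique name per release
  labels:
    app: {{ .Chart.Name }}                      # label based on the chart name
spec:
  type: {{ .Values.service.type }}              # from values.yaml
  ports:
    - port: {{ .Values.service.port }}          # from values.yaml
  selector:
    app: {{ .Chart.Name }}                      # selector matches the label
```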
In fact, internally, Helm merges all those values into one single entity. If you are curious, you can have a look at the values compiled by Helm. Run helm get all and the release's name, and you'll see that Helm computes a set of values containing values from the parent chart and its child charts. But what is that global property? One property name is reserved: global. A global property, when defined in a parent chart, is available in the chart and all its sub‑charts. It can be accessed with the same .Values.global directive whether you are in the parent or sub‑chart template. This is a convenient way to declare a common property for a parent chart and all its sub‑charts. Note that the global property will be passed downward from the parent to the sub‑charts, but not upward from a child chart to the parent chart.
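The override and global mechanisms just described might look like this in a parent chart's values.yaml (the backend property name must match the subchart's name; values are illustrative):

```yaml
# Parent chart's values.yaml (sketch)
backend:                    # overrides values of the "backend" subchart
  secret:
    mongodb_username: admin # redefines the subchart's own defaults
    mongodb_password: changeme
global:                     # available as .Values.global in the parent
  environment: dev          # AND in every subchart, but never passed upward
```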
Demo: Customizing Frontend Chart Values
Globomantics DevOps are now big Helm fans and already plan to reuse their charts for other applications. For that reason, they customize their Helm chart templates so that they are reusable. Globomantics DevOps edit their chart. First, they customize the front end. Let's start with the config map. As you can see, there are hardcoded values in this manifest. The name of the config map is static, and the config data are hardcoded. If DevOps want to install the chart as several releases, they need to make that name dynamic rather than static. A solution to make it unique is to base it on the release name and the chart name. So, Globomantics DevOps replace it with the Release.Name, dash, the Chart.Name. Like this, they are sure that the config map has a unique name among all the releases' config maps in the Kubernetes namespace. Next, to make the chart reusable, they externalize the values to a values.yaml file. Here is how to do so. First, create the values.yaml file. Then, in that file, add a config object with two properties, guestbook‑name and backend‑uri. Note that the template properties do not support the dash, so we replace it with an underscore. Then, back in the config map definition, replace the hardcoded strings with the directives that will generate the values from the values file. The first one can be accessed from the root, .Values.config.guestbook_name, and the second one from .Values.config.backend_uri. As you can see, in this template, we have properties from the built‑in values, Release and Chart, and some custom values from the values.yaml file. The other templates can be updated the same way. The front end also contains the hardcoded string frontend for the deployment, the labels, and the container. Let's replace it with a dynamically generated name. Again, the name of the release, dash, the name of the chart. And we can replace the front‑end string with the same generated name anywhere it is needed in the file. 
For the label, not for the image, but for the container name and the reference to the config map that we have just changed. There are other hardcoded values Globomantics DevOps would like to externalize, for example the replicas number, so that they can scale the application easily. So, let's create a replicaCount value in the values.yaml file and use a directive to replace it in the template. Also, they would like to change the image easily if a new version of the application has been deployed. The image name has two parts, the repository and the tag. So, let's create an image object in the values.yaml file with two properties, the Docker Hub repository, phico/frontend, and the tag, 2.0. Note that the tag must be a string. If it's a number, the .0 would be removed by the template engine. And again, two directives are used to replace those values in the template. That way, if the Globomantics dev team releases a new version of the application, DevOps do not have to edit the deployment file anymore. They just change the image tag in the values.yaml file and run an upgrade. What's next? The service. The service also has a hardcoded front‑end name that can be replaced the same way with our dynamic name, Release‑Chart. The port number is hardcoded, and it might change in the future. So, it has to be externalized to the values.yaml file. DevOps would like to be able to change the service type to NodePort when they are in a development environment instead of the default ClusterIP. So, they add a service object with a port property and a type property in the values.yaml file. And they replace the values with directives in the template, one for the port and one for the service type. Last, the ingress. As you can see for now, we have one ingress for both the front end and the back end. This is not a good design. A chart should be standalone and should not depend on other charts. So, DevOps decide to split it between the back end and the front end. 
They cut the part corresponding to the back end and paste it into a new ingress in the back‑end chart, and they only keep the part related to the front‑end chart. The ingress also has a hardcoded string. Let's change this with a dynamic name. And the host name is a variable that could change. So, an ingress object with a host property is added to the values.yaml file, and a directive is used to inject that value into the template. That's it for the front end. DevOps have achieved the first step of a template build, which is to replace hardcoded values. In the next module, they will add some logic to the template with functions. But before that, they have to do the same job for the back end and for the database. We are not going to follow along because it's quite long, but you are free to try it. The initial resources are in the lab7 begin folder, and the result is in the lab7 final folder. When this is done, DevOps first check the templates with the command helm template, name of the chart. It prints the manifests built by the template engine. We see the secret and the config maps. Notice that the name is the concatenation of the release name and the chart name. RELEASE‑NAME is the default name used by the helm template command, which, as a reminder, is a static template rendering, not calling the Kubernetes API. All resources are generated. The persistent volumes, the services, and notice that, in the deployment, the image is based on the repository and tag coming from the values.yaml file. DevOps can run a second check with helm install ‑‑dry‑run ‑‑debug. Notice that they now have more data, including debug data where you can find bugs in your template, computed values as they are seen by the template engine, and the generated manifest. Notice that the release name is now demo‑guestbook. If everything is okay, DevOps can run helm install without a dry‑run to install the actual release. All the resources are being created. 
And if we wait a little bit, we can check that the services are available and that the pods are running. There is an error with the back end. Let's look at the minikube dashboard to analyze this. The back end is failing. Let's check the logs. MongoDB not found. Ah yes, we get it. Now, the database service name is dynamically generated based on the release name, so it's no longer mongodb, the hardcoded host name I used in the MongoDB URI in the back‑end secret. We'll solve that issue in the next demo.
Adding Logic to Helm Template
Our templates have not been very clever so far. They have just replaced directives in templates with values to produce manifest files. But they can be much more clever if they use functions and logic. In the following sections, you'll learn how to use functions and pipelines, how to modify the scope with the with function, and how to control whitespace and indentation. Then we'll list the logical operators and use them in flow controls. Finally, you'll learn how to use variables, and we'll conclude by examining modularity with helper functions and sub‑templates.
Using Functions and Pipelines
In this section, you'll discover functions and pipelines. They are two different syntaxes to achieve the same goal, namely, run simple logic in your template. With the function syntax, you write the function name first, then the argument. For example, quote value is a function that puts the value in quotes. In fact, you can see that this syntax is similar to a function call in any other language, but without the parentheses. Pipelines work the opposite way. You write the value first, and that value is passed from one pipe to another. A pipe applies a transformation to the value, which is equivalent to the function's implementation. For example, here the value is passed to the quote pipe. The result is the value in quotes. In fact, you can see that it works exactly the same as a Unix shell pipe. Both functions and pipelines can be used with more than one argument. For functions, you separate the arguments with spaces. For example, here the default function takes two arguments, a default value and a value. If the value is null or empty, the default function returns the default value. This is equivalent to a function with multiple arguments in many other languages, where arguments are separated by commas. With the pipeline syntax, the last argument becomes the value passed to the pipe, and the others follow the pipeline name. Here's how to call the same default function with pipeline syntax. The advantage of pipelines over functions is that they can be chained easily. For example, here is a chain of pipes that turns a value into uppercase and puts it in quotes. Here is a pipeline example using default. It generates a default value if the value does not exist. If there is a value for the service's name property, the output is that value; otherwise, the output is the chart name, which is the default value. Where is the list of available functions and pipelines? Some are built into Go's text/template package, but there are not many of them. 
Most of them come from the Sprig project, and the Helm project also brings a few add‑ons. Note that you cannot build your own custom template function as you can with Go templates. This is a limitation of Helm, but it's not a big issue because there are a lot of functions available, and they can be combined in helper functions to make more advanced ones. Here are the main functions available in Helm templates. As I mentioned in the introduction, most functions are from the Sprig project. The Sprig project has many more functions, and you can, of course, use them in your Helm templates. But here I have only listed the functions most commonly used in Helm. You have here both the function syntax and the pipeline syntax. We already saw the default function before. The quote function or pipeline puts the value in quotes, upper in uppercase, lower in lowercase. Trunc truncates a value to a number of characters. Trunc 63 is often used in Helm charts because Kubernetes names are limited to 63 characters. This is a nice way to avoid names that are too long, but sometimes it may cut a long name just before a dash. So you can use trimSuffix with a dash as the value to remove it. If you want to store some passwords in Kubernetes secrets, b64enc is useful to encode them in Base64, and those passwords can be generated beforehand with the randAlphaNum function. Another function that you might see quite often is toYaml. It's used to copy a YAML snippet to the template. Most often, it's used to generate Kubernetes annotations. Finally, the Go template function printf is available to output a formatted string. Note that it's not used with pipeline syntax, but Sprig provides other functions to build strings, for example the list function followed by a list of strings that are joined with the join pipe. An example of that technique is in the lab. Here is an example of trunc and trimSuffix. 
As you can see, the trunc pipeline truncates the very long name of the service, and trimSuffix ensures that there is no trailing dash. Including a password in a secret and putting it in quotes can be achieved with two chained pipes, a Base64 pipe chained to a quote pipe. Here is the secret file resulting from that template and the following password value.
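Putting the functions from this section together, here are a few hypothetical one‑liners; the value names (name, env, longName, podAnnotations) are invented for illustration:

```yaml
name: {{ .Values.name | default .Chart.Name | quote }}      # fallback value, then quotes
env: {{ .Values.env | upper }}                              # lowercase -> uppercase
host: {{ .Values.longName | trunc 63 | trimSuffix "-" }}    # stay under the 63-char limit
password: {{ randAlphaNum 10 | b64enc | quote }}            # random value, Base64-encoded
uri: {{ printf "%s-%s" .Release.Name .Chart.Name }}         # formatted string (function syntax)
annotations:
{{ toYaml .Values.podAnnotations | indent 2 }}              # copy a YAML snippet, re-indented
```

Each line is a chain: the value on the left of a pipe becomes the last argument of the function on the right.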
Modifying Scope with "With"
We saw that values are organized in a nested way. Sometimes you may want to work with a subset of the values without repeating the complete path from the root to the value every time. Here is how to do it using scopes. Without specifying the scope, our template looks like this. Each property is accessed from the root value, and you have to repeat the full path in each directive. By defining the scope with the with function, you can restrict the scope to the service property. And from there, all properties are accessed relative to that service property without specifying the parent path. The manifests generated by the two templates are the same, well almost the same. There is one more difference because the with and end directives generate additional carriage returns that are found in the manifest file. This is an issue, but don't worry because the solution is in the next section.
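The two equivalent templates can be sketched like this, assuming a hypothetical service value with type and port properties:

```yaml
# Full paths from the root:
spec:
  type: {{ .Values.service.type }}
  port: {{ .Values.service.port }}

# Scope restricted with "with"; paths are now relative to .Values.service
# (note that the with/end lines leave extra blank lines in the output):
spec:
  {{ with .Values.service }}
  type: {{ .type }}
  port: {{ .port }}
  {{ end }}
```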
Controlling Space and Indent
Here we learn how to control whitespace, carriage returns, and indents. So here is our template with the with scope function. Here I showed the additional carriage returns and where they are found in the manifest. To solve that issue, we can remove one carriage return with a dash at the beginning of the directive. In fact, we could also remove the carriage return by adding a dash at the end of the directive. This is more logical, but as we have two carriage returns, one after the spec line and one after the with line, removing the first one has the same effect. The dash removes all spaces and carriage returns before the directive if it's located at the beginning of the directive, and all spaces and carriage returns after the directive if it's placed at the end of the directive. Let's consider this second example. Here we would like to insert a port number and use a with function. This time, we have to remove three carriage returns so that the port is next to the port label. So we'll add three dashes, one at the beginning and one at the end of the with function and one more at the beginning of the end directive. But be careful not to add a fourth one because if you add one more dash, the output will be wrong. It looks like this. Note that by default, all the indentation from the template is preserved. But if for some reason you want to modify the indentation, you can do it with the indent function. Let's consider this example. It's not very useful, but it's explicit enough. A property named tcp contains the string value protocol: TCP. And we want to generate it in the manifest. Without indentation, the manifest is not what we are expecting. The indentation is wrong. The protocol is not aligned with the other port properties. To solve that issue, we can use the indent function to align the protocol property with the other properties. We could also, of course, have indented the directive in the template without using the indent function. 
But in some cases, the indent function might still be needed, for example if you are using dashes to remove carriage returns because, as we said in the previous slide, the dashes also remove spaces, so they have an impact on the indentation. Some functions are inherited from the Go template package. One of them is often used in Helm templates, specifically the printf function. It generates a formatted string with some values. Here we use it to print a string that consists of the release name and the chart name separated by a dash.
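The two techniques can be sketched as follows; the tcp value holding the string "protocol: TCP" is the example from the slide:

```yaml
# A dash eats the whitespace and carriage returns on its side of the directive:
spec:
  {{- with .Values.service }}
  type: {{ .type }}
  {{- end }}

# indent re-indents an injected value so it aligns with its siblings;
# here "protocol: TCP" ends up aligned under the port properties:
ports:
  - port: 80
{{ .Values.tcp | indent 4 }}
```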
Logical Operators and Flow Control
The Helm template, of course, also allows you to compare and combine values. In other languages, there are operators for that purpose. But in Helm templates, operators are functions. You can compare values with equal, not equal, greater than, and lower than functions. And you can define logical expressions with or, and, and not. For all of them, note that the syntax is the function name followed by the two values to be compared. Here are some examples of logical operator usage taken from real Helm templates in the stable Helm charts repository. A combination of and and or: if adminEmail exists and either serviceAccountJson or existingSecret exists, then the content is rendered. This is taken from the OAuth2 proxy chart in the deployment.yaml file. Another example here uses the empty function to check that a list of values is empty. The not negates the result of the empty function. It's taken from the nfs‑server chart. And, finally, another combination of or and and from the grafana chart that I let you discover. Most of the time, operators are used in conditions. Let's learn now how to control flows in Helm templates with conditions and loops. Here is the syntax of the conditions directive in Helm templates. The if function contains the value to evaluate. It is terminated by the end directive, and it can contain an embedded else or even other nested if directives. If the evaluated value is true, the inner content is rendered. A common method to make some Kubernetes resources optional is to evaluate a property named enabled in an encapsulating if directive as shown in this example. To loop over a list of values coming from a YAML array, you can use the range function terminated by an end directive. Note that the scope inside the range is restricted to the values you are iterating on. Here, the first range loops on the hosts items and is scoped to the host items. So all evaluations in the range are done relative to the content of the host items. 
That's why we have relative paths like .hostname to access the properties of the different hosts. And the embedded loop iterating on the path items is restricted in scope to those path items. You can pause the video. I'll let you analyze this example and how the template loops on the arrays defined in the YAML file to generate the following manifest file. Because the value is scoped to the range, you might be wondering, how can I access the parent's values when I am inside a range? The solution is in the next section.
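The two flow‑control directives can be sketched like this; the enabled flag and the hosts/paths arrays are hypothetical value names chosen to match the description above:

```yaml
# An optional resource, rendered only when ingress.enabled is true:
{{- if .Values.ingress.enabled }}
apiVersion: networking.k8s.io/v1
kind: Ingress
{{- end }}

# Nested loops; inside each range the scope is the current item,
# so .hostname and .paths are relative to the current host:
rules:
{{- range .Values.ingress.hosts }}
  - host: {{ .hostname }}
    http:
      paths:
      {{- range .paths }}
        - path: {{ . }}
      {{- end }}
{{- end }}
```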
Using Variables
Using Variables. When do you need variables? You can use variables to store some data and organize your code like you do in any other language. But they are especially useful as a workaround for scope restrictions. Inside a with or range directive, you cannot access values from the root as shown in this example with .Values. This syntax does not work because the scope is restricted inside the with, so all references are relative to that scope. And in our range example, you cannot access the host item properties like .hostname from the inner loop iterating on path. You also cannot access the release properties from the Release built‑in object. To get around this, you can define a variable before the with or range directive. Prefix the variable name with the dollar sign, followed by a colon‑equals sign (:=) and the value to assign to the variable. That variable is accessible anywhere in the scope where it is defined. So you can refer to it inside the with function. That's how you can bypass the scope restriction of the with function. Same for the range. We can define a temporary variable with the host item. That variable can be used inside the subrange. And for the release or chart data, you can use the dollar sign. The dollar is called the global variable. It refers to a built‑in variable that allows you to access the root data. Note that the common practice consists of declaring the variable inside a range directive as done here.
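A sketch of the variable workaround; the value and variable names here are invented for illustration:

```yaml
# Capture an outer value in a variable before narrowing the scope:
{{- $chartName := .Chart.Name }}
{{- with .Values.service }}
name: {{ $chartName }}-{{ .name }}
{{- end }}

# Common practice: declare the loop variable in the range itself,
# and use the global $ to reach the root from inside the loop:
{{- range $host := .Values.ingress.hosts }}
  - host: {{ $.Release.Name }}.{{ $host.hostname }}
{{- end }}
```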
Calling Helper Function and Sub Templates
As a smart and lazy DevOp, you always try to reuse your code. Let's imagine that the logic needed to build the label becomes more and more complex and ends up looking like this. You don't want to copy‑paste that big piece of code over and over again in your templates. The way you can reuse code in Helm templates is by using sub‑templates, also named helper functions. Helper functions are Helm snippets that are located in helper files. So the code is copied in that _helpers.tpl file and wrapped with a define function. The define function takes the name of the sub‑template as argument. Be aware that sub‑template names are global, so to guarantee that the name is unique, it's recommended to prefix that name with the name of the chart. It can be useful, for example, in the case of an umbrella chart to avoid a conflict between functions defined in the parent chart and the sub‑charts. When that sub‑template is defined, you can reuse it anywhere in your chart with an include function, which takes, as arguments, the name of the sub‑template and the scope. The scope is the default scope that is used as the root in the sub‑template. Here we pass the root object of the template, but we could pass a more restricted scope. Where do you store that helpers file? In the templates directory. Why isn't it processed by the Helm template engine to generate a manifest then? The answer is because it is prefixed with an underscore. Files prefixed by an underscore are not rendered as Kubernetes objects. In fact, you could put functions in any files prefixed with an underscore, but by convention, the Helm community often uses _helpers.tpl files. By the way, if you want Helm to completely ignore some files that are in your chart directory, you can add their name with or without wildcards in a .helmignore file in the root of your chart. 
But now, imagine that you want to create a chart that contains only sub‑templates like this, a chart that would not create any Kubernetes manifest, rather an abstract chart that only contains functions that could be shared and reused by other charts. Using Helm 3, there is a way to do this other than using _helpers files. You can tag the chart as a library. A chart that has the type property set to library in the Chart.yaml file will not render any of its templates. So this is a chart that is used only to define sub‑templates that can be reused and shared. It's not used to create Kubernetes objects on its own. For your information, Go templates also have a sub‑template feature. You could perfectly well use the Go template syntax with the template directive, but there is a subtle difference between the two directives. A common need in Helm templates is to indent the output of the helper functions. To achieve this, you need to pass the output to an indent pipeline. But because the template directive has no output, there is no way to pass the output of a template function to the indent function or any other function. This is the reason why the Helm team introduced an include directive that returns an output. With the include directive, you have the same behavior as with the template directive, plus the possibility to indent your code. Last, but not least, the NOTES.txt file. As I mentioned in the beginning of the course, this is a nice way of documenting your chart. Each time a user installs your chart, the content of that file is printed in the console. And, of course, that file is also a template so you can build its content dynamically. For example, you can display the list of URLs to access your application as shown in this example. It loops on the hosts items defined in the values.yaml file and builds a list of URLs that can be used to access the application.
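A sketch of a helper definition and its reuse; the chart name frontend and the helper name are assumptions, not the exact course files:

```yaml
# templates/_helpers.tpl
{{- define "frontend.fullname" -}}
{{ printf "%s-%s" .Release.Name .Chart.Name | trunc 63 | trimSuffix "-" }}
{{- end -}}

# Any template in the chart can then call it, passing the root scope:
metadata:
  name: {{ include "frontend.fullname" . }}

# Chart.yaml of a pure library chart (Helm 3): none of its templates render
type: library
```

Because include returns its output, the result could also be piped to indent, which is exactly what the template directive cannot do.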
Demo: Adding Template Logic
In this demo, Globomantics DevOps upgrade and improve the chart with some functions. When first looking at the backend deployment template, you'll notice that the name built from the release name and the chart name is used in many places. If it has to change, that means you have to change it everywhere. It would be better to externalize it. So first, DevOps create an _helpers.tpl file inside the templates directory. They open that file, copy the code snippet, and embed it in a define directive with a name. Keep in mind that this name is global to the parent chart and all sub‑charts, so, to avoid any conflict, they prefix the name with the name of the chart. Then, that code snippet can be included in the templates by substituting it with the include directive, which takes two arguments, the function name and the scope. Now DevOps can freely change the content of the function and the new implementation is automatically going to be used in the templates. For example, they can add an if/else directive to allow the user to override the fullname in the values.yaml file. And they also choose to use the printf formatting function with two arguments. The result is transformed with two pipes, one to truncate the fullname if it's longer than 63 characters, and one to trim the trailing dash if the truncated name ends with a dash. This complex logic can now be reused in all the other backend templates. We substitute the include directive in the service.yaml, in the ingress, and in the secret. Now let's come back to the bug we had in the last demo. The backend cannot access the database because the database service's name now depends on the release name, and if we look at the decoded MongoDB URI, it looks like this. The host name is a hard‑coded MongoDB. Globomantics DevOps are going to solve this issue by dynamically building this URI with the release name. First, they split the URI data into a username, a password, a chart name used to define the host, and a port and a database connection string. 
Then, instead of using a hard‑coded string, they build it dynamically in the secret.yaml file. They restrict the scope to the MongoDB URI object with a with directive. Then they list the items needed to build the URI, the protocol, the username, and the password coming from the values file, followed by the host name, but now the host name is dynamically built from the release name and the chart name database. Finally, the port and the database connection string. All those strings are joined with the join pipeline, and that string is encoded in Base64 and put in quotes as is required for Kubernetes secret files. This implementation might not be the best, but it gives an example of the with function and some pipelines. It uses the list function and the join pipeline to construct a string. Quite a nice complete example. Note that the username and password are useful here if the backend chart is used as a standalone chart. But here the backend and database are part of an umbrella chart, so it is more convenient to define them in the top chart value file. Look at how Globomantics DevOps override those default values. They go in the top chart values.yaml file, they create a backend property, and as a child of this property, they copy the block with the secret property object. That way they can override the username and password from the parent chart. This is a common practice when you reuse existing charts from the Helm repository. We'll see that in the next module. Now the bug should be fixed. First, a quick helm template guestbook to check whether everything is okay. We see that the fullname is built from the release and chart's names, as before, but this time by the helper function. We have the MongoDB URI string built and encoded. And all the other manifests are the same. Now we can upgrade the release with helm upgrade followed by the name of the release and the name of the chart to fix the bug. Let's check that the pods are running and open the default browser to test the application. 
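A sketch of how such a URI can be assembled; the value names and the connection details (port, database name) are illustrative, not necessarily the exact course files:

```yaml
# templates/secret.yaml (fragment)
data:
  {{- with .Values.secret.mongodb }}
  mongodb-uri: {{ list "mongodb://" .username ":" .password "@"
    $.Release.Name "-database:27017/guestbook" | join "" | b64enc | quote }}
  {{- end }}
```

The with directive narrows the scope to the credentials, the global $ reaches back to the release name, list and join build the string, and b64enc plus quote produce the encoded value a Kubernetes secret expects.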
Everything seems to be okay. We can leave some messages and they are stored in the database by the backend. Globomantics DevOps are excited. They can reuse their charts for a frontend, a backend API, or a database in other applications. If you want to test this yourself, all the files are in my GitHub repository. Start with the lab 8 begin folder and the solution is in the lab 8 final folder.
Demo: Installing Dev and Test Releases
The Globomantics dev and test team have one special request. For now, the hosts mapped in the ingress are defined in the chart's values files. And if they want to install two releases of the same chart, one for the dev and one for the test, they have to change the host in the values.yaml file. They would like something more flexible where the host name is also dynamically generated from the release name. Let's do this in the next demo, and, at the same time, learn how to loop through a list of values in a Helm template. First, I have to tell you a bit more about the architecture of the application. The frontend is a single page application built with Angular. When the user connects, the page and its JavaScript are downloaded. Then, the page itself calls the backend API with HTTP requests launched by the JavaScript code. Those requests also come from the external world, so they also have to be done through the ingress. That's why we have two ingresses, one for the frontend and one for the backend API. In the first part of the demo, Globomantics DevOps will disable the ingress that was defined in the frontend and backend charts. Then, they'll build a new ingress in the umbrella chart. To disable the ingress for the backend, they add an if directive. If the ingress enabled value is true, the content is rendered. This is a common practice in Helm templates to make some features optional. Then, in the values.yaml file for the backend, they set that enabled property to true by default. Why true? Because that way we get an ingress by default if the chart is used as a standalone. They do exactly the same for the frontend; add an if directive and activate the ingress by default for a standalone frontend. But those values are going to be overridden and set to false at the top level in the parent chart to disable the ingress. At the top level, in the umbrella chart, we create a templates directory and add an ingress.yaml file. 
Then, we edit the values.yaml file of that umbrella chart and first disable the backend ingress by overriding the enabled property, and then do the same for the frontend ingress. Then we add an ingress object with two host definitions, one for the frontend, the domain of that host is frontend.minikube.local, and it refers to the frontend chart, and one for the backend, accessible at the domain backend.minikube.local, referring to the backend chart. Now, let's build the ingress manifest from that ingress object. We first set the ingress file definition header with the name built from the release and the chart's names. Note that we could use the frontend helper function because helper functions are global, but it isn't very nice to use children's functions in the parent chart. It would be better to use a library chart. Then, we build the ingress rules. We loop on the hosts and build the host name dynamically as the release name followed by a dot and the domain name. That way, our frontend is accessible with a URL that looks like releasename.frontend.minikube.local. And the root path request is forwarded to the matching service, either the frontend service or the backend API service. Both are named release name, dash, chart name for a given host. By the way, don't confuse here the ingress's backend and our backend API. Finally, before testing this new chart, let's add a NOTES.txt file to explain to the user which URLs can be used to access the application. This file is part of the templates directory. This is a text file containing some directives, which are also evaluated by the Helm template engine. And the result is displayed at the end of a helm install command. If you want to run this demo yourself, you first have to configure your DNS and hosts file so that the dev and test sub‑domains point to the minikube IP. One way to do this is to add mappings for each dev and test release in the hosts file. 
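The looping ingress template can be sketched like this. The value names are assumptions, and the backend/serviceName fields follow the older extensions/v1beta1 ingress schema that was current when this course was recorded:

```yaml
# umbrella chart values.yaml (fragment)
ingress:
  hosts:
    - host: frontend.minikube.local
      chart: frontend
    - host: backend.minikube.local
      chart: backend

# umbrella chart templates/ingress.yaml (fragment)
rules:
{{- range .Values.ingress.hosts }}
  - host: {{ $.Release.Name }}.{{ .host }}
    http:
      paths:
        - path: /
          backend:
            serviceName: {{ $.Release.Name }}-{{ .chart }}
            servicePort: 80
{{- end }}
```

Inside the range, the global $ reaches the release name while .host and .chart come from the current item, so each release gets its own host names and service targets.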
Globomantics DevOps are now proud to announce to the dev and test team that they can deploy two independent releases of the same chart, one for dev release and one for test release, and access them separately. They first test the template rendering. We can check whether the ingress is dynamically configured by looping through the host values. Okay, ready to install. First, let's delete the previous release with helm uninstall to free some memory in our Kubernetes cluster. Then, install a dev release with helm install dev. And to customize it without editing the values.yaml file, we can add a ‑‑set to override the value. Here we override the guestbook name to DEV. Note that we see the result of our NOTES.txt template, which shows us where to access the applications. Now let's install a test release the same way, overriding the guestbook_name to TEST. We can check that all the pods are running, three for the dev release, and three for the test release. And finally, let's test the dev release at dev.frontend.minikube.local. The name of the guestbook is DEV, but if we request test.frontend.minikube.local, the name of the guestbook is TEST. So we really have two different releases, one for dev and one for test, running in the same Kubernetes cluster and in the same namespace. All names are dynamically built. If you want to test this by yourself, all the files are in my GitHub repository. Start in the lab 9 begin folder and the solutions are in the lab 9 final folder. Note that you will have to build the backend URI dynamically in the frontend chart as we did in lab 8 for the MongoDB URI. I did not show that part in this demo.
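The command sequence of this demo sketches out as follows; the exact value path passed to ‑‑set is an assumption, and the release/chart names match the demo:

```
helm uninstall demo-guestbook
helm install dev guestbook --set frontend.config.guestbook_name=DEV
helm install test guestbook --set frontend.config.guestbook_name=TEST
kubectl get pods
```

Each install renders the same chart with a different release name, so all dynamically built names (hosts, services, secrets) stay distinct between the dev and test releases.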
Summary
Here is a small summary of this quite long module. After defining the template engine itself, we used it to replace simple values in Helm templates. Then, we added some logic to the template with functions and pipelines. We learned how to restrict the values' scope with the with directive. That directive introduced unwanted spaces and carriage returns, so we learned how to delete them. Then, I listed the logical operators, and we used them in flow controls, conditions with if directives, and loops with range directives. Range introduced another scope issue because we were not able to access the parent values. So we introduced the concept of variables. Finally, we learned how to create sub‑templates with helper functions. So, in this module, you learned how to customize a chart. You now have all the knowledge you need to build a Helm chart. After you build your first chart, you'll probably want to share it and reuse it with or without other charts. This is the subject of the next two modules, where we talk about dependencies and repositories. In the previous module, we created Helm charts with raw Kubernetes YAML files. Those charts are not reusable. In this module, we built some templates with values that can be replaced and functions to add some logic. Now our charts are not hard‑coded anymore, and they can be reused with other projects. We are ready to share them, so in the next module, we'll learn how to manage dependencies between charts and how to publish them in Helm repositories.
Managing Dependencies
Packaging a Chart
Hi, this is Philippe. In this module, we are going to manage Helm chart dependencies and work with repositories. First, we'll describe how to package a chart in a compressed archive. Then, we'll learn what a Helm repository is and how to publish a chart in a repository. Finally, I'll show you how to define dependencies between charts and how to make dependencies optional with tags and conditions. In this section, we learn how to package our chart in an archive. It's more convenient to build an archive before publishing a chart in a repository. Until now, we have only worked with charts in their expanded form, as unpacked folders. But before publishing a chart in a repository, it has to be packed. Helm chart packages are simple gzip‑compressed tar archives. You could build one with the tar command line, but you should not do this because Helm provides a special command for that task, helm package followed by the name of the chart. This command compresses your chart folder in a tar.gz archive, but it also adds the chart version number to the archive file name. That chart version number comes from the Chart.yaml file.
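For example, assuming a chart folder named frontend whose Chart.yaml declares version 1.1.0 (the chart name and version here are illustrative), packaging looks like this:

```
helm package frontend
# creates frontend-1.1.0.tgz in the current directory
```

The version suffix in the file name comes straight from the version field in Chart.yaml, so bumping the version there produces a new, distinct archive.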
Publishing Charts in Repository
Now that we have some chart packages, we want to share them by publishing them in a repository. To make a chart available for other projects, you have to publish it in a Helm repository. But what is a Helm repository? A Helm chart repository is a location where packaged charts can be stored and shared. It's a simple HTTP server containing packaged chart files and an index.yaml file describing these charts. The index.yaml file can be created with the helm repo index command in the folder containing the compressed charts. When the archives and the index.yaml file are ready, you can upload them to any HTTP server, or you can also use ChartMuseum. ChartMuseum is an HTTP server that is a dedicated Helm repository. It provides a nice API to interact with the repository. We are going to use ChartMuseum in the demo. The repository server can also host provenance files. They provide a way to sign a chart to verify its origin and trust it. It's not used often, but if this is required by your security policies, be aware that you can sign a chart with the helm package ‑‑sign command as long as you provide a valid PGP key, and that a chart can be verified locally with helm verify followed by the name of the compressed chart, provided the provenance file is available. A chart can also be verified during installation with ‑‑verify. So we have briefly learned how to create a repository. I'll show a real example in the next lab. But once the repository has been created and some charts have been published, how can the Helm client use those charts? This can be done in two steps. The first step is to define the repository in the Helm configuration. I'll show you right now. The second step is to define the dependencies. We'll see this later. Helm maintains a list of repositories. Helm can work with more than one repository at a time, and you can add or remove repositories from the repository list. 
A custom repository can be added to the list with the helm repo add command followed by the name given to the repository and its server's URL. You can also remove a repository from the list with helm repo remove followed by the name of the repository. It's no more complicated than that. Note that there is no default repository in the list when you install Helm. That's why one of the first things we did when we installed Helm was to add the official stable Helm repository with the helm repo add command. Unless you only want to work with private charts, you will have to do this step. Unfortunately, that Helm stable repository is not maintained anymore, so you'll have to rely on third‑party repositories. We'll look at it in the next module. Note that in Helm 2, this step was not required. The stable Helm repository was already included in the default repository list.
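The repository management commands from this section, sketched; the repository name and URL are placeholders:

```
helm repo index .                             # build index.yaml from the .tgz files here
helm repo add myrepo http://localhost:8080    # register a repository with the client
helm repo list                                # show the configured repositories
helm repo remove myrepo                       # unregister it
```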
Demo: Packaging and Publishing Charts
Globomantics DevOps want to make it easier to reuse their charts in other projects. So they are going to pack them into archives and publish them in a repository. Then, they will modify the umbrella chart so that it depends on the three subcharts published in the repository. Let's do it. First, they move the subcharts to a dist directory. So there is no chart in the charts subdirectory for now. Then, they go into the dist directory, which contains the unpacked content of the charts, back end, database, and front end, and run helm package on those three subcharts. That command creates three archives that are ready to be uploaded to a repository. But before doing so, those chart archives must be defined in an index.yaml file. That file can be generated in the folder containing the archives by using the helm repo index command. If we look at it, we can see some entries describing the packed charts. Now they are ready to upload the archives and the index file to an HTTP server. They decide to install the ChartMuseum server. ChartMuseum is an HTTP server that is a dedicated Helm repository with a nice API. First, they download the ChartMuseum binary. You can find the link in the GitHub ChartMuseum project. Make it executable and save it to the local bin folder. ChartMuseum needs a storage location for the repository. For this demo, it will be stored locally in the home directory, helm/repo. Then, ChartMuseum can be started with the following parameters to use the local storage. It runs and listens on port 8080. For the demo, we'll leave this window open. Finally, in another window, the repository can be populated by just copying the chart archives to the local storage. You could also upload them with an HTTP upload request to the ChartMuseum API. And now, let's make an HTTP request to ChartMuseum to get the list of charts. We can see that the charts have been published. In your own projects, you'll set up and use a cloud‑hosted Helm repository or use an existing one. 
But in this demo, you have learned how to do it yourself locally with ChartMuseum. It's a good way to understand how the process works. Great. Globomantics DevOps have packed and published their charts to a local repository. Now they can build the umbrella chart, as well as any other charts with dependencies to the charts available in the repository. If you want to run this lab, all the files are in my GitHub repository. Start with the lab_10 begin folder, and the solution is in the lab_10 final folder.
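The demo's ChartMuseum workflow can be condensed into a few commands. The server-side commands are commented out because they need the chartmuseum binary, packaged charts, and a running server; the storage path used here is an example, not the demo's exact path.

```shell
# Local ChartMuseum repository sketch; only the storage folder is created
# here. The commented commands assume chartmuseum and packaged charts exist.
mkdir -p ./chartmuseum-storage
# Start the server against local storage:
# chartmuseum --port=8080 --storage="local" \
#             --storage-local-rootdir=./chartmuseum-storage &
# Publish charts by copying the packaged archives into the storage folder:
# cp dist/*.tgz ./chartmuseum-storage/
# List the published charts through the API:
# curl http://localhost:8080/api/charts
test -d ./chartmuseum-storage && echo "storage ready"   # prints storage ready
```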
Defining Dependencies
And here is the second step that will allow us to use charts from repositories, defining the dependencies. How can we define dependencies between charts? The guestbook umbrella chart that we built in the demo depends on three subcharts, the frontend, backend, and database charts. The way we managed the dependencies in the previous modules was by copying the unpacked subcharts into the charts subfolder. We could also copy the charts as compressed archives in the charts folder. This is a manual way of managing dependencies. But sooner or later, we'll have to deal with a lot of dependencies between versions. So we need an automatic way to manage dependencies between charts. The dependencies can be defined in the Chart.yaml file. Add a dependencies property, and under that property, set one or multiple dependency definitions. A dependency block defines the subchart name, the version range compatible with your chart, and the repository URL where the archive of the chart can be downloaded. Note that the version property is a version number or a range of version numbers following SemVer 2.0 syntax. The chart is supposed to be compatible with any versions of the subchart that are in the specified range. In this example, our chart is compatible with backend 1.2.2 and all later 1.2.x patch versions because of the tilde character before the version number. It's also compatible with all the minor changes to the front end because of the caret character before the version number. Another way to define version ranges is by using x as a wildcard. For example, here, our chart is compatible with any 7.8.x version of the database. The ability to define the dependencies in the Chart.yaml file came about with the release of Helm 3, but most existing charts have been written for Helm 2. This is the reason you will not find them in the Chart.yaml file, but rather in a requirements.yaml file. Don't worry. This is still compatible with Helm 3. 
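The dependencies block described above can be sketched as a file. The chart names and the localhost repository URL follow the course's demo, and the comments spell out what each range notation accepts.

```shell
# Sketch of a Helm 3 Chart.yaml dependencies section showing the three
# range notations discussed in the text.
mkdir -p guestbook-sketch
cat > guestbook-sketch/Chart.yaml <<'EOF'
apiVersion: v2
name: guestbook
version: 1.1.0
dependencies:
  - name: backend
    version: ~1.2.2        # tilde: any 1.2.x patch release >= 1.2.2
    repository: http://localhost:8080
  - name: frontend
    version: ^1.2.2        # caret: any 1.x minor release >= 1.2.2
    repository: http://localhost:8080
  - name: database
    version: 7.8.x         # x wildcard: any 7.8 patch release
    repository: http://localhost:8080
EOF
grep -c "repository:" guestbook-sketch/Chart.yaml   # prints 3
```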
So in Helm 2, the dependencies were not defined in the Chart.yaml file. Instead, they were defined in the requirements.yaml file located at the root of the chart. It contains exactly the same content and uses the same syntax as the dependencies property in the Chart.yaml file. Note that Helm 2 charts are compatible with Helm 3. So defining the dependencies in the requirements.yaml file is still supported in Helm 3. However, defining the dependencies in the Chart.yaml file is recommended if you are working with Helm 3. For your information, here are some range notations with the corresponding versions. A tilde or an x wildcard in the patch position (for example, 1.2.x) defines a range of patch versions. A caret or an x wildcard in the minor position (for example, 1.x.x) defines a range of minor versions. And you can also define your own custom ranges of versions that way. More information can be found in the documentation of Go's implementation of SemVer 2.0. You might have already used the same conventions if you have worked with Node.js, for example. The npm JavaScript package manager uses the same SemVer syntax in the package.json. So, once the dependencies are defined in the Chart.yaml file, or in the requirements.yaml file if you are working with Helm 2, how can you download them from the repository to your charts directory? You can do this by running helm dependency update on your chart. Helm looks for dependencies defined in the Chart.yaml file and downloads the required charts to your charts directory. You can check which charts are available by running helm dependency list plus the name of your chart. And if there are some changes in the required charts, you can run helm dependency update again to sync the changes. But sometimes you don't want to retrieve new subchart versions because you would like to avoid compatibility issues between your chart and new subchart versions. In that case, you can work with a frozen list of dependencies with fixed version numbers. They are defined in the Chart.lock file. 
This file is automatically generated when you run helm dependency update, and it contains only the dependencies with fixed version numbers rather than ranges. Note that in Helm 2, this file is named requirements.lock. If you need to stick with the same subchart versions, you can run helm dependency build followed by the name of your chart. Note that this command uses build instead of update. This command is based on the Chart.lock file instead of the Chart.yaml file. That way, you are sure to get the same versions of the subcharts and avoid any compatibility issues. Again, there is an analogy with Node.js and npm with the package.json and package‑lock.json files.
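To make the difference concrete, here is a sketch of a Chart.lock file matching the earlier dependency ranges. The resolved versions and the digest are invented for the example; the point is that every range has been pinned to one exact version.

```shell
# Sketch of the Chart.lock that helm dependency update would generate:
# same entries as Chart.yaml, but with ranges resolved to fixed versions.
mkdir -p guestbook-lock-sketch
cat > guestbook-lock-sketch/Chart.lock <<'EOF'
dependencies:
  - name: backend
    repository: http://localhost:8080
    version: 1.2.2
  - name: frontend
    repository: http://localhost:8080
    version: 1.2.2
  - name: database
    repository: http://localhost:8080
    version: 7.8.4
digest: sha256:0000000000000000000000000000000000000000000000000000000000000000
generated: "2020-01-01T00:00:00Z"
EOF
# helm dependency build <chart>  # would reinstall exactly these versions
grep "version: 7.8.4" guestbook-lock-sketch/Chart.lock
```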
Adding Conditions and Tags
Okay, now tell me, what if I want certain subcharts to be optional because I want to install dependencies in some releases, but I don't need them in other releases? This can be done with conditions and tags. In the Chart.yaml file, you can add a condition property for each dependency. That condition property contains the names of the properties that will be evaluated to determine whether the chart is installed or not. The condition properties are values within the chart. If one of the properties does exist and is a boolean value, it is evaluated, and the condition is applied. If it's true, the dependency is included; if it's false, it's rejected. Note that only the first valid property is evaluated. If none of the properties exist, the condition is ignored. So in this example, if you run helm install, the back end is installed because the condition property, backend.enabled, exists and is true. The front end is also installed because there is no condition associated with it. And the database is not installed because the property database.enabled exists and is set to false. But note that if the database property does not exist, the database will be installed by default. If multiple subcharts implement one optional feature and don't need to be installed, there is another, more convenient way to do a partial installation. Instead of using conditions, you can tag the subcharts with the same tag and use that tag to make them all optional at once. For example, here we would like to make all charts related to the API optional, so the back end and the database. The way to do this is to tag those two subcharts with a tag named api and set that api property to true or false in the values. Note that conditions override tags. So the tag only works if the condition properties do not exist. Don't get confused. The conditions and tags are not evaluated during dependency update. When you run helm dependency update, Helm just downloads all the charts, no matter what their conditions, tags, and values are. 
Conditions and tags only play a role when you install a chart. If you run helm install demo guestbook, some charts are installed and others are not, depending on the conditions, tags, and values. Keep in mind that you can modify the values with ‑‑set. In the first example, we force the database chart to be installed by setting its condition property to true. The second example does not install the charts tagged with the api tag. So the back end and the database are not installed. Well, they are not installed as long as there are no conditions set for those subcharts because, remember, conditions override tags.
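Putting conditions and tags together, here is a sketch of the two files involved. The chart and property names mirror the guestbook example from the text, and the install-time overrides are shown as comments since they need a cluster to run.

```shell
# Sketch of conditions and tags for the guestbook umbrella chart.
mkdir -p conditions-sketch
cat > conditions-sketch/Chart.yaml <<'EOF'
apiVersion: v2
name: guestbook
version: 1.1.0
dependencies:
  - name: backend
    version: ~1.2.2
    repository: http://localhost:8080
    condition: backend.enabled
    tags:
      - api
  - name: frontend
    version: ^1.2.2
    repository: http://localhost:8080
  - name: database
    version: 7.8.x
    repository: http://localhost:8080
    condition: database.enabled
    tags:
      - api
EOF
cat > conditions-sketch/values.yaml <<'EOF'
backend:
  enabled: true
database:
  enabled: false
tags:
  api: true
EOF
# At install time the values can still be flipped from the command line:
# helm install demo ./guestbook --set database.enabled=true
# helm install demo ./guestbook --set tags.api=false
grep "api: true" conditions-sketch/values.yaml
```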
Demo: Managing Dependencies
In this demo, Globomantics DevOps are going to build the umbrella chart. However, this time, instead of copying the subcharts to the charts folder, they will use Helm dependencies. First, they have to configure Helm to use the new ChartMuseum repository. This can be achieved by running the helm repo add command and passing the repository name, chartmuseum, and the URL of the local ChartMuseum repository. Then, they run helm repo list to check that the repository is available and helm repo update to get the latest Helm chart information from the repository. Let's check which charts are available inside the ChartMuseum repository. Great, the backend, frontend, and database charts are available. Now, the dependencies have to be defined. So, Globomantics DevOps go up one directory to where the umbrella guestbook chart is located and edit the Chart.yaml file. Inside that file, they define the dependencies. For each subchart, they include the name of the chart, its range of compatible versions, and the repository where it's located. All charts are published on the localhost server at port 8080. They save that file and run helm dependency update on the guestbook chart. Helm first connects to the repository to get the latest chart definitions and then downloads the subchart archives that are defined in the dependencies from the repository to the charts directory. We can check this. The charts folder was empty, and now it contains the archives of the three dependencies. And the templates directory only contains the ingress.yaml and NOTES.txt files. We can also list all current dependencies with helm dependency list guestbook. A detailed view of the dependencies with the version range and repository URL is displayed, and their status is ok. Finally, Globomantics DevOps are ready to install a new development release of the guestbook application, but this time with an umbrella chart which uses Helm dependencies rather than subfolders. As you can see, the chart is installed. 
We can check with the helm list, kubectl get, or helm get manifest commands. The result is the same as before. The only difference is that this time, Globomantics DevOps have used subcharts published in the repository as dependencies, and they could easily do the same for other projects. Okay, let's delete that release for the next demo. If we look at the content of the chart, we see that the Chart.lock file has been created. Let's view that file. Well, it's the same as the Chart.yaml file except that it contains fixed version numbers instead of ranges of versions. Now, let's imagine that the dev team has released a new version of the front end, and a patched chart is packed and published in the local repository. But Globomantics DevOps might not want to run helm dependency update because it could break the guestbook application if the new subchart is not compatible. Instead, they can run helm dependency build guestbook, build instead of update, which is based on the Chart.lock file with fixed version numbers for all subcharts. If you want to run this lab, all the files are in my GitHub repository. Start with the lab10 begin folder, and the solution is in the lab10 final folder.
Demo: Controlling Dependencies with Conditions and Tags
Now the Globomantics dev team has a special request for a lightweight release containing only the front end to test the UI on the local cluster. Globomantics DevOps need to provide a Helm chart to install the umbrella chart without the back end and the database. For that purpose, they edit the Chart.yaml file and add condition and tag properties. The back end is not installed if the backend.enabled property exists and is set to false. They also add an api tag to that subchart. And it's the same for the database. It's not installed if the database.enabled property exists and is set to false. The database chart is also part of the API, so it's also tagged with an api tag. Let's now define the values in the values.yaml file. All conditions and tags are true by default, the backend.enabled condition, the database.enabled condition, and the api tag. So using helm install guestbook would install the full application with the three subcharts. But if DevOps need to install a partial guestbook application, they can do it by setting the conditions to false for the back end and for the database. Look, this time, only the front end has been installed. Here, this was done by setting conditions, but the same result can be achieved with a single tag. First, edit the values.yaml file. I'll erase the properties evaluated for conditions because the conditions would override the tags. Let's delete the release and run helm install while setting the tag api to false. That command achieves the same partial installation. It installs only the front end. Globomantics DevOps have now mastered Helm dependencies and are ready to provide many charts reusing their existing charts. If you want to run this lab, all the files are in my GitHub repository. Start with the lab_10 begin folder, and the solution is in the lab_10 final folder.
Summary
In this module, you learned how to package your chart in an archive. This is indeed needed to publish a chart in a Helm repository. Then you have learned how to define and manage dependencies between charts. And I have shown how to use conditions and tags to make some charts optional. In the previous modules, the charts were stored in the charts subfolder as unpacked charts. In this module, you have learned how to pack them into archives and publish them to the local Helm repository. And we have configured the umbrella chart so that it depends on those subcharts. Now the charts are automatically downloaded each time we run helm dependency update, and we can reuse them in other projects. As you can imagine, other DevOps have already built nice charts for well‑known products. In the next module, we'll exchange our database chart for the official stable MongoDB chart from the stable Helm repository. We'll learn how to depend on it and how to customize its values for our application.
Using Existing Helm Charts
Using Existing Helm Charts
Hi. This is Philippe. You've learned how to build Helm charts in the previous module. But as you can imagine, other DevOps might have already done the same work for well‑known products. In this module, we'll see what the Helm stable repository is and how to search for existing charts. Once you have found a chart that fits your needs, you'll learn how to use it and set up its values. In this course, we have followed the path that you come across in many IT fields. You start with some source code files, here, the Kubernetes YAML files. And when you have a lot of sources or compiled files, you package them, usually in archives, here the Helm charts. When you want to share the archives, you publish them to repositories, here the Helm repository. And when you have to deal with a lot of repositories, you use a repository hub, like GitHub or Docker Hub, for example. Here is a table with some analogies with other IT fields. Sometimes the analogy is perfect, and sometimes it doesn't completely match. Helm takes the best of all worlds. Its packaging feature looks like the Maven Java or npm JavaScript package managers with great version dependency support. Plus, remember, it has the whole template support, and the repository and hub part looks more like what you can find in the Docker world with Docker Hub. We already saw how to add the Helm stable repository with the helm repo add command. But keep in mind that this repository is no longer maintained, and its charts are deprecated. So, where can you find the latest Helm charts? Well, you can search the Helm repository hub. The Helm repository hub can be accessed at one of the following addresses. This is a registry with a lot of third‑party Helm repositories. You can just search for a project, and you get the list of charts available for that project and their repositories. 
For example, here we found the MongoDB chart maintained by the Bitnami organization in the bitnami repository, and you get the nice documentation of that chart and how to install it. To install that chart, you first have to add the bitnami repository to Helm. Then, install the chart. If you prefer the command line, here are the main options. helm repo list retrieves the list of repositories, their names, and their URLs. As a reminder, you can add or remove repositories from the list with the helm repo add and helm repo remove commands. To search a repository for a chart, use helm search and a keyword. Since Helm 3, you have to specify whether you want to search in the list of repositories with the repo subcommand or in the hub. Once you have found the chart you are interested in, you can view its documentation with the helm inspect command, either the global one that shows everything or the variants limited to the chart metadata, the values, or the readme. If you have the choice, I recommend that you look at formatted documentation either in the Helm GitHub repository or on the Helm Hub website rather than the raw documentation in the console. To download a chart directly without using dependencies, you can run helm fetch. It's useful if you want to look at the chart source code before using it as a dependency. Then, once you have added the stable chart as a dependency in the Chart.yaml file, or in the requirements.yaml file if you're working with Helm 2, you can run helm dependency update chart_name to download the stable dependency into your charts subfolder. Of course, all these commands can also be used for custom charts from other repositories besides the stable repository. Just make sure that the third‑party repository has been previously added to the list of repositories.
Customizing Existing Charts
When you reuse an existing chart, you often need to customize it to meet your needs. We have already seen how to override child chart values with the values in the parent chart. But as we said in the previous module, there are other techniques. We'll cover them in this section. In the previous module, we copied the values from the child chart to the parent chart and moved them under a property that has the name of the child chart. In other words, we overrode the values from the child chart with values defined in the parent chart. This is the default way of customizing existing charts. But sometimes you might want to do the opposite, export values from the child chart so that they are available in the parent chart. To be honest, this is not done often, but there are two ways to do it, and it's good to know. The first way is to define an exports property in the child chart. To make that exports property available in the parent chart, you can add an import‑values property in the dependencies section of the Chart.yaml file, in the block corresponding to the child chart, or in the requirements.yaml file if you are still working with Helm 2. That way, you can access any values from that child chart as if they were defined in the parent chart. For example, the MongoDB URI can be accessed as .Values.mongodb_uri without specifying that it's from the child chart. Note that the data property is used to embed the exported property, mongodb_uri. This technique has some limitations. First, it requires values of the child chart to be under an exports property. Secondly, it might cause name conflicts if it's used with multiple child charts. Fortunately, there is another way that we'll see in the next slide. If you want to have a look at an example, go to my GitHub repository in the bonus section of lab 10, helm_dependencies_export. The other technique is to use a child‑parent mapping. 
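Here is a sketch of the exports technique as two file fragments. The mongodb_uri value and the repository URL are illustrative; the exports block lives in the child chart's values, and the parent's dependency entry imports it.

```shell
# Sketch of the exports / import-values technique.
mkdir -p export-sketch
# The child chart declares an exports block in its values.yaml:
cat > export-sketch/child-values.yaml <<'EOF'
exports:
  data:
    mongodb_uri: mongodb://database:27017/guestbook
EOF
# The parent imports the "data" export in its dependency entry:
cat > export-sketch/parent-Chart.yaml <<'EOF'
apiVersion: v2
name: guestbook
version: 1.1.0
dependencies:
  - name: database
    version: 7.8.x
    repository: http://localhost:8080
    import-values:
      - data
EOF
# In a parent template the value is then reachable as {{ .Values.mongodb_uri }}
grep "mongodb_uri" export-sketch/child-values.yaml
```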
This time, you can export any property from the child to the parent by defining a mapping in the Chart.yaml file, or in the requirements.yaml file if you are still working with Helm 2. In the block corresponding to the child chart, add an import‑values property with two subproperties, a child property containing the name of the child property that has to be exported and a parent property containing the name of the parent property mapped to the exported child property. In this example, you can access data from the child property with the mapped frontend_data property in the parent chart. If you want to see an example of that code, have a look at the bonus section of lab10_helm_dependencies_child‑parent in my GitHub repository. I find those techniques quite tricky, and maybe you do too. And that might be the reason why these methods are rarely used, even if they could add a nice chart introspection feature to Helm. I don't think you absolutely need to use these methods. You can do everything by overriding child values from the parent chart, and using global variables can also help in many situations. We already saw it in one of the last modules. But as a reminder, here is how it works. The name global is a reserved property name. A global property, when defined in the parent chart, is available in the chart and all its subcharts. It can be accessed with the same .Values.global directive whether you are in a parent or in a subchart template. This is a convenient way to declare a common property for a parent chart and all its subcharts. And note that the global property will be passed downward from the parent to the subcharts, but not upward from the child chart to the parent chart.
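The child-parent mapping and a global value can be sketched the same way; the property names here are illustrative, and no exports block is needed in the child for the mapping variant.

```shell
# Sketch of the child-parent mapping plus a global value.
mkdir -p mapping-sketch
cat > mapping-sketch/Chart.yaml <<'EOF'
apiVersion: v2
name: guestbook
version: 1.1.0
dependencies:
  - name: frontend
    version: ^1.2.2
    repository: http://localhost:8080
    import-values:
      - child: data          # any child property, no exports block required
        parent: frontend_data
EOF
cat > mapping-sketch/values.yaml <<'EOF'
global:
  env: dev    # visible as .Values.global.env in the parent and all subcharts
EOF
grep "parent: frontend_data" mapping-sketch/Chart.yaml
```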
Demo: Using Stable MongoDB Charts
Globomantics DevOps can scale the guestbook app by editing the replicas in the values.yaml file. So they are ready for production except for the MongoDB database. To be ready for production with the database, they need to provide a production‑ready MongoDB replica set with a primary instance, secondary replicas, and an arbiter. They are not MongoDB experts, but they can accomplish that task easily with the stable MongoDB Helm chart. This is one of the main advantages of using a package manager. You can install a complex project, and the package manager hides the complexity. For now, they are using a single MongoDB server chart built from scratch. That chart was called database. Let's remove it. Within the list of default repositories, they search for a MongoDB chart. There are several results. Let's choose the first stable/mongodb chart for the demo. They first inspect the documentation with the command line by running helm inspect readme plus the name of the chart, but it's not very readable. So, they go to the Helm Hub website and search for the MongoDB chart to access a nicely formatted version of the document. Here is the stable/mongodb chart. Note that it can now be found in the bitnami repository. What is important for Globomantics DevOps at first is the version of the chart, 7.8.4, the mongodbRootPassword property, the replicaSet.enabled property that they must set to true because it is false by default, and the key for authentication in the replicaSet. All the other properties are set by default for now. We just set the persistence.size to less than the default 8Gi because we don't want to fill up all the minikube host storage. This is just an initial test. And afterward, they would be able to tune the chart for security, monitoring with Prometheus, and other configurations. Then, they edit the Chart.yaml file to add a dependency on the MongoDB chart. The name of the chart is mongodb. The application should be compatible with the current version, 7.8.4. 
And the repository URL is the URL of the stable Helm repository. The condition property changes to mongodb.enabled. Now the command helm dependency update downloads the MongoDB chart from the stable Helm repository to the charts directory. Let's check this by looking in the directory or by running helm dependency list on the chart. Okay, now it's time to customize the stable MongoDB chart for the application. Let's open the chart's values.yaml file, rename the database property to mongodb, which is the name of the new subchart, and enable the replicaSet feature, giving it a key, which is the string password. Then, also set the mongodbRootPassword to the string password. And finally, limit the persistent volume size to 100Mi. The backend mongodb_uri connection string also has to be updated. The admin username is root for this new chart. The name of the chart is mongodb, and the connection string must include the replicaSet's name. That's it. The new MongoDB chart is configured for an initial test. Globomantics DevOps launch helm install guestbook with the dev release name and wait a while because the MongoDB instances have to be created. After a couple of minutes, they check whether everything is running with kubectl get pods. Look at that, our backend API, our frontend, a mongodb‑primary instance, a secondary replica, and an arbiter that can manage more replicas. Globomantics DevOps have come a long way from simple YAML files up to a production‑ready chart with a MongoDB replica set. Congrats. They can be proud, and you can be proud if you followed them all the way through. If you want to run this lab, all the files are in my GitHub repository. Start with the lab11 begin folder, and the solution is in the lab11 final folder.
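The demo's customization can be sketched as the two file fragments below. The mongodb property names follow the stable/mongodb chart as described above; the repository URL is the old stable repository's, and the service name (demo-mongodb) and replica set name (rs0) in the connection string are assumptions for illustration, since the exact values depend on the release.

```shell
# Sketch of the umbrella chart's dependency on, and overrides for, the
# stable MongoDB chart. The passwords are placeholders; the connection
# string's host and replica set name are assumed values.
mkdir -p mongodb-sketch
cat > mongodb-sketch/Chart.yaml <<'EOF'
apiVersion: v2
name: guestbook
version: 1.2.0
dependencies:
  - name: mongodb
    version: 7.8.4
    repository: https://kubernetes-charts.storage.googleapis.com
    condition: mongodb.enabled
EOF
cat > mongodb-sketch/values.yaml <<'EOF'
mongodb:
  mongodbRootPassword: password
  replicaSet:
    enabled: true
    key: password
  persistence:
    size: 100Mi
backend:
  mongodb_uri: mongodb://root:password@demo-mongodb:27017/guestbook?replicaSet=rs0&authSource=admin
EOF
grep "condition: mongodb.enabled" mongodb-sketch/Chart.yaml
```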
Demo: Installing Wordpress in Kubernetes in 1 Minute
And now a small bonus as icing on the cake. We are going to install a WordPress site with a MariaDB database in Kubernetes with Helm. How long do you think this will take? Less than 1 minute. Welcome to the magic of Helm. Look at this. First, look for a WordPress chart in the stable repository. There is one chart available, stable/wordpress. Open the Helm Hub website to learn a little bit more about it. Search for WordPress. Here is the official WordPress chart. It has the name bitnami/wordpress, but it's exactly the same. Look at the documentation. We can install it from the stable repository. It's just that the chart can be published in more than one repository. We could also add the bitnami repository to our config and install it from there. The documentation shows all the available values to customize the WordPress installation. We'll leave all the defaults for this demo. Run helm install demo‑wordpress stable/wordpress. And that's it. Check with kubectl that the pods are running. The MariaDB database and the WordPress site are both running. To access the site, you can use the default user, user, and retrieve the password as described in the NOTES.txt file: get the secret with kubectl, and then decode that secret. Here is the password. As we don't have an external load balancer with our Minikube instance, we can access the service using a node port. Run kubectl get service. The WordPress service is running on port 31822, and the IP of the node is retrieved with the minikube ip command. Open the browser, enter the node's IP with the node port, and your WordPress blog is ready. Connect to the editing console with the default user and the password that we decoded before. Copy it, and you can add or update a post in the blog. Let's congratulate Globomantics DevOps. Even if, let's be honest, it was not a hard job. They just reused an existing chart. But that's also one of the main advantages of a package manager, isn't it? 
It allows clever DevOps to be lazy, but still efficient. Of course, you can now read the doc and customize the installation as needed and scale the application. You don't need any files to run this lab. You just need a Helm environment. So just do it.
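Since the installation itself needs a running cluster, the helm and kubectl steps are shown below as comments; the secret name and key follow the pattern from the chart's NOTES.txt and may differ in your release. The base64 decoding step, which is what the NOTES.txt pipe does, can be tried on its own.

```shell
# Cluster-dependent commands (commented; names are illustrative):
# helm install demo-wordpress stable/wordpress
# kubectl get secret demo-wordpress \
#   -o jsonpath="{.data.wordpress-password}" | base64 --decode
# What the decode pipe does, demonstrated on a sample value:
encoded=$(printf 's3cret-pass' | base64)
printf '%s' "$encoded" | base64 --decode   # prints s3cret-pass
```

Kubernetes stores secret data base64-encoded, which is why the value from jsonpath must be piped through base64 before it can be used as the login password.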
Summary
In this module, you've learned how to use existing charts from the stable Helm repository. We have reviewed some useful Helm commands and discovered the Helm Hub website. Then, you've learned how to customize an existing chart by overriding the values, exporting them, or exporting with a child‑parent mapping. Finally, we used the stable MongoDB chart in our guestbook demo application and, as a bonus, we installed a WordPress blog in Kubernetes with Helm in less than 1 minute. In the previous module, we were using MongoDB running on a single server. We built that simple MongoDB chart from scratch. But for production, a more advanced MongoDB configuration is needed. As you can imagine, other people have already built nice charts for MongoDB. In this module, we've exchanged our database chart for the MongoDB chart available in the stable Helm repository. We configured the dependencies for that chart and customized its values. That way, we can configure it as a MongoDB replica set with a primary server, secondary servers, and an arbiter. Well, we have come a long way from kubectl commands and simple YAML files up to a production‑ready chart with a MongoDB replica set. Congratulations, and thank you for joining me on this Helm journey.