Note: This repository contains the guide documentation source. To view the guide in published form, view it on the Open Liberty website.
Deploy microservices in Open Liberty Docker containers to Kubernetes and manage them with the Kubernetes CLI, kubectl.
Kubernetes is an open source container orchestrator that automates many tasks involved in deploying, managing, and scaling containerized applications.
Over the years, Kubernetes has become a major tool in containerized environments as containers are increasingly used throughout the continuous delivery pipeline.
Managing individual containers can be challenging. A few containers used for development by a small team might not pose a problem, but managing hundreds of containers can give even a large team of experienced developers a headache. Kubernetes is a primary tool for development in containerized environments. It handles scheduling and deployment, as well as mass creation and deletion of containers. It provides update rollout capabilities on a scale that would otherwise be extremely tedious to manage. Imagine that you updated a Docker image that now needs to propagate to a dozen containers. While you could destroy and then re-create these containers, you can also run a short one-line command to have Kubernetes make all those updates for you. Of course, this is just a simple example; Kubernetes has a lot more to offer.
Deploying an application to Kubernetes means deploying an application to a Kubernetes cluster.
A typical Kubernetes cluster is a collection of physical or virtual machines called nodes that run containerized applications. A cluster is made up of one master node that manages the cluster, and many worker nodes that run the actual application instances inside Kubernetes objects called pods.
A pod is a basic building block in a Kubernetes cluster. It represents a single running process that encapsulates a container or, in some scenarios, many closely coupled containers. Pods can be replicated to scale applications and handle more traffic. From the perspective of a cluster, a set of replicated pods is still one application instance, although it might be made up of dozens of instances of itself. A single pod or a group of replicated pods is managed by Kubernetes objects called controllers. A controller handles replication, self-healing, rollout of updates, and general management of pods. One example of a controller that you will use in this guide is a deployment.
A pod or a group of replicated pods is abstracted through Kubernetes objects called services, which define a set of rules by which the pods can be accessed. In a basic scenario, a Kubernetes service exposes a node port that can be used together with the cluster IP address to access the pods that the service encapsulates.
To learn about the various Kubernetes resources that you can configure, see the official Kubernetes documentation.
You will learn how to deploy two microservices in Open Liberty containers to a local Kubernetes cluster. You will then manage your deployed microservices using the kubectl command-line interface for Kubernetes. The kubectl CLI is your primary tool for communicating with and managing your Kubernetes cluster.
The two microservices you will deploy are called name and ping. The name microservice displays a brief greeting and the name of the container that it runs in, making it easy to distinguish it from its other replicas. The ping microservice pings the Kubernetes Service that encapsulates the pods running the name microservice. This demonstrates how communication can be established between pods inside a cluster.
You will use a local single-node Kubernetes cluster.
The first step of deploying to Kubernetes is to build your microservices and containerize them with Docker.
The starting Java project, which you can find in the start directory, is a multi-module Maven project that’s made up of the name and ping microservices. Each microservice resides in its own directory, start/name and start/ping. Each of these directories also contains a Dockerfile, which is necessary for building Docker images. If you’re unfamiliar with Dockerfiles, check out the Using Docker containers to develop microservices guide, which covers Dockerfiles in depth.
If you’re familiar with Maven and Docker, you might be tempted to run a Maven build first and then use the .war file produced by the build to build a Docker image. While that approach is by no means wrong, the projects are set up so that this process is automated as part of a single Maven build. This is done by using the dockerfile-maven plug-in, which automatically picks up the Dockerfile located in the same directory as its POM file and builds a Docker image from it.
If you’re using Docker for Windows, ensure that the Expose daemon on tcp://localhost:2375 without TLS option is enabled on the General settings page. This setting is required by the dockerfile-maven part of the build.
Navigate to the start
directory and run the following command:
mvn package
The package goal automatically invokes the dockerfile-maven:build goal, which runs during the package phase. This goal builds a Docker image from the Dockerfile located in the same directory as the POM file.
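For reference, once a Maven build has produced the .war files, the plug-in’s build step is roughly equivalent to running docker build manually in each module directory, with the image name and tag taken from the plug-in configuration in each POM. A sketch, assuming the Dockerfiles sit at the root of each module:

docker build -t name:1.0-SNAPSHOT start/name
docker build -t ping:1.0-SNAPSHOT start/ping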
During the build, you’ll see various Docker messages describing what images are being downloaded and built. When the build finishes, run the following command to list all local Docker images:
docker images
Verify that the name:1.0-SNAPSHOT and ping:1.0-SNAPSHOT images are listed among them, for example:
WINDOWS | MAC
REPOSITORY                                 TAG
ping                                       1.0-SNAPSHOT
name                                       1.0-SNAPSHOT
open-liberty                               latest
k8s.gcr.io/kube-proxy-amd64                v1.10.3
k8s.gcr.io/kube-scheduler-amd64            v1.10.3
k8s.gcr.io/kube-controller-manager-amd64   v1.10.3
k8s.gcr.io/kube-apiserver-amd64            v1.10.3
k8s.gcr.io/etcd-amd64                      3.1.12
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64     1.14.8
k8s.gcr.io/k8s-dns-sidecar-amd64           1.14.8
k8s.gcr.io/k8s-dns-kube-dns-amd64          1.14.8
k8s.gcr.io/pause-amd64                     3.1
LINUX
REPOSITORY                                                        TAG
ping                                                              1.0-SNAPSHOT
name                                                              1.0-SNAPSHOT
open-liberty                                                      latest
k8s.gcr.io/kube-proxy-amd64                                       v1.10.0
k8s.gcr.io/kube-controller-manager-amd64                          v1.10.0
k8s.gcr.io/kube-apiserver-amd64                                   v1.10.0
k8s.gcr.io/kube-scheduler-amd64                                   v1.10.0
quay.io/kubernetes-ingress-controller/nginx-ingress-controller    0.12.0
k8s.gcr.io/etcd-amd64                                             3.1.12
k8s.gcr.io/kube-addon-manager                                     v8.6
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64                            1.14.8
k8s.gcr.io/k8s-dns-sidecar-amd64                                  1.14.8
k8s.gcr.io/k8s-dns-kube-dns-amd64                                 1.14.8
k8s.gcr.io/pause-amd64                                            3.1
k8s.gcr.io/kubernetes-dashboard-amd64                             v1.8.1
k8s.gcr.io/kube-addon-manager                                     v6.5
gcr.io/k8s-minikube/storage-provisioner                           v1.8.0
gcr.io/k8s-minikube/storage-provisioner                           v1.8.1
k8s.gcr.io/defaultbackend                                         1.4
k8s.gcr.io/k8s-dns-sidecar-amd64                                  1.14.4
k8s.gcr.io/k8s-dns-kube-dns-amd64                                 1.14.4
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64                            1.14.4
k8s.gcr.io/etcd-amd64                                             3.0.17
k8s.gcr.io/pause-amd64                                            3.0
If you don’t see the name:1.0-SNAPSHOT and ping:1.0-SNAPSHOT images, check the Maven build log for any potential errors. In addition, if you are using Minikube, make sure your Docker CLI is configured to use Minikube’s Docker daemon instead of your host’s, as described in the previous section.
Now that your Docker images are built, deploy them using a Kubernetes resource definition.
A Kubernetes resource definition is a YAML file that contains a description of all your deployments, services, or any other resources that you want to deploy. All resources can also be deleted from the cluster by using the same YAML file that you used to deploy them.
To deploy the name and ping applications, first create the kubernetes.yaml file in the start directory:
link:finish/kubernetes.yaml[role=include]
This file defines four Kubernetes resources: two deployments and two services. A Kubernetes deployment is a resource that controls the creation and management of pods. A service exposes your deployment so that you can make requests to your containers. Three key items to look at when creating the deployments are the label, image, and containerPort fields. The label is a way for a Kubernetes service to reference specific deployments. The image is the name and tag of the Docker image that you want to use for this container. Finally, the containerPort is the port that your container exposes so that your application can be accessed. For the services, the key point to understand is that they expose your deployments. The binding between deployments and services is specified by labels, in this case the app label. You will also notice that the services have a type of NodePort. This means that you can access these services from outside of your cluster through a specific port. In this case, the ports are 31000 and 32000, but a random port is assigned if the nodePort field is not specified.
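If you’re reading this guide from the repository source, the file contents are pulled in by the include above. As a rough sketch, the resources for the name microservice take the following shape. The container name and the containerPort value of 9080 (the Open Liberty default HTTP port) are assumptions for illustration; the deployment name, service name, image tag, and nodePort come from this guide, and the ping resources are analogous with nodePort 32000:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: name-deployment
  labels:
    app: name
spec:
  selector:
    matchLabels:
      app: name
  template:
    metadata:
      labels:
        app: name
    spec:
      containers:
      - name: name-container        # hypothetical container name
        image: name:1.0-SNAPSHOT    # Docker image built by the Maven build
        ports:
        - containerPort: 9080       # assumed Open Liberty default HTTP port
---
apiVersion: v1
kind: Service
metadata:
  name: name-service
spec:
  type: NodePort                    # exposes the service on a port of each node
  selector:
    app: name                       # binds the service to pods with this label
  ports:
  - port: 9080
    targetPort: 9080
    nodePort: 31000                 # fixed external port used in this guide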
Run the following command to deploy the resources as defined in kubernetes.yaml:
kubectl apply -f kubernetes.yaml
When the apps are deployed, run the following command to check the status of your pods:
kubectl get pods
You’ll see an output similar to the following if all the pods are healthy and running:
NAME READY STATUS RESTARTS AGE
name-deployment-6bd97d9bf6-4ccds 1/1 Running 0 15s
ping-deployment-645767664f-nbtd9 1/1 Running 0 15s
You can also inspect individual pods in more detail by running the following command:
kubectl describe pods
You can also issue the kubectl get and kubectl describe commands on other Kubernetes resources, so feel free to inspect all other resources.
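For example, the following commands list your services and show details for your deployments:

kubectl get services
kubectl describe deployments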
Next you will make requests to your services.
WINDOWS | MAC
The default hostname for Docker Desktop is localhost.
LINUX
The default hostname for Minikube is 192.168.99.100. Otherwise, you can find it by running the minikube ip command.
Then curl or visit the following URLs to access your microservices, substituting the appropriate hostname:
- http://[hostname]:31000/api/name
- http://[hostname]:32000/api/ping/name-service
The first URL returns a brief greeting followed by the name of the pod that the name microservice runs in. The second URL returns pong if it received a good response from the name-service Kubernetes Service. Visiting http://[hostname]:32000/api/ping/[kube-service] in general returns either a good or a bad response depending on whether kube-service is a valid Kubernetes Service that can be accessed.
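For example, if your hostname is localhost:

curl http://localhost:31000/api/name
curl http://localhost:32000/api/ping/name-service

The first request returns a greeting that includes the name of the pod that served it, and the second request returns pong.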
To use load balancing, you need to scale your deployments. When you scale a deployment, you replicate its pods, creating more running instances of your application. Scaling is one of the primary advantages of Kubernetes: replicating your application allows it to accommodate more traffic, and you can then descale your deployments to free up resources when the traffic decreases.
As an example, scale the name deployment to three pods by running the following command:
kubectl scale deployment/name-deployment --replicas=3
Use the following command to verify that two new pods have been created:
kubectl get pods
NAME READY STATUS RESTARTS AGE
name-deployment-6bd97d9bf6-4ccds 1/1 Running 0 1m
name-deployment-6bd97d9bf6-jf9rs 1/1 Running 0 25s
name-deployment-6bd97d9bf6-x4zth 1/1 Running 0 25s
ping-deployment-645767664f-nbtd9 1/1 Running 0 1m
Wait for your two new pods to be in the ready state, then curl or visit the http://[hostname]:31000/api/name URL. You’ll notice that the service responds with a different name when you call it multiple times because three pods that all serve the name application are now running. Similarly, to descale your deployments, you can use the same scale command with fewer replicas.
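For example, the following command descales the name deployment back to a single pod:

kubectl scale deployment/name-deployment --replicas=1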
When you’re building your application, you might find that you want to quickly test a change. To do that, you can rebuild your Docker images, then delete and re-create your Kubernetes resources. Note that only one name pod exists after you redeploy because you’re deleting all of the existing pods.
mvn package
kubectl delete -f kubernetes.yaml
kubectl apply -f kubernetes.yaml
This is not how you would want to update your applications when running in production, but in a development environment this is fine. If you want to deploy an updated image to a production cluster, you can update the container in your deployment with a new image. Kubernetes then automates both the creation of a new container and the decommissioning of the old one when the new container is ready.
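For example, such a rolling update could be triggered with the kubectl set image command. The container name name-container and the 2.0-SNAPSHOT tag here are hypothetical; substitute the container name and image tag that your deployment actually uses:

kubectl set image deployment/name-deployment name-container=name:2.0-SNAPSHOT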
A few tests are included for you to test the basic functionality of the microservices. If a test failure occurs, then you might have introduced a bug into the code. To run the tests, wait for all pods to be in the ready state before proceeding further. The default properties defined in the pom.xml file are:
Property | Description
---|---
cluster.ip | IP or hostname for your cluster, 192.168.99.100 by default
name.kube.service | Name of the Kubernetes Service wrapping the name pods, name-service by default
name.node.port | The NodePort of the Kubernetes Service wrapping the name pods, 31000 by default
ping.node.port | The NodePort of the Kubernetes Service wrapping the ping pods, 32000 by default
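Each of these properties can be overridden on the command line with -D. For example, the following invocation, which simply restates the defaults for illustration, points the tests at a cluster on localhost with the default name NodePort:

mvn verify -Ddockerfile.skip=true -Dcluster.ip=localhost -Dname.node.port=31000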
Navigate back to the start directory.
WINDOWS | MAC
Run the integration tests against a cluster running with a hostname of localhost:
mvn verify -Ddockerfile.skip=true -Dcluster.ip=localhost
LINUX
Run the integration tests against a cluster running at the default Minikube IP address:
mvn verify -Ddockerfile.skip=true
You can also run the integration tests while explicitly specifying the IP address of 192.168.99.100:
mvn verify -Ddockerfile.skip=true -Dcluster.ip=192.168.99.100
The dockerfile.skip parameter is set to true to skip building a new Docker image.
If the tests pass, you’ll see an output similar to the following for each service respectively:
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.name.NameEndpointTest
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 0.673 sec - in it.io.openliberty.guides.name.NameEndpointTest
Results :
Tests run: 1, Failures: 0, Errors: 0, Skipped: 0
-------------------------------------------------------
T E S T S
-------------------------------------------------------
Running it.io.openliberty.guides.ping.PingEndpointTest
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0, Time elapsed: 2.222 sec - in it.io.openliberty.guides.ping.PingEndpointTest
Results :
Tests run: 2, Failures: 0, Errors: 0, Skipped: 0
When you no longer need your deployed microservices, you can delete all Kubernetes resources by running the kubectl delete command:
kubectl delete -f kubernetes.yaml
You have just deployed two microservices to Kubernetes. You then scaled a microservice and ran integration tests against microservices that are running in a Kubernetes cluster.