Scenario-102: 1-Node K8S Deployment II
- Objective: Single-node deployment for a stateful service
- Requirements:
1. Use YAML to start one MySQL server service with 1 instance.
2. Use YAML to start one MySQL client service with 2 instances.
3. Delete the db server instance and make sure a new one is created automatically.
4. When the db server is recreated, make sure there is no data loss.
See kubernetes.yaml; a rough sketch of what it might contain follows.
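A minimal sketch of the MySQL server part of kubernetes.yaml. The service name my-dbserver-service matches the commands used later in this scenario; the Deployment name, image tag, root password handling and claim name are assumptions, so check the actual file in the repo:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-dbserver
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-dbserver
  template:
    metadata:
      labels:
        app: my-dbserver
    spec:
      containers:
      - name: mysql
        image: mysql:5.7              # assumption: any mysql tag from Docker Hub works
        env:
        - name: MYSQL_ROOT_PASSWORD
          value: "changeme"           # assumption: better kept in a Secret
        ports:
        - containerPort: 3306
        volumeMounts:
        - name: mysql-data
          mountPath: /var/lib/mysql   # MySQL data directory, backed by the claim below
      volumes:
      - name: mysql-data
        persistentVolumeClaim:
          claimName: my-dbserver-pvc  # assumption: claim defined later in the same file
---
apiVersion: v1
kind: Service
metadata:
  name: my-dbserver-service
spec:
  selector:
    app: my-dbserver
  ports:
  - port: 3306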
- Use persistent volumes in 3 steps
Creating and using a persistent volume is a three-step process (a sketch follows the reference link below):
1. Provision: An administrator provisions networked storage in the
cluster, such as an AWS ElasticBlockStore volume. This is called a
PersistentVolume.
2. Request storage: A user requests storage for pods by using claims.
Similar to how pods can request specific levels of resources (CPU and
memory), claims can request specific sizes and access modes (e.g. a volume
can be mounted once read/write or many times read-only). This is called a
PersistentVolumeClaim.
3. Use claim: Claims are mounted as volumes and used in pods for storage.
https://blog.couchbase.com/stateful-containers-kubernetes-amazon-ebs/
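A minimal sketch of steps 1 and 2 on minikube, where a hostPath volume stands in for networked storage such as EBS. The names mysql-pv and my-dbserver-pvc are assumptions and should match whatever the repo's kubernetes.yaml actually uses:
# Step 1 (provision): the PersistentVolume, normally created by an administrator.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mysql-pv
spec:
  storageClassName: manual      # claims asking for this class are matched against this PV
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: /data/mysql-pv        # on AWS this would be an awsElasticBlockStore volume instead
---
# Step 2 (request storage): the PersistentVolumeClaim used by the pod.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-dbserver-pvc
spec:
  storageClassName: manual      # must match the PV's class for binding
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
# Step 3 (use claim): the Deployment sketched under the requirements mounts this
# claim at /var/lib/mysql via spec.volumes.persistentVolumeClaim.claimName.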
- Highlights
Q: How does the persistent volume provisioning process work? (See the three steps above.)
Q: How does a PersistentVolumeClaim know which PersistentVolume to use?
(Kubernetes binds a claim to a PV whose storageClassName, capacity and
access modes satisfy the request; a label selector on the claim can narrow
the match further. See the sketch above.)
To set up the MySQL service, we use the mysql image from Docker Hub.
- Start vm
# start a VM to host our deployment
minikube start
# Create k8s volume, deployment and service
kubectl create -f ./kubernetes.yaml
See kubernetes.yaml
- Check k8s web UI Dashboard
minikube dashboard
- List k8s resources
# list deployments
kubectl get deployment
# list services
kubectl get services
# list pods
kubectl get pods
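To confirm the claim bound to its volume, the storage resources can be listed the same way:
# list persistent volumes and claims; the claim should show STATUS Bound
kubectl get pv
kubectl get pvc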
- Run functional test
Open a MySQL client to access the MySQL server.
Use phpMyAdmin to create a database and a table (TODO; a command-line alternative is sketched below).
phpMyAdmin URL:
minikube service my-dbclient-service --url
db server URL (e.g. captured in a shell variable):
dbserver_url="$(minikube service my-dbserver-service --url)"
TODO: remove http, add screenshot
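If phpMyAdmin is not handy, roughly the same check can be done from a throwaway MySQL client pod. The service name my-dbserver-service comes from above; the root password (changeme, from the sketch under the requirements) and the demo database/table names are assumptions:
# create a database and table, insert a row, and read it back
kubectl run tmp-mysql-client -it --rm --restart=Never --image=mysql:5.7 -- \
  mysql -h my-dbserver-service -uroot -pchangeme \
  -e "CREATE DATABASE IF NOT EXISTS demo; CREATE TABLE IF NOT EXISTS demo.todo (id INT PRIMARY KEY, task VARCHAR(100)); INSERT INTO demo.todo VALUES (1, 'hello'); SELECT * FROM demo.todo;"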
- MySQL server resilience test
- If one instance is down, another will be started automatically.
TODO
From the Web UI, delete the MySQL server Pod.
We should see the Deployment (via its ReplicaSet) start a new one.
Confirm that the database and table created in the previous step still exist.
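The same resilience test can be run from the command line. The label app=my-dbserver is an assumption; use whatever labels kubernetes.yaml defines:
# delete the running mysql server pod
kubectl delete pod -l app=my-dbserver
# watch the controller schedule a replacement
kubectl get pods -w
# once the new pod is Running, re-run the client check above and
# confirm the demo database and table are still there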
- Delete k8s resources
kubectl delete -f ./kubernetes.yaml
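# optionally confirm nothing was left behind before destroying the VM
kubectl get deployments,services,pvc,pv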
- Destroy env
minikube delete