Fix #4: Adds files needed for deployment on Kubernetes #494

Closed
wants to merge 1 commit into from
62 changes: 62 additions & 0 deletions Dockerfile-deploy
@@ -0,0 +1,62 @@
# This file produces the telescope image needed for deployment (e.g. on Kubernetes)
#
# CLI command to create an image using this file:
# Assumption: `telescope-deploy` is the image name. If you change this name, also change the image name in `telescope.yaml`
# $ docker build -f Dockerfile-deploy -t telescope-deploy . ----> notice the dot at the end!
# to put the image on Docker Hub:
# $ docker login ----> you need an account; use your credentials
# $ docker tag telescope-deploy <DockerId>/telescope-deploy:<version> ----> ex. $ docker tag telescope-deploy username/telescope-deploy
# needs your Docker Hub ID. We may omit the version (we are using `latest` in `telescope.yaml`, which is the default)
# otherwise keep the version consistent between the image tag and the YAML file.
# $ docker push <DockerId>/telescope-deploy:<version> ----> omit the version if you omitted it in the image tag


# Dockerfile-deploy

# First stage of building image
###############################

# Use the `node` long-term support (LTS) image as the parent to build this image
FROM node:lts as builder

# Change working directory on the image
WORKDIR "/telescope"

# Copy package.json and .npmrc from the local machine to the image
# Copy these before bundling the app source to make sure package-lock.json never gets involved
COPY package.json ./
COPY .npmrc ./

# Install all Node.js modules on the image
RUN npm install --no-package-lock --production


# Second stage: build the final image
# only keep the run-time requirements from the previous stage
##########################################

# Use the `node` LTS slim image as the parent to build the final image
# The slim version has only the minimal packages needed to run the application
# It doesn't include version control, compilers, etc., which makes it significantly smaller and more secure
FROM node:lts-slim

# Change working directory on the image
WORKDIR "/telescope"

# Copy dependencies from last stage
COPY --from=builder /telescope/node_modules /telescope/node_modules

# Bundle app source
COPY . .

# Set the environment to production
ENV NODE_ENV production

# Change the runtime user so it is no longer root
# The node image already has a non-root runtime user (`node`) set up
USER node

# The server port is exposed (managed) on Kubernetes

# Run the app with this command when a container starts
CMD ["npm", "start"]
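Putting the header comments together, a minimal build-and-push sequence might look like this ("exampleuser" is a placeholder Docker Hub ID; use your own):

  # "exampleuser" stands in for your Docker Hub ID
  $ docker build -f Dockerfile-deploy -t telescope-deploy .
  $ docker login
  $ docker tag telescope-deploy exampleuser/telescope-deploy:latest
  $ docker push exampleuser/telescope-deploy:latest
  # optional: confirm the final slim image is noticeably smaller than the builder base
  $ docker images | grep -E "telescope-deploy|node"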
121 changes: 121 additions & 0 deletions telescope.yaml
@@ -0,0 +1,121 @@
# This file creates `replicas` number of Pods, all served by one Kubernetes Service resource
# each Pod has 2 containers (telescope and redis), as specified in the Deployment resource below

# Replace <DockerID> with the Docker Hub namespace (ID/username) throughout this file
# To deploy the project on Kubernetes, the telescope image built from `Dockerfile-deploy` must be pushed to Docker Hub (instructions in `Dockerfile-deploy`).
# We need `kubectl` installed, and for local simulation of the deployment, `minikube` installed and running.
# deploy the app using the following command
# $ kubectl create -f telescope.yaml
# to watch the Pods as they get created
# $ kubectl get pods --watch
# to see the stdout of a container (telescope or redis) in one specific instance (Pod)
# $ kubctl logs <pod name as shown by above command> -c [telescope | redis] ---> ex. $ kubectl logs telescope-7b567445fc-pn7cb -c telescope
Contributor Author:
spelling error, change to kubectl

# finally, access the app running in the Kubernetes cluster using this command
# $ minikube service telescope
# we can scale the app using the following command (example: from 3 replicas to 5)
# $ kubectl scale --replicas=5 deployment/telescope
# re-creating and replacing resources defined in a YAML file (ONLY IN MINIKUBE):
# $ kubectl replace --force -f telescope.yaml -----> will cause a service outage; do not use in a real Kubernetes deployment

# Service resource
## telescope needs a Service resource to be accessible from outside the cluster
## the Service resource also guarantees the same IP address after each restart

# version of the Kubernetes API - using the stable version
apiVersion: v1
# resource type and name
kind: Service
metadata:
  name: telescope
# specification of the resource
spec:
  selector:
    name: telescope
Contributor:
I had to use app: telescope here, otherwise an endpoint wasn't being assigned to the service.

Contributor Author:
I didn't need to have a named endpoint. I tried to avoid any Kubernetes vocabulary that I thought was unnecessary. Why do you think we need a named endpoint?

Contributor (@c3ho, Jan 12, 2020):
I was never able to get the deployment and the service up and running through the YAML at the same time; the deployment I verified works, the service not so much. Whenever I tried using the service with minikube service telescope, I wouldn't be able to access the website.

Until the change, these were my steps to get it running:

  1. kubectl create -f telescope.yaml
  2. kubectl delete services telescope
  3. kubectl expose deployment telescope --type=NodePort
    or kubectl expose deployment telescope --type=LoadBalancer
  4. minikube service telescope

Contributor Author:
I was not able to access localhost even though the backend was working. So if you can access localhost with this change, I should change it as well.
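One way to check whether the selector above is actually matching the Pods is to look at the Service's endpoints; if the selector doesn't match the Pod labels, the endpoint list stays empty. A quick sketch (the service name telescope comes from this file):

  # list the endpoints the Service has picked up (empty means the selector matched no Pods)
  $ kubectl get endpoints telescope
  # compare the Service's selector with the labels on the running Pods
  $ kubectl describe service telescope
  $ kubectl get pods --show-labels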

  ports:
    # the port that the cluster listens on (exposed)
    - port: 3000
      # the port in each Pod that the Service maps traffic to
      targetPort: 3000
      # the exposed port is published as nodePort. If we don't define it, it would be a random number greater than or equal to 30000
      # this is the port number we will see when we access the Service by the cluster's IP address
      nodePort: 30000
  # this resource needs to communicate with the outside of the cluster
  # and share traffic between instances of the app
  type: LoadBalancer
Contributor:
Although this "works" for minikube, we should change this to NodePort, since we don't actually have a load balancer.

Contributor Author:
This is a LoadBalancer and I have it here on purpose. I actually commented about the reason right at the line above: share traffic between instances of the app. We have 3 replicas of the app (line 58 of this file) running at the same time.

Contributor:
I think there's a difference between having replicas and actually load balancing. If you want to use the load balancing service, you need to use minikube tunnel as well (I think).

https://minikube.sigs.k8s.io/docs/tasks/loadbalancer/
https://blog.codonomics.com/2019/02/loadbalancer-support-with-minikube-for-k8s.html
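A rough sketch of what that looks like with minikube (the EXTERNAL-IP only appears while the tunnel is running):

  # terminal 1: keep the tunnel running so LoadBalancer services can get an external IP
  $ minikube tunnel
  # terminal 2: the EXTERNAL-IP column of the service should now be populated
  $ kubectl get service telescope
  # even without the tunnel, the node port set above should still respond
  $ curl http://$(minikube ip):30000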

---
# Deployment resource

# version of the Kubernetes API - stable version for the Deployment resource
apiVersion: apps/v1
kind: Deployment
metadata:
  name: telescope
  labels:
    # related to the Service
    app: telescope
spec:
  # at least 3 instances (Pods) for a resilient deployment, proper traffic management, and smooth app updates
  replicas: 3
  selector:
    matchLabels:
      app: telescope
  # label and specification of the Pod
  template:
    metadata:
      labels:
        app: telescope
    spec:
      # shared file system between the containers (redis and telescope)
      volumes:
        - name: storage
          emptyDir: {}
      # containers in this Pod (we have two)
      containers:
        # first container
        - name: telescope
          # change <DockerID> to telescope's Docker Hub username
          image: <DockerID>/telescope-deploy:latest
          # always pull, because we are using the latest image version
          imagePullPolicy: Always
          ports:
            - containerPort: 3000
          livenessProbe:
            httpGet:
              # this path is implemented in our app; we could also check the home page ('/') for a 200 response
              path: /health
              port: 3000
            initialDelaySeconds: 30
            timeoutSeconds: 5
          readinessProbe:
            httpGet:
              path: /health
              port: 3000
            initialDelaySeconds: 5
            timeoutSeconds: 1

        # second container
        - name: redis
          image: redis:latest
          # always pull, because we are using the latest image version
          imagePullPolicy: Always
          ports:
            - containerPort: 6379
          # mount storage to the shared file system
          volumeMounts:
            - name: storage
              mountPath: /from-redis
          # container health checks so the restart policy can be applied to the Pod
          livenessProbe:
            exec:
              command:
                - redis-cli
                - ping
            initialDelaySeconds: 30
            timeoutSeconds: 5
          readinessProbe:
            exec:
              command:
                - redis-cli
                - ping
            initialDelaySeconds: 5
            timeoutSeconds: 1
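For reference, the workflow from the header comments, assuming minikube is running and the image has been pushed to Docker Hub, might be exercised roughly like this (<pod-name> is whatever name `kubectl get pods` reports):

  $ kubectl create -f telescope.yaml
  $ kubectl get pods --watch
  # once the Pods are Running, check probe events and container logs
  $ kubectl describe pod <pod-name>
  $ kubectl logs <pod-name> -c telescope
  $ kubectl logs <pod-name> -c redis
  # open the app in the browser (minikube prints/opens the URL)
  $ minikube service telescope
  # scale from 3 replicas to 5
  $ kubectl scale --replicas=5 deployment/telescope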