Fix #4: Adds files needed for deployment on Kubernetes #494
Conversation
@raygervais, please check this PR out.
documenting for future use:
Sorry for the delay, holidays + work is never a fun combo. Looking into the docker-compose file and the Kubernetes declaration, this looks like a great starting point that telescope can iterate on. Regarding your variable substitution, a hacky way I had done in the past was to leverage sed/awk to substitute values into place. For your localhost issue, I'm trying to explore and help debug but am running into the following issues while trying to start the container you uploaded.
These come from the following commands I'm executing locally:
Am I missing anything? I see the CERT issue and am curious whether it's an upstream fix that your branch is missing, or a file missing on my own W10 system.
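The sed/awk variable substitution mentioned above could look roughly like this (a sketch only; the template file name and the IMAGE_TAG variable are hypothetical, not from this PR):

```bash
# Hypothetical template and variable: substitute values into a copy of the
# manifest before applying it, rather than editing telescope.yaml by hand.
export IMAGE_TAG=latest
sed "s/{{IMAGE_TAG}}/${IMAGE_TAG}/g" telescope.template.yaml > telescope.yaml
kubectl apply -f telescope.yaml
```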
@raygervais this might be pieces missing from our docker setup that only landed recently in doc form at https://github.com/Seneca-CDOT/telescope/blob/master/docs/Local_Login.md. cc @Grommers00, @agarcia-caicedo, @ragnarokatz, who all worked on parts of this and might have more details.
Also, our errors when things are missing (certs, redis not running, etc.) could be way, way more helpful. I know what's going on, looking at those stacks, but there's no way someone not living and breathing all this code will. We should be logging useful messages: "Looks like you don't have certs, see this doc for details...", "Looks like Redis isn't running, see this doc for info ..."
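As a rough illustration of the kind of preflight check being suggested (the cert paths and Redis port here are assumptions, not taken from the Telescope setup):

```bash
# Hypothetical preflight script: print friendly hints instead of raw stack traces.
if [ ! -f ./certs/key.pem ] || [ ! -f ./certs/crt.pem ]; then
  echo "Looks like you don't have certs, see docs/Local_Login.md for details..."
fi
if ! nc -z localhost 6379; then
  echo "Looks like Redis isn't running, see the environment setup docs for info..."
fi
```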
Thanks for the clarification @humphd, will look into doing that so I can run it locally. Likewise, error messages can always be improved :). I discovered this during a quick Google search, perhaps a GitHub ticket to raise and address? Though we use node, the concept should still be valid compared to NGINX. https://www.humankode.com/ssl/how-to-set-up-free-ssl-certificates-from-lets-encrypt-using-docker-and-nginx
RE: errors, it looks like more recent changes on
Filed #519 on certs for staging. Thanks for the link, @raygervais.
@raygervais It looks like I had a combo too 😀
Essentially we only need to have the
to use the image that I had (or follow the instructions on
this should show the outputs on the command line, the same as the above screenshots. To get the localhost (which is not working) we need to run:
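A rough sketch of the flow being described (the Docker Hub repository name is a placeholder, and the exact commands used in this PR may differ):

```bash
# Placeholder image name; substitute the repository that was actually pushed.
docker build -t <dockerhub-user>/telescope -f Dockerfile-deploy .
docker push <dockerhub-user>/telescope
kubectl create -f telescope.yaml     # deploy to the cluster
kubectl get pods --watch             # same output as in the screenshots above
minikube service telescope           # attempt to open the app on localhost
```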
@raygervais @humphd by using the image I pushed on Docker Hub (
# to watch the pods as they gets created
# $ kubectl get pods --watch
# to see stdout of containers (telescope or redis) in one specific instance (pod)
# $ kubctl logs <pod name as shown by above command> -c [telescope | redis] ---> ex. $ kubectl logs telescope-7b567445fc-pn7cb -c telescope
spelling error, change to kubectl
Hi @sukhbeersingh, @kartik-budhiraja and @manekenpix, I have some questions for you related to Telescope and Kubernetes deployment:
@naiuhz
I filed #526 to help prepare for this as well.
Hey @naiuhz, I think you need another step. You created a deployment, but don't you need to create a service for it as a following step? Then finally run the service you created.
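A sketch of the sequence being suggested, reusing the deployment/service name used elsewhere in this thread:

```bash
kubectl create -f telescope.yaml                      # creates the deployment
kubectl expose deployment telescope --type=NodePort   # creates a service for it
minikube service telescope                            # runs/opens the service
```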
Can we use GitHub's new package registry for our Docker images? https://help.github.com/en/github/managing-packages-with-github-packages/configuring-docker-for-use-with-github-packages. I'm wondering if this would be better than Docker Hub, since it's per-account.
Filed #530 on getting the domains we need set up.
@humphd We can actually build the images locally for minikube using the Dockerfile we have, if we choose to go that route. Changing some settings in the yaml file will cause minikube to look for a local image instead of going straight to Docker Hub.
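For context, one common way to do this (an assumption about the workflow, not necessarily what this PR settles on) is to build against minikube's Docker daemon so Kubernetes never needs to pull from a registry:

```bash
# Build the image inside minikube's own Docker daemon; no registry pull required.
eval $(minikube docker-env)
docker build -t telescope:local -f Dockerfile-deploy .
# telescope.yaml would then reference image: telescope:local
# with imagePullPolicy: Never (or IfNotPresent) on the container.
```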
For continuous deployment, it's more common to have Kubernetes pull pre-built images from a registry. Otherwise we have to build tooling on the deployment machine to get the code out of GitHub, build it, build the Docker image, etc.
Understood
I think it is better if we use it.
@Reza-Rajabi were you able to get this running on minikube?
# specifications of resource
spec:
  selector:
    name: telescope
I had to use app: telescope here, otherwise an endpoint wasn't being assigned to the service.
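For context, a Service only gets endpoints when its selector matches the labels on the Deployment's pod template; a sketch of the suggested change (the label key/value is taken from this comment, the rest of the fragment is assumed):

```yaml
# Service selector must match the labels on the pods created by the Deployment.
spec:
  selector:
    app: telescope   # matches labels.app on the pod template, so endpoints get created
```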
I didn't need to have a named endpoint. I tried to avoid any Kubernetes vocabulary that I thought was unnecessary. Why do you think we need to have a named endpoint?
I was never able to get the deployment and the service up and running through the yaml at the same time; the deployment I verified works, the service not so much. Whenever I tried using the service with minikube service telescope, I wouldn't be able to access the website.
Until the change, these were my steps to get it running:
kubectl create -f telescope.yaml
kubectl delete services telescope
kubectl expose deployment telescope --type=NodePort
or kubectl expose deployment telescope --type=LoadBalancer
minikube service telescope
I was not able to get localhost working although the backend was working. So if you could get localhost with this change, I should change it as well.
nodePort: 30000
# This resource needs to communicate with outside of the cluster
# and share traffic between instances of app
type: LoadBalancer
Although this "works" for minikube, we should change this to NodePort, since we don't actually have a load balancer
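A sketch of what the NodePort variant would look like (the service port value is an assumption; nodePort: 30000 is reused from the fragment quoted above):

```yaml
# NodePort exposes the service on a static port of every node; no cloud
# load balancer is required, and kube-proxy still spreads traffic across replicas.
spec:
  type: NodePort
  ports:
    - port: 3000        # assumed service/container port
      nodePort: 30000   # as in the quoted fragment
```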
This is a LoadBalancer and I have it here on purpose. I actually commented about the reason right at the line above: share traffic between instances of app. We have 3 replicas of the app (line 58 of this file) running at the same time.
I think there's a difference between having replicas and actually load balancing. If you want to use the load balancing service, you need to use minikube tunnel as well (I think).
https://minikube.sigs.k8s.io/docs/tasks/loadbalancer/
https://blog.codonomics.com/2019/02/loadbalancer-support-with-minikube-for-k8s.html
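For reference, the tunnel workflow described in those links looks roughly like this (a sketch; the service name is assumed to be telescope):

```bash
# In one terminal: create a route so LoadBalancer services get an external IP.
minikube tunnel
# In another terminal: the EXTERNAL-IP column should now be populated.
kubectl get service telescope
minikube service telescope   # or open the external IP directly
```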
A question I had, and I wonder if I should file it: do we want to split our node app up into two parts? Specifically:
The two of these are connected via Redis, but don't need to live in the same process. I wonder if we'd be smart to divide them so that the web server can scale separately from the feed parser, which won't really need to grow/shrink due to client traffic.
That's a very smart catch.
I hope I didn't get the question wrong.
The way that the "worker" portion behaves, we won't ever want more than one instance. It co-ordinates work for parallel processor functions, each of which runs in its own process. So with a single "worker" instance, we can still parallelize via separate processes. The web server could need to scale horizontally, and it doesn't have any issues that will make it not work if there is more than one. I think what I'm hearing here is that I should probably file an issue to split things up. Agree? We can figure out details there vs derailing this PR.
I understand that we only need one worker. What I meant by more than one instance for the worker is the fact that the health check of the worker may fail at any point, so more running instances help guarantee the stability of the deployment. As I mentioned, the first scenario seems to have reliability concerns.
Here's a problem: if both are connected via Redis, don't we still need all of them inside one pod for the shared volume?
@c3ho I don't think they need to be inside one pod; as long as different pods can see each other and communicate, it should be fine.
@c3ho as @manekenpix says, as long as they share the same network, Redis will be the common "state" between all of the various pieces in our app. @Reza-Rajabi thanks for your clarifications above, I wasn't understanding your concern, but I get it now. I'm going to file another issue, and we can all figure out how best to divide things up, and how to do it safely. Sorry to invade this PR. It just made me think about this issue as I read the code.
I moved the discussion to #566.
As I said, it is brilliant. The separation improves the design even though it doesn't make the deployment lighter. And for sure, if high traffic occurs (i.e. scenario 3), this approach is definitely the best.
@Reza-Rajabi Thank you for this contribution to the project. Telescope has changed significantly in the last few months and we're going to approach container deployment in a slightly different way. This PR will be the base for the new approach, so we really appreciate your work on this issue. We hope we'll be able to count on you, if not as an active contributor, at least as a reviewer.
Fixes #4
Type of Change: Enhancement
Description
This PR will hopefully close the Kubernetes-related issue, but there are some goals that I couldn't achieve; we may reach them with your reviews and help in this PR, or we may make new issues about them. We can also keep #4 open. This pull request is not quite the perfect picture that I had expected for #4.
What is missing in this PR:
This will add two files to the repo.
Dockerfile-deploy: This helps to make a more secure image, about half the size of our current development image, which we can eventually deploy, for example, on Kubernetes.
telescope.yaml: This file builds a telescope app using the image we have made from the previous file and a Redis image. It orchestrates 3 instances of the app beside a Service resource of Kubernetes in a cluster, so that we can have a resilient, manageable and seamlessly updatable deployment. Moreover, this file specifies the health check instruction for Kubernetes to verify the healthiness of each instance of the app and restart the instance if required.
Documents:
About testing (a command sketch follows this list):
creating the image and pushing it to Docker Hub
deploying on the cluster and watching that the pods are running
checking that one instance of the app is running and that it is doing the health checks
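A rough sketch of those test steps as commands (the Docker Hub repository and pod names are hypothetical placeholders):

```bash
# 1. Create the image and push it to Docker Hub (placeholder repository name).
docker build -t <dockerhub-user>/telescope -f Dockerfile-deploy .
docker push <dockerhub-user>/telescope
# 2. Deploy on the cluster and watch the pods come up.
kubectl create -f telescope.yaml
kubectl get pods --watch
# 3. Verify an instance is running and that its health checks (liveness probe) pass.
kubectl describe pod <telescope-pod-name>
kubectl logs <telescope-pod-name> -c telescope
```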
Checklist