feat(k8s): make hot reloading work for remote clusters #422
Conversation
(force-pushed from e048b82 to 3997224)
Oh, and I should add: I started this because I was working on providing hot reloading for user-defined Helm charts. The changes turned out to be necessary for that, but I split this part out of the PR because it's useful by itself.
On moving the hot reload operation into a task: the original thought was that since a hot reload operation doesn't depend on any other tasks, and since it should be executed as soon as possible (regardless of whether capacity was available for other tasks), it was handled outside the task machinery by a dedicated scheduler. However, in hindsight, this approach (i.e. wrapping hot-reloading in a task) is simpler / "less special", and fits better into the general infrastructure.
(force-pushed from 3997224 to 117baa3)
@drubin Updated with changes related to your comments. Please take a look.
(force-pushed from 117baa3 to e26f946)
I tested this with the gatsby example, and it just stays stuck in hot reloading:
garden-local dev --hot-reload=website -l silly
⠼ tasks → Processing task hot-reload.website.462f95f0-02e9-11e9-a702-f5de2e984789
⠹ website → Hot reloading...
Waiting for services website
Waiting for services website
Waiting for services website
Waiting for services website
Waiting for services website
Waiting for services website
Waiting for services website
Waiting for services website
Waiting for services website
Waiting for services website
Waiting for services website
Can't really figure out what it's doing because it just hangs. The same applies to the hot-reload example.
(force-pushed from e26f946 to 4b2fe31)
I made a delightfully dumb mistake there 🤦‍♂️ Updated the commit and tested again; works for me. Might be worth a double-check when you have a moment.
(force-pushed from 4b2fe31 to d5eb203)
The performance is about 3x worse than the nodeport version currently on master, which is already noticeable. 120ms is on the border of being too slow; 400ms is simply not OK, and this will be much worse for remote clusters. Current master (I only tested it once here, but it's always been roughly 100-150ms, and I have tested this a lot):
This branch
I think the finding of an open port and the port-forwarding can't happen on every change; we need to do them when we deploy the service. @edvald I think if we move the port-forwarding to a single tunnel and stream the changes down, we could get this down to acceptable limits? What do you think?
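For illustration only, reusing a single tunnel per service could look roughly like the TypeScript sketch below: keep one `kubectl port-forward` process alive per service and only spawn a new one if the previous process has exited. Names and structure here are hypothetical, not Garden's actual code.

```ts
// Minimal sketch (hypothetical names): keep one kubectl port-forward process per
// service and reuse it across hot reloads, instead of opening a new tunnel (and
// picking a new free port) on every change.

import { spawn, ChildProcess } from "child_process"

interface Tunnel {
  proc: ChildProcess
  localPort: number
}

const tunnels = new Map<string, Tunnel>()

function getTunnel(namespace: string, service: string, localPort: number, remotePort: number): Tunnel {
  const existing = tunnels.get(service)
  if (existing && existing.proc.exitCode === null) {
    // The previous tunnel process is still running, so reuse it.
    return existing
  }
  // kubectl keeps the tunnel open until the process is killed.
  const proc = spawn("kubectl", [
    "port-forward",
    `--namespace=${namespace}`,
    `service/${service}`,
    `${localPort}:${remotePort}`,
  ])
  const tunnel = { proc, localPort }
  tunnels.set(service, tunnel)
  return tunnel
}
```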
Managing a single tunnel might be tricky, but worth exploring. I can take a look at that, but would prefer to defer it to another PR. This PR also addresses potential bugs when working locally, and I would be okay with an increased response time while we dig into it.
I just took a stab at persisting tunnels, and noticed it'll be much cleaner to implement once I'm done with my Helm branch, because it changes the …
Revisiting this. @drubin, would you be okay with me splitting the re-used tunnels out into a separate issue? Otherwise this needs to wait for another week or so, until my WIP Helm changes are merged.
(force-pushed from d5eb203 to 750ded5)
(force-pushed from 750ded5 to 7ca3dc3)
@drubin, regarding your requested changes, it appears @edvald has already addressed them. I've also tested this on my end and everything seems to work fine, apart from the perf regression. The perf regressions themselves are addressed in #458, which is also part of the 0.9 milestone. With that in mind, should we merge this one?
Performance concerns to be addressed in separate issue #458, which will be done in the same release.
This amends the hot reloading flow for K8s containers to use port-forwarding instead of finding the hostname via an ingress (which wasn't going to work in all cases anyway). This makes it work securely across the wire.
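As a rough illustration of that flow (a sketch only: the port numbers, the rsync module name, and the helper names are assumptions, not Garden's actual implementation), the sync goes over a `kubectl port-forward` to localhost rather than through an ingress hostname:

```ts
// Sketch of the amended flow: open a tunnel to the service's sync endpoint and
// push file changes over the local end of that tunnel.

import { spawn, execFile } from "child_process"
import { promisify } from "util"

const execFileAsync = promisify(execFile)

async function hotReloadSync(namespace: string, service: string, localDir: string) {
  const localPort = 30873      // assumption: a free local port found beforehand
  const remoteSyncPort = 873   // assumption: an rsync daemon listening in the service's pod

  // 1. Open a tunnel to the service instead of resolving an ingress hostname.
  const tunnel = spawn("kubectl", [
    "port-forward",
    `--namespace=${namespace}`,
    `service/${service}`,
    `${localPort}:${remoteSyncPort}`,
  ])

  try {
    // (A real implementation would wait until the forward is ready; we just pause briefly.)
    await new Promise((resolve) => setTimeout(resolve, 2000))

    // 2. Push the changed sources over the local end of the tunnel; the traffic
    //    into the cluster rides the kubectl connection rather than a public ingress.
    await execFileAsync("rsync", [
      "-az", "--delete",
      `${localDir}/`,
      `rsync://localhost:${localPort}/volume`, // "volume" module name is an assumption
    ])
  } finally {
    tunnel.kill()
  }
}
```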
I also happened upon issues with the "hot reload scheduler", which wasn't handling promise errors correctly. I made a HotReloadTask instead, which serves the same purpose - to queue up and deduplicate rapid successions of sync attempts - but captures and handles errors properly (and does away with superfluous code along the way).
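For context, queueing and deduplicating rapid sync attempts per service while still surfacing errors could look roughly like the following sketch (hypothetical names and structure; this is not the actual HotReloadTask code):

```ts
// Sketch: at most one sync runs per service at a time; while one is in flight,
// only the most recent pending request is kept, and errors are captured so a
// failed sync can't wedge the queue.

type HotReloadFn = () => Promise<void>

class HotReloadQueue {
  private running = new Map<string, Promise<void>>()
  private pending = new Map<string, HotReloadFn>()

  request(serviceName: string, fn: HotReloadFn): Promise<void> {
    if (this.running.has(serviceName)) {
      // A reload is already in flight: remember only the latest request.
      this.pending.set(serviceName, fn)
      return this.running.get(serviceName)!
    }
    const run = fn()
      .catch((err) => {
        // Capture and report errors instead of letting the promise chain break.
        console.error(`Hot reload failed for ${serviceName}:`, err)
      })
      .then(() => {
        this.running.delete(serviceName)
        const next = this.pending.get(serviceName)
        if (next) {
          // Run the deduplicated follow-up request once the current one finishes.
          this.pending.delete(serviceName)
          return this.request(serviceName, next)
        }
      })
    this.running.set(serviceName, run)
    return run
  }
}
```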