We're hitting the size limit of the ELB target group name (32 chars).
Fixed.
Just some small nits, otherwise LGTM. Nice work @johananl
@johananl I was manually testing this PR, where should …
Lokomotive creates a wildcard record which routes …
@johananl If this should work fully automatically, perhaps we should add an e2e test for it, one which simply makes an HTTP request that should succeed?
I can look into that, although I wanted to keep the scope of this PR small and merge it ASAP as things are currently broken.
This works for me. The only difference from deploying on Packet is that on Packet we need to create a manual entry in Route 53, whereas here the user is free to choose whatever URL they want. On AWS we need to make sure that the URL is …
Opened #370 to track this.
Opened #371 to track this.
Works for me.
@johananl This LGTM. As I said in the previous reviews, I don't think I need to review again; the changes this PR might still receive are very small. Regarding the open comment, as I mentioned, mark it resolved when you consider it resolved (AFAICS you ran into problems testing your solution, so whatever you consider reasonable at this point is fine). The other comments are nit-picking, so whatever you decide (even leaving things as they are) is fine by me :) If you need it, of course, just ping me for an LGTM :)
All feedback has been addressed. Please check again.
LGTM. Thanks a lot @johananl, this improvement is really important! :)
It should be easy to figure out which cluster an AWS resource belongs to.
- Prefix EC2 `Name` tags with cluster name.
- Prefix ASG name with cluster name.
- Prefix ASG launch config with cluster and pool name.
- Prefix LB target groups with cluster and pool name.
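As a loose illustration (the resource, variable names, and values below are assumptions, not the PR's exact code), the naming scheme amounts to prefixing identifiers with the cluster name:

```hcl
variable "cluster_name" {
  type = string
}

# Illustrative only: prefix the Name tag with the cluster name so resources
# in the AWS console are easy to attribute to a specific cluster.
resource "aws_instance" "controller" {
  ami           = "ami-0123456789abcdef0" # placeholder AMI ID
  instance_type = "t3.small"              # placeholder instance type

  tags = {
    Name = "${var.cluster_name}-controller"
  }
}
```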
Worker pools are managed in a dedicated module. Invoking the workers module from the controller modules disallows having an arbitrary number of worker pools.
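A hedged sketch of what invoking the dedicated workers module once per pool might look like; the input names and the relative source path are illustrative, not necessarily the PR's exact code:

```hcl
# Illustrative only: each worker pool is its own invocation of the workers
# module, so the number of pools isn't constrained by the controllers module.
module "worker_pool_general" {
  source = "./lokomotive-kubernetes/aws/flatcar-linux/kubernetes/workers" # path is illustrative

  cluster_name = "my-cluster"
  pool_name    = "general"
  worker_count = 2
  # Networking, SSH key and kubeconfig inputs omitted for brevity.
}

module "worker_pool_storage" {
  source = "./lokomotive-kubernetes/aws/flatcar-linux/kubernetes/workers" # path is illustrative

  cluster_name = "my-cluster"
  pool_name    = "storage"
  worker_count = 3
  # Networking, SSH key and kubeconfig inputs omitted for brevity.
}
```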
NLB listeners are tightly coupled to the worker pool they serve: a listener can point at only one worker pool. Since we can have an arbitrary number of worker pools, we should create per-pool listeners (one for HTTP and another for HTTPS). Since we can't create two listeners with the same protocol and port, we parameterize the ports (with defaults of 80/443 for HTTP/HTTPS, respectively).
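As a hedged illustration of this approach (variable names, defaults, and resource names are mine, not necessarily the PR's), a per-pool HTTP listener with a parameterized port could look roughly like this; the HTTPS listener would be analogous:

```hcl
# Illustrative sketch only: per-pool NLB listener with a parameterized port.
variable "lb_arn" {
  description = "ARN of the shared NLB."
  type        = string
}

variable "target_group_arn" {
  description = "ARN of this pool's HTTP target group."
  type        = string
}

variable "lb_http_port" {
  description = "NLB port for this pool's HTTP listener."
  type        = number
  default     = 80
}

# Only one listener may exist per (protocol, port) pair on the NLB, so
# additional pools must pick different ports - hence the parameterization.
resource "aws_lb_listener" "ingress_http" {
  load_balancer_arn = var.lb_arn
  protocol          = "TCP"
  port              = var.lb_http_port

  default_action {
    type             = "forward"
    target_group_arn = var.target_group_arn
  }
}
```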
Bootkube shouldn't depend on workers, and in any case the `workers` module referenced isn't in use.
Target group name length is limited to 32 characters. We should keep the name as short as possible to accommodate clusters with long names, and in addition the word "workers" doesn't add any meaning here.
ELB target group names are limited to 32 characters. Since we easily hit this limit when trying to include the cluster and pool name in the target group name, we move this information to AWS tags and let Terraform generate a target group name automatically.
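A rough sketch of this approach (tag keys and variable names are assumptions): omitting `name` lets Terraform assign a unique target group name, while tags carry the cluster and pool identity:

```hcl
variable "vpc_id" {
  type = string
}

variable "cluster_name" {
  type = string
}

variable "pool_name" {
  type = string
}

# "name" and "name_prefix" are intentionally omitted so Terraform generates
# a unique name, sidestepping the 32-character limit.
resource "aws_lb_target_group" "workers_http" {
  vpc_id      = var.vpc_id
  protocol    = "TCP"
  port        = 30080 # example port within the default NodePort range
  target_type = "instance"

  tags = {
    # Tag keys are illustrative; the PR may use different keys.
    ClusterName = var.cluster_name
    PoolName    = var.pool_name
  }
}
```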
Worker pool names already include the cluster name.
In order to expose Envoy using a NodePort service, we need to wire the NLB to the workers using a port that's within the NodePort range. We also fix the health checks accordingly and remove an SG rule which belongs to nginx.
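To illustrate the NodePort wiring, here is a standalone variant of the target group sketch above, focusing on the forwarding port and health check; 30080 is an example NodePort only, not necessarily the port the PR uses:

```hcl
variable "vpc_id" {
  type = string
}

# Illustrative sketch: the NLB forwards to a NodePort on the workers, and the
# health check probes that same port.
resource "aws_lb_target_group" "workers_http" {
  vpc_id      = var.vpc_id
  protocol    = "TCP"
  port        = 30080 # must fall within the NodePort range (30000-32767 by default)
  target_type = "instance"

  health_check {
    protocol            = "TCP"
    port                = "traffic-port" # probe the same NodePort the NLB forwards to
    healthy_threshold   = 3
    unhealthy_threshold = 3
  }
}
```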
Exposing services using hostPorts is discouraged by the official k8s docs and has similar problems to using host networking. On Packet, we expose Envoy as a service of type LoadBalancer, so using hostPorts isn't necessary. On AWS, we want to expose Envoy as a service of type NodePort rather than using hostPorts. Therefore, we can remove the 'hostPort' fields from the Envoy manifest.
I've rebased and fixed conflicts.
Sticky LGTM :-P
General
Fixes #349.
This PR fixes the handling of worker pools on AWS. Following is a summary of the changes:
- HTTP/HTTPS listener ports on the NLB are parameterized, with defaults of 80/443, respectively. This is required in order to support an arbitrary number of worker pools with ingress using the same NLB.

Testing
To test this PR, deploy a cluster with a config similar to the following:
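A hedged sketch of such a config: the field names (e.g. `ssh_pubkeys`, `dns_zone_id`) and component names (`contour`, `httpbin`) are assumptions and may differ from the actual Lokomotive AWS configuration reference.

```hcl
# Hypothetical .lokocfg sketch; field and component names are assumptions.
cluster "aws" {
  asset_dir    = "./assets"
  cluster_name = "test"
  dns_zone     = "example.com"
  dns_zone_id  = "Z0123456789ABCDEF"        # placeholder hosted zone ID
  ssh_pubkeys  = ["ssh-rsa AAAA placeholder"] # placeholder public key

  # Two pools to exercise the multi-pool ingress path.
  worker_pool "pool-1" {
    count = 2
  }

  worker_pool "pool-2" {
    count = 2
  }
}

# Ingress controller plus a test workload exposed via Ingress.
component "contour" {}

component "httpbin" {
  ingress_host = "httpbin.example.com"
}
```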
httpbin should be available at https://httpbin.example.com.