Ingress status #1976
Conversation
Signed-off-by: Steve Sloka <slokas@vmware.com>
configuring how to set up Envoy service reference Signed-off-by: Steve Sloka <slokas@vmware.com>
004d3ae to 952c085
@stevesloka FYI, this will have significant merge conflicts with #1964
Thank you for working on this. I have a concern about tying the updating of the external IP address on k8s records to the dag generation cycle. I was hoping that the implementation would be more like the way the nginx controller does it, as a separate goroutine which is started by leader election and runs autonomously. I think this is the right way to do this because the inputs into updating the status of k8s Ingress objects are only the service record for Envoy itself.
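A rough sketch of that shape, for illustration (the names are hypothetical, not Contour's actual API): the status writer is gated on leader election and runs autonomously, outside the DAG rebuild cycle.

```go
package ingressstatus

// startStatusUpdater gates the Ingress status writer on leader
// election so it runs as its own goroutine, independent of DAG
// generation. becameLeader would be one of the channels that
// newLeaderElector returns; run is the updater's loop.
func startStatusUpdater(becameLeader, stop <-chan struct{}, run func(<-chan struct{})) {
	go func() {
		select {
		case <-becameLeader: // this instance won the election
			run(stop) // e.g. watch Envoy's Service, write Ingress status
		case <-stop: // shut down without ever becoming leader
		}
	}()
}
```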
```
@@ -99,7 +99,17 @@ func main() {
	}
}

func newClient(kubeconfig string, inCluster bool) (*kubernetes.Clientset, *clientset.Clientset, *coordinationv1.CoordinationV1Client) {
// Returns the current namespace in which the controller is running
// If error returns empty string
```
s/returns the empty string/returns "projectcontour"
```
// Returns the current namespace in which the controller is running
// If error returns empty string
func getCurrentNamespace() string {
	ns, _, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(clientcmd.NewDefaultClientConfigLoadingRules(), &clientcmd.ConfigOverrides{}).Namespace()
```
wow, that's a function name and a half :)
How does this determine where it's loading the kubeconfig from? If it's magic, can we use the same magic for contour's --kubeconfig flag?
It's using client-go to determine which namespace Contour is deployed into. I was trying to avoid folks having to add extra args to deployments by automatically finding this information.
It can still be overridden by args if the user would like. I also didn't want to read the /var/run/secrets/kubernetes.io/serviceaccount/namespace path in the pod, which seems like a hack when we have a k8s client (via client-go).
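For illustration, a sketch of how that lookup could read, folding in the "projectcontour" fallback suggested above (not necessarily the PR's final code):

```go
package ingressstatus

import "k8s.io/client-go/tools/clientcmd"

// getCurrentNamespace resolves the namespace the controller is running
// in via client-go's default loading rules (the in-cluster service
// account, or the active kubeconfig context when run outside a pod).
// On error it falls back to "projectcontour", per the review comment.
func getCurrentNamespace() string {
	ns, _, err := clientcmd.NewNonInteractiveDeferredLoadingClientConfig(
		clientcmd.NewDefaultClientConfigLoadingRules(),
		&clientcmd.ConfigOverrides{},
	).Namespace()
	if err != nil {
		return "projectcontour"
	}
	return ns
}
```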
```
@@ -112,6 +122,7 @@ func newClient(kubeconfig string, inCluster bool) (*kubernetes.Clientset, *clien

	client, err := kubernetes.NewForConfig(config)
	check(err)

```
nit: maybe use \n on line 127 and onwards for consistency?
Sorry, I don't follow your comment. I didn't change any of this, short of moving from kubernetes.Clientset to kubernetes.Interface. The check(err) stops execution of the program.
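For context, a plausible shape for that check helper (hypothetical; Contour's actual helper may differ):

```go
package ingressstatus

import "log"

// check aborts the program on any error, so execution never
// continues past a failed client setup.
func check(err error) {
	if err != nil {
		log.Fatal(err)
	}
}
```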
```
@@ -27,7 +27,7 @@ import (

 // newLeaderElector creates a new leaderelection.LeaderElector and associated
 // channels by which to observe elections and depositions.
-func newLeaderElector(log logrus.FieldLogger, ctx *serveContext, client *kubernetes.Clientset, coordinationClient *coordinationv1.CoordinationV1Client) (*leaderelection.LeaderElector, chan struct{}, chan struct{}) {
+func newLeaderElector(log logrus.FieldLogger, ctx *serveContext, client kubernetes.Interface, coordinationClient *coordinationv1.CoordinationV1Client) (*leaderelection.LeaderElector, chan struct{}, chan struct{}) {
```
This looks like an unrelated change; maybe pull it out into its own PR to make this easier to review.
I wanted to add a test which actually tests that I can set the status of an Ingress object. In doing that I can use the client-go fake objects, which give me a fake k8s API. In order to use this I need to pass around the interface, which I changed in (https://github.com/projectcontour/contour/pull/1976/files/952c08508fffd3c71a7c0030f6a8e41511a3ba13#diff-b62d9e38d59686d41f8f41098ff436f9R112) to return an interface. That change bubbles down through all the other methods.
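A minimal sketch of why the interface change matters for testing (the helper name is hypothetical, and the calls assume the pre-context client-go API of that era): fake.NewSimpleClientset satisfies kubernetes.Interface, so the same code runs against an in-memory API server.

```go
package ingressstatus

import (
	v1 "k8s.io/api/core/v1"
	"k8s.io/api/networking/v1beta1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/kubernetes/fake"
)

// setIngressStatus accepts kubernetes.Interface, so both the real
// clientset and the in-memory fake satisfy it.
func setIngressStatus(client kubernetes.Interface, ns, name string, lb []v1.LoadBalancerIngress) error {
	ing, err := client.NetworkingV1beta1().Ingresses(ns).Get(name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	ing.Status.LoadBalancer.Ingress = lb
	_, err = client.NetworkingV1beta1().Ingresses(ns).UpdateStatus(ing)
	return err
}

// example shows the fake in use: NewSimpleClientset seeds an
// in-memory API server that records the UpdateStatus call.
func example() error {
	client := fake.NewSimpleClientset(&v1beta1.Ingress{
		ObjectMeta: metav1.ObjectMeta{Namespace: "default", Name: "app"},
	})
	return setIngressStatus(client, "default", "app",
		[]v1.LoadBalancerIngress{{IP: "203.0.113.1"}})
}
```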
```
@@ -125,6 +127,11 @@ func doServe(log logrus.FieldLogger, ctx *serveContext) error {
	// step 1. establish k8s client connection
	client, contourClient, coordinationClient := newClient(ctx.Kubeconfig, ctx.InCluster)

	// step 1b. get current namespace that Contour is running inside if not defined
	if ctx.envoyServiceNamespace == "" {
```
Setting ctx.envoyServiceName and ctx.envoyServiceNamespace is handled differently; is there a way to make them more similar?
The name comes from a flag; for the namespace, I'm using the check for empty to determine whether a user has already set the flag. If they haven't, then I look up what namespace Contour is running in.
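One way to make the two settings symmetric, sketched under assumed types (illustrative, not the PR's code) and reusing getCurrentNamespace from the earlier sketch:

```go
package ingressstatus

// serveContextSketch stands in for Contour's serveContext here.
type serveContextSketch struct {
	envoyServiceName      string
	envoyServiceNamespace string
}

// newServeContextSketch treats both fields the same way: each is
// populated up front, and the namespace lookup already falls back
// to "projectcontour" on error, as shown above.
func newServeContextSketch() *serveContextSketch {
	return &serveContextSketch{
		envoyServiceName:      "envoy",
		envoyServiceNamespace: getCurrentNamespace(),
	}
}
```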
```
@@ -164,6 +170,7 @@ func newServeContext() *serveContext {
		Name: "leader-elect",
	},
	UseExtensionsV1beta1Ingress: false,
	envoyServiceName: "envoy",
```
Can envoyServiceNamespace default to projectcontour?
It does if it cannot be looked up. I like the idea of requiring fewer flags to set on Contour, so I opted to look up the namespace which Contour is running in. If we take that out, we force users to set these flags and I can remove the lookup.
```
@@ -75,7 +75,7 @@ func WriteSecretsYAML(outputDir, namespace string, certdata map[string][]byte) e

 // WriteSecretsKube writes all the keypairs out to Kube Secrets in the
 // passed Kube context.
-func WriteSecretsKube(client *kubernetes.Clientset, namespace string, certdata map[string][]byte) error {
+func WriteSecretsKube(client kubernetes.Interface, namespace string, certdata map[string][]byte) error {
```
This looks like an unrelated change; I think this file and the one above can be pulled out into their own PR to make this simpler to review.
See the previous comment about testing with fakes.
I'm sorry, I was not correct. k8s Ingress status depends on the Envoy pods' service record and the ingress class setting of contour. With that said, I think this feature would be better implemented as stand-alone as possible, because it takes different inputs than the dag and runs at a different timescale.
Also, I think that it's important that our design here allow for other deployment methods - nginx gets around this by allowing you to either specify the service, or just IPs or FQDNs directly. Being able to cover 'monitor this service' and 'just let me specify the IPs or FQDNs' hits all the deployment scenarios I can think of, in cloud and otherwise. If there's a concurrency-safe struct somewhere that stores the IPs or FQDNs to go into the Ingress status stanza, then we can either poke a value in there directly at startup (for statically specified IPs), or have a goroutine pull them from an Informer.
I think whatever approach we decide on this needs to run continuously; it needs to watch something which holds the IP addresses of the Service. The Service’s details can change at any time, so we need to be ready to flow that update to all relevant Ingress objects.
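A sketch of the concurrency-safe holder being described (the type is hypothetical, not from any PR): it can be seeded statically at startup for fixed IPs or FQDNs, or refreshed continuously by a goroutine watching the Envoy Service through an informer.

```go
package ingressstatus

import (
	"sync"

	v1 "k8s.io/api/core/v1"
)

// AddressHolder stores the addresses destined for the Ingress
// status stanza, safe for concurrent writers and readers.
type AddressHolder struct {
	mu        sync.Mutex
	addresses []v1.LoadBalancerIngress
}

// Set replaces the stored addresses; call it once at startup for
// statically specified IPs/FQDNs, or from a Service informer's
// update handler as the Service's details change.
func (h *AddressHolder) Set(addrs []v1.LoadBalancerIngress) {
	h.mu.Lock()
	defer h.mu.Unlock()
	h.addresses = append([]v1.LoadBalancerIngress(nil), addrs...)
}

// Get returns a copy, so callers never share the internal slice.
func (h *AddressHolder) Get() []v1.LoadBalancerIngress {
	h.mu.Lock()
	defer h.mu.Unlock()
	return append([]v1.LoadBalancerIngress(nil), h.addresses...)
}
```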
@youngnick The implementation allows for this. Right now I just implemented the service lookup. To allow users to specify their own, we need to add additional args.
Doing it in the dag build gave me the ingress class, service info, etc. all for free. I can throw this all away and re-implement it in a goroutine. Thanks for the feedback!
@davecheney do you have thoughts on how we could get the events copied to this new goroutine? I put the update in the dag update because we have access to the Service/Ingress objects via the watch+event handler. However, moving the Ingress status updates outside will need to hook into the Event Handler or get a copy of the Service and Ingress events from the k8s watcher. So to do this in a new goroutine we'd need something like this:
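One hypothetical shape for that plumbing (illustrative names, not from the PR): the existing event handler forwards copies of Envoy's Service onto a channel that a dedicated status-writer goroutine consumes.

```go
package ingressstatus

import (
	v1 "k8s.io/api/core/v1"
)

// ServiceEvents carries copies of Envoy's Service from the existing
// watch/event handler to the status-writer goroutine.
var ServiceEvents = make(chan *v1.Service, 16)

// forwardService would be called from the existing event handler; it
// forwards only the Service we care about (names are illustrative).
func forwardService(obj interface{}) {
	svc, ok := obj.(*v1.Service)
	if !ok || svc.Namespace != "projectcontour" || svc.Name != "envoy" {
		return
	}
	select {
	case ServiceEvents <- svc.DeepCopy():
	default: // drop rather than block the informer's handler
	}
}

// statusWriter runs as its own goroutine (started on leader election)
// and applies each update's load balancer addresses to Ingress status.
func statusWriter(stop <-chan struct{}) {
	for {
		select {
		case svc := <-ServiceEvents:
			_ = svc.Status.LoadBalancer.Ingress // write to Ingresses here
		case <-stop:
			return
		}
	}
}
```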
@davecheney is going to pick this up; closing since he'll have a new PR.
Fixes #403 by updating Ingress resources with Envoy's service type=LoadBalancer status information.
This also adds two new args to contour: the Envoy service name (defaulting to envoy) and the Envoy service namespace (looked up automatically when not set).