[1.13] cli: Infer gloo deploy name #9719

Merged · 5 commits merged into v1.13.x on Jul 4, 2024
Conversation

@davidjumani (Contributor) commented Jul 2, 2024

Description

Infer the gloo deployment name in cases where the deployment name is not the default gloo. The gloo deployment is identified by the gloo=gloo label.

This will be forward-ported.

Context

#9163

Interesting decisions

Rather than adding another flag for the gloo deployment name, which could quickly balloon into a flag for the name of every deployment, I've decided to identify the deployment via its labels.
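
For illustration, here is a minimal client-go sketch of what label-based inference could look like; the function name, namespace handling, and fallback behaviour are assumptions layered on the description above, not the PR's actual implementation:

```go
package check

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// inferGlooDeploymentName (hypothetical helper) looks up the gloo control-plane
// deployment by its gloo=gloo label instead of relying on a hard-coded name.
func inferGlooDeploymentName(ctx context.Context, client kubernetes.Interface, namespace string) (string, error) {
	deployments, err := client.AppsV1().Deployments(namespace).List(ctx, metav1.ListOptions{
		LabelSelector: "gloo=gloo",
	})
	if err != nil {
		return "", fmt.Errorf("listing deployments in %s: %w", namespace, err)
	}
	switch len(deployments.Items) {
	case 0:
		// Nothing carries the label; fall back to the default name.
		return "gloo", nil
	case 1:
		return deployments.Items[0].Name, nil
	default:
		return "", fmt.Errorf("too many deployments labeled gloo=gloo; cannot decide which to target")
	}
}
```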

Testing steps

The following commands depend on the gloo deployment name:

  • glooctl check
$ kubectl get deploy
NAME             READY   UP-TO-DATE   AVAILABLE   AGE
discovery        1/1     1            1           5h33m
gateway-proxy    1/1     1            1           5h33m
gloo-deploy-v2   1/1     1            1           9m9s

$ ./_output/glooctl-darwin-amd64 check
Checking deployments... OK
Checking pods... OK
Checking upstreams... OK
Checking upstream groups... OK
Checking auth configs... OK
Checking rate limit configs... OK
Checking VirtualHostOptions... OK
Checking RouteOptions... OK
Checking secrets... OK
Checking virtual services... OK
Checking gateways... OK
Checking proxies... OK
No problems detected.
Skipping Gloo Instance check -- Gloo Federation not detected
  • glooctl get proxy
$ ./_output/glooctl-darwin-amd64 get proxy
+---------------+-----------+---------------+----------+
|     PROXY     | LISTENERS | VIRTUAL HOSTS |  STATUS  |
+---------------+-----------+---------------+----------+
| gateway-proxy | :::8080   | 1             | Accepted |
+---------------+-----------+---------------+----------+
  • glooctl proxy served-config
$ ./_output/glooctl-darwin-amd64 proxy served-config

#role: gloo-system~gateway-proxy

#clusters
connectTimeout: 5s
edsClusterConfig:
  edsConfig:
    ads: {}
    resourceApiVersion: V3
....................
  • glooctl get upstream -o wide
$ ./_output/glooctl-darwin-amd64 -o wide get upstream 

+-------------------------------+------------+----------+------------------------------+
|           UPSTREAM            |    TYPE    |  STATUS  |           DETAILS            |
+-------------------------------+------------+----------+------------------------------+
| default-kubernetes-443        | Kubernetes | Accepted | svc name:      kubernetes    |
|                               |            |          | svc namespace: default       |
|                               |            |          | port:          443           |
|                               |            |          |                              |
| gloo-system-gateway-proxy-443 | Kubernetes | Accepted | svc name:      gateway-proxy |
|                               |            |          | svc namespace: gloo-system   |
|                               |            |          | port:          443           |
.............

Checklist:

  • I have performed a self-review of my own code
  • I have commented my code, particularly in hard-to-understand areas
  • I have made corresponding changes to the documentation
  • I have added tests that prove my fix is effective or that my feature works

github-actions bot added the keep pr updated label (signals bulldozer to keep the PR up to date with the base branch) on Jul 2, 2024
@solo-changelog-bot

Issues linked to changelog:
#9163

davidjumani and others added 2 commits July 3, 2024 09:44
Co-authored-by: Nathan Fudenberg <nathan.fudenberg@solo.io>
@nfuden (Contributor) commented Jul 3, 2024

/kick Consul settings specify automatic detection of TLS services, but the rootCA reso

client, err := KubeClient()
if err != nil {
	errMessage := "error getting KubeClient"
	fmt.Println(errMessage)
A contributor commented:

nit: caller should be responsible for logging the error.

Fine with leaving it here but could we use contextutils.LoggerFrom to get the logger?
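
As a hedged sketch of that suggestion (the helper name and surrounding context are assumed, not taken from the PR), the error could be routed through the context-scoped logger:

```go
package check

import (
	"context"

	"github.com/solo-io/go-utils/contextutils"
)

// reportKubeClientError is a hypothetical helper illustrating the reviewer's nit:
// log through the zap logger carried on the context rather than fmt.Println,
// leaving any user-facing output to the caller.
func reportKubeClientError(ctx context.Context, err error) {
	contextutils.LoggerFrom(ctx).Errorw("error getting KubeClient", "error", err)
}
```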

for _, d := range deployments.Items {
	if d.Name != GlooDeploymentName {
		// At least 1 deployment exists; in case we don't find the default, update our error message
		errMessage = "too many app=gloo deployments, cannot decide which to target"

Suggested change:
- errMessage = "too many app=gloo deployments, cannot decide which to target"
+ errMessage = "too many deployments labeled gloo=gloo; cannot decide which to target"

@jbohanon (Contributor) left a comment:

Comments will be addressed in forward ports

@@ -27,6 +27,9 @@ var (
		return fmt.Sprintf("Gloo has detected that the data plane is out of sync. The following types of resources have not been accepted: %v. "+
			"Gloo will not be able to process any other configuration updates until these errors are resolved.", resourceNames)
	}

// Initialize the custom deployment name that is overwritten later on

nit:

Suggested change:
- // Initialize the custom deployment name that is overwritten later on
+ // Initialize the custom deployment name that is overwritten later on in the `CheckResources` function
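
For context, a minimal sketch of the pattern that comment describes (the variable and setter names are assumptions, not the PR's code): a package-level name that defaults to gloo and is replaced once the real deployment has been inferred, e.g. during CheckResources.

```go
package check

// glooDeploymentName is a hypothetical package-level default, overwritten once
// the real deployment name has been inferred from the gloo=gloo label.
var glooDeploymentName = "gloo"

// setGlooDeploymentName records the inferred name, keeping the default if
// inference found nothing.
func setGlooDeploymentName(name string) {
	if name != "" {
		glooDeploymentName = name
	}
}
```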

soloio-bulldozer bot merged commit feb451a into v1.13.x on Jul 4, 2024
14 checks passed
soloio-bulldozer bot deleted the infer-gloo-deploy-name branch on Jul 4, 2024 at 02:03