[1.16] Warn on missing TLS secret #9974
Conversation
Issues linked to changelog:
Validated in https://github.com/solo-io/solo-projects/pull/6840
A few questions about the tests
Stepped through the tests and they make sense given the backport changes.
My only real thoughts aren't related to the backport, but rather about how we can be louder about configuration that has no effect, such as when this and allowWarnings are both false.
paging mr dozer... hellooooo
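For context on the reviewer's point: the warning-vs-rejection behavior interacts with the gateway validation settings. A sketch of the relevant Helm values fragment (field names follow the Gloo Edge validation settings; treat this as an illustrative assumption, not part of this PR):

```yaml
gateway:
  validation:
    # When false, resources whose validation produces warnings (such as a
    # VirtualService referencing a missing TLS secret after this change)
    # are still rejected by the validating webhook; when true, they are
    # admitted and reported with a Warning status.
    allowWarnings: false
```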
Backport #9875
Description
Updates the condition of a VirtualService referencing a TLS secret that does not exist from an error state to a warning state. This is to allow for eventual consistency with VS creation and TLS secret creation.
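For illustration, a minimal VirtualService of the kind affected by this change might look like the following. The names, namespace, and upstream are hypothetical; the sslConfig/secretRef shape follows the Gloo Edge VirtualService API:

```yaml
apiVersion: gateway.solo.io/v1
kind: VirtualService
metadata:
  name: vs-1
  namespace: gloo-system
spec:
  # References a TLS secret that may not exist yet (e.g. cert-manager has
  # not finished issuing it). Previously validation reported an error state;
  # with this change it reports a warning state instead.
  sslConfig:
    secretRef:
      name: vs-1-tls        # hypothetical secret name
      namespace: gloo-system
  virtualHost:
    domains:
      - "vs-1"
    routes:
      - matchers:
          - prefix: /
        routeAction:
          single:
            upstream:
              name: my-upstream   # hypothetical upstream
              namespace: gloo-system
```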
API changes
Code changes
Test changes
[Describe] Kube2e: gateway [Context] Validation configuration [When] allowWarnings=false [Context] secret validation
Docs changes
TODO
Context
Users ran into this eventual consistency issue when applying a cert-manager Certificate resource at the same time as a VirtualService resource. Because the Certificate does not synchronously create the TLS secret, the VirtualService is rejected by validation.
Testing steps
```shell
# if you don't have a cluster, create one
kind create cluster

# curl to validate that we're getting traffic
curl -k --connect-to vs-1:8443:127.0.0.1 https://vs-1:8443

# curl to show we are still receiving traffic
curl -k --connect-to vs-1:8443:127.0.0.1 https://vs-1:8443

# restart gloo deployment to roll the pod
k rollout restart deploy/gloo -n gloo-system
k rollout status deploy/gloo -n gloo-system

# curl to show that we are NO LONGER receiving traffic, even on the good VS
curl -k --connect-to vs-1:8443:127.0.0.1 https://vs-1:8443

# restart gloo deployment to roll the pod
k rollout restart deploy/gloo -n gloo-system
k rollout status deploy/gloo -n gloo-system

# curl to show that we are receiving traffic on the good VS, but not on the invalid VS
curl -k --connect-to vs-1:8443:127.0.0.1 https://vs-1:8443
curl -k --connect-to vs-2:8443:127.0.0.1 https://vs-2:8443

# curl to show that we are receiving traffic on both, now valid, VSs
curl -k --connect-to vs-1:8443:127.0.0.1 https://vs-1:8443
curl -k --connect-to vs-2:8443:127.0.0.1 https://vs-2:8443
```
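Beyond curling for traffic, one way to spot-check the new behavior is to inspect the reported status of the VirtualService while its TLS secret is still missing. This is a hypothetical check (the resource name is assumed, and the exact status field path can vary by Gloo version):

```shell
# Expect the reported state to be a warning (previously an error/rejected
# state) while the referenced TLS secret does not yet exist
kubectl get virtualservice vs-2 -n gloo-system \
  -o jsonpath='{.status.statuses.gloo-system.state}'
```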
Checklist: