TCPIngress Tests #1179

Closed
9 of 14 tasks
shaneutt opened this issue Apr 13, 2021 · 3 comments · Fixed by #2041

@shaneutt
Contributor

shaneutt commented Apr 13, 2021

TCPIngress Tests

The purpose of this issue is to ensure that we provide robust unit and integration test coverage of our TCPIngress API and relevant controller(s).

Acceptance Criteria

Integration and unit tests must cover the checklists below for this issue to be considered resolved; each checklist item should be a test case in Go.

Rule of thumb: default to unit tests where feasible, and fall back to integration tests when the functionality under test makes more sense to exercise in a fully functional Kubernetes cluster.

Validation Tests

  • several valid permutations of each available field in the TCPIngress API spec are covered
  • several invalid permutations of each available field in the TCPIngress API spec are covered

General Functionality Tests

  • TCPIngress resources can be created and route properly to the relevant Service; the route becomes available in under 1 minute
  • TCPIngress resources can be updated with a new backend Service; the routes change properly in under 1 minute
  • When the TCPIngress spec changes, the behavior of persistent connections is tested (backend specific)
  • When the TCPIngress spec changes, repeated parallel transient connections to the backend show no failures
  • Long-running persistent connections (where the backend server doesn't support reconnect mechanisms) disconnect and fail properly when pods change
  • TCPIngress resources can be deleted; the backend route is disconnected in under 1 minute

Status Tests

  • TCPIngress status is checked for consistent LoadBalancer state between creation and multiple updates to the spec

Specific Functionality Tests

  • TCPIngress ingress TLS configuration
  • TCPIngress ingress TLS configuration for multiple hosts

Performance Tests

  • Multiple TCPIngress resources can be rapidly created, with each backend resolving in under 1 minute
  • Bulk parallel connections through TCPIngress succeed, with no failures at the KIC/Kong level
  • Multiple TCPIngress resources can be rapidly deleted, with each backend route disconnected in under 1 minute
@shaneutt shaneutt added this to the KIC 2.0 - Testing Renaissance milestone Apr 13, 2021
@shaneutt shaneutt removed this from the KIC 2.0 - Testing Renaissance milestone Aug 4, 2021
@mflendrich
Contributor

This requires validation - the existing integration test may cover some of the scope.

@rainest
Contributor

rainest commented Nov 29, 2021

several valid permutations of each available field in the TCPIngress API spec are covered
several invalid permutations of each available field in the TCPIngress API spec are covered

func TestFromTCPIngressV1beta1(t *testing.T) {
    assert := assert.New(t)
    tcpIngressList := []*configurationv1beta1.TCPIngress{
        // 0
        {
            ObjectMeta: metav1.ObjectMeta{
                Name:      "foo",
                Namespace: "default",
            },
        },
        // 1
        {
            ObjectMeta: metav1.ObjectMeta{
                Name:      "foo",
                Namespace: "default",
            },
            Spec: configurationv1beta1.TCPIngressSpec{
                Rules: []configurationv1beta1.IngressRule{
                    {
                        Port: 9000,
                        Backend: configurationv1beta1.IngressBackend{
                            ServiceName: "foo-svc",
                            ServicePort: 80,
                        },
                    },
                },
            },
        },
        // 2
        {
            ObjectMeta: metav1.ObjectMeta{
                Name:      "foo",
                Namespace: "default",
            },
            Spec: configurationv1beta1.TCPIngressSpec{
                Rules: []configurationv1beta1.IngressRule{
                    {
                        Host: "example.com",
                        Port: 9000,
                        Backend: configurationv1beta1.IngressBackend{
                            ServiceName: "foo-svc",
                            ServicePort: 80,
                        },
                    },
                },
            },
        },
        // 3
        {
            ObjectMeta: metav1.ObjectMeta{
                Name:      "foo",
                Namespace: "default",
            },
            Spec: configurationv1beta1.TCPIngressSpec{
                TLS: []configurationv1beta1.IngressTLS{
                    {
                        Hosts: []string{
                            "1.example.com",
                            "2.example.com",
                        },
                        SecretName: "sooper-secret",
                    },
                    {
                        Hosts: []string{
                            "3.example.com",
                            "4.example.com",
                        },
                        SecretName: "sooper-secret2",
                    },
                },
            },
        },
    }
    t.Run("no TCPIngress returns empty info", func(t *testing.T) {
        parsedInfo := fromTCPIngressV1beta1(logrus.New(), []*configurationv1beta1.TCPIngress{})
        assert.Equal(ingressRules{
            ServiceNameToServices: make(map[string]kongstate.Service),
            SecretNameToSNIs:      make(map[string][]string),
        }, parsedInfo)
    })
    t.Run("empty TCPIngress return empty info", func(t *testing.T) {
        parsedInfo := fromTCPIngressV1beta1(logrus.New(), []*configurationv1beta1.TCPIngress{tcpIngressList[0]})
        assert.Equal(ingressRules{
            ServiceNameToServices: make(map[string]kongstate.Service),
            SecretNameToSNIs:      make(map[string][]string),
        }, parsedInfo)
    })
    t.Run("simple TCPIngress rule is parsed", func(t *testing.T) {
        parsedInfo := fromTCPIngressV1beta1(logrus.New(), []*configurationv1beta1.TCPIngress{tcpIngressList[1]})
        assert.Equal(1, len(parsedInfo.ServiceNameToServices))
        svc := parsedInfo.ServiceNameToServices["default.foo-svc.80"]
        assert.Equal("foo-svc.default.80.svc", *svc.Host)
        assert.Equal(80, *svc.Port)
        assert.Equal("tcp", *svc.Protocol)
        assert.Equal(1, len(svc.Routes))
        route := svc.Routes[0]
        assert.Equal(kong.Route{
            Name:      kong.String("default.foo.0"),
            Protocols: kong.StringSlice("tcp", "tls"),
            Destinations: []*kong.CIDRPort{
                {
                    Port: kong.Int(9000),
                },
            },
        }, route.Route)
    })
    t.Run("TCPIngress rule with host is parsed", func(t *testing.T) {
        parsedInfo := fromTCPIngressV1beta1(logrus.New(), []*configurationv1beta1.TCPIngress{tcpIngressList[2]})
        assert.Equal(1, len(parsedInfo.ServiceNameToServices))
        svc := parsedInfo.ServiceNameToServices["default.foo-svc.80"]
        assert.Equal("foo-svc.default.80.svc", *svc.Host)
        assert.Equal(80, *svc.Port)
        assert.Equal("tcp", *svc.Protocol)
        assert.Equal(1, len(svc.Routes))
        route := svc.Routes[0]
        assert.Equal(kong.Route{
            Name:      kong.String("default.foo.0"),
            Protocols: kong.StringSlice("tcp", "tls"),
            SNIs:      kong.StringSlice("example.com"),
            Destinations: []*kong.CIDRPort{
                {
                    Port: kong.Int(9000),
                },
            },
        }, route.Route)
    })
    t.Run("TCPIngress with TLS", func(t *testing.T) {
        parsedInfo := fromTCPIngressV1beta1(logrus.New(), []*configurationv1beta1.TCPIngress{tcpIngressList[3]})
        assert.Equal(2, len(parsedInfo.SecretNameToSNIs))
        assert.Equal(2, len(parsedInfo.SecretNameToSNIs["default/sooper-secret"]))
        assert.Equal(2, len(parsedInfo.SecretNameToSNIs["default/sooper-secret2"]))
    })
}
This has general parse tests for all positive cases, but no negative cases.
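A minimal sketch of what a negative case could look like. It assumes fromTCPIngressV1beta1 drops rules with no backend ServiceName rather than emitting partial services; that behavior needs confirming against the parser before asserting on it:

func TestFromTCPIngressV1beta1NegativeCases(t *testing.T) {
    assert := assert.New(t)
    invalid := []*configurationv1beta1.TCPIngress{
        {
            ObjectMeta: metav1.ObjectMeta{
                Name:      "bad",
                Namespace: "default",
            },
            Spec: configurationv1beta1.TCPIngressSpec{
                Rules: []configurationv1beta1.IngressRule{
                    {
                        // invalid: no backend ServiceName and an out-of-range port
                        Port: -1,
                    },
                },
            },
        },
    }
    parsedInfo := fromTCPIngressV1beta1(logrus.New(), invalid)
    // assumption: invalid rules are skipped entirely, so no services are emitted
    assert.Equal(0, len(parsedInfo.ServiceNameToServices))
}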

TCPIngress resources can be created and route properly to the relevant Service; the route becomes available in under 1 minute

t.Logf("verifying TCP Ingress %s operationalable", tcp.Name)
tcpProxyURL, err := url.Parse(fmt.Sprintf("http://%s:8888/", proxyURL.Hostname()))
require.NoError(t, err)
require.Eventually(t, func() bool {
resp, err := httpc.Get(tcpProxyURL.String())
if err != nil {
return false
}
defer resp.Body.Close()
if resp.StatusCode == http.StatusOK {
// now that the ingress backend is routable, make sure the contents we're getting back are what we expect
// Expected: "<title>httpbin.org</title>"
b := new(bytes.Buffer)
n, err := b.ReadFrom(resp.Body)
require.NoError(t, err)
require.True(t, n > 0)
return strings.Contains(b.String(), "<title>httpbin.org</title>")
}
return false
}, ingressWait, waitTick)
We use the same ingressWait 5-minute time limit for most tests.

The remainder is almost entirely Kong proxy behavior (beyond the fact that we make changes to proxy config when asked). Manual testing against Kong shows that:

  • Once a TCP connection is routed, it remains routed: you can update the route that handled it, even removing the criteria that originally matched it, and packets on that connection will still go to their original destination over the same connection, for both port- and SNI-based routes.
  • However, a dead upstream (in this case, killing a netcat listener) will normally close the upstream connection and FIN the downstream client.

TCPIngress resources can be updated with a new backend Service; the routes change properly in under 1 minute

Nothing here currently, and we don't have any other services readily available as addons. Our current strategy relies on upstream output, since there's not a whole lot else to go on (no Kong-Debug support in TCP to provide the service ID). We'd also need to explicitly close our connection and open a new one to get the new destination.
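A minimal sketch of what that could look like, assuming a second backend with a differentiable response body (the "echo-two" Service and its response marker are hypothetical) and reusing the httpc/tcpProxyURL setup from the creation test:

tcp.Spec.Rules[0].Backend = configurationv1beta1.IngressBackend{
    ServiceName: "echo-two", // hypothetical second Service with a distinct response body
    ServicePort: 80,
}
tcp, err = c.ConfigurationV1beta1().TCPIngresses(ns.Name).Update(ctx, tcp, metav1.UpdateOptions{})
require.NoError(t, err)

// each Get opens a fresh connection, so we observe the new destination once routes flip
require.Eventually(t, func() bool {
    resp, err := httpc.Get(tcpProxyURL.String())
    if err != nil {
        return false
    }
    defer resp.Body.Close()
    b := new(bytes.Buffer)
    if _, err := b.ReadFrom(resp.Body); err != nil {
        return false
    }
    return strings.Contains(b.String(), "echo-two") // assumed marker in the second backend's response
}, time.Minute, waitTick)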

When the TCPIngress spec changes, the behavior of persistent connections is tested (backend specific)

None currently. This should follow the observed Kong behavior (ignore the changes and keep the connection open and proxied until something else interrupts it), both through deck's logic for finding and editing existing routes and through whatever happens internally in DB-less mode when updating existing routes.
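A minimal sketch of verifying that, assuming an echo-style TCP backend behind the route (the write/read exchange is hypothetical) so a single connection can be exercised before and after the spec change:

conn, err := net.Dial("tcp", fmt.Sprintf("%s:8888", proxyURL.Hostname()))
require.NoError(t, err)
defer conn.Close()

// exercise the connection once before the spec change
_, err = conn.Write([]byte("ping\n"))
require.NoError(t, err)

// mutate the spec and push the update while the connection stays open
tcp.Spec.Rules[0].Port = 9999 // hypothetical new stream port
tcp, err = c.ConfigurationV1beta1().TCPIngresses(ns.Name).Update(ctx, tcp, metav1.UpdateOptions{})
require.NoError(t, err)

// expected per the observed Kong behavior: the already-routed connection keeps
// proxying to its original destination despite the route change
_, err = conn.Write([]byte("ping\n"))
require.NoError(t, err)
require.NoError(t, conn.SetReadDeadline(time.Now().Add(10*time.Second)))
_, err = conn.Read(make([]byte, 512))
require.NoError(t, err)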

When the TCPIngress spec changes, repeated parallel transient connections to the backend show no failures

None currently, and I'm not sure I understand the test here. If it's just open-connection handling, it's the same as above: once routed, a connection will proxy to its original upstream forever.
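If that reading is right, a minimal sketch would hammer the route with short-lived connections while the spec changes, and count failures (worker and iteration counts below are arbitrary):

var failures int32
var wg sync.WaitGroup
for i := 0; i < 50; i++ {
    wg.Add(1)
    go func() {
        defer wg.Done()
        for j := 0; j < 20; j++ {
            resp, err := httpc.Get(tcpProxyURL.String())
            if err != nil {
                atomic.AddInt32(&failures, 1)
                continue
            }
            resp.Body.Close()
        }
    }()
}

// apply a spec update mid-flight, then wait for the workers to drain
_, err = c.ConfigurationV1beta1().TCPIngresses(ns.Name).Update(ctx, tcp, metav1.UpdateOptions{})
require.NoError(t, err)
wg.Wait()
assert.Zero(t, atomic.LoadInt32(&failures))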

Long-running persistent connections (where the backend server doesn't support reconnect mechanisms) disconnect and fail properly when pods change

This should just follow the regular observed FIN-on-close behavior.
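A minimal sketch of checking that, where kube (a core Kubernetes clientset) and backendPod (the pod behind the route) are assumptions not present in the current tests:

conn, err := net.Dial("tcp", fmt.Sprintf("%s:8888", proxyURL.Hostname()))
require.NoError(t, err)
defer conn.Close()

// kill the backend pod out from under the open connection
require.NoError(t, kube.CoreV1().Pods(ns.Name).Delete(ctx, backendPod.Name, metav1.DeleteOptions{}))

// expect the proxy to FIN us once the upstream dies, surfacing as EOF on read
require.Eventually(t, func() bool {
    if err := conn.SetReadDeadline(time.Now().Add(time.Second)); err != nil {
        return false
    }
    _, err := conn.Read(make([]byte, 1))
    return errors.Is(err, io.EOF)
}, ingressWait, waitTick)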

TCPIngress resources can be deleted; the backend route is disconnected in under 1 minute

Handled by:

t.Logf("tearing down TCPIngress %s and ensuring that the relevant backend routes are removed", tcp.Name)
require.NoError(t, c.ConfigurationV1beta1().TCPIngresses(ns.Name).Delete(ctx, tcp.Name, metav1.DeleteOptions{}))
require.Eventually(t, func() bool {
resp, err := httpc.Get(tcpProxyURL.String())
if err != nil {
return true
}
defer resp.Body.Close()
return false
}, ingressWait, waitTick)

TCPIngress status is checked for consistent LoadBalancer state between creation and multiple updates to the spec

Nothing across changes, but

t.Logf("checking tcpingress %s status readiness.", tcp.Name)
ingCli := c.ConfigurationV1beta1().TCPIngresses(ns.Name)
assert.Eventually(t, func() bool {
curIng, err := ingCli.Get(ctx, tcp.Name, metav1.GetOptions{})
if err != nil || curIng == nil {
return false
}
ingresses := curIng.Status.LoadBalancer.Ingress
for _, ingress := range ingresses {
if len(ingress.Hostname) > 0 || len(ingress.IP) > 0 {
t.Logf("tcpingress hostname %s or ip %s is ready to redirect traffic.", ingress.Hostname, ingress.IP)
return true
}
}
return false
}, 30*time.Second, 1*time.Second, true)
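Extending that across changes could be as simple as capturing the reported addresses, pushing a spec update, and requiring the status to stay stable; a minimal sketch (the port change is a hypothetical spec mutation):

// capture the addresses reported after the initial readiness check
first, err := ingCli.Get(ctx, tcp.Name, metav1.GetOptions{})
require.NoError(t, err)
initial := first.Status.LoadBalancer.Ingress

tcp.Spec.Rules[0].Port = 9901 // hypothetical spec change
tcp, err = ingCli.Update(ctx, tcp, metav1.UpdateOptions{})
require.NoError(t, err)

// the LoadBalancer addresses should be unchanged by spec updates
assert.Eventually(t, func() bool {
    curIng, err := ingCli.Get(ctx, tcp.Name, metav1.GetOptions{})
    if err != nil || curIng == nil {
        return false
    }
    return reflect.DeepEqual(initial, curIng.Status.LoadBalancer.Ingress)
}, 30*time.Second, 1*time.Second)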

No TLS tests, which would also require differentiable services, and further a stream listen with SSL enabled in the KTF Kong addon.
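Once such a listen exists, a minimal sketch of the per-host check could use SNI directly (the 8899 TLS stream port is hypothetical, and test certificates are assumed self-signed):

conn, err := tls.Dial("tcp", fmt.Sprintf("%s:8899", proxyURL.Hostname()), &tls.Config{
    ServerName:         "1.example.com", // SNI selects the per-host certificate
    InsecureSkipVerify: true,            // test certificates are self-signed
})
require.NoError(t, err)
defer conn.Close()

// the served certificate should be the one from the Secret for this host
cert := conn.ConnectionState().PeerCertificates[0]
assert.Contains(t, cert.DNSNames, "1.example.com")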

Didn't look at performance stuff at all.
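For the record, a bulk-creation sketch might look like the following; it assumes the proxy exposes a stream listen for every rule port, which the KTF Kong addon doesn't do today (it only exposes 8888), so this is a sketch rather than something runnable now:

const count = 10
for i := 0; i < count; i++ {
    ing := tcp.DeepCopy()
    ing.ObjectMeta = metav1.ObjectMeta{Name: fmt.Sprintf("perf-%d", i), Namespace: ns.Name}
    ing.Spec.Rules[0].Port = 9000 + i
    _, err := c.ConfigurationV1beta1().TCPIngresses(ns.Name).Create(ctx, ing, metav1.CreateOptions{})
    require.NoError(t, err)
}

// each route should accept connections within a minute of creation
for i := 0; i < count; i++ {
    port := 9000 + i
    require.Eventually(t, func() bool {
        conn, err := net.Dial("tcp", fmt.Sprintf("%s:%d", proxyURL.Hostname(), port))
        if err != nil {
            return false
        }
        conn.Close()
        return true
    }, time.Minute, waitTick)
}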

@rainest
Contributor

rainest commented Dec 9, 2021

Checked items were either handled by existing tests or will be handled by tests added in #2041.

The remaining items have been broken out into their own issues:

#2067
#2068
#2069
