
support k8s < v1.19 & watch-ingress-without-class #12794

Merged
merged 2 commits into kubernetes:master on Oct 28, 2021

Conversation

prezha
Contributor

@prezha prezha commented Oct 27, 2021

in combination with PR #12702, this should restore backward compatibility between the ingress and ingress-dns addons and older k8s versions (<1.19), and also fix the --watch-ingress-without-class flag, which was introduced with the ingress v1 API (and is not recognised by v1beta1)

fixes #12793
fixes #12636
fixes #12536
fixes #12511
fixes #12152
improves #11987
improves #12189
etc. (potentially improves/helps/fixes some other issues as well)

btw, should it be needed, i believe a similar approach could be used for other addons as well - ie, at runtime, if a user tries to enable an addon with an older k8s version, replace the default addon image[s] with older/compatible ones (instead of "simulating" that the user provided custom images - as i previously tried; that did not work because it conflicts with the expected behaviour in assets.SelectAndPersistImages and breaks)
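The version-gated image substitution described above can be sketched roughly as follows. This is a minimal illustration, not minikube's actual code: the image tags are the ones shown in the PR output below, while the helper names (`isLegacyK8s`, `selectIngressImages`) are hypothetical, and a stdlib-only version check stands in for proper semver handling:

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// Image sets taken from the addon-enable output in this PR; names are illustrative.
var (
	legacyImages = map[string]string{
		"IngressController":        "k8s.gcr.io/ingress-nginx/controller:v0.49.3",
		"KubeWebhookCertgenCreate": "docker.io/jettech/kube-webhook-certgen:v1.5.1",
		"KubeWebhookCertgenPatch":  "docker.io/jettech/kube-webhook-certgen:v1.5.1",
	}
	currentImages = map[string]string{
		"IngressController":        "k8s.gcr.io/ingress-nginx/controller:v1.0.4",
		"KubeWebhookCertgenCreate": "k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1",
		"KubeWebhookCertgenPatch":  "k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1",
	}
)

// isLegacyK8s reports whether version (e.g. "v1.18.20") predates v1.19,
// where the networking.k8s.io/v1 Ingress API became available.
func isLegacyK8s(version string) bool {
	parts := strings.Split(strings.TrimPrefix(version, "v"), ".")
	if len(parts) < 2 {
		return false
	}
	major, err1 := strconv.Atoi(parts[0])
	minor, err2 := strconv.Atoi(parts[1])
	if err1 != nil || err2 != nil {
		return false
	}
	return major < 1 || (major == 1 && minor < 19)
}

// selectIngressImages swaps the default images for compatible ones on old clusters.
func selectIngressImages(version string) map[string]string {
	if isLegacyK8s(version) {
		return legacyImages
	}
	return currentImages
}

func main() {
	fmt.Println(selectIngressImages("v1.18.20")["IngressController"]) // legacy v0.49.3 image
	fmt.Println(selectIngressImages("v1.22.2")["IngressController"])  // current v1.0.4 image
}
```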

examples:

❯ minikube start -p k8s-1.18.20 --kubernetes-version=v1.18.20 --driver=kvm2
😄  [k8s-1.18.20] minikube v1.23.2 on Opensuse-Tumbleweed
✨  Using the kvm2 driver based on user configuration
👍  Starting control plane node k8s-1.18.20 in cluster k8s-1.18.20
🔥  Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.18.20 on Docker 20.10.8 ...
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass

❗  /home/prezha/bin/k8s/kubectl is version 1.22.2, which may have incompatibilites with Kubernetes 1.18.20.
    ▪ Want kubectl v1.18.20? Try 'minikube kubectl -- get pods -A'
🏄  Done! kubectl is now configured to use "k8s-1.18.20" cluster and "default" namespace by default

❯ minikube -p k8s-1.18.20 addons enable ingress
    ▪ Using image k8s.gcr.io/ingress-nginx/controller:v0.49.3
    ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
    ▪ Using image docker.io/jettech/kube-webhook-certgen:v1.5.1
🔎  Verifying ingress addon...
🌟  The 'ingress' addon is enabled

❯ kubectl -n ingress-nginx get pod -o wide
NAME                                       READY   STATUS      RESTARTS   AGE   IP           NODE          NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create-5h8fv       0/1     Completed   0          44s   172.17.0.2   k8s-1.18.20   <none>           <none>
ingress-nginx-admission-patch-p6lxz        0/1     Completed   0          44s   172.17.0.4   k8s-1.18.20   <none>           <none>
ingress-nginx-controller-9467f8778-q6wz4   1/1     Running     0          44s   172.17.0.2   k8s-1.18.20   <none>           <none>
❯ minikube start -p k8s-1.22.2 --driver=kvm2
😄  [k8s-1.22.2] minikube v1.23.2 on Opensuse-Tumbleweed
✨  Using the kvm2 driver based on user configuration
👍  Starting control plane node k8s-1.22.2 in cluster k8s-1.22.2
🔥  Creating kvm2 VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
🐳  Preparing Kubernetes v1.22.2 on Docker 20.10.8 ...
❌  Unable to load cached images: loading cached images: stat /home/prezha/.minikube/cache/images/k8s.gcr.io/kube-proxy_v1.22.2: no such file or directory
    > kubectl.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubelet.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubeadm.sha256: 64 B / 64 B [--------------------------] 100.00% ? p/s 0s
    > kubectl: 44.73 MiB / 44.73 MiB [-------------] 100.00% 11.24 MiB p/s 4.2s
    > kubeadm: 43.71 MiB / 43.71 MiB [-------------] 100.00% 10.29 MiB p/s 4.4s
    > kubelet: 146.25 MiB / 146.25 MiB [-----------] 100.00% 16.12 MiB p/s 9.3s
    ▪ Generating certificates and keys ...
    ▪ Booting up control plane ...
    ▪ Configuring RBAC rules ...
🔎  Verifying Kubernetes components...
    ▪ Using image gcr.io/k8s-minikube/storage-provisioner:v5
🌟  Enabled addons: storage-provisioner, default-storageclass
🏄  Done! kubectl is now configured to use "k8s-1.22.2" cluster and "default" namespace by default

❯ minikube -p k8s-1.22.2 addons enable ingress
    ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
    ▪ Using image k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1
    ▪ Using image k8s.gcr.io/ingress-nginx/controller:v1.0.4
🔎  Verifying ingress addon...
🌟  The 'ingress' addon is enabled

❯ kubectl -n ingress-nginx get pod -o wide
NAME                                        READY   STATUS      RESTARTS   AGE   IP           NODE         NOMINATED NODE   READINESS GATES
ingress-nginx-admission-create--1-ljnpx     0/1     Completed   0          64s   172.17.0.2   k8s-1.22.2   <none>           <none>
ingress-nginx-admission-patch--1-sfbmq      0/1     Completed   0          64s   172.17.0.3   k8s-1.22.2   <none>           <none>
ingress-nginx-controller-5f66978484-8jbw9   1/1     Running     0          64s   172.17.0.2   k8s-1.22.2   <none>           <none>
❯ env TEST_ARGS="-minikube-start-args='--driver=docker' -test.run TestAddons --cleanup=false" make integration
go build -gcflags="all=-N -l"  -tags "" -ldflags="-X k8s.io/minikube/pkg/version.version=v1.23.2 -X k8s.io/minikube
...
--- PASS: TestAddons (238.45s)
    --- PASS: TestAddons/Setup (129.90s)
    --- PASS: TestAddons/parallel (0.00s)
        --- PASS: TestAddons/parallel/MetricsServer (5.67s)
        --- PASS: TestAddons/parallel/HelmTiller (19.89s)
        --- PASS: TestAddons/parallel/Registry (23.44s)
        --- PASS: TestAddons/parallel/CSI (43.04s)
        --- PASS: TestAddons/parallel/Ingress (44.62s)
        --- PASS: TestAddons/parallel/Olm (82.23s)
    --- PASS: TestAddons/serial (13.31s)
        --- PASS: TestAddons/serial/GCPAuth (13.31s)
    --- PASS: TestAddons/StoppedEnableDisable (13.01s)
PASS
Tests completed in 3m58.448692954s (result code 0)
ok      k8s.io/minikube/test/integration        238.481s

@k8s-ci-robot k8s-ci-robot added cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Oct 27, 2021
@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Oct 27, 2021
@prezha prezha self-assigned this Oct 27, 2021
@prezha
Contributor Author

prezha commented Oct 27, 2021

/ok-to-test

@k8s-ci-robot k8s-ci-robot added the ok-to-test Indicates a non-member PR verified by an org member that is safe to test. label Oct 27, 2021
@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 12794) |
+----------------+----------+---------------------+
| minikube start | 47.4s    | 47.3s               |
| enable ingress | 31.8s    | 32.0s               |
+----------------+----------+---------------------+

Times for minikube (PR 12794) ingress: 31.2s 32.2s 32.8s 31.8s 31.7s
Times for minikube ingress: 30.8s 33.3s 30.3s 32.8s 31.8s

Times for minikube start: 49.5s 48.3s 46.5s 46.8s 45.9s
Times for minikube (PR 12794) start: 46.8s 48.0s 45.9s 47.6s 48.1s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 12794) |
+----------------+----------+---------------------+
| minikube start | 21.7s    | 22.0s               |
| enable ingress | 32.4s    | 29.1s               |
+----------------+----------+---------------------+

Times for minikube start: 22.2s 21.7s 21.2s 21.1s 22.5s
Times for minikube (PR 12794) start: 21.7s 22.2s 22.1s 22.1s 22.0s

Times for minikube ingress: 34.9s 32.4s 27.9s 31.9s 35.0s
Times for minikube (PR 12794) ingress: 26.9s 35.4s 27.9s 27.4s 27.9s

docker driver with containerd runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 12794) |
+----------------+----------+---------------------+
| minikube start | 42.1s    | 43.6s               |
| enable ingress | 33.9s    | 31.9s               |
+----------------+----------+---------------------+

Times for minikube start: 31.3s 43.7s 43.6s 48.1s 43.8s
Times for minikube (PR 12794) start: 43.4s 43.0s 43.5s 45.4s 42.5s

Times for minikube ingress: 33.4s 37.0s 36.9s 33.4s 28.9s
Times for minikube (PR 12794) ingress: 40.9s 37.9s 19.4s 28.4s 32.9s

@minikube-pr-bot

These are the flake rates of all failed tests.

Environment Failed Tests Flake Rate (%)
Docker_Linux_containerd TestAddons/Setup (gopogh) 0.00 (chart)
Docker_Linux TestAddons/Setup (gopogh) 0.00 (chart)
Docker_Windows TestKicCustomNetwork/use_default_bridge_network (gopogh) 0.00 (chart)
Docker_Windows TestNetworkPlugins/group/false/Start (gopogh) 0.00 (chart)
KVM_Linux_containerd TestAddons/Setup (gopogh) 0.00 (chart)
KVM_Linux TestAddons/Setup (gopogh) 0.00 (chart)
none_Linux TestAddons/Setup (gopogh) 0.00 (chart)
Docker_Linux_containerd TestKVMDriverInstallOrUpdate (gopogh) 0.69 (chart)
none_Linux TestFunctional/serial/LogsFileCmd (gopogh) 0.69 (chart)
Docker_Windows TestPause/serial/SecondStartNoReconfiguration (gopogh) 0.72 (chart)
Docker_Windows TestStartStop/group/old-k8s-version/serial/Pause (gopogh) 6.52 (chart)
KVM_Linux_containerd TestPause/serial/SecondStartNoReconfiguration (gopogh) 11.61 (chart)
Docker_Linux_containerd TestPause/serial/Pause (gopogh) 15.17 (chart)
Docker_Linux_containerd TestPause/serial/VerifyStatus (gopogh) 15.17 (chart)
Docker_Windows TestKubernetesUpgrade (gopogh) 22.46 (chart)
Docker_Linux_containerd TestPause/serial/PauseAgain (gopogh) 25.52 (chart)
Docker_Windows TestMountStart/serial/StartWithMountFirst (gopogh) 65.14 (chart)
Docker_Windows TestMountStart/serial/StartWithMountSecond (gopogh) 65.14 (chart)
Docker_Windows TestMountStart/serial/VerifyMountFirst (gopogh) 65.14 (chart)
Docker_Windows TestMountStart/serial/VerifyMountPostDelete (gopogh) 65.14 (chart)
Docker_Windows TestMountStart/serial/VerifyMountSecond (gopogh) 65.14 (chart)
Docker_Windows TestNetworkPlugins/group/custom-weave/Start (gopogh) 69.57 (chart)
Docker_Windows TestNetworkPlugins/group/kindnet/DNS (gopogh) 70.00 (chart)
Docker_Windows TestNetworkPlugins/group/enable-default-cni/DNS (gopogh) 76.47 (chart)
Docker_Windows TestNetworkPlugins/group/calico/Start (gopogh) 80.43 (chart)
Docker_Windows TestNetworkPlugins/group/bridge/DNS (gopogh) 81.34 (chart)
Docker_Windows TestNetworkPlugins/group/kubenet/DNS (gopogh) 81.34 (chart)
Docker_Linux_containerd TestScheduledStopUnix (gopogh) 100.00 (chart)
Docker_Windows TestCertOptions (gopogh) 100.00 (chart)
Docker_Windows TestMountStart/serial/RestartStopped (gopogh) 100.00 (chart)
More tests... Continued...

Too many tests failed - See test logs for more details.

To see the flake rates of all tests by environment, click here.

@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 12794) |
+----------------+----------+---------------------+
| minikube start | 48.2s    | 47.6s               |
| enable ingress | 32.9s    | 32.4s               |
+----------------+----------+---------------------+

Times for minikube start: 51.0s 47.9s 48.5s 47.4s 46.4s
Times for minikube (PR 12794) start: 49.1s 47.3s 47.6s 47.1s 46.9s

Times for minikube ingress: 32.8s 32.3s 32.9s 34.3s 32.3s
Times for minikube (PR 12794) ingress: 32.8s 32.3s 32.8s 33.8s 30.3s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 12794) |
+----------------+----------+---------------------+
| minikube start | 21.9s    | 22.0s               |
| enable ingress | 34.0s    | 33.4s               |
+----------------+----------+---------------------+

Times for minikube ingress: 34.9s 35.0s 36.9s 27.9s 35.4s
Times for minikube (PR 12794) ingress: 35.0s 35.4s 34.4s 34.9s 27.5s

Times for minikube start: 22.9s 21.6s 22.0s 21.1s 21.9s
Times for minikube (PR 12794) start: 22.5s 22.0s 21.8s 21.8s 21.6s

docker driver with containerd runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 12794) |
+----------------+----------+---------------------+
| minikube start | 40.4s    | 42.6s               |
| enable ingress | 32.3s    | 33.8s               |
+----------------+----------+---------------------+

Times for minikube (PR 12794) start: 45.1s 43.8s 36.9s 43.4s 43.6s
Times for minikube start: 27.3s 44.1s 43.1s 44.2s 43.3s

Times for minikube ingress: 37.4s 37.4s 33.9s 32.9s 19.9s
Times for minikube (PR 12794) ingress: 36.5s 33.4s 29.6s 32.9s 36.4s

@minikube-pr-bot

These are the flake rates of all failed tests.

Environment Failed Tests Flake Rate (%)
Docker_Windows TestPause/serial/DeletePaused (gopogh) 0.71 (chart)
Docker_Windows TestRunningBinaryUpgrade (gopogh) 0.71 (chart)
Docker_Windows TestFunctional/parallel/PersistentVolumeClaim (gopogh) 1.42 (chart)
Docker_Windows TestSkaffold (gopogh) 1.42 (chart)
KVM_Linux_containerd TestAddons/parallel/Registry (gopogh) 2.55 (chart)
Docker_Windows TestStartStop/group/embed-certs/serial/Pause (gopogh) 8.51 (chart)
Docker_Windows TestKubernetesUpgrade (gopogh) 21.99 (chart)
Docker_Windows TestNetworkPlugins/group/calico/NetCatPod (gopogh) 39.29 (chart)
Docker_Windows TestMountStart/serial/StartWithMountFirst (gopogh) 66.07 (chart)
Docker_Windows TestMountStart/serial/StartWithMountSecond (gopogh) 66.07 (chart)
Docker_Windows TestMountStart/serial/VerifyMountFirst (gopogh) 66.07 (chart)
Docker_Windows TestMountStart/serial/VerifyMountPostDelete (gopogh) 66.07 (chart)
Docker_Windows TestMountStart/serial/VerifyMountSecond (gopogh) 66.07 (chart)
Docker_Linux_containerd TestScheduledStopUnix (gopogh) 100.00 (chart)
Docker_Windows TestCertOptions (gopogh) 100.00 (chart)
Docker_Windows TestMountStart/serial/RestartStopped (gopogh) 100.00 (chart)
Docker_Windows TestMountStart/serial/Stop (gopogh) 100.00 (chart)
Docker_Windows TestMountStart/serial/VerifyMountPostStop (gopogh) 100.00 (chart)

To see the flake rates of all tests by environment, click here.

// https://github.com/kubernetes/ingress-nginx/blob/0a2ec01eb4ec0e1b29c4b96eb838a2e7bfe0e9f6/deploy/static/provider/kind/deploy.yaml#L328
"IngressController": "ingress-nginx/controller:v0.49.3@sha256:35fe394c82164efa8f47f3ed0be981b3f23da77175bbb8268a9ae438851c8324",
// issues: https://github.com/kubernetes/ingress-nginx/issues/7418 and https://github.com/jet/kube-webhook-certgen/issues/30
"KubeWebhookCertgenCreate": "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7",
"KubeWebhookCertgenPatch": "docker.io/jettech/kube-webhook-certgen:v1.5.1@sha256:950833e19ade18cd389d647efb88992a7cc077abedef343fa59e012d376d79b7",
}
if cc.CustomAddonImages == nil {
Member

@medyagh medyagh Oct 27, 2021

I could be wrong, but @prezha does this mean the custom addon image won't work for ingress? Why are we removing this?

Contributor Author

@medyagh in my first attempt to maintain backward compatibility (PR #12325), i used cc.CustomAddonImages to pass compatible images, but only in the case that those were not already set by the user

that does not work because the logic in assets.SelectAndPersistImages overrides it - it expects to read custom images from viper/flags (not from the cc):

newImages := parseMapString(viper.GetString(config.AddonImages))
for name, image := range newImages {
	if image == "" {
		out.WarningT("Ignoring empty custom image {{.name}}", out.V{"name": name})
		delete(newImages, name)
		continue
	}
	if _, ok := addonDefaultImages[name]; !ok {
		out.WarningT("Ignoring unknown custom image {{.name}}", out.V{"name": name})
	}
}
// Use newly configured custom images.
images = overrideDefaults(addonDefaultImages, newImages)

hence, here i've amended addons.supportLegacyIngress to instead replace the default images with compatible ones when an older k8s version is used - ie, without using or affecting the cc in any way; any custom images the user may have provided via flags should therefore continue to work as before

on a side note, i think the current logic in assets.SelectAndPersistImages mentioned above could be problematic: custom images defined in a first run could be overridden with the defaults in subsequent runs, if those runs don't repeat the flag providing the initial custom images - the same could apply to custom registries; but if that's an issue at all, it would be another story :)
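The flag-precedence behaviour discussed here can be illustrated with a small merge helper - a sketch of the semantics only, borrowing the `overrideDefaults` name from the snippet above rather than reproducing minikube's exact implementation:

```go
package main

import "fmt"

// overrideDefaults-style merge: non-empty custom images (from the flag)
// take precedence over the addon defaults; empty custom entries are
// ignored, mirroring the empty-image warning in the snippet above.
func overrideDefaults(defaults, custom map[string]string) map[string]string {
	merged := make(map[string]string, len(defaults))
	for name, image := range defaults {
		merged[name] = image
	}
	for name, image := range custom {
		if image != "" {
			merged[name] = image
		}
	}
	return merged
}

func main() {
	defaults := map[string]string{"IngressController": "controller:v1.0.4"}
	custom := map[string]string{"IngressController": "myrepo/controller:dev"}
	fmt.Println(overrideDefaults(defaults, custom)["IngressController"]) // custom image wins
}
```

Because `custom` is re-read from the flags on every run, a run that omits the flag merges an empty map and falls back to the defaults - which is the subsequent-run override concern raised above.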

@spowelljr
Member

@prezha Anything in your PR description that follows the pattern fixes <issue> will auto-close those issues, so the issues you labeled improves/fixes and potentially helps/fixes will be auto-closed. That's fine if it's intended, but if not, just reword those lines.

@prezha
Contributor Author

prezha commented Oct 27, 2021

@prezha Anything in your PR description that follows the pattern fixes <issue> will auto-close those issues, so the issues you labeled improves/fixes and potentially helps/fixes will be auto-closed. That's fine if it's intended, but if not, just reword those lines.

@spowelljr thanks for pointing that out! to the best of my knowledge (ie, based on the issues' descriptions and attempts to replicate them), i left in the issues i think should be fixed by this pr; i also removed two "improves/fixes" issues that might be more related to network issues in general, and reworded the two "potentially helps/fixes" issues (ie, #11987 and #12189) that can be a subject of other issues as well and therefore should remain open until confirmed that they can be closed

@sharifelgamal
Collaborator

@prezha I think this change looks great! Could you add a new test, outside of the existing addons_test, that starts a new cluster with an "old" k8s version then enables ingress? The code to verify ingress can be shared between the two tests.

@k8s-ci-robot k8s-ci-robot added size/L Denotes a PR that changes 100-499 lines, ignoring generated files. and removed size/S Denotes a PR that changes 10-29 lines, ignoring generated files. labels Oct 28, 2021
@prezha
Contributor Author

prezha commented Oct 28, 2021

@prezha I think this change looks great! Could you add a new test, outside of the existing addons_test, that starts a new cluster with an "old" k8s version then enables ingress? The code to verify ingress can be shared between the two tests.

thanks @sharifelgamal - i've also added support for ingress-dns, and a new test that now covers both addons (for which i had to adapt integration.validateIngressAddon to be able to reuse it; and since i've also removed the annotation and ingressClassName from the nginx-ingress-v1.yaml testdata, the no-annotation behaviour is tested as well)
please have a look and let me know if this is what you had in mind

examples:

  • 1.18 / v1beta1
❯ env TEST_ARGS="-test.run TestIngressAddonLegacy" make integration
go build  -tags "" -ldflags="-X k8s.io/minikube/pkg/version.version=v1.23.2 -X k8s.io/minikube/pkg/version.isoVersion=v1.23.1-1633115168-12081 -X k8s.io/minikube/pkg/version.gitCommitID="76c1e7959859f8c8bf5a35a299962806ea8f5549-dirty" -X k8s.io/minikube/pkg/version.storageProvisionerVersion=v5" -o out/minikube k8s.io/minikube/cmd/minikube
go test -ldflags="-X k8s.io/minikube/pkg/version.version=v1.23.2 -X k8s.io/minikube/pkg/version.isoVersion=v1.23.1-1633115168-12081 -X k8s.io/minikube/pkg/version.gitCommitID="76c1e7959859f8c8bf5a35a299962806ea8f5549-dirty" -X k8s.io/minikube/pkg/version.storageProvisionerVersion=v5" -v -test.timeout=90m ./test/integration --tags="integration " -test.run TestIngressAddonLegacy 2>&1 | tee "./out/testout_76c1e7959.txt"
Found 16 cores, limiting parallelism with --test.parallel=9
=== RUN   TestIngressAddonLegacy
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
    ingress_addon_legacy_test.go:40: (dbg) Run:  out/minikube start -p ingress-addon-legacy-20211028033955-299634 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5
    ingress_addon_legacy_test.go:40: (dbg) Done: out/minikube start -p ingress-addon-legacy-20211028033955-299634 --kubernetes-version=v1.18.20 --memory=4096 --wait=true --alsologtostderr -v=5 : (1m2.081186126s)
=== RUN   TestIngressAddonLegacy/serial
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
    ingress_addon_legacy_test.go:71: (dbg) Run:  out/minikube -p ingress-addon-legacy-20211028033955-299634 addons enable ingress --alsologtostderr -v=5
    ingress_addon_legacy_test.go:71: (dbg) Done: out/minikube -p ingress-addon-legacy-20211028033955-299634 addons enable ingress --alsologtostderr -v=5: (18.718205347s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
    ingress_addon_legacy_test.go:80: (dbg) Run:  out/minikube -p ingress-addon-legacy-20211028033955-299634 addons enable ingress-dns --alsologtostderr -v=5
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
    addons_test.go:163: (dbg) Run:  kubectl --context ingress-addon-legacy-20211028033955-299634 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
    addons_test.go:163: (dbg) Done: kubectl --context ingress-addon-legacy-20211028033955-299634 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (10.381548303s)
    addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20211028033955-299634 replace --force -f testdata/nginx-ingress-v1beta1.yaml
    addons_test.go:196: (dbg) Run:  kubectl --context ingress-addon-legacy-20211028033955-299634 replace --force -f testdata/nginx-pod-svc.yaml
    addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
    helpers_test.go:342: "nginx" [cbd7b1ab-e484-47d8-86d2-125f734f2522] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
    helpers_test.go:342: "nginx" [cbd7b1ab-e484-47d8-86d2-125f734f2522] Running
    addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.021979638s
    addons_test.go:213: (dbg) Run:  out/minikube -p ingress-addon-legacy-20211028033955-299634 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    addons_test.go:237: (dbg) Run:  kubectl --context ingress-addon-legacy-20211028033955-299634 replace --force -f testdata/ingress-dns-example-v1beta1.yaml
    addons_test.go:242: (dbg) Run:  out/minikube -p ingress-addon-legacy-20211028033955-299634 ip
    addons_test.go:248: (dbg) Run:  nslookup hello-john.test 192.168.39.47
    addons_test.go:257: (dbg) Run:  out/minikube -p ingress-addon-legacy-20211028033955-299634 addons disable ingress-dns --alsologtostderr -v=1
    addons_test.go:257: (dbg) Done: out/minikube -p ingress-addon-legacy-20211028033955-299634 addons disable ingress-dns --alsologtostderr -v=1: (7.203205737s)
    addons_test.go:262: (dbg) Run:  out/minikube -p ingress-addon-legacy-20211028033955-299634 addons disable ingress --alsologtostderr -v=1
    addons_test.go:262: (dbg) Done: out/minikube -p ingress-addon-legacy-20211028033955-299634 addons disable ingress --alsologtostderr -v=1: (28.491408287s)
=== CONT  TestIngressAddonLegacy
    helpers_test.go:175: Cleaning up "ingress-addon-legacy-20211028033955-299634" profile ...
    helpers_test.go:178: (dbg) Run:  out/minikube delete -p ingress-addon-legacy-20211028033955-299634
    helpers_test.go:178: (dbg) Done: out/minikube delete -p ingress-addon-legacy-20211028033955-299634: (1.055209533s)
--- PASS: TestIngressAddonLegacy (140.21s)
    --- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (62.08s)
    --- PASS: TestIngressAddonLegacy/serial (77.07s)
        --- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.72s)
        --- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.29s)
        --- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (58.06s)
PASS
Tests completed in 2m20.208623529s (result code 0)
ok      k8s.io/minikube/test/integration        140.235s
  • current / v1 (ie, simulated by not specifying k8s version via --kubernetes-version flag)
❯ env TEST_ARGS="-test.run TestIngressAddonLegacy" make integration
go build  -tags "" -ldflags="-X k8s.io/minikube/pkg/version.version=v1.23.2 -X k8s.io/minikube/pkg/version.isoVersion=v1.23.1-1633115168-12081 -X k8s.io/minikube/pkg/version.gitCommitID="76c1e7959859f8c8bf5a35a299962806ea8f5549-dirty" -X k8s.io/minikube/pkg/version.storageProvisionerVersion=v5" -o out/minikube k8s.io/minikube/cmd/minikube
go test -ldflags="-X k8s.io/minikube/pkg/version.version=v1.23.2 -X k8s.io/minikube/pkg/version.isoVersion=v1.23.1-1633115168-12081 -X k8s.io/minikube/pkg/version.gitCommitID="76c1e7959859f8c8bf5a35a299962806ea8f5549-dirty" -X k8s.io/minikube/pkg/version.storageProvisionerVersion=v5" -v -test.timeout=90m ./test/integration --tags="integration " -test.run TestIngressAddonLegacy 2>&1 | tee "./out/testout_76c1e7959.txt"
Found 16 cores, limiting parallelism with --test.parallel=9
=== RUN   TestIngressAddonLegacy
=== RUN   TestIngressAddonLegacy/StartLegacyK8sCluster
    ingress_addon_legacy_test.go:41: (dbg) Run:  out/minikube start -p ingress-addon-legacy-20211028034423-302579 --memory=4096 --wait=true --alsologtostderr -v=5
    ingress_addon_legacy_test.go:41: (dbg) Done: out/minikube start -p ingress-addon-legacy-20211028034423-302579 --memory=4096 --wait=true --alsologtostderr -v=5 : (1m0.975816829s)
=== RUN   TestIngressAddonLegacy/serial
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddonActivation
    ingress_addon_legacy_test.go:72: (dbg) Run:  out/minikube -p ingress-addon-legacy-20211028034423-302579 addons enable ingress --alsologtostderr -v=5
    ingress_addon_legacy_test.go:72: (dbg) Done: out/minikube -p ingress-addon-legacy-20211028034423-302579 addons enable ingress --alsologtostderr -v=5: (18.154807988s)
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation
    ingress_addon_legacy_test.go:81: (dbg) Run:  out/minikube -p ingress-addon-legacy-20211028034423-302579 addons enable ingress-dns --alsologtostderr -v=5
=== RUN   TestIngressAddonLegacy/serial/ValidateIngressAddons
    addons_test.go:163: (dbg) Run:  kubectl --context ingress-addon-legacy-20211028034423-302579 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
    addons_test.go:163: (dbg) Done: kubectl --context ingress-addon-legacy-20211028034423-302579 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s: (12.043994483s)
    addons_test.go:183: (dbg) Run:  kubectl --context ingress-addon-legacy-20211028034423-302579 replace --force -f testdata/nginx-ingress-v1.yaml
    addons_test.go:196: (dbg) Run:  kubectl --context ingress-addon-legacy-20211028034423-302579 replace --force -f testdata/nginx-pod-svc.yaml
    addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
    helpers_test.go:342: "nginx" [bfa6b2f1-e6f8-4038-b8b7-95bc923bfaa7] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
    helpers_test.go:342: "nginx" [bfa6b2f1-e6f8-4038-b8b7-95bc923bfaa7] Running
    addons_test.go:201: (dbg) TestIngressAddonLegacy/serial/ValidateIngressAddons: run=nginx healthy within 11.030951337s
    addons_test.go:213: (dbg) Run:  out/minikube -p ingress-addon-legacy-20211028034423-302579 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
    addons_test.go:237: (dbg) Run:  kubectl --context ingress-addon-legacy-20211028034423-302579 replace --force -f testdata/ingress-dns-example-v1.yaml
    addons_test.go:242: (dbg) Run:  out/minikube -p ingress-addon-legacy-20211028034423-302579 ip
    addons_test.go:248: (dbg) Run:  nslookup hello-john.test 192.168.39.230
    addons_test.go:257: (dbg) Run:  out/minikube -p ingress-addon-legacy-20211028034423-302579 addons disable ingress-dns --alsologtostderr -v=1
    addons_test.go:257: (dbg) Done: out/minikube -p ingress-addon-legacy-20211028034423-302579 addons disable ingress-dns --alsologtostderr -v=1: (1.899620117s)
    addons_test.go:262: (dbg) Run:  out/minikube -p ingress-addon-legacy-20211028034423-302579 addons disable ingress --alsologtostderr -v=1
    addons_test.go:262: (dbg) Done: out/minikube -p ingress-addon-legacy-20211028034423-302579 addons disable ingress --alsologtostderr -v=1: (28.819080893s)
=== CONT  TestIngressAddonLegacy
    helpers_test.go:175: Cleaning up "ingress-addon-legacy-20211028034423-302579" profile ...
    helpers_test.go:178: (dbg) Run:  out/minikube delete -p ingress-addon-legacy-20211028034423-302579
    helpers_test.go:178: (dbg) Done: out/minikube delete -p ingress-addon-legacy-20211028034423-302579: (1.195711011s)
--- PASS: TestIngressAddonLegacy (135.28s)
    --- PASS: TestIngressAddonLegacy/StartLegacyK8sCluster (60.98s)
    --- PASS: TestIngressAddonLegacy/serial (73.11s)
        --- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddonActivation (18.15s)
        --- PASS: TestIngressAddonLegacy/serial/ValidateIngressDNSAddonActivation (0.33s)
        --- PASS: TestIngressAddonLegacy/serial/ValidateIngressAddons (54.63s)
PASS
Tests completed in 2m15.283069626s (result code 0)
ok      k8s.io/minikube/test/integration        135.309s

@minikube-pr-bot

kvm2 driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 12794) |
+----------------+----------+---------------------+
| minikube start | 47.4s    | 47.6s               |
| enable ingress | 31.8s    | 31.5s               |
+----------------+----------+---------------------+

Times for minikube start: 49.9s 46.2s 47.8s 46.9s 46.0s
Times for minikube (PR 12794) start: 47.4s 47.8s 47.8s 47.8s 46.8s

Times for minikube (PR 12794) ingress: 32.8s 30.8s 31.4s 31.2s 31.4s
Times for minikube ingress: 31.8s 32.2s 31.3s 31.3s 32.2s

docker driver with docker runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 12794) |
+----------------+----------+---------------------+
| minikube start | 21.9s    | 21.3s               |
| enable ingress | 35.4s    | 33.0s               |
+----------------+----------+---------------------+

Times for minikube start: 23.2s 21.6s 21.5s 22.2s 20.8s
Times for minikube (PR 12794) start: 21.8s 21.4s 20.5s 20.7s 21.9s

Times for minikube ingress: 34.4s 35.5s 36.9s 34.9s 35.5s
Times for minikube (PR 12794) ingress: 34.9s 36.9s 35.4s 28.9s 28.9s

docker driver with containerd runtime

+----------------+----------+---------------------+
|    COMMAND     | MINIKUBE | MINIKUBE (PR 12794) |
+----------------+----------+---------------------+
| minikube start | 34.6s    | 44.3s               |
| enable ingress | 32.8s    | 32.3s               |
+----------------+----------+---------------------+

Times for minikube start: 30.5s 42.5s 26.0s 43.8s 30.1s
Times for minikube (PR 12794) start: 43.6s 43.3s 43.0s 43.9s 47.7s

Times for minikube ingress: 29.4s 23.9s 47.4s 33.9s 29.4s
Times for minikube (PR 12794) ingress: 28.9s 36.9s 36.9s 29.5s 29.4s

@minikube-pr-bot

These are the flake rates of all failed tests.

Environment Failed Tests Flake Rate (%)
Docker_Linux TestFunctional/serial/ComponentHealth (gopogh) 7.64 (chart)
Docker_Linux_containerd TestPause/serial/Pause (gopogh) 17.36 (chart)
Docker_Linux_containerd TestPause/serial/VerifyStatus (gopogh) 17.36 (chart)
Docker_Linux_containerd TestPause/serial/PauseAgain (gopogh) 27.78 (chart)
Docker_Windows TestMountStart/serial/StartWithMountFirst (gopogh) 67.52 (chart)
Docker_Windows TestMountStart/serial/StartWithMountSecond (gopogh) 67.52 (chart)
Docker_Windows TestMountStart/serial/VerifyMountFirst (gopogh) 67.52 (chart)
Docker_Windows TestMountStart/serial/VerifyMountPostDelete (gopogh) 67.52 (chart)
Docker_Windows TestMountStart/serial/VerifyMountSecond (gopogh) 67.52 (chart)
Docker_Windows TestNetworkPlugins/group/kindnet/DNS (gopogh) 68.54 (chart)
Docker_Windows TestNetworkPlugins/group/custom-weave/Start (gopogh) 69.63 (chart)
Docker_Windows TestNetworkPlugins/group/enable-default-cni/DNS (gopogh) 75.19 (chart)
Docker_Windows TestNetworkPlugins/group/calico/Start (gopogh) 78.52 (chart)
Docker_Windows TestNetworkPlugins/group/kubenet/DNS (gopogh) 80.15 (chart)
Docker_Windows TestNetworkPlugins/group/bridge/DNS (gopogh) 80.30 (chart)
Docker_Linux_containerd TestScheduledStopUnix (gopogh) 100.00 (chart)
Docker_Windows TestCertOptions (gopogh) 100.00 (chart)
Docker_Windows TestMountStart/serial/RestartStopped (gopogh) 100.00 (chart)
Docker_Windows TestMountStart/serial/Stop (gopogh) 100.00 (chart)
Docker_Windows TestMountStart/serial/VerifyMountPostStop (gopogh) 100.00 (chart)
Docker_Windows TestPause/serial/VerifyDeletedResources (gopogh) 100.00 (chart)

To see the flake rates of all tests by environment, click here.

Collaborator

@sharifelgamal sharifelgamal left a comment

Thanks for doing this!

@k8s-ci-robot
Contributor

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: prezha, sharifelgamal

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [prezha,sharifelgamal]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@sharifelgamal sharifelgamal merged commit 6bbf98d into kubernetes:master Oct 28, 2021
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. ok-to-test Indicates a non-member PR verified by an org member that is safe to test. size/L Denotes a PR that changes 100-499 lines, ignoring generated files.