[BUG] Number of Passed/Failed test case is not accurate #1649

Closed
iElephant opened this issue Sep 21, 2022 · 10 comments
Assignees: agentpoyo
Labels: 3 pts · bug (Something isn't working) · sprint22.21 (Sprint 22.21: Oct 7-20, 2022) · v0.36.0 (Issue included in v0.36.0 release)

Comments

@iElephant

Describe the bug
From the execution output, it looks like the number of passed cases is not accurate for some of the categories.

For example, in compatibility test cases, I have everything passed but it still shows 9 of 11 tests passed:

[screenshot: compatibility test summary showing 9 of 11 tests passed]

It happens to other categories too, please see attached for the full test output.

test_output.txt

To Reproduce
Just run ./cnf-testsuite workload and check the number of passed/fail cases.
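For example, a minimal run that captures the output for comparison (assuming the suite and a sample CNF are already set up):

./cnf-testsuite workload | tee test_output.txt
# Compare each "results: X of Y tests passed" summary line in test_output.txt
# against the PASSED/FAILED lines printed above it for that category.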

Expected behavior
The reported number of passed/failed test case reflect the actual tested cases.

@iElephant iElephant added the bug Something isn't working label Sep 21, 2022
@agentpoyo
Collaborator

This might be caused by the increase_decrease test, which was combined into a single test but still has separate entries in the embedded points.yml. We'll look into this for that category and verify the others as well.
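For reference, a quick way to spot leftover duplicate entries is to grep the embedded points file; the file path and entry names below are assumptions for illustration only:

# Hypothetical check for stale entries left over after the increase/decrease tests were merged;
# the path and the entry names are assumptions, adjust them to the actual repo layout.
grep -nE "increase_decrease|increase_capacity|decrease_capacity" points.yml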

@agentpoyo agentpoyo self-assigned this Sep 21, 2022
@iElephant
Author

Thanks @agentpoyo for the quick reply. It happens in other categories too; for example, there are 20 test cases in security, but it always reports 16 test cases...

Another question: are SKIPPED cases included in the total number of cases in the printout?

@agentpoyo
Collaborator

Another question: are SKIPPED cases included in the total number of cases in the printout?

Skipped or N/A will not be part of the total.
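As a rough illustration of that rule (a sketch only, counting across the whole attached test_output.txt rather than per category):

passed=$(grep -c "PASSED" test_output.txt)
failed=$(grep -c "FAILED" test_output.txt)
# Skipped and N/A lines simply do not contribute to the total.
echo "counted: $((passed + failed)) total ($passed passed, $failed failed)"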

@agentpoyo
Collaborator

Verified that workload and cert category points are not being calculated properly.

denverwilliams added a commit that referenced this issue Sep 26, 2022
…pped.

Signed-off-by: denverwilliams <denver@debian.nz>
denverwilliams added a commit that referenced this issue Sep 28, 2022
Signed-off-by: denverwilliams <denver@debian.nz>
denverwilliams pushed a commit that referenced this issue Sep 29, 2022
Signed-off-by: denverwilliams <denver@debian.nz>
denverwilliams added a commit that referenced this issue Sep 29, 2022
Signed-off-by: denverwilliams <denver@debian.nz>
denverwilliams added a commit that referenced this issue Sep 29, 2022
Signed-off-by: denverwilliams <denver@debian.nz>
wvwatson pushed a commit that referenced this issue Sep 29, 2022
Signed-off-by: denverwilliams <denver@debian.nz>
@lixuna lixuna added sprint22.21 Sprint 22.21: Oct 7-20, 2022 v0.35.0 Issue included in v0.35.0 release labels Oct 6, 2022
denverwilliams added a commit that referenced this issue Oct 10, 2022
Signed-off-by: denverwilliams <denver@debian.nz>
agentpoyo added a commit that referenced this issue Oct 10, 2022
@agentpoyo
Collaborator

agentpoyo commented Oct 13, 2022

Acceptance Criteria

  • Check out the main branch source and follow the install instructions for cnf-testsuite.
  • When you run cert or workload, the number of tests that run should match the summary for each category (excluding bonus and N/A tests: they are counted only when they run and pass, not when they fail or are skipped).
  • Screenshots of the output for each category are posted here; a rough shell walkthrough of these steps is sketched after this list.
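A minimal sketch of the verification steps, assuming the suite is already installed per the project's install docs (exact setup commands are not shown here):

# Run both suites from a main branch checkout and keep the output for review.
git checkout main
./cnf-testsuite cert     | tee cert_output.txt
./cnf-testsuite workload | tee workload_output.txt
# For each category, compare the "results: X of Y tests passed" summary line
# with the PASSED/FAILED entries above it, excluding bonus and N/A tests that
# were skipped or failed.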

@lixuna
Collaborator

lixuna commented Oct 21, 2022

@taylor please peer review

@taylor
Member

taylor commented Oct 21, 2022

@denverwilliams @agentpoyo please post screenshots with the results showing the number of passed/failed is fixed.

@agentpoyo
Collaborator

@denverwilliams @agentpoyo please post screenshots with the results showing the number of passed/failed is fixed.

I'm running both cert and workload now to verify this bug and the points for each.

@agentpoyo
Collaborator

CERT Results for points output:

Compatibility category is accurate:

Compatibility, Installability & Upgradability Tests
Successfully created directories for cnf-testsuite
✔️  PASSED: Helm Chart exported_chart Lint Passed ⎈📝☑️
✔️  PASSED: Published Helm Chart Found ⎈📦🌐
✔️  PASSED: Helm deploy successful ⎈🚀
✔️  PASSED: CNF compatible with both Calico and Cilium 🔓🔑
✔️  PASSED: Replicas increased to 3 and decreased to 1 📦📉
Please add the container name coredns and a corresponding rollback_from_tag into your cnf-testsuite.yml under container names
✖️  FAILED: CNF Rollback Failed
Compatibility, installability, and upgradeability results: 5 of 6 tests passed

State tests are accurate:

State Tests
Rescued: On resource coredns-coredns of kind Service, local storage configuration volumes not found 🖥️  💾
✔️  ✨PASSED: local storage configuration volumes not found 🖥️  💾
✔️  ✨FAILED: Volumes used are not elastic volumes 🧫
✔️  🏆 PASSED: node_drain chaos test passed 🗡️💀♻️
State results: 2 of 2 tests passed

Security tests are accurate once you exclude the N/A and bonus tests, which are not counted when they fail or are skipped:

Security Tests
✔️  PASSED: No containers allow a symlink attack 🔓🔑
✖️  FAILED: Found containers that allow privilege escalation 🔓🔑
Failed resource: Deployment coredns-coredns in cnfspace namespace
Remediation: If your application does not need it, make sure the allowPrivilegeEscalation field of the securityContext is set to false.
✔️  PASSED: Containers with insecure capabilities were not found 🔓🔑
✔️  🏆 PASSED: Containers have resource limits defined 🔓🔑
✖️  ✨FAILED: Found resources that do not use security services 🔓🔑
Failed resource: Deployment coredns-coredns in cnfspace namespace
Remediation: You can use AppArmor, Seccomp, SELinux and Linux Capabilities mechanisms to restrict containers abilities to utilize unwanted privileges.
✖️  ✨FAILED: Ingress and Egress traffic not blocked on pods 🔓🔑
Failed resource: Deployment coredns-coredns in cnfspace namespace
Remediation: Define a network policy that restricts ingress and egress connections.
✔️  PASSED: No containers with hostPID and hostIPC privileges 🔓🔑
✖️  🏆 FAILED: Found containers running with root user or user with root group membership 🔓🔑
Failed resource: Deployment coredns-coredns in cnfspace namespace
Remediation: If your application does not need root privileges, make sure to define the runAsUser or runAsGroup under the PodSecurityContext and use user ID 1000 or higher. Do not turn on allowPrivlegeEscalation bit and make sure runAsNonRoot is true.
✔️  🏆 PASSED: No privileged containers were found 🔓🔑
✖️  ✨FAILED: Found containers with mutable file systems 🔓🔑
Failed resource: Deployment coredns-coredns in cnfspace namespace
Remediation: Set the filesystem of the container to read-only when possible (POD securityContext, readOnlyRootFilesystem: true). If containers application needs to write into the filesystem, it is recommended to mount secondary filesystems for specific directories where application require write access.
✔️  PASSED: Containers do not have hostPath mounts 🔓🔑
✔️  🏆 PASSED: Container engine daemon sockets are not mounted as volumes 🔓🔑
✔️  PASSED: Services are not using external IPs 🔓🔑
⏭️  🏆 N/A: Pods are not using SELinux 🔓🔑
✔️  PASSED: No restricted values found for sysctls 🔓🔑
✔️  PASSED: No host network attached to pod 🔓🔑
✖️  FAILED: Service accounts automatically mapped 🔓🔑
Failed resource: Deployment coredns-coredns in cnfspace namespace
Remediation: Disable automatic mounting of service account tokens to PODs either at the service account level or at the individual POD level, by specifying the automountServiceAccountToken: false. Note that POD level takes precedence.
✔️  PASSED: No applications credentials in configuration files 🔓🔑
Security results: 11 of 14 tests passed

Configuration tests are accurate; the only failed/skipped tests were bonus tests:

Configuration Tests
✔️  PASSED: NodePort is not used
✔️  🏆 PASSED: HostPort is not used
✔️  🏆 PASSED: No hard-coded IP addresses found in the runtime K8s configuration
No Secret Volumes or Container secretKeyRefs found for resource: {kind: "Deployment", name: "coredns-coredns", namespace: "cnfspace"}
⏭  ✨SKIPPED: Secrets not used 🧫

To address this issue please see the USAGE.md documentation

✖️  ✨FAILED: Found mutable configmap(s) ⚖️
✔️  PASSED: Pods have the app.kubernetes.io/name label 🏷️✔️
✔️  🏆 PASSED: Container images are not using the latest tag 🏷️✔️
✔️  PASSED: default namespace is not being used 🏷️✔️
Configuration results: 6 of 6 tests passed

Observability is accurate in points:

Observability and Diagnostics Tests
✔️  🏆 PASSED: Resources output logs to stdout and stderr 📶☠️
⏭️  ✨SKIPPED: Prometheus server not found 📶☠️
⏭️  ✨SKIPPED: Prometheus traffic not configured 📶☠️
⏭️  ✨SKIPPED: Fluentd or FluentBit not configured 📶☠️
⏭️  ✨SKIPPED: Jaeger not configured ⎈🚀
Observability and diagnostics results: 1 of 1 tests passed

Microservice tests have a discrepancy in the count: the total should be 4 of 4, not 4 of 5, since the MariaDB shared-database test was skipped while another bonus test passed, yet only 4 tests actually passed. @denverwilliams @wavell

Microservice Tests
✔️  PASSED: Image size is good 🐜 ⚖️👀
✔️  PASSED: CNF had a reasonable startup time 🚀
✔️  🏆 PASSED: Only one process type used 🐜 ⚖️👀
✔️  ✨PASSED: Some containers exposed as a service 🐜 ⚖️👀
⏭️  SKIPPED: [shared_database] No MariaDB containers were found
Microservice results: 4 of 5 tests passed

Resilience is accurate:

Reliability, Resilience, and Availability Tests
✔️  ✨PASSED: pod_network_latency chaos test passed 🗡️💀♻️
✔️  ✨PASSED: pod_network_corruption chaos test passed 🗡️💀♻️
✔️  PASSED: disk_fill chaos test passed 🗡️💀♻️
✔️  PASSED: pod_delete chaos test passed 🗡️💀♻️
✔️  PASSED: pod_memory_hog chaos test passed 🗡️💀♻️
✔️  ✨PASSED: pod_io_stress chaos test passed 🗡️💀♻️
⏭️   ✨SKIPPED: pod_dns_error docker runtime not found 🗡️💀♻️
✔️  ✨PASSED: pod_network_duplication chaos test passed 🗡️💀♻️
✔️  🏆 PASSED: Helm liveness probe found ⎈🧫
✔️  🏆 PASSED: Helm readiness probe found ⎈🧫
Reliability, resilience, and availability results: 9 of 9 tests passed

@agentpoyo
Collaborator

agentpoyo commented Nov 10, 2022

The workload results: the microservices category shows the same behavior here as in the cert run, and a bug has been opened to address it. The rest of the categories match up as accurate, with skipped and N/A tests not counting toward category totals:

[root@akash-rhel8 cnf-testsuite-drew]# ./cnf-testsuite workload
Successfully created directories for cnf-testsuite
✔️  PASSED: Helm Chart exported_chart Lint Passed ⎈📝☑️
✔️  PASSED: Published Helm Chart Found ⎈📦🌐
✔️  PASSED: Helm deploy successful ⎈🚀
✔️  PASSED: CNF compatible with both Calico and Cilium 🔓🔑
✔️  PASSED: Replicas increased to 3 and decreased to 1 📦📉
Please add the container name coredns and a corresponding rollback_from_tag into your cnf-testsuite.yml under container names
✖️  FAILED: CNF Rollback Failed
Please add the container name coredns and a corresponding rolling_update_test_tag into your cnf-testsuite.yml under container names
✖️  FAILED: CNF for Rolling Update Failed
Please add the container name coredns and a corresponding rolling_downgrade_test_tag into your cnf-testsuite.yml under container names
✖️  FAILED: CNF for Rolling Downgrade Failed
Please add the container name coredns and a corresponding rolling_version_change_test_tag into your cnf-testsuite.yml under container names
✖️  FAILED: CNF for Rolling Version Change Failed
Compatibility, installability, and upgradeability results: 5 of 9 tests passed

✔️  PASSED: hostPath volumes not found 🖥️  💾
Rescued: On resource coredns-coredns of kind Service, local storage configuration volumes not found 🖥️  💾
✔️  ✨PASSED: local storage configuration volumes not found 🖥️  💾
✔️  ✨FAILED: Volumes used are not elastic volumes 🧫
⏭️  SKIPPED: Mysql not installed 🧫
✖️  🏆 FAILED: node_drain chaos test failed 🗡️💀♻️
State results: 2 of 4 tests passed

✔️  PASSED: No privileged containers 🔓🔑
⏭️  SKIPPED: Skipping non_root_user: Falco failed to install. Check Kernel Headers are installed on the Host Systems(K8s).
✔️  PASSED: No containers allow a symlink attack 🔓🔑
✖️  FAILED: Found containers that allow privilege escalation 🔓🔑
Failed resource: Deployment coredns-coredns in cnfspace namespace
Remediation: If your application does not need it, make sure the allowPrivilegeEscalation field of the securityContext is set to false.
✔️  PASSED: Containers with insecure capabilities were not found 🔓🔑
✔️  🏆 PASSED: Containers have resource limits defined 🔓🔑
✖️  ✨FAILED: Found resources that do not use security services 🔓🔑
Failed resource: Deployment coredns-coredns in cnfspace namespace
Remediation: You can use AppArmor, Seccomp, SELinux and Linux Capabilities mechanisms to restrict containers abilities to utilize unwanted privileges.
✖️  ✨FAILED: Ingress and Egress traffic not blocked on pods 🔓🔑
Failed resource: Deployment coredns-coredns in cnfspace namespace
Remediation: Define a network policy that restricts ingress and egress connections.
✔️  PASSED: No containers with hostPID and hostIPC privileges 🔓🔑
✖️  🏆 FAILED: Found containers running with root user or user with root group membership 🔓🔑
Failed resource: Deployment coredns-coredns in cnfspace namespace
Remediation: If your application does not need root privileges, make sure to define the runAsUser or runAsGroup under the PodSecurityContext and use user ID 1000 or higher. Do not turn on allowPrivlegeEscalation bit and make sure runAsNonRoot is true.
✔️  🏆 PASSED: No privileged containers were found 🔓🔑
✖️  ✨FAILED: Found containers with mutable file systems 🔓🔑
Failed resource: Deployment coredns-coredns in cnfspace namespace
Remediation: Set the filesystem of the container to read-only when possible (POD securityContext, readOnlyRootFilesystem: true). If containers application needs to write into the filesystem, it is recommended to mount secondary filesystems for specific directories where application require write access.
✔️  PASSED: Containers do not have hostPath mounts 🔓🔑
✔️  🏆 PASSED: Container engine daemon sockets are not mounted as volumes 🔓🔑
✔️  PASSED: Services are not using external IPs 🔓🔑
⏭️  🏆 N/A: Pods are not using SELinux 🔓🔑
✔️  PASSED: No restricted values found for sysctls 🔓🔑
✔️  PASSED: No host network attached to pod 🔓🔑
✖️  FAILED: Service accounts automatically mapped 🔓🔑
Failed resource: Deployment coredns-coredns in cnfspace namespace
Remediation: Disable automatic mounting of service account tokens to PODs either at the service account level or at the individual POD level, by specifying the automountServiceAccountToken: false. Note that POD level takes precedence.
✔️  PASSED: No applications credentials in configuration files 🔓🔑
Security results: 12 of 16 tests passed

✔️  PASSED: No IP addresses found
✔️  PASSED: NodePort is not used
✔️  🏆 PASSED: HostPort is not used
✔️  🏆 PASSED: No hard-coded IP addresses found in the runtime K8s configuration
No Secret Volumes or Container secretKeyRefs found for resource: {kind: "Deployment", name: "coredns-coredns", namespace: "cnfspace"}
⏭  ✨SKIPPED: Secrets not used 🧫

To address this issue please see the USAGE.md documentation

✖️  ✨FAILED: Found mutable configmap(s) ⚖️
⏭️  SKIPPED: alpha_k8s_apis
✔️  PASSED: Pods have the app.kubernetes.io/name label 🏷️✔️
✔️  🏆 PASSED: Container images are not using the latest tag 🏷️✔️
✔️  PASSED: default namespace is not being used 🏷️✔️
cnf-testsuite namespace already exists on the Kubernetes cluster
✔️  PASSED: Container images use versioned tags 🏷️✔️
Configuration results: 7 of 9 tests passed

✔️  🏆 PASSED: Resources output logs to stdout and stderr 📶☠️
⏭️  ✨SKIPPED: Prometheus server not found 📶☠️
⏭️  ✨SKIPPED: Prometheus traffic not configured 📶☠️
⏭️  ✨SKIPPED: Fluentd or FluentBit not configured 📶☠️
⏭️  ✨SKIPPED: Jaeger not configured ⎈🚀
Observability and diagnostics results: 1 of 1 tests passed

✔️  PASSED: Image size is good 🐜 ⚖️👀
✔️  PASSED: CNF had a reasonable startup time 🚀
✔️  🏆 PASSED: Only one process type used 🐜 ⚖️👀
✔️  ✨PASSED: Some containers exposed as a service 🐜 ⚖️👀
⏭️  SKIPPED: [shared_database] No MariaDB containers were found
Microservice results: 4 of 5 tests passed

✔️  ✨PASSED: pod_network_latency chaos test passed 🗡️💀♻️
✔️  ✨PASSED: pod_network_corruption chaos test passed 🗡️💀♻️
✔️  PASSED: disk_fill chaos test passed 🗡️💀♻️
✔️  PASSED: pod_delete chaos test passed 🗡️💀♻️
✔️  PASSED: pod_memory_hog chaos test passed 🗡️💀♻️
✔️  ✨PASSED: pod_io_stress chaos test passed 🗡️💀♻️
⏭️   ✨SKIPPED: pod_dns_error docker runtime not found 🗡️💀♻️
✔️  ✨PASSED: pod_network_duplication chaos test passed 🗡️💀♻️
✔️  🏆 PASSED: Helm liveness probe found ⎈🧫
✔️  🏆 PASSED: Helm readiness probe found ⎈🧫
Reliability, resilience, and availability results: 9 of 9 tests passed

@lixuna lixuna added v0.36.0 Issue included in v0.36.0 release and removed v0.35.0 Issue included in v0.35.0 release labels Dec 13, 2022
@lixuna lixuna closed this as completed Dec 13, 2022