
[Enhancement] Support dynamically setting memory used in the pod_memory_hog test based on the application's pod memory constraints #1989

Open
taylor opened this issue Apr 18, 2024 · 0 comments
taylor commented Apr 18, 2024

Is your feature request related to a problem? Please describe.
The upstream Litmus Chaos pod-memory-hog experiment has changed: it now supports dynamically setting the amount of memory consumed, and defaults to consuming all of the pod's memory. If all memory is used, the pod is OOM-killed, which causes the experiment to fail and this test to not pass.

This is related to BUG #1973, which had a temporary fix that hard-coded the memory used by the Litmus experiment.

After this enhancement, this test could be considered an essential test.

Describe the solution you'd like
Dynamically check for memory constraints (limits) on a pod and set the experiment's memory usage to 80-95% of that limit. At 95%, badly behaving applications will still fail, but Kubernetes will not kill the pod. In initial testing, 89% memory usage caused the CoreDNS spec to fail 1 out of 5 times.

If no constraints are set for a pod, we can use 100% of the memory on the Node (since Kubernetes will not restart the nodes by default at this time). We have other tests that validate a constraint is set (which is a best practice). A rough sketch of this calculation is below.
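A hypothetical sketch of the calculation (Python with the official `kubernetes` client; the suite's actual implementation and helper names would differ), assuming the result is passed to the Litmus experiment's `MEMORY_CONSUMPTION` env var in MB:

```python
# Sketch only: compute a memory target from the pod's limits (80-95%) or,
# when no limit is set, from the node's allocatable memory (100%).
# Pod/namespace names below are illustrative, not real resources.
from kubernetes import client, config

_SUFFIXES = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3,
             "K": 1000, "M": 1000**2, "G": 1000**3}

def to_bytes(quantity: str) -> int:
    """Convert a Kubernetes memory quantity (e.g. '256Mi', '1Gi') to bytes."""
    for suffix, factor in _SUFFIXES.items():
        if quantity.endswith(suffix):
            return int(float(quantity[: -len(suffix)]) * factor)
    return int(quantity)  # plain bytes

def memory_consumption_mb(pod, node, fraction: float = 0.95) -> int:
    """Return an MB value suitable for MEMORY_CONSUMPTION.

    Uses `fraction` (80-95%) of the pod's memory limits when any are set,
    otherwise 100% of the node's allocatable memory (no-constraint case).
    """
    limits = [
        c.resources.limits["memory"]
        for c in pod.spec.containers
        if c.resources and c.resources.limits and "memory" in c.resources.limits
    ]
    if limits:
        target = fraction * sum(to_bytes(l) for l in limits)
    else:
        target = to_bytes(node.status.allocatable["memory"])
    return int(target // (1024**2))

if __name__ == "__main__":
    config.load_kube_config()
    v1 = client.CoreV1Api()
    pod = v1.read_namespaced_pod("coredns-example", "kube-system")  # hypothetical target
    node = v1.read_node(pod.spec.node_name)
    print(f"MEMORY_CONSUMPTION={memory_consumption_mb(pod, node)}")
```

Summing the per-container limits treats the pod's total limit as the ceiling; targeting each container's limit individually would also be reasonable.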

Spec tests are needed; see the acceptance criteria below.

Describe alternatives you've considered
Hard-coding an arbitrary amount of memory is possible, but it is not a good alternative given the intent of this test or of the upstream experiment.

**How will this be tested? aka Acceptance Criteria (optional)**

  • The existing spec test that checks for the test passing can still be used
  • Need a spec test that checks for failing when 80-95% of the pod's memory limit is used (a rough sketch of the two calculation cases follows this list)
  • Need a spec for a PASSING test for a pod that has zero constraints and 100% of the Node's memory is used
  • Need a spec for a FAILING test for a pod that has zero constraints and 100% of the Node's memory is used
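A minimal pytest-style sketch of the two calculation cases, purely illustrative: it imports the helper from a made-up `memory_hog_helper` module (not the project's actual spec framework or layout):

```python
# Hypothetical unit tests for the memory calculation sketched above.
from types import SimpleNamespace

from memory_hog_helper import memory_consumption_mb  # hypothetical module name

def fake_pod(limit=None):
    """Build a pod-like object with an optional per-container memory limit."""
    resources = SimpleNamespace(limits={"memory": limit} if limit else None)
    container = SimpleNamespace(resources=resources)
    return SimpleNamespace(spec=SimpleNamespace(containers=[container]))

def fake_node(allocatable="2Gi"):
    """Build a node-like object exposing allocatable memory."""
    return SimpleNamespace(status=SimpleNamespace(allocatable={"memory": allocatable}))

def test_uses_95_percent_of_pod_limit():
    # 95% of a 1000Mi limit -> 950 MiB
    assert memory_consumption_mb(fake_pod("1000Mi"), fake_node()) == 950

def test_falls_back_to_full_node_allocatable_without_limits():
    # No constraints on the pod -> 100% of the node's allocatable memory (2Gi -> 2048 MiB)
    assert memory_consumption_mb(fake_pod(), fake_node("2Gi")) == 2048
```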

Once this issue is addressed, how will the fix be verified?
yes

Additional context
https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-memory-hog/
