**Is your feature request related to a problem? Please describe.**
The Litmus Chaos experiment has changed upstream: it now supports dynamically setting the memory to consume, and otherwise defaults to consuming all of the pod's memory. If all memory is consumed, the pod is killed by the kube scheduler, the experiment fails, and this test does not pass.
This is related to BUG #1973, which had a temporary fix that hard-coded the memory used by the Litmus experiment.
After this enhancement, this test could be considered an essential test.
**Describe the solution you'd like**
Dynamically check the pod for memory constraints and set the experiment's memory usage to 80-95% of the limit. At 95%, badly behaving applications will fail, but the kube scheduler will not be triggered; in initial testing, 89% memory usage caused the CoreDNS spec to fail 1 out of 5 times.
If no constraints are set for the pod, then we can use 100% of the memory on the Node (since Kubernetes will not restart nodes by default at this time). We have other tests that validate a constraint is set (which is a best practice).
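A minimal sketch of how the target could be computed, assuming the Kubernetes Python client; the 0.95 factor, the function names, and the MB units are illustrative assumptions, not the test suite's actual code:

```python
# Illustrative sketch only (not the suite's actual implementation): derive the
# pod-memory-hog target from the pod's memory limit, or fall back to the
# node's allocatable memory when the pod has no limit set.
from kubernetes import client, config

UNITS = {"Ki": 1024, "Mi": 1024**2, "Gi": 1024**3, "K": 10**3, "M": 10**6, "G": 10**9}

def to_bytes(quantity: str) -> int:
    """Convert a Kubernetes quantity string (e.g. '256Mi', '3977620Ki') to bytes."""
    for suffix, factor in UNITS.items():
        if quantity.endswith(suffix):
            return int(float(quantity[:-len(suffix)]) * factor)
    return int(quantity)  # plain byte count

def target_memory_mb(pod_name: str, namespace: str, factor: float = 0.95) -> int:
    """Return the amount of memory (in MB) the chaos experiment should consume."""
    config.load_kube_config()
    v1 = client.CoreV1Api()
    pod = v1.read_namespaced_pod(pod_name, namespace)

    limits = [
        c.resources.limits["memory"]
        for c in pod.spec.containers
        if c.resources and c.resources.limits and "memory" in c.resources.limits
    ]
    if limits:
        # Constrained pod: consume a fraction (default 95%) of the smallest limit.
        return int(to_bytes(min(limits, key=to_bytes)) * factor // 1024**2)

    # Unconstrained pod: consume 100% of the node's allocatable memory.
    node = v1.read_node(pod.spec.node_name)
    return int(to_bytes(node.status.allocatable["memory"]) // 1024**2)
```

The resulting value could then be supplied to the upstream experiment's MEMORY_CONSUMPTION tunable (documented in MB) instead of hard-coding a fixed amount.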
Spec tests are needed; see the acceptance criteria below.
**Describe alternatives you've considered**
Hard-coding an arbitrary amount of memory is possible, but it does not serve the intent of this test or of the upstream experiment.
**How will this be tested? aka Acceptance Criteria (optional)**
- The existing spec test that checks for a passing run can continue to be used.
- Need a spec test that checks for a FAILING run when 80-95% of the memory limit is used.
- Need a spec for a PASSING run for a pod that has zero constraints when 100% of the Node's memory is used.
- Need a spec for a FAILING run for a pod that has zero constraints when 100% of the Node's memory is used (see the sketch after this list).
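A pytest-style sketch of those specs, purely illustrative: `run_pod_memory_hog` and the example pod names are hypothetical stand-ins for whatever the suite's real spec harness provides.

```python
# Illustrative pytest-style specs; run_pod_memory_hog and the pod names are
# hypothetical placeholders for the suite's real chaos/spec harness.
import pytest

def run_pod_memory_hog(pod: str, memory_percentage: int) -> bool:
    """Hypothetical helper: run the Litmus pod-memory-hog experiment against
    `pod` at the given percentage of its memory limit (or of node memory when
    no limit is set) and return True when the experiment verdict is Pass."""
    raise NotImplementedError("supplied by the real spec harness")

pending = pytest.mark.skip(reason="requires the real chaos harness")

@pending
def test_constrained_pod_passes_in_80_to_95_percent_range():
    # Existing passing case: well-behaved pod with a memory limit survives 95% usage.
    assert run_pod_memory_hog(pod="coredns", memory_percentage=95) is True

@pending
def test_constrained_pod_fails_when_misbehaving():
    # Failing case: badly behaved pod with a memory limit fails at 80-95% usage.
    assert run_pod_memory_hog(pod="leaky-app", memory_percentage=95) is False

@pending
def test_unconstrained_pod_passes_with_full_node_memory():
    # Pod with zero constraints: 100% of the Node's memory, well-behaved app passes.
    assert run_pod_memory_hog(pod="unconstrained-good-app", memory_percentage=100) is True

@pending
def test_unconstrained_pod_fails_with_full_node_memory():
    # Pod with zero constraints: 100% of the Node's memory, misbehaving app fails.
    assert run_pod_memory_hog(pod="unconstrained-bad-app", memory_percentage=100) is False
```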
**Once this issue is addressed how will the fix be verified?**
yes
**Additional context**
https://litmuschaos.github.io/litmus/experiments/categories/pods/pod-memory-hog/