VPA: Create event for VPA object when Pod is evicted #7413
Conversation
Just my humble opinion.
@@ -322,11 +323,11 @@ func TestEvictReplicatedByReplicaSet(t *testing.T) {
	}

	for _, pod := range pods[:2] {
-		err := eviction.Evict(pod, test.FakeEventRecorder())
+		err := eviction.Evict(pod, getBasicVpa(), test.FakeEventRecorder())
Is there any reason not to do something like:
basicVpa := getBasicVpa()
for _, pod := range pods[:2] {
err := eviction.Evict(pod, basicVpa, test.FakeEventRecorder())
assert.Nil(t, err, "Should evict with no error")
}
for _, pod := range pods[2:] {
err := eviction.Evict(pod, basicVpa, test.FakeEventRecorder())
assert.Error(t, err, "Error expected")
}
Fixed in 5cbf777
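For context, the signature change exercised in this test implies the eviction interface now takes the VPA object. A minimal sketch of what that presumably looks like; the type name, parameter names, and import paths are assumptions inferred from the diff, not confirmed by it:

import (
	apiv1 "k8s.io/api/core/v1"
	vpa_types "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1"
	"k8s.io/client-go/tools/record"
)

// PodsEvictionRestriction is sketched from the test change above: Evict now
// receives the VPA object so an event can be recorded against it as well.
type PodsEvictionRestriction interface {
	// Evict evicts the Pod and records events on both the Pod and the VPA.
	Evict(pod *apiv1.Pod, vpa *vpa_types.VerticalPodAutoscaler, eventRecorder record.EventRecorder) error
}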
	var err error
	// Add delay for fake client to catch up due to being asynchronous
	for i := 0; i < 5; i++ {
		events, err = fakeClient.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{})
Can we do something like this?
const (
	maxRetries     = 5
	retryDelay     = 100 * time.Millisecond
	contextTimeout = 5 * time.Second
)
Good idea. I've made them variables instead, I hope that's fine: d16c5e7
	var err error
	// Add delay for fake client to catch up due to being asynchronous
	for i := 0; i < 5; i++ {
		events, err = fakeClient.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{})
Instead of context.TODO(), I suggest something like:
ctx, cancel := context.WithTimeout(context.Background(), contextTimeout)
defer cancel()
Great, thanks. Done in d16c5e7
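Putting both suggestions together, a sketch of the resulting polling helper, assuming fakeClient is a client-go fake clientset; the helper name waitForEvents is illustrative and not from the PR:

import (
	"context"
	"testing"
	"time"

	apiv1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

var (
	maxRetries     = 5
	retryDelay     = 100 * time.Millisecond
	contextTimeout = 5 * time.Second
)

// waitForEvents polls the fake client, which records writes asynchronously,
// until events appear or the retries are exhausted.
func waitForEvents(t *testing.T, fakeClient kubernetes.Interface) *apiv1.EventList {
	ctx, cancel := context.WithTimeout(context.Background(), contextTimeout)
	defer cancel()

	var events *apiv1.EventList
	var err error
	for i := 0; i < maxRetries; i++ {
		events, err = fakeClient.CoreV1().Events("default").List(ctx, metav1.ListOptions{})
		if err == nil && len(events.Items) > 0 {
			return events
		}
		time.Sleep(retryDelay)
	}
	if err != nil {
		t.Fatalf("listing events: %v", err)
	}
	return events
}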
	return eventBroadcaster.NewRecorder(scheme.Scheme, apiv1.EventSource{Component: "vpa-updater"})

	vpascheme := scheme.Scheme
	corescheme.AddToScheme(vpascheme)
Check for error?
if err := corescheme.AddToScheme(vpascheme); err != nil {
klog.Fatalf("Error adding core scheme: %v", err)
}
Whoops! Thanks for catching that. Fixed in f6ada00
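For reference, a sketch of the full recorder setup with the error check in place. The import aliases, in particular vpascheme pointing at the VPA clientset's generated scheme package, are assumptions; the actual PR may arrange this differently:

import (
	apiv1 "k8s.io/api/core/v1"
	vpascheme "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/client/clientset/versioned/scheme"
	corescheme "k8s.io/client-go/kubernetes/scheme"
	"k8s.io/client-go/tools/record"
	"k8s.io/klog/v2"
)

func newEventRecorder() record.EventRecorder {
	eventBroadcaster := record.NewBroadcaster()
	// Start from the VPA clientset scheme so events can reference VPA objects,
	// then register the core types; failing fast here beats silent event loss.
	s := vpascheme.Scheme
	if err := corescheme.AddToScheme(s); err != nil {
		klog.Fatalf("Error adding core scheme: %v", err)
	}
	return eventBroadcaster.NewRecorder(s, apiv1.EventSource{Component: "vpa-updater"})
}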
Just a couple more comments.
(Several review threads on vertical-pod-autoscaler/pkg/updater/eviction/pods_eviction_restriction_test.go were resolved; most are marked outdated.)
Force-pushed from aec76a5 to 24bf38a.
Force-pushed from 24bf38a to 8137019.
/lgtm
/lgtm
/approve
/hold
Feel free to unhold, just want to unblock.
WDYT about putting the extra event behind a flag? I'm not sure whether people might be surprised by an extra event (load-wise) for each eviction. On the other hand, the load from those shouldn't be that high.
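For illustration, a flag-gated variant along the lines discussed might look like the sketch below. The flag name, event reason, and helper are made up for this example; this is not what the PR implements:

import (
	"flag"
	"fmt"

	apiv1 "k8s.io/api/core/v1"
	vpa_types "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1"
	"k8s.io/client-go/tools/record"
)

// Hypothetical flag, defaulting to on as suggested later in the thread.
var emitVpaEvictionEvents = flag.Bool("emit-vpa-eviction-events", true,
	"If true, also record an event on the VPA object for every Pod eviction.")

// maybeRecordVpaEvent records the extra VPA-side event only when enabled.
func maybeRecordVpaEvent(recorder record.EventRecorder, vpa *vpa_types.VerticalPodAutoscaler, podName string) {
	if !*emitVpaEvictionEvents {
		return
	}
	recorder.Event(vpa, apiv1.EventTypeNormal, "EvictedPod",
		fmt.Sprintf("VPA Updater evicted Pod %s to apply resource recommendation.", podName))
}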
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: adrianmoisey, kwiesmueller

The full list of commands accepted by this bot can be found here. The pull request process is described here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment.
Hmmm, I'm not too sure. I like it "on by default", and I think the load won't be too high. Kubernetes seems to handle events pretty well. At a guess, this may happen roughly as often as the HPA scales Deployments, so my gut feeling is that it doesn't need to be behind a flag.
What about putting it behind a flag that defaults to "on"?
I thought about it a bit more. The act of scheduling a Pod often creates a few Events, e.g. Scheduled, Pulling, Pulled, Created, and Started.
I think there may also be an event on the ReplicaSet, and if it were backed by an HPA, one there too. I'm happy to merge this as-is if you're fine with it.
sgtm
What type of PR is this?
/kind feature
What this PR does / why we need it:
A user asked for the VPA to emit an event that is related to the VPA object, in addition to the event related to the Pod object.
This makes sense; I see it as a "log" of sorts of what a particular VPA is up to.
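A minimal sketch of the idea, assuming the updater already has a record.EventRecorder in hand; the reasons and messages are illustrative and not necessarily the exact ones in the PR:

import (
	"fmt"

	apiv1 "k8s.io/api/core/v1"
	vpa_types "k8s.io/autoscaler/vertical-pod-autoscaler/pkg/apis/autoscaling.k8s.io/v1"
	"k8s.io/client-go/tools/record"
)

// recordEvictionEvents emits the existing Pod-side event plus the new
// VPA-side event, so the VPA's event stream doubles as an action log.
func recordEvictionEvents(recorder record.EventRecorder, pod *apiv1.Pod, vpa *vpa_types.VerticalPodAutoscaler) {
	recorder.Event(pod, apiv1.EventTypeNormal, "EvictedByVPA",
		"Pod was evicted by VPA Updater to apply resource recommendation.")
	recorder.Event(vpa, apiv1.EventTypeNormal, "EvictedPod",
		fmt.Sprintf("VPA Updater evicted Pod %s to apply resource recommendation.", pod.Name))
}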
Which issue(s) this PR fixes:
Fixes #7149
Special notes for your reviewer:
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: