FinalizeKind is overkill for IMC dispatcher #4509
I'm curious to learn more about the awkward handling if you have examples. The x-ns thing is a bit strange; anything with fg/bg deletion you've seen cause issues?

On Wed, Nov 11, 2020 at 4:55 PM Matt Moore wrote:
*Describe the bug*
Today the IMC Dispatcher uses finalizers:
https://github.com/knative/eventing/blob/66db463e394053ac1bff0556f5a584a6f31a90ec/pkg/reconciler/inmemorychannel/dispatcher/inmemorychannel.go#L96-L105

*Expected behavior*
When everything is in-memory, it makes more sense to actually do this kind of work directly in the informer callback rather than effectively blocking resource deletion on finalizer handling.

*Knative release version*
HEAD

*Additional context*
Due to the awkward handling of Finalizers on CRDs in Kubernetes, we should (IMO) limit our use of them to where it's absolutely needed; otherwise uninstall gets needlessly complex.
cc @vaikas <https://github.com/vaikas>
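Mechanically, the suggestion above amounts to moving teardown out of a finalizer and into the informer's delete callback. A dependency-free Go sketch of that shape (the real dispatcher uses client-go informers and knative's fanout machinery; the `Dispatcher` type and handler map here are hypothetical stand-ins):

```go
package main

import "fmt"

// Dispatcher keeps per-channel fanout state purely in memory,
// so cleanup can happen in the delete callback instead of a finalizer.
type Dispatcher struct {
	handlers map[string]func(event string) // keyed by namespace/name
}

func NewDispatcher() *Dispatcher {
	return &Dispatcher{handlers: map[string]func(string){}}
}

// OnAddOrUpdate mirrors an informer's AddFunc/UpdateFunc: (re)register the channel.
func (d *Dispatcher) OnAddOrUpdate(key string) {
	d.handlers[key] = func(event string) { fmt.Printf("%s <- %s\n", key, event) }
}

// OnDelete mirrors an informer's DeleteFunc: drop the in-memory state
// immediately. There is nothing external to clean up, so no finalizer is
// needed and deletion of the InMemoryChannel resource is never blocked.
func (d *Dispatcher) OnDelete(key string) {
	delete(d.handlers, key)
}

// Has reports whether a channel is currently registered.
func (d *Dispatcher) Has(key string) bool {
	_, ok := d.handlers[key]
	return ok
}

func main() {
	d := NewDispatcher()
	d.OnAddOrUpdate("default/my-channel")
	fmt.Println(d.Has("default/my-channel")) // true
	d.OnDelete("default/my-channel")
	fmt.Println(d.Has("default/my-channel")) // false
}
```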
Essentially, if you uninstall CRD-based systems (incl. CRD + controller) that make use of finalizers while instances of those resources still exist, what will happen is:

1. CRD will be deleted (resources exist, so it is tombstoned)
2. Controller will be deleted
3. Instances of the tombstoned CRD will be marked for deletion

Now we're in trouble. The instances have finalizers, but the controller responsible for handling the removal of those finalizers is gone. So the CRD and those instances will sit around basically forever.

But wait! There's more... If you reinstall things in this bad state, then things will seem fine at first, but what's really happening is:

1. CRDs are applied (the tombstoned one still exists).
2. Controller comes up.
3. Controller processes the tombstoned resources, removing their finalizers.
4. CRD is finally deleted.
5. Controller is now processing an unknown resource.

But the fun doesn't stop there. Conversion webhooks exacerbate this further, because you can't simply kubectl edit the resource to remove the finalizers: the edit has to go through the webhook, which is (you guessed it) also gone. So now you have to hand-edit the CRDs to remove the webhook, then hand-edit the instances to remove their finalizers. Then everything finally goes away...

Luckily I think we only have the first two here 😅
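The wedge described above is API-server bookkeeping: deletion only marks the object, and garbage collection waits for its finalizer list to empty, which nothing will do once the controller is uninstalled. A toy simulation of that rule in plain Go (the `Object`/`Collect` types and the finalizer name are made up for illustration; the real machinery lives in the Kubernetes API server):

```go
package main

import "fmt"

// Object models only the fields that matter for deletion:
// a deletion mark and a list of finalizers.
type Object struct {
	Name       string
	Finalizers []string
	Deleting   bool
}

// Collect mimics the API server's rule: an object marked for deletion
// is actually removed only once its finalizer list is empty.
func Collect(objs map[string]*Object) {
	for name, o := range objs {
		if o.Deleting && len(o.Finalizers) == 0 {
			delete(objs, name)
		}
	}
}

func main() {
	objs := map[string]*Object{
		"ch": {Name: "ch", Finalizers: []string{"imc-dispatcher"}},
	}

	// Uninstall order from the comment above: the controller is gone,
	// then the instance is marked for deletion.
	objs["ch"].Deleting = true
	Collect(objs)
	fmt.Println(len(objs)) // 1: finalizer present, no controller left to clear it

	// The manual fix: hand-edit the object to drop its finalizers.
	objs["ch"].Finalizers = nil
	Collect(objs)
	fmt.Println(len(objs)) // 0: now it can be garbage collected
}
```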
Oof, thanks, we've definitely seen those sorts of issues as well.