Safe deletion of Kyma Clusters [EPIC] #126
Comments
TODO: We have to clarify the retention time for hibernated clusters, as they are still charged a base fee by Gardener.
Hibernation was quite unstable in Gardener in the past. We have to create a POC which verifies how reliably the hibernation feature works.
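For context, hibernation is requested on the Gardener side by setting `spec.hibernation.enabled` on the Shoot resource. Below is a minimal sketch of doing that with the Kubernetes dynamic client; the kubeconfig path, project namespace (`garden-my-project`), and shoot name (`my-shoot`) are placeholders:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

// Shoot resource served by the Gardener API server.
var shootGVR = schema.GroupVersionResource{
	Group:    "core.gardener.cloud",
	Version:  "v1beta1",
	Resource: "shoots",
}

func main() {
	// Kubeconfig pointing at the Gardener API server (placeholder path).
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/gardener-kubeconfig")
	if err != nil {
		panic(err)
	}
	client := dynamic.NewForConfigOrDie(cfg)

	// Merge-patch only the hibernation flag; the rest of the Shoot spec stays untouched.
	patch := []byte(`{"spec":{"hibernation":{"enabled":true}}}`)
	shoot, err := client.Resource(shootGVR).Namespace("garden-my-project").Patch(
		context.Background(), "my-shoot", types.MergePatchType, patch, metav1.PatchOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("hibernation requested for shoot", shoot.GetName())
}
```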
Points to clarify:
During a POC we also verified whether a hibernated cluster gets restarted before the deletion happens. This isn't the case. The POC covered the following steps:
The POC confirmed that a hibernated cluster isn't started between hibernation and deletion. The loop which applied a
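A rough sketch of what such a periodic check could look like (not the POC's actual loop): poll the Shoot until the scheduled deletion time and flag if it ever reports as awake again. It assumes the Shoot status exposes a `hibernated` boolean; the kubeconfig path, names, and intervals are placeholders.

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

var shootGVR = schema.GroupVersionResource{
	Group: "core.gardener.cloud", Version: "v1beta1", Resource: "shoots",
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/gardener-kubeconfig")
	if err != nil {
		panic(err)
	}
	client := dynamic.NewForConfigOrDie(cfg)

	// Poll until the planned deletion time (illustrative retention period of 72h).
	deadline := time.Now().Add(72 * time.Hour)
	for time.Now().Before(deadline) {
		shoot, err := client.Resource(shootGVR).Namespace("garden-my-project").Get(
			context.Background(), "my-shoot", metav1.GetOptions{})
		if err != nil {
			fmt.Println("get failed:", err)
		} else {
			// Assumption: status.hibernated reports whether the cluster is currently hibernated.
			hibernated, found, _ := unstructured.NestedBool(shoot.Object, "status", "hibernated")
			if found && !hibernated {
				fmt.Println("WARNING: shoot woke up before the scheduled deletion")
			}
		}
		time.Sleep(10 * time.Minute)
	}
}
```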
Before the team picks up the story, please align with @PK85 and @ngrkajac on how it can work with KEB and Cloud Manager. In the initial proposal, the Kyma resource would be deleted immediately, but then the other cloud resources would also be deleted immediately. We should be consistent here and offer a similar strategy not only for the Gardener cluster but also for the other cloud resources related to the Kyma instance.
Description
Instead of deleting the cluster we can hibernate it and delete it a few days later. An accidental deletion can then be recovered. Such a cluster is not reconciled (the Kyma resource can be deleted). Deletion of the Kyma resource should not cause module deletion - it is just an opt-out from lifecycle management.
Note: customer data is still present in the hibernated cluster, so we should not keep it for too long and we need to make sure we do not violate data privacy policies.
Implementation idea:
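One possible shape for this, sketched under assumptions: hibernate the Shoot and record a "delete after" timestamp in an annotation (the key `kyma-project.io/delete-after` and the 14-day retention are hypothetical), then let a periodic cleanup job delete Shoots whose retention has elapsed, setting Gardener's `confirmation.gardener.cloud/deletion` annotation before the delete call:

```go
package main

import (
	"context"
	"fmt"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/apimachinery/pkg/types"
	"k8s.io/client-go/dynamic"
	"k8s.io/client-go/tools/clientcmd"
)

var shootGVR = schema.GroupVersionResource{
	Group: "core.gardener.cloud", Version: "v1beta1", Resource: "shoots",
}

// Hypothetical annotation holding the earliest point in time at which the Shoot may be deleted.
const deleteAfterAnnotation = "kyma-project.io/delete-after"

// markForDeferredDeletion hibernates the Shoot and records when it may actually be removed.
func markForDeferredDeletion(ctx context.Context, c dynamic.Interface, ns, name string, retention time.Duration) error {
	deleteAfter := time.Now().Add(retention).UTC().Format(time.RFC3339)
	patch := fmt.Sprintf(
		`{"metadata":{"annotations":{"%s":"%s"}},"spec":{"hibernation":{"enabled":true}}}`,
		deleteAfterAnnotation, deleteAfter)
	_, err := c.Resource(shootGVR).Namespace(ns).Patch(
		ctx, name, types.MergePatchType, []byte(patch), metav1.PatchOptions{})
	return err
}

// cleanupExpired deletes the Shoot once its retention period has elapsed. Gardener requires the
// deletion-confirmation annotation on the Shoot before it accepts the delete request.
func cleanupExpired(ctx context.Context, c dynamic.Interface, ns, name string) error {
	shoot, err := c.Resource(shootGVR).Namespace(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	due, ok := shoot.GetAnnotations()[deleteAfterAnnotation]
	if !ok {
		return nil // not marked for deferred deletion
	}
	dueTime, err := time.Parse(time.RFC3339, due)
	if err != nil || time.Now().Before(dueTime) {
		return err // timestamp unparsable, or retention period not over yet
	}
	confirm := []byte(`{"metadata":{"annotations":{"confirmation.gardener.cloud/deletion":"true"}}}`)
	if _, err := c.Resource(shootGVR).Namespace(ns).Patch(
		ctx, name, types.MergePatchType, confirm, metav1.PatchOptions{}); err != nil {
		return err
	}
	return c.Resource(shootGVR).Namespace(ns).Delete(ctx, name, metav1.DeleteOptions{})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/gardener-kubeconfig")
	if err != nil {
		panic(err)
	}
	c := dynamic.NewForConfigOrDie(cfg)
	ctx := context.Background()

	// "Deletion" request: hibernate now, allow the real deletion only after 14 days.
	if err := markForDeferredDeletion(ctx, c, "garden-my-project", "my-shoot", 14*24*time.Hour); err != nil {
		panic(err)
	}
	// A scheduled job (e.g. a CronJob) would call this periodically for all marked Shoots.
	if err := cleanupExpired(ctx, c, "garden-my-project", "my-shoot"); err != nil {
		panic(err)
	}
}
```

Whether such a delete-after marker would live on the Shoot itself or in KEB is an open design choice that ties into the KEB and Cloud Manager alignment mentioned above.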
Reasons
We should protect our customers as much as possible against unintentional or malicious actions that cause data loss.
Attachments
Related to