feat(rds): support rolling instance updates to reduce downtime #20054
Conversation
Thank you for this change. For the most part it looks great, just a couple things before we can approve it. We need to have a README update for this new functionality and we also need an integration test.
Additionally, I'm wondering if there's a use case for allowing the rolling update to be more configurable rather than just one at a time. Thoughts on that?
@TheRealAmazonKendra Thanks for the feedback. I indeed forgot to add both documentation and an integration test. Concerning more flexible update behaviors: I generally like the idea of adding more flexibility, but I think we only have limited options in this case, as RDS/Aurora does not support more sophisticated update behaviors. For instance, I cannot state that at least "2 instances need to be running" or that "at most 3 instances at a time can be updated". Thus, I think an enum reflecting the most important options is the way to go.
@spanierm42 Sounds good. If you can just add in the tests and documentation I'll be more than happy to approve and merge this.
@TheRealAmazonKendra I added the missing documentation and integration test. Let me know if that fits your needs. Looking forward to hearing from you :)
Thank you for this contribution and for your edits on this! I hope you don't mind that I updated the README a bit.
Thank you for contributing! Your pull request will be updated from main and then merged automatically (do not update manually, and be sure to allow changes to be pushed to your fork).
@TheRealAmazonKendra Thanks for all your guidance and feedback. I highly appreciate your change in the README.
Support defining the instance update behaviour of RDS instances. This allows switching between bulk updates (all instances at once) and rolling updates (one instance after another). While bulk updates are faster, they carry a higher risk of longer downtime, as all instances might be simultaneously unreachable during the update. Rolling updates take longer but ensure that at most one instance is updated at a time, so downtime is limited to the (at most two) changes of the primary instance.
We keep the current behaviour, a bulk update, as the default.
This implementation follows proposal A by @hixi-hyi in issue #10595.
Fixes #10595
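The rolling behaviour described above can be achieved by chaining dependencies between the cluster's instances, so that CloudFormation replaces them sequentially instead of in parallel. Below is a minimal, self-contained sketch of that wiring; the enum values mirror the two behaviours discussed in this PR, but the helper and interface names here are illustrative assumptions, not the exact aws-cdk API.

```typescript
// Update behaviours discussed in this PR: BULK updates all instances at
// once; ROLLING updates them one after another.
enum InstanceUpdateBehaviour {
  BULK = "BULK",
  ROLLING = "ROLLING",
}

// Simplified stand-in for a CloudFormation instance resource and its
// DependsOn list (illustrative; not the real CDK construct types).
interface Instance {
  id: string;
  dependsOn: string[];
}

// For ROLLING, make each instance depend on the previous one so they are
// updated sequentially; for BULK, add no dependencies (current default).
function wireInstances(
  ids: string[],
  behaviour: InstanceUpdateBehaviour,
): Instance[] {
  const instances = ids.map((id): Instance => ({ id, dependsOn: [] }));
  if (behaviour === InstanceUpdateBehaviour.ROLLING) {
    for (let i = 1; i < instances.length; i++) {
      instances[i].dependsOn.push(instances[i - 1].id);
    }
  }
  return instances;
}
```

With this wiring, a rolling deployment touches one instance at a time, which is why downtime is bounded by the primary-instance failovers rather than by the whole fleet being unreachable.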