Backup the Cloud SQL database to GCS every 6 hours #875
Conversation
suggestion - we should add a GCS retention policy on the bucket as well
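For reference, a retention window could be expressed as a lifecycle rule on the bucket. A minimal sketch; the 30-day value and resource name are hypothetical, not from this PR:

```hcl
resource "google_storage_bucket" "backups" {
  name     = "${var.project}-backups"
  location = var.storage_location

  # Hypothetical: automatically delete backup objects older than 30 days.
  lifecycle_rule {
    action {
      type = "Delete"
    }
    condition {
      age = 30
    }
  }
}
```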
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: icco, mikehelmick, sethvargo. The full list of commands accepted by this bot can be found here. The pull request process is described here.
lgtm, I'll let @icco finish here.
name          = "${var.project}-backups"
location      = var.storage_location
force_destroy = true
Feels safer to me to have this be false (the default).
This tells terraform to delete the objects in the bucket before deleting the bucket. GCS doesn't let you delete a bucket with objects inside.
Yeah, I understood that. I felt it was kinda safer to require the bucket be empty.
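The safer configuration being suggested here would look roughly like this. Since `force_destroy` defaults to false, simply omitting it has the same effect; the `prevent_destroy` guard is an extra option for illustration, not part of this PR:

```hcl
resource "google_storage_bucket" "backups" {
  name          = "${var.project}-backups"
  location      = var.storage_location
  force_destroy = false  # default: terraform destroy fails unless the bucket is already empty

  lifecycle {
    prevent_destroy = true  # optional extra guard against accidental deletion
  }
}
```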
Fixes #864
Nuances
- Cloud SQL doesn't seem to support per-instance IAM, so we have to grant at the project level (yuck, but it's only viewer).
- I'm relying on bucket versioning instead of actually naming the exports with a datetime. This is mostly a design decision to avoid needing a middle service. Ideally Cloud Scheduler would support a template syntax, but it does not.
- The backups aren't compressed (for the same reason as above), but our last backup was 2MB. I'm not really worried about large databases for this project.
- Backup time is configurable via the cron notation.
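The design described above can be sketched roughly as follows: a versioned bucket plus a Cloud Scheduler job that POSTs to the Cloud SQL Admin export endpoint on a cron schedule. The resource names, variables, and service account wiring are assumptions for illustration; the actual PR may differ:

```hcl
resource "google_storage_bucket" "backups" {
  name     = "${var.project}-backups"
  location = var.storage_location

  # Versioning stands in for datetime-stamped object names: each export
  # overwrites the same object, and older generations are retained.
  versioning {
    enabled = true
  }
}

resource "google_cloud_scheduler_job" "db_backup" {
  name     = "db-backup"          # hypothetical name
  schedule = var.backup_schedule  # cron notation, e.g. "0 */6 * * *" for every 6 hours

  http_target {
    http_method = "POST"
    uri         = "https://sqladmin.googleapis.com/v1/projects/${var.project}/instances/${var.db_instance}/export"
    body = base64encode(jsonencode({
      exportContext = {
        fileType = "SQL"
        uri      = "gs://${var.project}-backups/backup.sql" # fixed name; versioning keeps history
      }
    }))
    oauth_token {
      # Hypothetical service account; needs the viewer role granted at the
      # project level (per the nuance above) plus write access to the bucket.
      service_account_email = var.backup_sa_email
    }
  }
}
```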
Release Note
/assign @jeremyfaller @icco