TiDB-Backup is a Helm chart designed for TiDB cluster backup and restore via the mydumper and loader tools. This document explains the TiDB-Backup configuration. Refer to Restore and Backup TiDB cluster for a user guide with examples.
- The operation to perform, either backup or restore; required
- Default: "backup"
- The name of the TiDB cluster that data is backed up from or restored to; required
- Default: "demo"
- The backup name
- Default: "fullbackup-${date}", date is the start time of backup, accurate to minute
- The name of the secret that stores the user and password used for backup/restore
- Default: "backup-secret"
- You can create the secret by:
kubectl create secret generic backup-secret -n ${namespace} --from-literal=user=root --from-literal=password=<password>
- The storageClass used to store the backup data
- Default: "local-storage"
- The storage size of the PersistentVolume
- Default: "100Gi"
- The options that are passed to mydumper
- Default: "--chunk-filesize=100"
- The options that are passed to loader
- Default: "-t 16"
- The name of the GCP bucket used to store backup data
Note: once you set any variable under the gcp section, the backup data will be uploaded to Google Cloud Storage, so the whole gcp section must be configured consistently.
- The name of the secret that stores the GCP service account credentials JSON file
- You can create the secret by:
kubectl create secret generic gcp-backup-secret -n ${namespace} --from-file=./credentials.json
To download the credentials JSON file, refer to the Google Cloud documentation.
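Putting the two gcp settings together, here is a sketch of an upload-to-GCS configuration. The bucket name is a placeholder, and the key names and nesting (gcp.bucket, gcp.secretName) are assumptions based on the bullets above.

```yaml
# Upload the finished dump to Google Cloud Storage (leave the section unset to skip the upload)
gcp:
  bucket: <my-gcs-bucket>          # target GCS bucket
  secretName: gcp-backup-secret    # secret holding credentials.json (created above)
```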
- The endpoint of the Ceph object storage
Note: once you set any variable under the ceph section, the backup data will be uploaded to Ceph object storage, so the whole ceph section must be configured consistently.
- The bucket name of the Ceph object storage
- The name of the secret that stores the Ceph object storage access key and secret key
- You can create the secret by:
kubectl create secret generic ceph-backup-secret -n ${namespace} --from-literal=access_key=<access-key> --from-literal=secret_key=<secret-key>
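Likewise, here is a sketch of an upload-to-Ceph configuration. The endpoint and bucket are placeholders, and the key names and nesting (ceph.endpoint, ceph.bucket, ceph.secretName) are assumptions based on the bullets above.

```yaml
# Upload the finished dump to Ceph object storage (leave the section unset to skip the upload)
ceph:
  endpoint: http://<ceph-endpoint>:<port>   # S3-compatible endpoint of the Ceph cluster
  bucket: <my-ceph-bucket>                  # target bucket
  secretName: ceph-backup-secret            # secret holding access_key/secret_key (created above)
```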