- It is recommended to use external databases instead of the internal databases that are installed by default.
- For easier backup/restore management when using external databases on public cloud providers, the recommendation is to set up automated backups using the public cloud provider's integrated backup solution (for example, AWS RDS and its automated daily snapshots with PITR), if available.
- If using the internal databases, backup and restore can be done using the 3scale Operator by following these instructions.
- However, if for any reason you do not want to use the 3scale Operator to manage backup/restore of the internal databases, the following sections describe the deprecated way of doing manual backup and restore. This is not recommended; use at your own risk.
This section provides the operator of a 3scale installation with the information they need for:

- Backup Procedures, to back up the persistent data
- Restore Procedures, to restore from a backup of the persistent data

so that, in the case of a failure, they can restore 3scale to a correctly running operational state and continue operating.
In a 3scale deployment on OpenShift, all persistent data is stored either in a storage service in the cluster (not currently used), in a Persistent Volume (PV) provided to the cluster by the underlying infrastructure, or in a storage service external to the cluster (be that in the same data center or elsewhere).
The backup and restore procedures for persistent data necessarily vary depending on the storage used, to ensure that backups and restores preserve data consistency (for example, that a partial write or a partial transaction is not captured). That is, it is not sufficient to back up the underlying Persistent Volumes for a database; instead, the database's own backup mechanisms should be used.
Also, some parts of the data are synchronized between different components. One copy is considered the "source of truth" for the data set, and the other a copy that is not modified locally, just synchronized from the "source of truth". In these cases, upon restore, the "source of truth" should be restored and then the copies in other components synchronized from it.
This section goes into more detail on the different data sets in the different persistent stores, their purposes, the storage type used and if it is the "source of truth" or not.
The full state of a 3scale deployment is stored across these services and their persistent volumes:
- system-mysql: MySQL database (mysql-storage)
- system-storage: Volume for files
- zync-database: PostgreSQL database for the zync component. This uses "HostPath" as storage. If the pod is moved to another node, the data is lost. That is not a problem, because the data consists of sync jobs and does not need to be 100% persistent.
- backend-redis: Redis database (backend-redis-storage)
- system-redis: Redis database (system-redis-storage)
This is a relational database for storing the information about users, accounts, APIs, plans, etc. in the 3scale Admin Console.
A subset of this information related to Services is synchronized to the Backend component and stored in backend-redis. system-mysql, system-database, or an external Oracle database is the source of truth for this information.
This is for storing files to be read and written by the System component. They fall into two categories:

- Configuration files read by the System component at run-time
- Static files (HTML, CSS, JS, etc.) uploaded to System by its CMS feature, for the purpose of creating a Developer Portal

Note that System can be scaled horizontally with multiple pods uploading and reading these static files, hence the need for a RWX (ReadWriteMany) PersistentVolume. You can verify the access mode as shown below.
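To confirm that the volume backing system-storage actually allows multiple writers, you can inspect the access modes of its PersistentVolumeClaim. This is an optional check added here for convenience; it assumes the claim is named system-storage, as in a default deployment.
# Print the access modes of the system-storage PVC; expect ReadWriteMany
oc get pvc system-storage -o jsonpath='{.spec.accessModes}{"\n"}'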
This is a relational database for storing information related to the synchronization of identities between 3scale and an Identity provider. This information is not duplicated in other components and this is the sole source of truth.
This contains multiple data sets used by the Backend component:

- Usages: API usage information aggregated by Backend. It is used by Backend for rate-limiting decisions and by System to display analytics information in the UI or via the API.
- Config: Configuration information about Services, Rate-limits, etc. that is synchronized from System via an internal API. This is NOT the source of truth for this information; System and system-mysql are.
- AuthKeys: Storage of OAuth keys created directly in Backend. This is the source of truth for this information.
- Queues: Queues of background jobs to be executed by worker processes. These are ephemeral and are deleted once processed.
Execute MySQL Backup Command
oc rsh $(oc get pods -l 'deploymentConfig=system-mysql' -o json | jq -r '.items[0].metadata.name') bash -c 'export MYSQL_PWD=${MYSQL_ROOT_PASSWORD}; mysqldump --single-transaction -hsystem-mysql -uroot system' | gzip > system-mysql-backup.gz
Execute PostgreSQL Backup Command
oc rsh $(oc get pods -l 'deploymentConfig=system-database' -o json | jq '.items[0].metadata.name' -r) bash -c 'pg_dumpall -c --if-exists' | gzip > system-postgres-backup.gz
Follow the Oracle Database Backup and Recovery Quick Start Guide: https://docs.oracle.com/cd/B19306_01/backup.102/b14193/toc.htm
Archive the system-storage files to another storage location.
oc rsync $(oc get pods -l 'deploymentConfig=system-app' -o json | jq '.items[0].metadata.name' -r):/opt/system/public/system ./local/dir
Execute Postgres Backup Command
oc rsh $(oc get pods -l 'deploymentConfig=zync-database' -o json | jq '.items[0].metadata.name' -r) bash -c 'pg_dumpall -c --if-exists' | gzip > zync-database-backup.gz
Back up the dump.rdb file from backend-redis
oc cp $(oc get pods -l 'deploymentConfig=backend-redis' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb ./backend-redis-dump.rdb
Back up the dump.rdb file from system-redis
oc cp $(oc get pods -l 'deploymentConfig=system-redis' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb ./system-redis-dump.rdb
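Note that dump.rdb only reflects the last snapshot Redis wrote to disk, so the copied file can lag behind the in-memory state. As an optional extra step (not part of the original procedure), you can ask Redis to write a fresh snapshot immediately before copying the file. This sketch assumes Redis listens on the default port without authentication inside the pod; repeat with deploymentConfig=system-redis for the system Redis.
# Force an immediate RDB snapshot before copying dump.rdb
oc rsh $(oc get pods -l 'deploymentConfig=backend-redis' -o json | jq -r '.items[0].metadata.name') bash -c 'redis-cli save'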
oc get secrets system-smtp -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > system-smtp.json
oc get secrets system-seed -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > system-seed.json
oc get secrets system-database -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > system-database.json
oc get secrets backend-internal-api -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > backend-internal-api.json
oc get secrets system-events-hook -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > system-events-hook.json
oc get secrets system-app -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > system-app.json
oc get secrets system-recaptcha -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > system-recaptcha.json
oc get secrets system-redis -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > system-redis.json
oc get secrets zync -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > zync.json
oc get secrets system-master-apicast -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > system-master-apicast.json
oc get secrets backend-listener -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > backend-listener.json
oc get secrets backend-redis -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > backend-redis.json
oc get secrets system-memcache -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > system-memcache.json
oc get configmaps system-environment -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > system-environment.json
oc get configmaps apicast-environment -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > apicast-environment.json
oc get configmaps backend-environment -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > backend-environment.json
oc get configmaps mysql-extra-conf -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > mysql-extra-conf.json
oc get configmaps mysql-main-conf -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > mysql-main-conf.json
oc get configmaps redis-config -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > redis-config.json
oc get configmaps system -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > system.json
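The commands above can also be run as a short loop over the same objects. This is only a convenience sketch; it backs up exactly the secrets and configmaps listed above and assumes your oc client still supports the --export flag used here.
# Back up the listed secrets and configmaps in one pass
for s in system-smtp system-seed system-database backend-internal-api system-events-hook system-app system-recaptcha system-redis zync system-master-apicast backend-listener backend-redis system-memcache; do
  oc get secrets "$s" -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > "${s}.json"
done
for c in system-environment apicast-environment backend-environment mysql-extra-conf mysql-main-conf redis-config system; do
  oc get configmaps "$c" -o json --export | jq -r 'del(.metadata.ownerReferences,.metadata.selfLink)' > "${c}.json"
done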
Restore secrets before deploying the template.
oc apply -f system-smtp.json
Template parameters will be read from copied secrets and configmaps.
oc new-app --file /opt/amp/templates/amp.yml \
--param APP_LABEL=$(cat system-environment.json | jq -r '.metadata.labels.app') \
--param TENANT_NAME=$(cat system-seed.json | jq -r '.data.TENANT_NAME' | base64 -d) \
--param SYSTEM_DATABASE_USER=$(cat system-database.json | jq -r '.data.DB_USER' | base64 -d) \
--param SYSTEM_DATABASE_PASSWORD=$(cat system-database.json | jq -r '.data.DB_PASSWORD' | base64 -d) \
--param SYSTEM_DATABASE=$(cat system-database.json | jq -r '.data.URL' | base64 -d | cut -d '/' -f4) \
--param SYSTEM_DATABASE_ROOT_PASSWORD=$(cat system-database.json | jq -r '.data.URL' | base64 -d | awk -F '[:@]' '{print $3}') \
--param WILDCARD_DOMAIN=$(cat system-environment.json | jq -r '.data.THREESCALE_SUPERDOMAIN') \
--param SYSTEM_BACKEND_USERNAME=$(cat backend-internal-api.json | jq '.data.username' -r | base64 -d) \
--param SYSTEM_BACKEND_PASSWORD=$(cat backend-internal-api.json | jq '.data.password' -r | base64 -d) \
--param SYSTEM_BACKEND_SHARED_SECRET=$(cat system-events-hook.json | jq -r '.data.PASSWORD' | base64 -d) \
--param SYSTEM_APP_SECRET_KEY_BASE=$(cat system-app.json | jq -r '.data.SECRET_KEY_BASE' | base64 -d) \
--param ADMIN_PASSWORD=$(cat system-seed.json | jq -r '.data.ADMIN_PASSWORD' | base64 -d) \
--param ADMIN_USERNAME=$(cat system-seed.json | jq -r '.data.ADMIN_USER' | base64 -d) \
--param ADMIN_EMAIL=$(cat system-seed.json | jq -r '.data.ADMIN_EMAIL' | base64 -d) \
--param ADMIN_ACCESS_TOKEN=$(cat system-seed.json | jq -r '.data.ADMIN_ACCESS_TOKEN' | base64 -d) \
--param MASTER_NAME=$(cat system-seed.json | jq -r '.data.MASTER_DOMAIN' | base64 -d) \
--param MASTER_USER=$(cat system-seed.json | jq -r '.data.MASTER_USER' | base64 -d) \
--param MASTER_PASSWORD=$(cat system-seed.json | jq -r '.data.MASTER_PASSWORD' | base64 -d) \
--param MASTER_ACCESS_TOKEN=$(cat system-seed.json | jq -r '.data.MASTER_ACCESS_TOKEN' | base64 -d) \
--param RECAPTCHA_PUBLIC_KEY="$(cat system-recaptcha.json | jq -r '.data.PUBLIC_KEY' | base64 -d)" \
--param RECAPTCHA_PRIVATE_KEY="$(cat system-recaptcha.json | jq -r '.data.PRIVATE_KEY' | base64 -d)" \
--param SYSTEM_REDIS_URL=$(cat system-redis.json | jq -r '.data.URL' | base64 -d) \
--param SYSTEM_MESSAGE_BUS_REDIS_URL="$(cat system-redis.json | jq -r '.data.MESSAGE_BUS_URL' | base64 -d)" \
--param SYSTEM_REDIS_NAMESPACE="$(cat system-redis.json | jq -r '.data.NAMESPACE' | base64 -d)" \
--param SYSTEM_MESSAGE_BUS_REDIS_NAMESPACE="$(cat system-redis.json | jq -r '.data.MESSAGE_BUS_NAMESPACE' | base64 -d)" \
--param ZYNC_DATABASE_PASSWORD=$(cat zync.json | jq -r '.data.ZYNC_DATABASE_PASSWORD' | base64 -d) \
--param ZYNC_SECRET_KEY_BASE=$(cat zync.json | jq -r '.data.SECRET_KEY_BASE' | base64 -d) \
--param ZYNC_AUTHENTICATION_TOKEN=$(cat zync.json | jq -r '.data.ZYNC_AUTHENTICATION_TOKEN' | base64 -d) \
--param APICAST_ACCESS_TOKEN=$(cat system-master-apicast.json | jq -r '.data.ACCESS_TOKEN' | base64 -d) \
--param APICAST_MANAGEMENT_API=$(cat apicast-environment.json | jq -r '.data.APICAST_MANAGEMENT_API') \
--param APICAST_OPENSSL_VERIFY=$(cat apicast-environment.json | jq -r '.data.OPENSSL_VERIFY') \
--param APICAST_RESPONSE_CODES=$(cat apicast-environment.json | jq -r '.data.APICAST_RESPONSE_CODES') \
--param APICAST_REGISTRY_URL=$(cat system-environment.json | jq -r '.data.APICAST_REGISTRY_URL')
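Before restoring data into the newly created deployment, it can help to wait until the main DeploymentConfigs have finished rolling out. This is an optional check; the names below are the default DeploymentConfig names used elsewhere in this document, so adjust them if your deployment differs (for example, if it uses system-database instead of system-mysql).
# Wait for the core components to finish rolling out
for dc in system-mysql system-redis backend-redis system-app zync-database; do
  oc rollout status dc/"$dc"
done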
Restore secrets before creating the APIManager resource.
oc apply -f system-smtp.json
oc apply -f system-seed.json
oc apply -f system-database.json
oc apply -f backend-internal-api.json
oc apply -f system-events-hook.json
oc apply -f system-app.json
oc apply -f system-recaptcha.json
oc apply -f system-redis.json
oc apply -f zync.json
oc apply -f system-master-apicast.json
oc apply -f backend-listener.json
oc apply -f backend-redis.json
oc apply -f system-memcache.json
Restore configmaps before creating the APIManager resource.
oc apply -f system-environment.json
oc apply -f apicast-environment.json
oc apply -f backend-environment.json
oc apply -f mysql-extra-conf.json
oc apply -f mysql-main-conf.json
oc apply -f redis-config.json
oc apply -f system.json
Copy the MySQL dump to the system-mysql pod
oc cp ./system-mysql-backup.gz $(oc get pods -l 'deploymentConfig=system-mysql' -o json | jq '.items[0].metadata.name' -r):/var/lib/mysql
Decompress the Backup File
oc rsh $(oc get pods -l 'deploymentConfig=system-mysql' -o json | jq -r '.items[0].metadata.name') bash -c 'gzip -d ${HOME}/system-mysql-backup.gz'
Restore the MySQL DB Backup file
oc rsh $(oc get pods -l 'deploymentConfig=system-mysql' -o json | jq -r '.items[0].metadata.name') bash -c 'export MYSQL_PWD=${MYSQL_ROOT_PASSWORD}; mysql -hsystem-mysql -uroot system < ${HOME}/system-mysql-backup'
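As an optional sanity check that is not part of the original procedure, you can list the tables of the restored system database and confirm it is populated:
# List tables in the restored 'system' database
oc rsh $(oc get pods -l 'deploymentConfig=system-mysql' -o json | jq -r '.items[0].metadata.name') bash -c 'export MYSQL_PWD=${MYSQL_ROOT_PASSWORD}; mysql -hsystem-mysql -uroot system -e "SHOW TABLES"'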
Copy the PostgreSQL Database dump to the system-database pod
oc cp ./system-postgres-backup.gz $(oc get pods -l 'deploymentConfig=system-database' -o json | jq '.items[0].metadata.name' -r):/var/lib/pgsql/
Decompress the Backup File
oc rsh $(oc get pods -l 'deploymentConfig=system-database' -o json | jq -r '.items[0].metadata.name') bash -c 'gzip -d ${HOME}/system-postgres-backup.gz'
Restore the PostgreSQL DB Backup file
oc rsh $(oc get pods -l 'deploymentConfig=system-database' -o json | jq -r '.items[0].metadata.name') bash -c 'psql -f ${HOME}/system-postgres-backup'
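Similarly, as an optional check, you can list the databases after the restore to confirm the dump was applied (pg_dumpall restores all databases, so the expected databases should appear in the list):
# List databases in the restored PostgreSQL instance
oc rsh $(oc get pods -l 'deploymentConfig=system-database' -o json | jq -r '.items[0].metadata.name') bash -c 'psql -l'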
Restore the archived files from a different location.
oc rsync ./local/dir/system/ $(oc get pods -l 'deploymentConfig=system-app' -o json | jq '.items[0].metadata.name' -r):/opt/system/public/system --delete=true
Copy the Zync Database dump to the zync-database pod
oc cp ./zync-database-backup.gz $(oc get pods -l 'deploymentConfig=zync-database' -o json | jq '.items[0].metadata.name' -r):/var/lib/pgsql/
Decompress the Backup File
oc rsh $(oc get pods -l 'deploymentConfig=zync-database' -o json | jq -r '.items[0].metadata.name') bash -c 'gzip -d ${HOME}/zync-database-backup.gz'
Restore the PostgreSQL DB Backup file
oc rsh $(oc get pods -l 'deploymentConfig=zync-database' -o json | jq -r '.items[0].metadata.name') bash -c 'psql -f ${HOME}/zync-database-backup'
- After restoring backend-redis, a sync of the Config information from System should be forced, to ensure the information in Backend is consistent with that in System (the source of truth). One way to do this is sketched below.
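The exact way to force this sync depends on the 3scale release. One approach used in recent versions (an assumption here, so check that this rake task and the system-sidekiq DeploymentConfig exist in your installation before relying on it) is to enqueue a rewrite of the backend storage from the System side:
# Enqueue jobs that re-push Services, rate limits, etc. from System into backend-redis
oc rsh $(oc get pods -l 'deploymentConfig=system-sidekiq' -o json | jq -r '.items[0].metadata.name') bash -c 'bundle exec rake backend:storage:enqueue_rewrite'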
Edit the redis-config configmap
oc edit configmap redis-config
Comment out the save commands in the redis-config configmap
#save 900 1
#save 300 10
#save 60 10000
Set appendonly to no in the redis-config configmap
appendonly no
Re-deploy backend-redis to load the new configuration
oc rollout latest dc/backend-redis
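After this and each subsequent oc rollout latest in this procedure, you can optionally wait until the new pod is ready before continuing, for example:
# Wait for the rollout to complete (use dc/system-redis for the system Redis steps)
oc rollout status dc/backend-redis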
Rename the dump.rdb file
oc rsh $(oc get pods -l 'deploymentConfig=backend-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'mv ${HOME}/data/dump.rdb ${HOME}/data/dump.rdb-old'
Rename the appendonly.aof file
oc rsh $(oc get pods -l 'deploymentConfig=backend-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'mv ${HOME}/data/appendonly.aof ${HOME}/data/appendonly.aof-old'
Move the backup file to the pod
oc cp ./backend-redis-dump.rdb $(oc get pods -l 'deploymentConfig=backend-redis' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb
Re-deploy backend-redis to load the backup
oc rollout latest dc/backend-redis
Edit the redis-config configmap
oc edit configmap redis-config
Uncomment the save commands in the redis-config configmap
save 900 1
save 300 10
save 60 10000
Set appendonly to yes in the redis-config configmap
appendonly yes
Re-deploy backend-redis to reload the default configuration
oc rollout latest dc/backend-redis
Edit the redis-config configmap
oc edit configmap redis-config
Comment out the save commands in the redis-config configmap
#save 900 1
#save 300 10
#save 60 10000
Set appendonly to no in the redis-config configmap
appendonly no
Re-deploy system-redis to load the new configuration
oc rollout latest dc/system-redis
Rename the dump.rdb file
oc rsh $(oc get pods -l 'deploymentConfig=system-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'mv ${HOME}/data/dump.rdb ${HOME}/data/dump.rdb-old'
Rename the appendonly.aof file
oc rsh $(oc get pods -l 'deploymentConfig=system-redis' -o json | jq '.items[0].metadata.name' -r) bash -c 'mv ${HOME}/data/appendonly.aof ${HOME}/data/appendonly.aof-old'
Move the backup file to the pod
oc cp ./system-redis-dump.rdb $(oc get pods -l 'deploymentConfig=system-redis' -o json | jq '.items[0].metadata.name' -r):/var/lib/redis/data/dump.rdb
Re-deploy system-redis to load the backup
oc rollout latest dc/system-redis
Edit the redis-config configmap
oc edit configmap redis-config
Uncomment the save commands in the redis-config configmap
save 900 1
save 300 10
save 60 10000
Set appendonly to yes in the redis-config configmap
appendonly yes
Re-deploy system-redis to reload the default configuration
oc rollout latest dc/system-redis