The implementation boilerplate concept is largely based on the CoreOS operators for etcd and Prometheus, which introduced the operator pattern to a wide audience.
- Automated cluster bootstrapping
- Resilient to single node failure
- Undisrupted during pod (re)integration
- Handles backups automatically
- Handles restores automatically
- Should recover fresh nodes quickly from a local snapshot
- Exposes metrics for collection / alerting
Use a StatefulSet to get predictable network identities. Assume a minimum of 3 replicas, treated as the core nodes which additional nodes contact during bootstrap. Leader election: snapshots must ensure that only one pod writes to the shared storage, so leader election indicates which pod is the master pod and therefore allowed to save snapshots. The other pods act as hot standbys.
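A minimal sketch of such a StatefulSet, assuming a hypothetical `mariadb-galera` name, label and image (the headless service, Galera configuration and leader-election sidecar are omitted):

```yaml
# Sketch only: names, labels and the image tag are assumptions, not a final manifest.
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: mariadb-galera
spec:
  serviceName: mariadb-galera   # headless service gives each pod a stable DNS identity
  replicas: 3                   # core nodes; additional nodes join these during bootstrap
  selector:
    matchLabels:
      app: mariadb-galera
  template:
    metadata:
      labels:
        app: mariadb-galera
    spec:
      containers:
      - name: mariadb
        image: mariadb:10.3     # assumed engine version
```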
From the elected master a quick method to save a database dump to the snapshot folder is required; for that, using xtrabackup (or its MariaDB fork, mariabackup) seems inevitable.
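A hedged sketch of how the leader pod could take the physical backup, assuming a shared `/snapshots` volume and the root password in an environment variable; the flags are the standard xtrabackup/mariabackup ones, the paths are assumptions:

```yaml
# Run on the leader pod only; /snapshots is an assumed shared backup volume.
command:
- /bin/sh
- -c
- >
  xtrabackup --backup --galera-info
  --target-dir=/snapshots/$(date +%Y%m%d-%H%M%S)
  --user=root --password="$MYSQL_ROOT_PASSWORD"
```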
Open questions: a dedicated, non-client-facing snapshot node? Incremental snapshots? What if the snapshot pod changes? How to detect incremental snapshot corruption?
After a snapshot is complete, it should be transferred to external storage according to the expected retention rules (e.g. hourly/daily, always keeping the last week, every second snapshot for a month, every seventh for a year).
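One possible way to express those retention rules as operator configuration; this schema is purely hypothetical and only illustrates the intent:

```yaml
# Hypothetical configuration schema, not an existing CRD.
backup:
  schedule: "0 * * * *"            # hourly snapshots
  externalStorage:
    s3:
      bucket: example-db-backups   # assumed bucket name
  retention:
    keepAll: 7d                    # keep every snapshot for a week
    keepEvery2nd: 30d              # then every second snapshot for a month
    keepEvery7th: 365d             # then every seventh snapshot for a year
```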
Be able to define that, in case of a missing or corrupted snapshot, a dump can be downloaded from some location to seed the database. The seed process needs to be part of init and must block the startup of the other pods in the StatefulSet: https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/#ordered-pod-creation
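A sketch of how the seed step could look as an init container; the SEED_DUMP_URL variable, image and restore mechanism are assumptions. With the default OrderedReady pod management, pod-0 not becoming ready blocks the remaining pods:

```yaml
# Seeding init container sketch; SEED_DUMP_URL, image and paths are assumptions.
initContainers:
- name: seed-from-dump
  image: curlimages/curl:latest    # assumed utility image providing sh and curl
  command:
  - /bin/sh
  - -c
  - >
    if [ ! -d /var/lib/mysql/mysql ] && [ -n "$SEED_DUMP_URL" ]; then
      echo "No local data found, downloading seed dump" &&
      curl -fsSL "$SEED_DUMP_URL" -o /var/lib/mysql/seed.sql.gz;
    fi
  volumeMounts:
  - name: data
    mountPath: /var/lib/mysql
```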
Open questions: point-in-time recovery? Handling of a corrupted snapshot?
Provide small default values for CPU and memory allocation.
Open question: alerting when the allocation is potentially too small?
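A sketch of such defaults as container resource requests/limits; the actual numbers are assumptions and should be tuned:

```yaml
# Assumed small defaults, to be overridden per deployment.
resources:
  requests:
    cpu: 200m
    memory: 512Mi
  limits:
    cpu: "1"
    memory: 1Gi
```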
Database pods should never be scheduled on the same physical node. To achieve that, pod anti-affinity needs to be configured so that two database pods can never be scheduled side by side.
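The required rule in the pod template could look roughly like this, assuming the `app: mariadb-galera` label from the StatefulSet sketch above:

```yaml
# Hard anti-affinity: never co-schedule two database pods on the same node.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: mariadb-galera          # assumed pod label
      topologyKey: kubernetes.io/hostname
```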
Test for seamless upgrades to newer versions of the MariaDB engine.
Needs to accommodate uninterrupted storage space growth. Re-applying with a modified size and deleting pods sequentially should be enough; make use of the snapshot and IST (Incremental State Transfer) to recover data on the new pod (see the volumeClaimTemplates sketch after the questions below).
Open questions: could it grow automatically? Should both a starting and a maximum PV size be defined? Would PVC creation outside of the pod volumeClaimTemplates be required to avoid the size being reset on oc apply?
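A volumeClaimTemplates sketch for the data volume; the storage class and size are assumptions. Note that editing the template does not resize already-created PVCs, so growth means expanding each PVC directly, which requires a StorageClass with volume expansion enabled:

```yaml
# Data volume claim sketch; storage class and starting size are assumptions.
volumeClaimTemplates:
- metadata:
    name: data
  spec:
    accessModes: ["ReadWriteOnce"]
    storageClassName: standard   # must allow volume expansion for in-place growth
    resources:
      requests:
        storage: 10Gi            # starting size; existing PVCs are grown individually
```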
Metrics to collect: CPU, memory, IO, MySQL metrics, Galera metrics.
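For the MySQL/Galera part, a common pattern is a mysqld_exporter sidecar scraped by Prometheus; the image tag and credential handling below are assumptions:

```yaml
# Prometheus exporter sidecar sketch; image tag and credentials are assumptions.
- name: metrics
  image: prom/mysqld-exporter:v0.14.0
  env:
  - name: DATA_SOURCE_NAME
    value: "exporter:password@(localhost:3306)/"   # placeholder credentials
  ports:
  - name: metrics
    containerPort: 9104
```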
Readiness probe to check the node status (i.e. exclude a new node and the donor until IST is done).
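A sketch of such a probe, reporting ready only when the Galera node is Synced (wsrep_local_state = 4), which keeps joiners and donors out of the client-facing Service; credential handling is an assumption:

```yaml
# Ready only when the node is Synced; joiners (IST/SST) and donors stay unready.
readinessProbe:
  exec:
    command:
    - /bin/sh
    - -c
    - >
      state=$(mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -N -s
      -e "SHOW STATUS LIKE 'wsrep_local_state'" | awk '{print $2}');
      [ "$state" = "4" ]
  initialDelaySeconds: 30
  periodSeconds: 10
```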