Commit
Add doc and examples for auto-scaler and initializer (#1772)
* add doc and examples

* fix by lint

* revise the example

* revise init

* revise examples

* Update tidb-cluster.yaml

* revise by comment

* revise examples

* fix by lint

* address the comment

* Update examples/initialize/README.md

Co-Authored-By: DanielZhangQD <36026334+DanielZhangQD@users.noreply.github.com>

* Update examples/auto-scale/README.md

Co-Authored-By: DanielZhangQD <36026334+DanielZhangQD@users.noreply.github.com>

Co-authored-by: DanielZhangQD <36026334+DanielZhangQD@users.noreply.github.com>
Yisaer and DanielZhangQD authored Mar 20, 2020
1 parent b62752b commit 2aec9c9
Showing 7 changed files with 241 additions and 0 deletions.
50 changes: 50 additions & 0 deletions examples/auto-scale/README.md
@@ -0,0 +1,50 @@
# Deploying TidbCluster with Auto-scaling

> **Note:**
>
> This setup is for testing or demo purposes only and **IS NOT** suitable for critical environments. Refer to the [documentation](https://pingcap.com/docs/stable/tidb-in-kubernetes/deploy/prerequisites/) for a production setup.

The following steps create a TiDB cluster with monitoring and an auto-scaler. The monitoring data is not persisted by default.

**Prerequisites**:
- TiDB Operator `v1.1.0-beta.2` or later is installed. [Doc](https://pingcap.com/docs/stable/tidb-in-kubernetes/deploy/tidb-operator/)
- A default `StorageClass` is configured, and there are enough PVs (by default, 6 PVs are required) of that StorageClass.

This can be verified with the following command:

```bash
> kubectl get storageclass
```

The output is similar to this:

```bash
NAME                 PROVISIONER            AGE
standard (default)   kubernetes.io/gce-pd   1d
gold                 kubernetes.io/gce-pd   1d
```

Alternatively, you can specify the StorageClass explicitly by modifying `tidb-cluster.yaml`, as sketched below.
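
For example, a minimal sketch of that change, assuming the per-component `storageClassName` field of the `TidbCluster` spec (verify the field name against your TiDB Operator CRD version):

```yaml
# Hypothetical excerpt of tidb-cluster.yaml: request PVs from the "gold"
# StorageClass instead of the cluster default. Apply the same field to any
# component that needs persistent storage.
spec:
  pd:
    storageClassName: gold
  tikv:
    storageClassName: gold
```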


## Enabling Auto-scaling

> **Note:**
>
> The auto-scaling feature is still in alpha. To use it, enable the feature gate in TiDB Operator's `values.yaml`:
```yaml
features:
  - AutoScaling=true
```
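
If TiDB Operator was installed with Helm, one way to roll out the updated `values.yaml` is a `helm upgrade`; the release name, chart reference, and version below are assumptions, so substitute the ones from your installation:

```bash
# Assumes the operator was installed as release "tidb-operator" from a Helm repo named "pingcap";
# adjust the release name, chart version, and (for Helm 3) the namespace flag as needed.
helm upgrade tidb-operator pingcap/tidb-operator --version v1.1.0-beta.2 -f values.yaml
```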

Deploy the cluster, its monitor, and the auto-scaler; TiDB and TiKV will then scale automatically based on CPU utilization:
```bash
> kubectl -n <namespace> apply -f ./
```
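
After applying, you can watch the auto-scaler act on the cluster. A sketch using the resource names from the manifests in this directory (assuming the CRD names `tidbcluster` and `tidbclusterautoscaler` are registered; `kubectl api-resources | grep pingcap` shows the exact names):

```bash
# Watch the TidbCluster as the auto-scaler adjusts TiDB/TiKV replica counts.
kubectl -n <namespace> get tidbcluster auto-scaling-demo -o wide -w

# Inspect the auto-scaler object and its status/events.
kubectl -n <namespace> describe tidbclusterautoscaler auto-scaling-demo
```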

## Destroy

```bash
> kubectl -n <namespace> delete -f ./
```
31 changes: 31 additions & 0 deletions examples/auto-scale/tidb-cluster-auto-scaler.yaml
@@ -0,0 +1,31 @@
apiVersion: pingcap.com/v1alpha1
kind: TidbClusterAutoScaler
metadata:
  name: auto-scaling-demo
spec:
  cluster:
    name: auto-scaling-demo
  monitor:
    name: auto-scaling-demo
  tikv:
    minReplicas: 3
    maxReplicas: 4
    metricsTimeDuration: "1m"
    metrics:
      - type: "Resource"
        resource:
          name: "cpu"
          target:
            type: "Utilization"
            averageUtilization: 80
  tidb:
    minReplicas: 2
    maxReplicas: 3
    metricsTimeDuration: "1m"
    metrics:
      - type: "Resource"
        resource:
          name: "cpu"
          target:
            type: "Utilization"
            averageUtilization: 80
26 changes: 26 additions & 0 deletions examples/auto-scale/tidb-cluster.yaml
@@ -0,0 +1,26 @@
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: auto-scaling-demo
spec:
  version: v3.0.8
  timezone: UTC
  pvReclaimPolicy: Delete
  pd:
    baseImage: pingcap/pd
    replicas: 3
    requests:
      storage: "1Gi"
    config: {}
  tikv:
    baseImage: pingcap/tikv
    replicas: 3
    requests:
      storage: "1Gi"
    config: {}
  tidb:
    baseImage: pingcap/tidb
    replicas: 2
    service:
      type: ClusterIP
    config: {}
20 changes: 20 additions & 0 deletions examples/auto-scale/tidb-monitor.yaml
@@ -0,0 +1,20 @@
apiVersion: pingcap.com/v1alpha1
kind: TidbMonitor
metadata:
  name: auto-scaling-demo
spec:
  clusters:
    - name: auto-scaling-demo
  prometheus:
    baseImage: prom/prometheus
    version: v2.11.1
  grafana:
    baseImage: grafana/grafana
    version: 6.0.1
  initializer:
    baseImage: pingcap/tidb-monitor-initializer
    version: v3.0.5
  reloader:
    baseImage: pingcap/tidb-monitor-reloader
    version: v1.0.1
  imagePullPolicy: IfNotPresent
67 changes: 67 additions & 0 deletions examples/initialize/README.md
@@ -0,0 +1,67 @@
# Creating TidbCluster with Initialization

> **Note:**
>
> This setup is for testing or demo purposes only and **IS NOT** suitable for critical environments. Refer to the [documentation](https://pingcap.com/docs/stable/tidb-in-kubernetes/deploy/prerequisites/) for a production setup.

The following steps create a TiDB cluster and initialize it.

**Prerequisites**:
- TiDB Operator `v1.1.0-beta.1` or later is installed. [Doc](https://pingcap.com/docs/stable/tidb-in-kubernetes/deploy/tidb-operator/)
- A default `StorageClass` is configured, and there are enough PVs (by default, 6 PVs are required) of that StorageClass.

This can be verified with the following command:

```bash
> kubectl get storageclass
```

The output is similar to this:

```bash
NAME                 PROVISIONER            AGE
standard (default)   kubernetes.io/gce-pd   1d
gold                 kubernetes.io/gce-pd   1d
```

Alternatively, you can specify the StorageClass explicitly by modifying `tidb-cluster.yaml`.


## Initialize


> **Note:**
>
> The initialization should be performed after the TiDB cluster has been created.

The following commands are assumed to be executed in this directory.

You can create the root user and set its password by creating a secret and linking it to the Initializer:

```bash
> kubectl create secret generic tidb-secret --from-literal=root=<root-password> --namespace=<namespace>
```

You can also create other users and set their passwords:
```bash
> kubectl create secret generic tidb-secret --from-literal=root=<root-password> --from-literal=developer=<developer-password> --namespace=<namespace>
```
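
In either case, the secret's key names correspond to the user names and the values are the passwords. You can double-check what was stored before initializing:

```bash
# Optional: list the keys stored in the secret (one per user); values are base64-encoded.
kubectl -n <namespace> get secret tidb-secret -o jsonpath='{.data}'
```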

Initialize the cluster to create the users and the database named `hello`:

```bash
> kubectl -n <namespace> apply -f ./
```

Wait for the initialization job to complete:
```bash
$ kubectl get pod -n <namespace> | grep initialize-demo-tidb-initializer
initialize-demo-tidb-initializer-whzn7 0/1 Completed 0 57s
```
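
Once the job has completed, you can connect and confirm that the users and the `hello` database exist. A minimal sketch, assuming the TiDB service follows the usual `<cluster>-tidb` naming (here `initialize-demo-tidb`, listening on port 4000) and that a local `mysql` client is installed:

```bash
# Forward the TiDB service to localhost, then log in with the root password from the secret.
kubectl -n <namespace> port-forward svc/initialize-demo-tidb 4000:4000 &
mysql -h 127.0.0.1 -P 4000 -u root -p -e "SHOW DATABASES;"
```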

## Destroy

```bash
> kubectl -n <namespace> delete -f ./
```
26 changes: 26 additions & 0 deletions examples/initialize/tidb-cluster.yaml
@@ -0,0 +1,26 @@
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: initialize-demo
spec:
  version: v3.0.8
  timezone: UTC
  pvReclaimPolicy: Delete
  pd:
    baseImage: pingcap/pd
    replicas: 1
    requests:
      storage: "1Gi"
    config: {}
  tikv:
    baseImage: pingcap/tikv
    replicas: 1
    requests:
      storage: "1Gi"
    config: {}
  tidb:
    baseImage: pingcap/tidb
    replicas: 1
    service:
      type: ClusterIP
    config: {}
21 changes: 21 additions & 0 deletions examples/initialize/tidb-initializer.yaml
@@ -0,0 +1,21 @@
apiVersion: pingcap.com/v1alpha1
kind: TidbInitializer
metadata:
  name: initialize-demo
spec:
  image: tnir/mysqlclient
  imagePullPolicy: IfNotPresent
  cluster:
    name: initialize-demo
  initSql: "create database hello;"
  # initSqlConfigMap: tidb-initsql
  passwordSecret: "tidb-secret"
  # permitHost: 172.6.5.8
  # resources:
  #   limits:
  #     cpu: 1000m
  #     memory: 500Mi
  #   requests:
  #     cpu: 100m
  #     memory: 50Mi
  # timezone: "Asia/Shanghai"
