en, zh: add scheduler scheduling based on custom topology (#483)
Co-authored-by: ti-srebot <66930949+ti-srebot@users.noreply.github.com>
PengJi and ti-srebot authored Jun 25, 2020
1 parent a3cf2dd commit 3a327d7
Showing 2 changed files with 59 additions and 2 deletions.
30 changes: 29 additions & 1 deletion en/tidb-scheduler.md
@@ -13,7 +13,35 @@ TiDB Scheduler is a TiDB implementation of [Kubernetes scheduler extender](https

A TiDB cluster includes three key components: PD, TiKV, and TiDB. Each consists of multiple nodes: PD is a Raft cluster, and TiKV is a multi-Raft group cluster. PD and TiKV components are stateful. The default scheduling rules of the Kubernetes scheduler cannot meet the high availability scheduling requirements of the TiDB cluster, so the Kubernetes scheduling rules need to be extended.

TiDB Scheduler implements the following customized scheduling rules:
Currently, you can schedule Pods by a specific dimension by modifying `metadata.annotations` in the TidbCluster object, for example:

{{< copyable "" >}}

```yaml
metadata:
  annotations:
    pingcap.com/ha-topology-key: kubernetes.io/hostname
```
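
For orientation, here is a minimal sketch of where this annotation sits in a full TidbCluster manifest; the cluster name `basic` is a placeholder, and the rest of the spec is elided:

{{< copyable "" >}}

```yaml
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic
  annotations:
    pingcap.com/ha-topology-key: kubernetes.io/hostname
spec:
  # ... pd, tikv, and tidb specs are unchanged
```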

Or by modifying `values.yaml` in the tidb-cluster chart:

{{< copyable "" >}}

```yaml
haTopologyKey: kubernetes.io/hostname
```
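
For chart-based deployments, the `values.yaml` change takes effect through a Helm upgrade; a sketch, assuming the release is named `tidb-cluster` and the chart comes from a repository named `pingcap`:

{{< copyable "shell-regular" >}}

```shell
helm upgrade tidb-cluster pingcap/tidb-cluster -f values.yaml
```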

The configuration above schedules Pods by the node dimension (the default). To schedule Pods by another dimension, such as `pingcap.com/ha-topology-key: zone` (scheduling by zone), you must also label each node accordingly:

{{< copyable "shell-regular" >}}

```shell
kubectl label nodes node1 zone=zone1
```
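
For example, to describe a three-zone topology, each node gets the `zone` label of the zone it belongs to; a sketch with hypothetical node names:

{{< copyable "shell-regular" >}}

```shell
kubectl label nodes node1 node2 zone=zone1
kubectl label nodes node3 node4 zone=zone2
kubectl label nodes node5 node6 zone=zone3
```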

Different nodes can have different or identical label values. If a node does not have the label, the scheduler does not schedule any Pod to that node.
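
To check which nodes carry the label (and spot unlabeled ones before scheduling), you can print it as a column; `zone` here is the example topology key from above:

{{< copyable "shell-regular" >}}

```shell
kubectl get nodes -L zone
```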

TiDB Scheduler implements the following customized scheduling rules. The examples below are based on node-level scheduling; the rules for other dimensions work the same way.

### PD component

31 changes: 30 additions & 1 deletion zh/tidb-scheduler.md
@@ -12,7 +12,36 @@ TiDB Scheduler is a [Kubernetes scheduler extension](https://github.com/kubernetes/co

A TiDB cluster includes three core components: PD, TiKV, and TiDB. Each component consists of multiple nodes: PD is a Raft cluster, TiKV is a multi-Raft-group cluster, and both components are stateful. The default scheduling rules of the Kubernetes scheduler cannot meet the high availability scheduling requirements of the TiDB cluster, so the Kubernetes scheduling rules need to be extended.

Currently, TiDB Scheduler implements the following customized scheduling rules.
Currently, you can schedule Pods by a specific dimension by modifying `metadata.annotations` in the TidbCluster object, for example:

{{< copyable "" >}}

```yaml
metadata:
  annotations:
    pingcap.com/ha-topology-key: kubernetes.io/hostname
```

Or by modifying `values.yaml` in the tidb-cluster chart:

{{< copyable "" >}}

```yaml
haTopologyKey: kubernetes.io/hostname
```

The configuration above schedules Pods by the node dimension (the default). To schedule Pods by another dimension, such as `pingcap.com/ha-topology-key: zone` (scheduling by zone), you must also label each node as follows:

{{< copyable "shell-regular" >}}

```shell
kubectl label nodes node1 zone=zone1
```

Different nodes can have different or identical label values. If a node does not have the label, the scheduler does not schedule any Pod to that node.
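
To find nodes that are missing the label (and would therefore receive no Pods), you can use a label selector; a sketch using the `zone` key from the example above:

{{< copyable "shell-regular" >}}

```shell
kubectl get nodes -l '!zone'
```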

Currently, TiDB Scheduler implements the following customized scheduling rules (the examples below are based on node-level scheduling; rules based on other dimensions work the same way).

### PD component

