add deploy yamls for dm with new ha architecture (#1738)
* add deploy yamls for dm with new ha architecture

* fix format

* address comments

* add configmap for dm-master
DanielZhangQD authored Feb 24, 2020
1 parent e8902f8 commit f80c8c2
Showing 8 changed files with 370 additions and 0 deletions.
48 changes: 48 additions & 0 deletions manifests/dm/README.md
@@ -0,0 +1,48 @@
---
title: Deploy DM on Kubernetes
summary: Deploy DM on Kubernetes
category: how-to
---

# Deploy DM on Kubernetes

This document describes how to deploy DM with the new HA architecture using the YAML files in this directory.

## Deploy dm-master

If necessary, update the RPC configuration in `master/config/config.toml`.

{{< copyable "shell-regular" >}}

``` shell
kubectl apply -k master -n <namespace>
```

> **Note:**
>
> - `3` replicas are deployed by default.
> - `storageClassName` is set to `local-storage` by default.
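
To confirm that the masters are up and have formed a cluster, you can list the pods and Services these manifests create (the label and Service names below are the defaults from `master/dm-master.yaml`):

{{< copyable "shell-regular" >}}

``` shell
kubectl get pods -n <namespace> -l app.kubernetes.io/component=dm-master
kubectl get svc -n <namespace> dm-master dm-master-peer
```
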
## Deploy dm-worker

- If you only need DM for incremental data migration, there is no need to create PVCs for dm-worker; just deploy it with the following command:

{{< copyable "shell-regular" >}}

``` shell
kubectl apply -k worker/base -n <namespace>
```

- If you need DM for both full and incremental data migration, you must create PVCs for dm-worker; deploy it with the following command:

{{< copyable "shell-regular" >}}

``` shell
kubectl apply -k worker/overlays/full -n <namespace>
```

> **Note:**
>
> - `3` replicas are deployed by default.
> - `storageClassName` is set to `local-storage` for PVCs by default.
> - If PVCs are created, they are mounted at the `/data` directory.
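
To preview what either variant renders before applying it, or to inspect the workers and their PVCs afterwards, commands like the following can help (the label is the default from `worker/base/dm-worker.yaml`):

{{< copyable "shell-regular" >}}

``` shell
# Render the full-migration overlay without applying it.
kubectl kustomize worker/overlays/full
# After deploying, check the worker pods and any PVCs.
kubectl get pods -n <namespace> -l app.kubernetes.io/component=dm-worker
kubectl get pvc -n <namespace>
```
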
14 changes: 14 additions & 0 deletions manifests/dm/master/config/config.toml
@@ -0,0 +1,14 @@
# rpc configuration
#
# The rpc timeout is a positive number plus a time unit. Golang standard time
# units are supported, including: "ns", "us", "ms", "s", "m", "h". Provide a
# proper rpc timeout according to your usage scenario.
rpc-timeout = "30s"
# The rpc limiter controls how frequently events are allowed to happen.
# It implements a "token bucket" of size `rpc-rate-burst`, initially full and
# refilled at rate `rpc-rate-limit` tokens per second. Note that
# `rpc-rate-limit` is of type float64, so remember to add a decimal point and
# a trailing 0 if its literal value happens to be an integer.
rpc-rate-limit = 10.0
rpc-rate-burst = 40
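
This file is delivered to the dm-master pods as a ConfigMap mounted at `/etc/config` (see `dm-master.yaml` below), so one way to double-check the configuration a running master has actually loaded is:

``` shell
kubectl exec -n <namespace> dm-master-0 -- cat /etc/config/config.toml
```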

148 changes: 148 additions & 0 deletions manifests/dm/master/dm-master.yaml
@@ -0,0 +1,148 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/component: dm-master
  name: dm-master
spec:
  podManagementPolicy: Parallel
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: dm-master
  serviceName: dm-master-peer
  template:
    metadata:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: "8261"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: dm-master
    spec:
      affinity: {}
      containers:
      - command:
        - /dm-master
        - --data-dir=/data
        - --config=/etc/config/config.toml
        - --name=$(MY_POD_NAME)
        - --master-addr=:8261
        - --advertise-addr=$(MY_POD_NAME).$(PEER_SERVICE_NAME).$(NAMESPACE):8261
        - --peer-urls=:8291
        - --advertise-peer-urls=http://$(MY_POD_NAME).$(PEER_SERVICE_NAME).$(NAMESPACE):8291
        - --initial-cluster=dm-master-0=http://dm-master-0.$(PEER_SERVICE_NAME).$(NAMESPACE):8291,dm-master-1=http://dm-master-1.$(PEER_SERVICE_NAME).$(NAMESPACE):8291,dm-master-2=http://dm-master-2.$(PEER_SERVICE_NAME).$(NAMESPACE):8291
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: PEER_SERVICE_NAME
          value: dm-master-peer
        - name: SERVICE_NAME
          value: dm-master
        - name: TZ
          value: UTC
        image: pingcap/dm:ha-alpha
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 8
          httpGet:
            path: /status
            port: 8261
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 5
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /status
            port: 8261
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 5
        name: master
        ports:
        - containerPort: 8291
          name: server
          protocol: TCP
        - containerPort: 8261
          name: client
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data
          name: data
        - mountPath: /etc/config
          name: config
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
      volumes:
      - configMap:
          name: dm-master-config
        name: config
  updateStrategy:
    rollingUpdate:
      partition: 3
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      dataSource: null
      resources:
        requests:
          storage: 10Gi
      storageClassName: local-storage
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: dm-master
  name: dm-master
spec:
  ports:
  - name: client
    port: 8261
    protocol: TCP
    targetPort: 8261
  selector:
    app.kubernetes.io/component: dm-master
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: dm-master
  name: dm-master-peer
spec:
  clusterIP: None
  ports:
  - name: peer
    port: 8291
    protocol: TCP
    targetPort: 8291
  selector:
    app.kubernetes.io/component: dm-master
  publishNotReadyAddresses: true
  sessionAffinity: None
  type: ClusterIP
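
The liveness and readiness probes above poll the master's `/status` HTTP endpoint on the client port. The same endpoint can be queried manually to inspect a master, for example through a port-forward:

``` shell
# In one terminal: forward the client port locally.
kubectl port-forward -n <namespace> svc/dm-master 8261:8261
# In another terminal: query the status endpoint.
curl http://127.0.0.1:8261/status
```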

10 changes: 10 additions & 0 deletions manifests/dm/master/kustomization.yaml
@@ -0,0 +1,10 @@
resources:
- dm-master.yaml
configMapGenerator:
- name: dm-master-config
  files:
  - config/config.toml
generatorOptions:
  labels:
    app.kubernetes.io/component: dm-master
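
Note that `configMapGenerator` appends a content hash to the generated ConfigMap's name and rewrites the reference in `dm-master.yaml` to match, so editing `config/config.toml` and re-applying produces a new ConfigMap. The rendered output can be inspected without applying it:

``` shell
kubectl kustomize master
```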

121 changes: 121 additions & 0 deletions manifests/dm/worker/base/dm-worker.yaml
@@ -0,0 +1,121 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/component: dm-worker
  name: dm-worker
spec:
  podManagementPolicy: Parallel
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: dm-worker
  serviceName: dm-worker-peer
  template:
    metadata:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: "8262"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: dm-worker
    spec:
      affinity: {}
      containers:
      - command:
        - /dm-worker
        - --name=$(MY_POD_NAME)
        - --worker-addr=:8262
        - --advertise-addr=$(MY_POD_NAME).$(PEER_SERVICE_NAME).$(NAMESPACE):8262
        - --join=dm-master.$(NAMESPACE):8261
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: PEER_SERVICE_NAME
          value: dm-worker-peer
        - name: SERVICE_NAME
          value: dm-worker
        - name: TZ
          value: UTC
        image: pingcap/dm:ha-alpha
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 8
          httpGet:
            path: /status
            port: 8262
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 5
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /status
            port: 8262
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 5
        name: worker
        ports:
        - containerPort: 8262
          name: client
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      partition: 3
    type: RollingUpdate
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: dm-worker
  name: dm-worker
spec:
  ports:
  - name: client
    port: 8262
    protocol: TCP
    targetPort: 8262
  selector:
    app.kubernetes.io/component: dm-worker
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: dm-worker
  name: dm-worker-peer
spec:
  clusterIP: None
  ports:
  - name: peer
    port: 8262
    protocol: TCP
    targetPort: 8262
  selector:
    app.kubernetes.io/component: dm-worker
  publishNotReadyAddresses: true
  sessionAffinity: None
  type: ClusterIP
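
Each worker registers with the masters through the `--join` flag, which points at the `dm-master` client Service. Like the masters, a worker serves the `/status` endpoint used by the probes above, so an individual worker can be checked directly:

``` shell
# In one terminal: forward one worker's client port locally.
kubectl port-forward -n <namespace> dm-worker-0 8262:8262
# In another terminal: query the worker's status endpoint.
curl http://127.0.0.1:8262/status
```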

2 changes: 2 additions & 0 deletions manifests/dm/worker/base/kustomization.yaml
@@ -0,0 +1,2 @@
resources:
- dm-worker.yaml
23 changes: 23 additions & 0 deletions manifests/dm/worker/overlays/full/dm-worker-pvc.yaml
@@ -0,0 +1,23 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dm-worker
spec:
  template:
    spec:
      containers:
      - name: worker
        volumeMounts:
        - mountPath: /data
          name: data
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      dataSource: null
      resources:
        requests:
          storage: 10Gi
      storageClassName: local-storage
4 changes: 4 additions & 0 deletions manifests/dm/worker/overlays/full/kustomization.yaml
@@ -0,0 +1,4 @@
bases:
- ../../base
patches:
- dm-worker-pvc.yaml
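
As a quick spot-check that the strategic-merge patch above is applied, the rendered output can be filtered for the mount and volume claim it adds:

``` shell
kubectl kustomize worker/overlays/full | grep -n -A3 -e volumeMounts -e volumeClaimTemplates
```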
