add deploy yamls for dm with new ha architecture #1738

Merged · 7 commits · Feb 24, 2020
46 changes: 46 additions & 0 deletions manifests/dm/README.md
Contributor: move docs to the docs/ directory?

Contributor Author: This is just a temporary deployment; we may need to support DM with a chart or a controller later, so I think we can leave it here for now.

---
title: Deploy DM on Kubernetes
summary: Deploy DM on Kubernetes
category: how-to
---

# Deploy DM on Kubernetes

This document describes how to deploy DM with the new high-availability (HA) architecture using the YAML files in this directory.
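
The files referenced below are laid out in this directory as follows:

```
manifests/dm
├── README.md
├── master
│   └── dm-master.yaml
└── worker
    ├── base
    │   ├── dm-worker.yaml
    │   └── kustomization.yaml
    └── overlays
        └── full
            ├── dm-worker-pvc.yaml
            └── kustomization.yaml
```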

## Deploy dm-master

{{< copyable "shell-regular" >}}

``` shell
kubectl create -f master/dm-master.yaml -n <namespace>
```

> **Note:**
>
> - `3` replicas are deployed by default.
> - `storageClassName` is set to `local-storage` by default.
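
As a quick check (not part of the manifests; it just uses the labels defined in `dm-master.yaml`), you can watch the dm-master pods until all replicas are ready:

{{< copyable "shell-regular" >}}

``` shell
# Watch the dm-master StatefulSet pods until all 3 replicas are Running and Ready
kubectl get pods -l app.kubernetes.io/component=dm-master -n <namespace> -w
```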

## Deploy dm-worker

- If you only need to use DM for incremental data migration, there is no need to create a PVC for dm-worker. Deploy it with the following command:

{{< copyable "shell-regular" >}}

``` shell
kubectl kustomize worker/base | kubectl apply -f - -n <namespace>
```

- If you need to use DM for both full and incremental data migration, you have to create a PVC for dm-worker. Deploy it with the following command:

{{< copyable "shell-regular" >}}

``` shell
kubectl kustomize worker/overlays/full | kubectl apply -f - -n <namespace>
```

> **Note:**
>
> - `3` replicas are deployed by default.
> - `storageClassName` is set to `local-storage` for the PVC by default.
> - If PVCs are created, they are mounted to the `/data` directory.
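
If you want to review what the full-migration overlay adds before applying it, you can render the manifests locally first; this is only a convenience check, not a required step:

{{< copyable "shell-regular" >}}

``` shell
# Render the overlay without applying it, to review the added
# volumeClaimTemplates and the /data volume mount on dm-worker
kubectl kustomize worker/overlays/full
```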
144 changes: 144 additions & 0 deletions manifests/dm/master/dm-master.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/component: dm-master
  name: dm-master
spec:
  podManagementPolicy: Parallel
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: dm-master
  serviceName: dm-master-peer
  template:
    metadata:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: "8261"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: dm-master
    spec:
      affinity: {}
      containers:
      - command:
        - /dm-master
        - -data-dir=/data
        - --name=$(MY_POD_NAME)
        - --master-addr=:8261
        - --advertise-addr=$(MY_POD_NAME).$(PEER_SERVICE_NAME).$(NAMESPACE):8261
        - --peer-urls=:8291
        - --advertise-peer-urls=http://$(MY_POD_NAME).$(PEER_SERVICE_NAME).$(NAMESPACE):8291
        - --initial-cluster=dm-master-0=http://dm-master-0.$(PEER_SERVICE_NAME).$(NAMESPACE):8291,dm-master-1=http://dm-master-1.$(PEER_SERVICE_NAME).$(NAMESPACE):8291,dm-master-2=http://dm-master-2.$(PEER_SERVICE_NAME).$(NAMESPACE):8291
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: PEER_SERVICE_NAME
          value: dm-master-peer
        - name: SERVICE_NAME
          value: dm-master
        - name: TZ
          value: UTC
        image: 127.0.0.1:30002/pingcap/dm:2040
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 8
          httpGet:
            path: /status
            port: 8261
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 5
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /status
            port: 8261
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 5
        name: master
        ports:
        - containerPort: 8291
          name: server
          protocol: TCP
        - containerPort: 8261
          name: client
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
        volumeMounts:
        - mountPath: /data
          name: data
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      partition: 3
    type: RollingUpdate
  volumeClaimTemplates:
  - metadata:
      creationTimestamp: null
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      dataSource: null
      resources:
        requests:
          storage: 10Gi
      storageClassName: local-storage
    status:
      phase: Pending
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: dm-master
  name: dm-master
spec:
  ports:
  - name: client
    port: 8261
    protocol: TCP
    targetPort: 8261
  selector:
    app.kubernetes.io/component: dm-master
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: dm-master
  name: dm-master-peer
spec:
  clusterIP: None
  ports:
  - name: peer
    port: 8291
    protocol: TCP
    targetPort: 8291
  selector:
    app.kubernetes.io/component: dm-master
  publishNotReadyAddresses: true
  sessionAffinity: None
  type: ClusterIP

121 changes: 121 additions & 0 deletions manifests/dm/worker/base/dm-worker.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  labels:
    app.kubernetes.io/component: dm-worker
  name: dm-worker
spec:
  podManagementPolicy: Parallel
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app.kubernetes.io/component: dm-worker
  serviceName: dm-worker-peer
  template:
    metadata:
      annotations:
        prometheus.io/path: /metrics
        prometheus.io/port: "8262"
        prometheus.io/scrape: "true"
      creationTimestamp: null
      labels:
        app.kubernetes.io/component: dm-worker
    spec:
      affinity: {}
      containers:
      - command:
        - /dm-worker
        - --name=$(MY_POD_NAME)
        - --worker-addr=:8262
        - --advertise-addr=$(MY_POD_NAME).$(PEER_SERVICE_NAME).$(NAMESPACE):8262
        - --join=dm-master.$(NAMESPACE):8261
        env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.name
        - name: NAMESPACE
          valueFrom:
            fieldRef:
              apiVersion: v1
              fieldPath: metadata.namespace
        - name: PEER_SERVICE_NAME
          value: dm-worker-peer
        - name: SERVICE_NAME
          value: dm-worker
        - name: TZ
          value: UTC
        image: 127.0.0.1:30002/pingcap/dm:2040
        imagePullPolicy: IfNotPresent
        livenessProbe:
          failureThreshold: 8
          httpGet:
            path: /status
            port: 8262
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 5
        readinessProbe:
          failureThreshold: 5
          httpGet:
            path: /status
            port: 8262
            scheme: HTTP
          initialDelaySeconds: 5
          timeoutSeconds: 5
        name: worker
        ports:
        - containerPort: 8262
          name: client
          protocol: TCP
        resources: {}
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
  updateStrategy:
    rollingUpdate:
      partition: 3
    type: RollingUpdate
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: dm-worker
  name: dm-worker
spec:
  ports:
  - name: client
    port: 8262
    protocol: TCP
    targetPort: 8262
  selector:
    app.kubernetes.io/component: dm-worker
  sessionAffinity: None
  type: ClusterIP
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app.kubernetes.io/component: dm-worker
  name: dm-worker-peer
spec:
  clusterIP: None
  ports:
  - name: peer
    port: 8262
    protocol: TCP
    targetPort: 8262
  selector:
    app.kubernetes.io/component: dm-worker
  publishNotReadyAddresses: true
  sessionAffinity: None
  type: ClusterIP
2 changes: 2 additions & 0 deletions manifests/dm/worker/base/kustomization.yaml
resources:
- dm-worker.yaml
26 changes: 26 additions & 0 deletions manifests/dm/worker/overlays/full/dm-worker-pvc.yaml
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: dm-worker
spec:
  template:
    spec:
      containers:
      - name: worker
        volumeMounts:
        - mountPath: /data
          name: data
  volumeClaimTemplates:
  - metadata:
      creationTimestamp: null
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      dataSource: null
      resources:
        requests:
          storage: 10Gi
      storageClassName: local-storage
    status:
      phase: Pending
4 changes: 4 additions & 0 deletions manifests/dm/worker/overlays/full/kustomization.yaml
bases:
- ../../base
patches:
- dm-worker-pvc.yaml