zh, en/: update tidb-operator chart version to v1.1.5 (#717)
lichunzhu authored Sep 18, 2020
1 parent 92e7638 commit c614ed9
Showing 16 changed files with 78 additions and 78 deletions.
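The change set below is a mechanical chart-version bump from v1.1.4 to v1.1.5 across the docs. Not part of the commit, but a quick sanity check for stragglers after such a bump is a grep for the old version string; a minimal sketch against a stand-in file (the real check would run over `en/` and `zh/`):

```shell
# Sketch, not part of the commit: after a version bump, grep for
# leftover references to the old chart version. The sample file is a
# stand-in for the real docs tree (en/*.md, zh/*.md).
old=v1.1.4
new=v1.1.5
printf 'helm upgrade tidb-operator pingcap/tidb-operator --version=%s\n' "$new" > /tmp/bump-check-sample.md
if grep -qF "$old" /tmp/bump-check-sample.md; then
  echo "stale $old references remain"
else
  echo "up to date: $new"
fi
```

Run over the repository, the equivalent check (`grep -rnF v1.1.4 en/ zh/`) should return nothing once a commit like this lands.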
6 changes: 3 additions & 3 deletions en/cheat-sheet.md
@@ -475,7 +475,7 @@ For example:
{{< copyable "shell-regular" >}}

```shell
-helm inspect values pingcap/tidb-operator --version=v1.1.4 > values-tidb-operator.yaml
+helm inspect values pingcap/tidb-operator --version=v1.1.5 > values-tidb-operator.yaml
```

### Deploy using Helm chart
@@ -491,7 +491,7 @@ For example:
{{< copyable "shell-regular" >}}

```shell
-helm install pingcap/tidb-operator --name=tidb-operator --namespace=tidb-admin --version=v1.1.4 -f values-tidb-operator.yaml
+helm install pingcap/tidb-operator --name=tidb-operator --namespace=tidb-admin --version=v1.1.5 -f values-tidb-operator.yaml
```

### View the deployed Helm release
@@ -515,7 +515,7 @@ For example:
{{< copyable "shell-regular" >}}

```shell
-helm upgrade tidb-operator pingcap/tidb-operator --version=v1.1.4 -f values-tidb-operator.yaml
+helm upgrade tidb-operator pingcap/tidb-operator --version=v1.1.5 -f values-tidb-operator.yaml
```

### Delete Helm release
6 changes: 3 additions & 3 deletions en/configure-storage-class.md
@@ -75,15 +75,15 @@ Kubernetes currently supports statically allocated local storage. To create a lo
{{< copyable "shell-regular" >}}

```shell
-kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.4/manifests/local-dind/local-volume-provisioner.yaml
+kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.5/manifests/local-dind/local-volume-provisioner.yaml
```

If the server has no access to the Internet, download the `local-volume-provisioner.yaml` file on a machine with Internet access and then install it.

{{< copyable "shell-regular" >}}

```shell
-wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.4/manifests/local-dind/local-volume-provisioner.yaml &&
+wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.5/manifests/local-dind/local-volume-provisioner.yaml &&
kubectl apply -f ./local-volume-provisioner.yaml
```

@@ -246,7 +246,7 @@ Finally, execute the `kubectl apply` command to deploy `local-volume-provisioner
{{< copyable "shell-regular" >}}
```shell
-kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.4/manifests/local-dind/local-volume-provisioner.yaml
+kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.5/manifests/local-dind/local-volume-provisioner.yaml
```
When you later deploy TiDB clusters, deploy TiDB Binlog for incremental backups, or perform full backups, configure the corresponding `StorageClass` for use.
2 changes: 1 addition & 1 deletion en/deploy-on-alibaba-cloud.md
@@ -87,7 +87,7 @@ All the instances except ACK mandatory workers are deployed across availability
tikv_count = 3
tidb_count = 2
pd_count = 3
-operator_version = "v1.1.4"
+operator_version = "v1.1.5"
```

* To deploy TiFlash in the cluster, set `create_tiflash_node_pool = true` in `terraform.tfvars`. You can also configure the node count and instance type of the TiFlash node pool by modifying `tiflash_count` and `tiflash_instance_type`. By default, the value of `tiflash_count` is `2`, and the value of `tiflash_instance_type` is `ecs.i2.2xlarge`.
4 changes: 2 additions & 2 deletions en/deploy-tidb-from-kubernetes-gke.md
@@ -97,15 +97,15 @@ If you see `Ready` for all nodes, congratulations! You've set up your first Kube
TiDB Operator uses [Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) to extend Kubernetes. Therefore, to use TiDB Operator, you must first create the `TidbCluster` CRD.

```shell
-kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.4/manifests/crd.yaml && \
+kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.5/manifests/crd.yaml && \
kubectl get crd tidbclusters.pingcap.com
```

After the `TidbCluster` CRD is created, install TiDB Operator in your Kubernetes cluster.

```shell
kubectl create namespace tidb-admin
-helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.4
+helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.5
kubectl get po -n tidb-admin -l app.kubernetes.io/name=tidb-operator
```

28 changes: 14 additions & 14 deletions en/deploy-tidb-operator.md
@@ -45,15 +45,15 @@ TiDB Operator uses [Custom Resource Definition (CRD)](https://kubernetes.io/docs
{{< copyable "shell-regular" >}}

```shell
-kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.4/manifests/crd.yaml
+kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.5/manifests/crd.yaml
```

If the server cannot access the Internet, you need to download the `crd.yaml` file on a machine with Internet access before installing:

{{< copyable "shell-regular" >}}

```shell
-wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.4/manifests/crd.yaml
+wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.5/manifests/crd.yaml
kubectl apply -f ./crd.yaml
```

@@ -93,7 +93,7 @@ After the various CRDs above are created, you can install TiDB Operator on your

> **Note:**
>
-> `${chart_version}` represents the chart version of TiDB Operator. For example, `v1.1.4`. You can view the currently supported versions by running the `helm search -l tidb-operator` command.
+> `${chart_version}` represents the chart version of TiDB Operator. For example, `v1.1.5`. You can view the currently supported versions by running the `helm search -l tidb-operator` command.

2. Configure TiDB Operator

@@ -133,15 +133,15 @@ If your server cannot access the Internet, install TiDB Operator offline by the
{{< copyable "shell-regular" >}}

```shell
-wget http://charts.pingcap.org/tidb-operator-v1.1.4.tgz
+wget http://charts.pingcap.org/tidb-operator-v1.1.5.tgz
```

-Copy the `tidb-operator-v1.1.4.tgz` file to the target server and extract it to the current directory:
+Copy the `tidb-operator-v1.1.5.tgz` file to the target server and extract it to the current directory:

{{< copyable "shell-regular" >}}

```shell
-tar zxvf tidb-operator.v1.1.4.tgz
+tar zxvf tidb-operator.v1.1.5.tgz
```

2. Download the Docker images used by TiDB Operator
@@ -153,8 +153,8 @@ If your server cannot access the Internet, install TiDB Operator offline by the
{{< copyable "shell-regular" >}}

```shell
-pingcap/tidb-operator:v1.1.4
-pingcap/tidb-backup-manager:v1.1.4
+pingcap/tidb-operator:v1.1.5
+pingcap/tidb-backup-manager:v1.1.5
bitnami/kubectl:latest
pingcap/advanced-statefulset:v0.3.3
k8s.gcr.io/kube-scheduler:v1.16.9
@@ -167,13 +167,13 @@ If your server cannot access the Internet, install TiDB Operator offline by the
{{< copyable "shell-regular" >}}

```shell
-docker pull pingcap/tidb-operator:v1.1.4
-docker pull pingcap/tidb-backup-manager:v1.1.4
+docker pull pingcap/tidb-operator:v1.1.5
+docker pull pingcap/tidb-backup-manager:v1.1.5
docker pull bitnami/kubectl:latest
docker pull pingcap/advanced-statefulset:v0.3.3
-docker save -o tidb-operator-v1.1.4.tar pingcap/tidb-operator:v1.1.4
-docker save -o tidb-backup-manager-v1.1.4.tar pingcap/tidb-backup-manager:v1.1.4
+docker save -o tidb-operator-v1.1.5.tar pingcap/tidb-operator:v1.1.5
+docker save -o tidb-backup-manager-v1.1.5.tar pingcap/tidb-backup-manager:v1.1.5
docker save -o bitnami-kubectl.tar bitnami/kubectl:latest
docker save -o advanced-statefulset-v0.3.3.tar pingcap/advanced-statefulset:v0.3.3
```
@@ -183,8 +183,8 @@ If your server cannot access the Internet, install TiDB Operator offline by the
{{< copyable "shell-regular" >}}

```shell
-docker load -i tidb-operator-v1.1.4.tar
-docker load -i tidb-backup-manager-v1.1.4.tar
+docker load -i tidb-operator-v1.1.5.tar
+docker load -i tidb-backup-manager-v1.1.5.tar
docker load -i bitnami-kubectl.tar
docker load -i advanced-statefulset-v0.3.3.tar
```
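Not part of the commit: the tarball names in the `docker save` and `docker load` steps above follow one rule, drop the org prefix and replace the `:` before the tag with `-`. A small sketch of that naming (pure string handling, POSIX shell):

```shell
# Sketch, not part of the commit: derive the tarball name used by the
# offline-install steps from an image reference.
for image in pingcap/tidb-operator:v1.1.5 pingcap/tidb-backup-manager:v1.1.5; do
  # strip everything up to the last '/', then turn ':' into '-'
  tarball="$(printf '%s\n' "$image" | sed 's|.*/||; s|:|-|').tar"
  printf '%s\n' "$tarball"   # tidb-operator-v1.1.5.tar, tidb-backup-manager-v1.1.5.tar
done
```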
18 changes: 9 additions & 9 deletions en/get-started.md
@@ -237,7 +237,7 @@ Before proceeding, make sure the following requirements are satisfied:
{{< copyable "shell-regular" >}}

```shell
-kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.4/manifests/crd.yaml
+kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.5/manifests/crd.yaml
```

Expected output:
@@ -321,17 +321,17 @@ Before proceeding, make sure the following requirements are satisfied:
{{< copyable "shell-regular" >}}
```shell
-helm install --namespace tidb-admin --name tidb-operator pingcap/tidb-operator --version v1.1.4
+helm install --namespace tidb-admin --name tidb-operator pingcap/tidb-operator --version v1.1.5
```
If the network connection to the Docker Hub is slow, you can try images hosted in Alibaba Cloud:
{{< copyable "shell-regular" >}}
```
-helm install --namespace tidb-admin --name tidb-operator pingcap/tidb-operator --version v1.1.4 \
---set operatorImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-operator:v1.1.4 \
---set tidbBackupManagerImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-backup-manager:v1.1.4 \
+helm install --namespace tidb-admin --name tidb-operator pingcap/tidb-operator --version v1.1.5 \
+--set operatorImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-operator:v1.1.5 \
+--set tidbBackupManagerImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-backup-manager:v1.1.5 \
--set scheduler.kubeSchedulerImageName=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler
```
@@ -386,17 +386,17 @@ Before proceeding, make sure the following requirements are satisfied:
{{< copyable "shell-regular" >}}
```shell
-helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.4
+helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.5
```
If the network connection to the Docker Hub is slow, you can try images hosted in Alibaba Cloud:
{{< copyable "shell-regular" >}}
```
-helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.4 \
---set operatorImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-operator:v1.1.4 \
---set tidbBackupManagerImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-backup-manager:v1.1.4 \
+helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.5 \
+--set operatorImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-operator:v1.1.5 \
+--set tidbBackupManagerImage=registry.cn-beijing.aliyuncs.com/tidb/tidb-backup-manager:v1.1.5 \
--set scheduler.kubeSchedulerImageName=registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler
```
12 changes: 6 additions & 6 deletions en/tidb-toolkit.md
@@ -188,15 +188,15 @@ The Helm server is a service called `tiller`, first install the `RBAC` rules req
{{< copyable "shell-regular" >}}

```shell
-kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.4/manifests/tiller-rbac.yaml
+kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.5/manifests/tiller-rbac.yaml
```

If the server cannot access the Internet, download the `tiller-rbac.yaml` file on a machine with Internet access:

{{< copyable "shell-regular" >}}

```shell
-wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.4/manifests/tiller-rbac.yaml
+wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.5/manifests/tiller-rbac.yaml
```

Copy the file `tiller-rbac.yaml` to the server and install the `RBAC`:
@@ -357,17 +357,17 @@ Use the following command to download the chart file required for cluster instal
{{< copyable "shell-regular" >}}

```shell
-wget http://charts.pingcap.org/tidb-operator-v1.1.4.tgz
-wget http://charts.pingcap.org/tidb-drainer-v1.1.4.tgz
-wget http://charts.pingcap.org/tidb-lightning-v1.1.4.tgz
+wget http://charts.pingcap.org/tidb-operator-v1.1.5.tgz
+wget http://charts.pingcap.org/tidb-drainer-v1.1.5.tgz
+wget http://charts.pingcap.org/tidb-lightning-v1.1.5.tgz
```

Copy these chart files to the server and decompress them. You can use these charts to install the corresponding components by running the `helm install` command. Take `tidb-operator` as an example:

{{< copyable "shell-regular" >}}

```shell
-tar zxvf tidb-operator.v1.1.4.tgz
+tar zxvf tidb-operator.v1.1.5.tgz
helm install ./tidb-operator --name=${release_name} --namespace=${namespace}
```

2 changes: 1 addition & 1 deletion en/upgrade-tidb-operator.md
@@ -21,7 +21,7 @@ This document describes how to upgrade TiDB Operator and Kubernetes.

> **Note:**
>
-> The `${version}` in this document represents the version of TiDB Operator, such as `v1.1.4`. You can check the currently supported version using the `helm search -l tidb-operator` command.
+> The `${version}` in this document represents the version of TiDB Operator, such as `v1.1.5`. You can check the currently supported version using the `helm search -l tidb-operator` command.

2. Get the `values.yaml` file of the `tidb-operator` chart that you want to install:

6 changes: 3 additions & 3 deletions zh/cheat-sheet.md
@@ -475,7 +475,7 @@ helm inspect values ${chart_name} --version=${chart_version} > values.yaml
{{< copyable "shell-regular" >}}

```shell
-helm inspect values pingcap/tidb-operator --version=v1.1.4 > values-tidb-operator.yaml
+helm inspect values pingcap/tidb-operator --version=v1.1.5 > values-tidb-operator.yaml
```

### Deploy using Helm chart
@@ -491,7 +491,7 @@ helm install ${chart_name} --name=${name} --namespace=${namespace} --version=${c
{{< copyable "shell-regular" >}}

```shell
-helm install pingcap/tidb-operator --name=tidb-operator --namespace=tidb-admin --version=v1.1.4 -f values-tidb-operator.yaml
+helm install pingcap/tidb-operator --name=tidb-operator --namespace=tidb-admin --version=v1.1.5 -f values-tidb-operator.yaml
```

### View the deployed Helm release
@@ -515,7 +515,7 @@ helm upgrade ${name} ${chart_name} --version=${chart_version} -f ${values_file}
{{< copyable "shell-regular" >}}

```shell
-helm upgrade tidb-operator pingcap/tidb-operator --version=v1.1.4 -f values-tidb-operator.yaml
+helm upgrade tidb-operator pingcap/tidb-operator --version=v1.1.5 -f values-tidb-operator.yaml
```

### Delete Helm release
6 changes: 3 additions & 3 deletions zh/configure-storage-class.md
@@ -75,15 +75,15 @@ Kubernetes 当前支持静态分配的本地存储。可使用 [local-static-pro
{{< copyable "shell-regular" >}}

```shell
-kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.4/manifests/local-dind/local-volume-provisioner.yaml
+kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.5/manifests/local-dind/local-volume-provisioner.yaml
```

If the server has no Internet access, first download the `local-volume-provisioner.yaml` file on a machine with Internet access, and then install it:

{{< copyable "shell-regular" >}}

```shell
-wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.4/manifests/local-dind/local-volume-provisioner.yaml
+wget https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.5/manifests/local-dind/local-volume-provisioner.yaml
kubectl apply -f ./local-volume-provisioner.yaml
```

@@ -246,7 +246,7 @@ data:
{{< copyable "shell-regular" >}}

```shell
-kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.4/manifests/local-dind/local-volume-provisioner.yaml
+kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.5/manifests/local-dind/local-volume-provisioner.yaml
```

When you later create TiDB clusters, or components such as backups, configure the corresponding `StorageClass` for them to use.
2 changes: 1 addition & 1 deletion zh/deploy-on-alibaba-cloud.md
@@ -87,7 +87,7 @@ aliases: ['/docs-cn/tidb-in-kubernetes/dev/deploy-on-alibaba-cloud/']
tikv_count = 3
tidb_count = 2
pd_count = 3
-operator_version = "v1.1.4"
+operator_version = "v1.1.5"
```

To deploy TiFlash in the cluster, set `create_tiflash_node_pool = true` in `terraform.tfvars`. You can also set `tiflash_count` and `tiflash_instance_type` to configure the node count and instance type of the TiFlash node pool. By default, `tiflash_count` is `2` and `tiflash_instance_type` is `ecs.i2.2xlarge`.
4 changes: 2 additions & 2 deletions zh/deploy-tidb-from-kubernetes-gke.md
@@ -94,15 +94,15 @@ kubectl get nodes
TiDB Operator uses [Custom Resource Definition (CRD)](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/#customresourcedefinitions) to extend Kubernetes. Therefore, to use TiDB Operator, you must first create the `TidbCluster` and other custom resource types:

```shell
-kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.4/manifests/crd.yaml && \
+kubectl apply -f https://raw.githubusercontent.com/pingcap/tidb-operator/v1.1.5/manifests/crd.yaml && \
kubectl get crd tidbclusters.pingcap.com
```

After the `TidbCluster` custom resource type is created, install TiDB Operator in your Kubernetes cluster.

```shell
kubectl create namespace tidb-admin
-helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.4
+helm install --namespace tidb-admin tidb-operator pingcap/tidb-operator --version v1.1.5
kubectl get po -n tidb-admin -l app.kubernetes.io/name=tidb-operator
```

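A bump like this is typically produced by a single bulk substitution over the docs tree; the exact command used for this commit is not recorded in the diff. A hedged sketch on a stand-in file, using GNU `sed -i` (the real run would target `en/*.md` and `zh/*.md`):

```shell
# Sketch, not part of the commit: bulk-replace the old chart version
# with the new one. Demonstrated on a stand-in file.
printf 'operator_version = "v1.1.4"\n' > /tmp/bump-demo.md
sed -i 's/v1\.1\.4/v1.1.5/g' /tmp/bump-demo.md   # GNU sed; BSD sed needs -i ''
cat /tmp/bump-demo.md   # → operator_version = "v1.1.5"
```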
