en,zh: Bump tidb components to v7.5.3 (#2619)
csuzhangxc authored Sep 13, 2024
1 parent ba7eb48 commit c805c75
Showing 52 changed files with 199 additions and 419 deletions.
2 changes: 1 addition & 1 deletion en/access-dashboard.md
@@ -238,7 +238,7 @@ To enable this feature, you need to deploy TidbNGMonitoring CR using TiDB Operat
ngMonitoring:
requests:
storage: 10Gi
-  version: v7.5.1
+  version: v7.5.3
# storageClassName: default
baseImage: pingcap/ng-monitoring
EOF
6 changes: 3 additions & 3 deletions en/advanced-statefulset.md
@@ -95,7 +95,7 @@ kind: TidbCluster
metadata:
name: asts
spec:
-  version: v7.5.1
+  version: v7.5.3
timezone: UTC
pvReclaimPolicy: Delete
pd:
@@ -147,7 +147,7 @@ metadata:
tikv.tidb.pingcap.com/delete-slots: '[1]'
name: asts
spec:
-  version: v7.5.1
+  version: v7.5.3
timezone: UTC
pvReclaimPolicy: Delete
pd:
@@ -201,7 +201,7 @@ metadata:
tikv.tidb.pingcap.com/delete-slots: '[]'
name: asts
spec:
-  version: v7.5.1
+  version: v7.5.3
timezone: UTC
pvReclaimPolicy: Delete
pd:
2 changes: 1 addition & 1 deletion en/aggregate-multiple-cluster-monitor-data.md
@@ -170,7 +170,7 @@ spec:
version: 7.5.11
initializer:
baseImage: registry.cn-beijing.aliyuncs.com/tidb/tidb-monitor-initializer
-  version: v7.5.1
+  version: v7.5.3
reloader:
baseImage: registry.cn-beijing.aliyuncs.com/tidb/tidb-monitor-reloader
version: v1.0.1
8 changes: 4 additions & 4 deletions en/backup-restore-cr.md
@@ -20,10 +20,10 @@ This section introduces the fields in the `Backup` CR.

- When using BR for backup, you can specify the BR version in this field.
- If the field is not specified or the value is empty, the `pingcap/br:${tikv_version}` image is used for backup by default.
-  - If the BR version is specified in this field, such as `.spec.toolImage: pingcap/br:v7.5.1`, the image of the specified version is used for backup.
+  - If the BR version is specified in this field, such as `.spec.toolImage: pingcap/br:v7.5.3`, the image of the specified version is used for backup.
- If an image is specified without the version, such as `.spec.toolImage: private/registry/br`, the `private/registry/br:${tikv_version}` image is used for backup.
- When using Dumpling for backup, you can specify the Dumpling version in this field.
-  - If the Dumpling version is specified in this field, such as `spec.toolImage: pingcap/dumpling:v7.5.1`, the image of the specified version is used for backup.
+  - If the Dumpling version is specified in this field, such as `spec.toolImage: pingcap/dumpling:v7.5.3`, the image of the specified version is used for backup.
- If the field is not specified, the Dumpling version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.3/images/tidb-backup-manager/Dockerfile) is used for backup by default.

* `.spec.backupType`: the backup type. This field is valid only when you use BR for backup. Currently, the following three types are supported, and this field can be combined with the `.spec.tableFilter` field to configure table filter rules:
@@ -260,8 +260,8 @@ This section introduces the fields in the `Restore` CR.
* `.spec.metadata.namespace`: the namespace where the `Restore` CR is located.
* `.spec.toolImage`: the tools image used by `Restore`. TiDB Operator supports this configuration starting from v1.1.9.
-  - When using BR for restoring, you can specify the BR version in this field. For example, `spec.toolImage: pingcap/br:v7.5.1`. If not specified, `pingcap/br:${tikv_version}` is used for restoring by default.
-  - When using Lightning for restoring, you can specify the Lightning version in this field. For example, `spec.toolImage: pingcap/lightning:v7.5.1`. If not specified, the Lightning version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.3/images/tidb-backup-manager/Dockerfile) is used for restoring by default.
+  - When using BR for restoring, you can specify the BR version in this field. For example, `spec.toolImage: pingcap/br:v7.5.3`. If not specified, `pingcap/br:${tikv_version}` is used for restoring by default.
+  - When using Lightning for restoring, you can specify the Lightning version in this field. For example, `spec.toolImage: pingcap/lightning:v7.5.3`. If not specified, the Lightning version specified in `TOOLKIT_VERSION` of the [Backup Manager Dockerfile](https://github.com/pingcap/tidb-operator/blob/v1.5.3/images/tidb-backup-manager/Dockerfile) is used for restoring by default.
* `.spec.backupType`: the restore type. This field is valid only when you use BR to restore data. Currently, the following three types are supported, and this field can be combined with the `.spec.tableFilter` field to configure table filter rules:
* `full`: restore all databases in a TiDB cluster.
2 changes: 1 addition & 1 deletion en/backup-restore-faq.md
@@ -212,7 +212,7 @@ Solution:
backupType: full
restoreMode: volume-snapshot
serviceAccount: tidb-backup-manager
-  toolImage: pingcap/br:v7.5.1
+  toolImage: pingcap/br:v7.5.3
br:
cluster: basic
clusterNamespace: tidb-cluster
4 changes: 2 additions & 2 deletions en/configure-a-tidb-cluster.md
@@ -41,11 +41,11 @@ Usually, components in a cluster are in the same version. It is recommended to c

Here are the formats of the parameters:

-  - `spec.version`: the format is `imageTag`, such as `v7.5.1`
+  - `spec.version`: the format is `imageTag`, such as `v7.5.3`

- `spec.<pd/tidb/tikv/pump/tiflash/ticdc>.baseImage`: the format is `imageName`, such as `pingcap/tidb`

-  - `spec.<pd/tidb/tikv/pump/tiflash/ticdc>.version`: the format is `imageTag`, such as `v7.5.1`
+  - `spec.<pd/tidb/tikv/pump/tiflash/ticdc>.version`: the format is `imageTag`, such as `v7.5.3`

### Recommended configuration

2 changes: 1 addition & 1 deletion en/deploy-cluster-on-arm64.md
@@ -38,7 +38,7 @@ Before starting the process, make sure that Kubernetes clusters are deployed on
name: ${cluster_name}
namespace: ${cluster_namespace}
spec:
-  version: "v7.5.1"
+  version: "v7.5.3"
# ...
helper:
image: busybox:1.33.0
6 changes: 3 additions & 3 deletions en/deploy-heterogeneous-tidb-cluster.md
@@ -48,7 +48,7 @@ To deploy a heterogeneous cluster, do the following:
name: ${heterogeneous_cluster_name}
spec:
configUpdateStrategy: RollingUpdate
-  version: v7.5.1
+  version: v7.5.3
timezone: UTC
pvReclaimPolicy: Delete
discovery: {}
@@ -129,7 +129,7 @@ After creating certificates, take the following steps to deploy a TLS-enabled he
tlsCluster:
enabled: true
configUpdateStrategy: RollingUpdate
-  version: v7.5.1
+  version: v7.5.3
timezone: UTC
pvReclaimPolicy: Delete
discovery: {}
@@ -218,7 +218,7 @@ If you need to deploy a monitoring component for a heterogeneous cluster, take t
version: 7.5.11
initializer:
baseImage: pingcap/tidb-monitor-initializer
-  version: v7.5.1
+  version: v7.5.3
reloader:
baseImage: pingcap/tidb-monitor-reloader
version: v1.0.1
2 changes: 1 addition & 1 deletion en/deploy-on-aws-eks.md
@@ -496,7 +496,7 @@ After the bastion host is created, you can connect to the bastion host via SSH a
$ mysql --comments -h abfc623004ccb4cc3b363f3f37475af1-9774d22c27310bc1.elb.us-west-2.amazonaws.com -P 4000 -u root
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 1189
-  Server version: 8.0.11-TiDB-v7.5.1 TiDB Server (Apache License 2.0) Community Edition, MySQL 8.0 compatible
+  Server version: 8.0.11-TiDB-v7.5.3 TiDB Server (Apache License 2.0) Community Edition, MySQL 8.0 compatible
Copyright (c) 2000, 2022, Oracle and/or its affiliates.
2 changes: 1 addition & 1 deletion en/deploy-on-azure-aks.md
@@ -343,7 +343,7 @@ After access to the internal host via SSH, you can access the TiDB cluster throu
$ mysql --comments -h 20.240.0.7 -P 4000 -u root
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 1189
-  Server version: 8.0.11-TiDB-v7.5.1 TiDB Server (Apache License 2.0) Community Edition, MySQL 8.0 compatible
+  Server version: 8.0.11-TiDB-v7.5.3 TiDB Server (Apache License 2.0) Community Edition, MySQL 8.0 compatible
Copyright (c) 2000, 2022, Oracle and/or its affiliates.
2 changes: 1 addition & 1 deletion en/deploy-on-gcp-gke.md
@@ -280,7 +280,7 @@ After the bastion host is created, you can connect to the bastion host via SSH a
$ mysql --comments -h 10.128.15.243 -P 4000 -u root
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MySQL connection id is 7823
-  Server version: 8.0.11-TiDB-v7.5.1 TiDB Server (Apache License 2.0) Community Edition, MySQL 8.0 compatible
+  Server version: 8.0.11-TiDB-v7.5.3 TiDB Server (Apache License 2.0) Community Edition, MySQL 8.0 compatible
Copyright (c) 2000, 2022, Oracle and/or its affiliates.
58 changes: 29 additions & 29 deletions en/deploy-on-general-kubernetes.md
@@ -42,18 +42,18 @@ This document describes how to deploy a TiDB cluster on general Kubernetes.

If the server does not have an external network, you need to download the Docker image used by the TiDB cluster on a machine with Internet access and upload it to the server, and then use `docker load` to install the Docker image on the server.

-  To deploy a TiDB cluster, you need the following Docker images (assuming the version of the TiDB cluster is v7.5.1):
+  To deploy a TiDB cluster, you need the following Docker images (assuming the version of the TiDB cluster is v7.5.3):

```shell
-  pingcap/pd:v7.5.1
-  pingcap/tikv:v7.5.1
-  pingcap/tidb:v7.5.1
-  pingcap/tidb-binlog:v7.5.1
-  pingcap/ticdc:v7.5.1
-  pingcap/tiflash:v7.5.1
+  pingcap/pd:v7.5.3
+  pingcap/tikv:v7.5.3
+  pingcap/tidb:v7.5.3
+  pingcap/tidb-binlog:v7.5.3
+  pingcap/ticdc:v7.5.3
+  pingcap/tiflash:v7.5.3
pingcap/tiproxy:latest
pingcap/tidb-monitor-reloader:v1.0.1
-  pingcap/tidb-monitor-initializer:v7.5.1
+  pingcap/tidb-monitor-initializer:v7.5.3
grafana/grafana:7.5.11
prom/prometheus:v2.18.1
busybox:1.26.2
@@ -64,28 +64,28 @@ This document describes how to deploy a TiDB cluster on general Kubernetes.
{{< copyable "shell-regular" >}}

```shell
-  docker pull pingcap/pd:v7.5.1
-  docker pull pingcap/tikv:v7.5.1
-  docker pull pingcap/tidb:v7.5.1
-  docker pull pingcap/tidb-binlog:v7.5.1
-  docker pull pingcap/ticdc:v7.5.1
-  docker pull pingcap/tiflash:v7.5.1
+  docker pull pingcap/pd:v7.5.3
+  docker pull pingcap/tikv:v7.5.3
+  docker pull pingcap/tidb:v7.5.3
+  docker pull pingcap/tidb-binlog:v7.5.3
+  docker pull pingcap/ticdc:v7.5.3
+  docker pull pingcap/tiflash:v7.5.3
docker pull pingcap/tiproxy:latest
docker pull pingcap/tidb-monitor-reloader:v1.0.1
-  docker pull pingcap/tidb-monitor-initializer:v7.5.1
+  docker pull pingcap/tidb-monitor-initializer:v7.5.3
docker pull grafana/grafana:7.5.11
docker pull prom/prometheus:v2.18.1
docker pull busybox:1.26.2
-  docker save -o pd-v7.5.1.tar pingcap/pd:v7.5.1
-  docker save -o tikv-v7.5.1.tar pingcap/tikv:v7.5.1
-  docker save -o tidb-v7.5.1.tar pingcap/tidb:v7.5.1
-  docker save -o tidb-binlog-v7.5.1.tar pingcap/tidb-binlog:v7.5.1
-  docker save -o ticdc-v7.5.1.tar pingcap/ticdc:v7.5.1
+  docker save -o pd-v7.5.3.tar pingcap/pd:v7.5.3
+  docker save -o tikv-v7.5.3.tar pingcap/tikv:v7.5.3
+  docker save -o tidb-v7.5.3.tar pingcap/tidb:v7.5.3
+  docker save -o tidb-binlog-v7.5.3.tar pingcap/tidb-binlog:v7.5.3
+  docker save -o ticdc-v7.5.3.tar pingcap/ticdc:v7.5.3
docker save -o tiproxy-latest.tar pingcap/tiproxy:latest
-  docker save -o tiflash-v7.5.1.tar pingcap/tiflash:v7.5.1
+  docker save -o tiflash-v7.5.3.tar pingcap/tiflash:v7.5.3
docker save -o tidb-monitor-reloader-v1.0.1.tar pingcap/tidb-monitor-reloader:v1.0.1
-  docker save -o tidb-monitor-initializer-v7.5.1.tar pingcap/tidb-monitor-initializer:v7.5.1
+  docker save -o tidb-monitor-initializer-v7.5.3.tar pingcap/tidb-monitor-initializer:v7.5.3
docker save -o grafana-7.5.11.tar grafana/grafana:7.5.11
docker save -o prometheus-v2.18.1.tar prom/prometheus:v2.18.1
docker save -o busybox-1.26.2.tar busybox:1.26.2
@@ -96,15 +96,15 @@ This document describes how to deploy a TiDB cluster on general Kubernetes.
{{< copyable "shell-regular" >}}

```shell
-  docker load -i pd-v7.5.1.tar
-  docker load -i tikv-v7.5.1.tar
-  docker load -i tidb-v7.5.1.tar
-  docker load -i tidb-binlog-v7.5.1.tar
-  docker load -i ticdc-v7.5.1.tar
+  docker load -i pd-v7.5.3.tar
+  docker load -i tikv-v7.5.3.tar
+  docker load -i tidb-v7.5.3.tar
+  docker load -i tidb-binlog-v7.5.3.tar
+  docker load -i ticdc-v7.5.3.tar
docker load -i tiproxy-latest.tar
-  docker load -i tiflash-v7.5.1.tar
+  docker load -i tiflash-v7.5.3.tar
docker load -i tidb-monitor-reloader-v1.0.1.tar
-  docker load -i tidb-monitor-initializer-v7.5.1.tar
+  docker load -i tidb-monitor-initializer-v7.5.3.tar
docker load -i grafana-7.5.11.tar
docker load -i prometheus-v2.18.1.tar
docker load -i busybox-1.26.2.tar
6 changes: 3 additions & 3 deletions en/deploy-tidb-binlog.md
@@ -32,7 +32,7 @@ TiDB Binlog is disabled in the TiDB cluster by default. To create a TiDB cluster
...
pump:
baseImage: pingcap/tidb-binlog
-  version: v7.5.1
+  version: v7.5.3
replicas: 1
storageClassName: local-storage
requests:
@@ -51,7 +51,7 @@ TiDB Binlog is disabled in the TiDB cluster by default. To create a TiDB cluster
...
pump:
baseImage: pingcap/tidb-binlog
-  version: v7.5.1
+  version: v7.5.3
replicas: 1
storageClassName: local-storage
requests:
@@ -192,7 +192,7 @@ To deploy multiple drainers using the `tidb-drainer` Helm chart for a TiDB clust

```yaml
clusterName: example-tidb
-  clusterVersion: v7.5.1
+  clusterVersion: v7.5.3
baseImage: pingcap/tidb-binlog
storageClassName: local-storage
storage: 10Gi
8 changes: 4 additions & 4 deletions en/deploy-tidb-cluster-across-multiple-kubernetes.md
@@ -52,7 +52,7 @@ kind: TidbCluster
metadata:
name: "${tc_name_1}"
spec:
-  version: v7.5.1
+  version: v7.5.3
timezone: UTC
pvReclaimPolicy: Delete
enableDynamicConfiguration: true
@@ -106,7 +106,7 @@ kind: TidbCluster
metadata:
name: "${tc_name_2}"
spec:
-  version: v7.5.1
+  version: v7.5.3
timezone: UTC
pvReclaimPolicy: Delete
enableDynamicConfiguration: true
@@ -383,7 +383,7 @@ kind: TidbCluster
metadata:
name: "${tc_name_1}"
spec:
-  version: v7.5.1
+  version: v7.5.3
timezone: UTC
tlsCluster:
enabled: true
@@ -441,7 +441,7 @@ kind: TidbCluster
metadata:
name: "${tc_name_2}"
spec:
-  version: v7.5.1
+  version: v7.5.3
timezone: UTC
tlsCluster:
enabled: true
16 changes: 8 additions & 8 deletions en/deploy-tidb-dm.md
@@ -29,9 +29,9 @@ Usually, components in a cluster are in the same version. It is recommended to c

The formats of the related parameters are as follows:

-  - `spec.version`: the format is `imageTag`, such as `v7.5.1`.
+  - `spec.version`: the format is `imageTag`, such as `v7.5.3`.
- `spec.<master/worker>.baseImage`: the format is `imageName`, such as `pingcap/dm`.
-  - `spec.<master/worker>.version`: the format is `imageTag`, such as `v7.5.1`.
+  - `spec.<master/worker>.version`: the format is `imageTag`, such as `v7.5.3`.

TiDB Operator only supports deploying DM 2.0 and later versions.

@@ -50,7 +50,7 @@ metadata:
name: ${dm_cluster_name}
namespace: ${namespace}
spec:
-  version: v7.5.1
+  version: v7.5.3
configUpdateStrategy: RollingUpdate
pvReclaimPolicy: Retain
discovery: {}
@@ -141,27 +141,27 @@ kubectl apply -f ${dm_cluster_name}.yaml -n ${namespace}

If the server does not have an external network, you need to download the Docker image used by the DM cluster and upload the image to the server, and then execute `docker load` to install the Docker image on the server:

-  1. Deploying a DM cluster requires the following Docker image (assuming the version of the DM cluster is v7.5.1):
+  1. Deploying a DM cluster requires the following Docker image (assuming the version of the DM cluster is v7.5.3):

```shell
-  pingcap/dm:v7.5.1
+  pingcap/dm:v7.5.3
```

2. To download the image, execute the following command:

{{< copyable "shell-regular" >}}

```shell
-  docker pull pingcap/dm:v7.5.1
-  docker save -o dm-v7.5.1.tar pingcap/dm:v7.5.1
+  docker pull pingcap/dm:v7.5.3
+  docker save -o dm-v7.5.3.tar pingcap/dm:v7.5.3
```

3. Upload the Docker image to the server, and execute `docker load` to install the image on the server:

{{< copyable "shell-regular" >}}

```shell
-  docker load -i dm-v7.5.1.tar
+  docker load -i dm-v7.5.3.tar
```

After deploying the DM cluster, execute the following command to view the Pod status:
2 changes: 1 addition & 1 deletion en/deploy-tidb-monitor-across-multiple-kubernetes.md
@@ -302,7 +302,7 @@ After collecting data using Prometheus, you can visualize multi-cluster monitori
```shell
# set tidb version here
-  version=v7.5.1
+  version=v7.5.3
docker run --rm -i -v ${PWD}/dashboards:/dashboards/ pingcap/tidb-monitor-initializer:${version} && \
cd dashboards
```
2 changes: 1 addition & 1 deletion en/enable-monitor-shards.md
@@ -34,7 +34,7 @@ spec:
version: v2.27.1
initializer:
baseImage: pingcap/tidb-monitor-initializer
-  version: v5.2.1
+  version: v7.5.3
reloader:
baseImage: pingcap/tidb-monitor-reloader
version: v1.0.1
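The change in this commit is purely mechanical: every `v7.5.1` (and one stray `v5.2.1`) version string in the Markdown docs is bumped to `v7.5.3`. A minimal sketch of how such a bump could be reproduced is below; the directory and file names are illustrative, not the repository's actual layout, and GNU `sed`/`grep` are assumed (BSD/macOS `sed` needs `sed -i '' ...`):

```shell
set -eu
old='v7.5.1'
new='v7.5.3'

# Illustrative stand-in for the docs tree; the real repo has en/ and zh/ dirs.
workdir=$(mktemp -d)
printf 'spec:\n  version: %s\n' "$old" > "$workdir/example.md"

# Find every Markdown file that mentions the old tag and rewrite it in place.
# Note: the dots in $old are regex wildcards here; escape them for strictness.
grep -rl --include='*.md' "$old" "$workdir" | while read -r f; do
  sed -i "s/$old/$new/g" "$f"
done

grep 'version:' "$workdir/example.md"
```

A bump done this way still needs a manual pass afterwards: unrelated strings such as release-notes entries or historical version references must not be rewritten, which is why commits like this one are usually reviewed file by file.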