ticdc: refine changefeed scheduler configuration description (#15050)
ti-chi-bot authored Oct 17, 2023
1 parent f2c719f commit 26bb0c4
Showing 5 changed files with 48 additions and 24 deletions.
2 changes: 1 addition & 1 deletion TOC.md
@@ -552,6 +552,7 @@
- [DDL Replication](/ticdc/ticdc-ddl.md)
- [Bidirectional Replication](/ticdc/ticdc-bidirectional-replication.md)
- [Data Integrity Validation for Single-Row Data](/ticdc/ticdc-integrity-check.md)
- [Data Consistency Validation for TiDB Upstream/Downstream Clusters](/ticdc/ticdc-upstream-downstream-check.md)
- Monitor and Alert
- [Monitoring Metrics Summary](/ticdc/ticdc-summary-monitor.md)
- [Monitoring Metrics Details](/ticdc/monitor-ticdc.md)
@@ -605,7 +606,6 @@
- [Overview](/sync-diff-inspector/sync-diff-inspector-overview.md)
- [Data Check for Tables with Different Schema/Table Names](/sync-diff-inspector/route-diff.md)
- [Data Check in the Sharding Scenario](/sync-diff-inspector/shard-diff.md)
- [Data Check for TiDB Upstream/Downstream Clusters](/sync-diff-inspector/upstream-downstream-diff.md)
- [Data Check in the DM Replication Scenario](/sync-diff-inspector/dm-diff.md)
- Reference
- Cluster Architecture
2 changes: 1 addition & 1 deletion releases/release-6.3.0.md
@@ -198,7 +198,7 @@ In v6.3.0-DMR, the key new features and improvements are as follows:

* TiCDC supports keeping the snapshots consistent between the upstream and the downstream (sync point) [#6977](https://github.com/pingcap/tiflow/issues/6977) @[asddongmen](https://github.com/asddongmen)

In the scenarios of data replication for disaster recovery, TiCDC supports [periodically maintaining a downstream data snapshot](/sync-diff-inspector/upstream-downstream-diff.md#data-check-for-tidb-upstream-and-downstream-clusters) so that the downstream snapshot is consistent with the upstream snapshot. With this feature, TiCDC can better support the scenarios where reads and writes are separate, and help you lower the cost.
In the scenarios of data replication for disaster recovery, TiCDC supports [periodically maintaining a downstream data snapshot](/ticdc/ticdc-upstream-downstream-check.md) so that the downstream snapshot is consistent with the upstream snapshot. With this feature, TiCDC can better support the scenarios where reads and writes are separate, and help you lower the cost.

* TiCDC supports graceful upgrade [#4757](https://github.com/pingcap/tiflow/issues/4757) @[overvenus](https://github.com/overvenus) @[3AceShowHand](https://github.com/3AceShowHand)

2 changes: 1 addition & 1 deletion sync-diff-inspector/sync-diff-inspector-overview.md
@@ -24,7 +24,7 @@ This guide introduces the key features of sync-diff-inspector and describes how
* Generate the SQL statements used to repair data if the data inconsistency exists
* Support [data check for tables with different schema or table names](/sync-diff-inspector/route-diff.md)
* Support [data check in the sharding scenario](/sync-diff-inspector/shard-diff.md)
* Support [data check for TiDB upstream-downstream clusters](/sync-diff-inspector/upstream-downstream-diff.md)
* Support [data check for TiDB upstream-downstream clusters](/ticdc/ticdc-upstream-downstream-check.md)
* Support [data check in the DM replication scenario](/sync-diff-inspector/dm-diff.md)

## Restrictions of sync-diff-inspector
24 changes: 11 additions & 13 deletions ticdc/ticdc-changefeed-config.md
@@ -52,19 +52,19 @@ enable-old-value = true

# Specifies whether to enable the Syncpoint feature, which is supported since v6.3.0 and is disabled by default.
# Since v6.4.0, only the changefeed with the SYSTEM_VARIABLES_ADMIN or SUPER privilege can use the TiCDC Syncpoint feature.
# Note: This configuration item only takes effect if the downstream is Kafka or a storage service.
# Note: This configuration item only takes effect if the downstream is TiDB.
# enable-sync-point = false

# Specifies the interval at which Syncpoint aligns the upstream and downstream snapshots.
# The format is in h m s. For example, "1h30m30s".
# The default value is "10m" and the minimum value is "30s".
# Note: This configuration item only takes effect if the downstream is Kafka or a storage service.
# Note: This configuration item only takes effect if the downstream is TiDB.
# sync-point-interval = "5m"

# Specifies how long the data is retained by Syncpoint in the downstream table. When this duration is exceeded, the data is cleaned up.
# The format is in h m s. For example, "24h30m30s".
# The default value is "24h".
# Note: This configuration item only takes effect if the downstream is Kafka or a storage service.
# Note: This configuration item only takes effect if the downstream is TiDB.
# sync-point-retention = "1h"

[mounter]
@@ -98,20 +98,18 @@ rules = ['*.*', '!test.*']
# ignore-insert-value-expr = "price > 1000 and origin = 'no where'" # Ignore insert DMLs that contain the conditions "price > 1000" and "origin = 'no where'".

[scheduler]
# Splits a table into multiple replication ranges based on the number of Regions, and these ranges can be replicated by multiple TiCDC nodes.
# Allocate tables to multiple TiCDC nodes for replication on a per-Region basis.
# Note: This configuration item only takes effect on Kafka changefeeds and is not supported on MySQL changefeeds.
# The value is "false" by default. Set it to "true" to enable this feature.
enable-table-across-nodes = false
# When you enable this feature, it takes effect for tables with the number of Regions greater than the `region-threshold` value.
region-threshold = 100000
# When you enable this feature, it takes effect for tables with the number of rows modified per minute greater than the `write-key-threshold` value.
# When `enable-table-across-nodes` is enabled, there are two allocation modes:
# 1. Allocate tables based on the number of Regions, so that each TiCDC node handles roughly the same number of Regions. If the number of Regions for a table exceeds the value of `region-threshold`, the table will be allocated to multiple nodes for replication. The default value of `region-threshold` is 10000.
# region-threshold = 10000
# 2. Allocate tables based on the write traffic, so that each TiCDC node handles roughly the same number of modified rows. Only when the number of modified rows per minute in a table exceeds the value of `write-key-threshold`, will this allocation take effect.
# write-key-threshold = 30000
# Note:
# * The default value of `write-key-threshold` is 0, which means that the feature does not split the table replication range according to the number of rows modified in a table by default.
# * You can configure this parameter according to your cluster workload. For example, if it is configured as 30000, it means that the feature will split the replication range of a table when the number of modified rows per minute in the table exceeds 30000.
# * When `region-threshold` and `write-key-threshold` are configured at the same time:
# TiCDC will check whether the number of modified rows is greater than `write-key-threshold` first.
# If not, next check whether the number of Regions is greater than `region-threshold`.
write-key-threshold = 0
# * The default value of `write-key-threshold` is 0, which means that the traffic allocation mode is not used by default.
# * You only need to configure one of the two modes. If both `region-threshold` and `write-key-threshold` are configured, TiCDC prioritizes the traffic allocation mode, namely `write-key-threshold`.

[sink]
# For the sink of MQ type, you can use dispatchers to configure the event dispatcher.
42 changes: 34 additions & 8 deletions sync-diff-inspector/upstream-downstream-diff.md → ticdc/ticdc-upstream-downstream-check.md
@@ -1,17 +1,25 @@
---
title: Data Check for TiDB Upstream and Downstream Clusters
title: Upstream and Downstream Clusters Data Validation and Snapshot Read
summary: Learn how to check data for TiDB upstream and downstream clusters.
aliases: ['/tidb/stable/upstream-downstream-diff']
---

# Data Check for TiDB Upstream and Downstream Clusters
# Upstream and Downstream Clusters Data Validation and Snapshot Read

When you use TiCDC to build upstream and downstream clusters of TiDB, you might need to verify the consistency of upstream and downstream data without stopping replication. In the regular replication mode, TiCDC only guarantees that the data is eventually consistent, but cannot guarantee that the data is consistent during the replication process. Therefore, it is difficult to verify the consistency of dynamically changing data. To meet such a need, TiCDC provides the Syncpoint feature.
When you use TiCDC to build upstream and downstream clusters of TiDB, you might need to perform consistent snapshot read or data consistency validation on the upstream and downstream without stopping the replication. In the regular replication mode, TiCDC only guarantees that the data is eventually consistent, but cannot guarantee that the data is consistent during the replication process. Therefore, it is difficult to perform consistent read of dynamically changing data. To meet such a need, TiCDC provides the Syncpoint feature.

Syncpoint uses the snapshot feature provided by TiDB and enables TiCDC to maintain a `ts-map` that has consistency between upstream and downstream snapshots during the replication process. In this way, the issue of verifying the consistency of dynamic data is converted to the issue of verifying the consistency of static snapshot data, which achieves the effect of nearly real-time verification.

To enable the Syncpoint feature, set the value of the TiCDC configuration item `enable-sync-point` to `true` when creating a replication task. After enabling Syncpoint, TiCDC will periodically align the upstream and downstream snapshots according to the TiCDC parameter `sync-point-interval` during the data replication process, and will save the upstream and downstream TSO correspondences in the downstream `tidb_cdc.syncpoint_v1` table.
## Enable Syncpoint

Then, you only need to configure `snapshot` in sync-diff-inspector to verify the data of the TiDB upstream-downstream clusters. The following TiCDC configuration example enables Syncpoint for a created replication task:
After enabling the Syncpoint feature, you can use [Consistent snapshot read](#consistent-snapshot-read) and [Data consistency validation](#data-consistency-validation).

To enable the Syncpoint feature, set the value of the TiCDC configuration item `enable-sync-point` to `true` when creating a replication task. After enabling Syncpoint, TiCDC writes the following information to the downstream TiDB cluster:

1. During the replication, TiCDC periodically (configured by `sync-point-interval`) aligns snapshots between the upstream and downstream and saves the upstream and downstream TSO correspondences in the downstream `tidb_cdc.syncpoint_v1` table.
2. During the replication, TiCDC also periodically (configured by `sync-point-interval`) executes `SET GLOBAL tidb_external_ts = @@tidb_current_ts`, which sets a consistent snapshot point that has been replicated in backup clusters.
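
For illustration only, here is one way to observe both effects on the downstream cluster; the variable and table names are the ones listed above, and the statements are a sketch rather than part of the Syncpoint workflow:

```sql
-- The consistent snapshot point that TiCDC periodically advances (item 2 above).
SELECT @@GLOBAL.tidb_external_ts;

-- The upstream/downstream TSO correspondences saved by Syncpoint (item 1 above).
SELECT COUNT(*) FROM tidb_cdc.syncpoint_v1;
```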

The following TiCDC configuration example enables Syncpoint when creating a replication task:

```toml
# Enables SyncPoint.
@@ -24,7 +32,25 @@ sync-point-interval = "5m"
sync-point-retention = "1h"
```

## Step 1: obtain `ts-map`
## Consistent snapshot read

> **Note:**
>
> Before you perform consistent snapshot read, make sure that you have [enabled the Syncpoint feature](#enable-syncpoint).

When you need to query the data from the backup cluster, you can execute `SET GLOBAL|SESSION tidb_enable_external_ts_read = ON;` so that the application obtains transactionally consistent data on the backup cluster.

In addition, you can also select a previous point in time for snapshot read by querying `ts-map`.
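
For illustration, a minimal snapshot-read session on the backup cluster might look as follows; the variable is the one quoted above, and the table name `orders` is a made-up example:

```sql
-- Make reads in this session use the consistent snapshot point (tidb_external_ts)
-- maintained by TiCDC instead of the latest replicated data.
SET SESSION tidb_enable_external_ts_read = ON;

-- This query now returns transactionally consistent data on the backup cluster.
SELECT COUNT(*) FROM orders;

-- Switch back to reading the latest data.
SET SESSION tidb_enable_external_ts_read = OFF;
```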

## Data consistency validation

> **Note:**
>
> Before you perform data consistency validation, make sure that you have [enabled the Syncpoint feature](#enable-syncpoint).

To validate the data of upstream and downstream clusters, you only need to configure `snapshot` in sync-diff-inspector.

### Step 1: obtain `ts-map`

You can execute the following SQL statement in the downstream TiDB cluster to obtain the upstream TSO (`primary_ts`) and downstream TSO (`secondary_ts`):
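
For example, a plain query against the table named above is enough (a sketch; the returned columns include `primary_ts`, `secondary_ts`, and `created_at`, as described below):

```sql
-- The most recent record gives the latest aligned pair of upstream and downstream TSOs.
SELECT * FROM tidb_cdc.syncpoint_v1 ORDER BY created_at DESC;
```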

@@ -45,7 +71,7 @@ The fields in the preceding `syncpoint_v1` table are described as follows:
- `secondary_ts`: The timestamp of the downstream database snapshot.
- `created_at`: The time when this record is inserted.

## Step 2: configure snapshot
### Step 2: configure snapshot

Then configure the snapshot information of the upstream and downstream databases by using the `ts-map` information obtained in [Step 1](#step-1-obtain-ts-map).
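
As an illustrative sketch (the section names follow the sync-diff-inspector configuration format, while the host addresses, credentials, and TSO values are placeholders), the `snapshot` fields take the `primary_ts` and `secondary_ts` values obtained from the `ts-map`:

```toml
[data-sources.uptidb]
    host = "172.16.0.1"
    port = 4000
    user = "root"
    password = ""
    # primary_ts obtained in Step 1 (upstream snapshot)
    snapshot = "435953225454059520"

[data-sources.downtidb]
    host = "172.16.0.2"
    port = 4000
    user = "root"
    password = ""
    # secondary_ts obtained in Step 1 (downstream snapshot)
    snapshot = "435953235516456963"
```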

@@ -70,6 +96,6 @@ Here is a configuration example of the `Datasource config` section:
## Notes

- Before TiCDC creates a changefeed, make sure that the value of the TiCDC configuration item `enable-sync-point` is set to `true`. Only in this way, Syncpoint is enabled and the `ts-map` is saved in the downstream. For the complete configuration, see [TiCDC task configuration file](/ticdc/ticdc-changefeed-config.md).
- Modify the Garbage Collection (GC) time of TiKV to ensure that the historical data corresponding to snapshot is not collected by GC during the data check. It is recommended that you modify the GC time to 1 hour and recover the setting after the check.
- When you perform data validation using Syncpoint, you need to modify the Garbage Collection (GC) time of TiKV to ensure that the historical data corresponding to the snapshot is not collected by GC during the data check. It is recommended that you modify the GC time to 1 hour and restore the setting after the check, as shown in the example after this list.
- The above example only shows the section of `Datasource config`. For complete configuration, refer to [sync-diff-inspector User Guide](/sync-diff-inspector/sync-diff-inspector-overview.md).
- Since v6.4.0, only the changefeed with the `SYSTEM_VARIABLES_ADMIN` or `SUPER` privilege can use the TiCDC Syncpoint feature.
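
For reference, the GC adjustment mentioned in the notes above can be done as follows (a sketch; `1h` follows the recommendation, and `10m0s` is assumed to be the value you want to restore afterwards):

```sql
-- Before the data check: extend the GC lifetime so the snapshots in ts-map are kept.
SET GLOBAL tidb_gc_life_time = '1h';

-- After the data check: restore your original setting (10m0s is the TiDB default).
SET GLOBAL tidb_gc_life_time = '10m0s';
```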
