cdc: remove maxwell protocol, add simple protocol (pingcap#18286)
Oreoxmt authored Jul 29, 2024
1 parent ac22dfe · commit cd3fbda
Showing 10 changed files with 7 additions and 12 deletions.
2 changes: 0 additions & 2 deletions releases/release-4.0.10.md
@@ -34,7 +34,6 @@ TiDB version: 4.0.10

+ TiCDC

- Enable the old value feature for the `maxwell` protocol [#1144](https://github.com/pingcap/tiflow/pull/1144)
- Enable the unified sorter feature by default [#1230](https://github.com/pingcap/tiflow/pull/1230)

+ Dumpling
@@ -83,7 +82,6 @@ TiDB version: 4.0.10

+ TiCDC

- Fix the `maxwell` protocol issues, including the issue of `base64` data output and the issue of outputting TSO to unix timestamp [#1173](https://github.com/pingcap/tiflow/pull/1173)
- Fix a bug that outdated metadata might cause the newly created changefeed abnormal [#1184](https://github.com/pingcap/tiflow/pull/1184)
- Fix the issue of creating the receiver on the closed notifier [#1199](https://github.com/pingcap/tiflow/pull/1199)
- Fix a bug that the TiCDC owner might consume too much memory in the etcd watch client [#1227](https://github.com/pingcap/tiflow/pull/1227)
2 changes: 0 additions & 2 deletions releases/release-4.0.6.md
@@ -27,8 +27,6 @@ TiDB version: 4.0.6

+ TiCDC (GA since v4.0.6)

- Support outputting data in the `maxwell` format [#869](https://github.com/pingcap/tiflow/pull/869)

## Improvements

+ TiDB
1 change: 0 additions & 1 deletion releases/release-4.0.8.md
@@ -149,7 +149,6 @@ TiDB version: 4.0.8

- Fix the unexpected exit caused by the failure to update the GC safepoint [#979](https://github.com/pingcap/tiflow/pull/979)
- Fix the issue that the task status is unexpectedly flushed because of the incorrect mod revision cache [#1017](https://github.com/pingcap/tiflow/pull/1017)
- Fix the unexpected empty Maxwell messages [#978](https://github.com/pingcap/tiflow/pull/978)

+ TiDB Lightning

2 changes: 1 addition & 1 deletion releases/release-5.0.6.md
@@ -151,7 +151,7 @@ TiDB version: 5.0.6
- Fix the issue that Avro sink does not support parsing JSON type columns [#3624](https://github.com/pingcap/tiflow/issues/3624)
- Fix the bug that TiCDC reads the incorrect schema snapshot from TiKV when the TiKV owner restarts [#2603](https://github.com/pingcap/tiflow/issues/2603)
- Fix the memory leak issue after processing DDLs [#3174](https://github.com/pingcap/ticdc/issues/3174)
- Fix the bug that the `enable-old-value` configuration item is not automatically set to `true` on Canal and Maxwell protocols [#3676](https://github.com/pingcap/tiflow/issues/3676)
- Fix the bug that the `enable-old-value` configuration item is not automatically set to `true` on the Canal protocol [#3676](https://github.com/pingcap/tiflow/issues/3676)
- Fix the timezone error that occurs when the `cdc server` command runs on some Red Hat Enterprise Linux releases (such as 6.8 and 6.9) [#3584](https://github.com/pingcap/tiflow/issues/3584)
- Fix the issue of the inaccurate `txn_batch_size` monitoring metric for Kafka sink [#3431](https://github.com/pingcap/tiflow/issues/3431)
- Fix the issue that `tikv_cdc_min_resolved_ts_no_change_for_1m` keeps alerting when there is no changefeed [#11017](https://github.com/tikv/tikv/issues/11017)
2 changes: 1 addition & 1 deletion releases/release-5.1.4.md
@@ -153,7 +153,7 @@ TiDB version: 5.1.4
- Fix the TiCDC panic issue that occurs when manually cleaning the task status in etcd [#2980](https://github.com/pingcap/tiflow/issues/2980)
- Fix the issue that the service cannot be started because of a timezone issue in the RHEL release [#3584](https://github.com/pingcap/tiflow/issues/3584)
- Fix the issue of overly frequent warnings caused by MySQL sink deadlock [#2706](https://github.com/pingcap/tiflow/issues/2706)
- Fix the bug that the `enable-old-value` configuration item is not automatically set to `true` on Canal and Maxwell protocols [#3676](https://github.com/pingcap/tiflow/issues/3676)
- Fix the bug that the `enable-old-value` configuration item is not automatically set to `true` on the Canal protocol [#3676](https://github.com/pingcap/tiflow/issues/3676)
- Fix the issue that Avro sink does not support parsing JSON type columns [#3624](https://github.com/pingcap/tiflow/issues/3624)
- Fix the negative value error in the changefeed checkpoint lag [#3010](https://github.com/pingcap/ticdc/issues/3010)
- Fix the OOM issue in the container environment [#1798](https://github.com/pingcap/tiflow/issues/1798)
2 changes: 1 addition & 1 deletion releases/release-5.2.4.md
@@ -182,7 +182,7 @@ TiDB version: 5.2.4
- Fix the issue that the service cannot be started because of a timezone issue in the RHEL release [#3584](https://github.com/pingcap/tiflow/issues/3584)
- Fix the issue that `stopped` changefeeds resume automatically after a cluster upgrade [#3473](https://github.com/pingcap/tiflow/issues/3473)
- Fix the issue of overly frequent warnings caused by MySQL sink deadlock [#2706](https://github.com/pingcap/tiflow/issues/2706)
- Fix the bug that the `enable-old-value` configuration item is not automatically set to `true` on Canal and Maxwell protocols [#3676](https://github.com/pingcap/tiflow/issues/3676)
- Fix the bug that the `enable-old-value` configuration item is not automatically set to `true` on the Canal protocol [#3676](https://github.com/pingcap/tiflow/issues/3676)
- Fix the issue that Avro sink does not support parsing JSON type columns [#3624](https://github.com/pingcap/tiflow/issues/3624)
- Fix the negative value error in the changefeed checkpoint lag [#3010](https://github.com/pingcap/tiflow/issues/3010)
- Fix the OOM issue in the container environment [#1798](https://github.com/pingcap/tiflow/issues/1798)
2 changes: 1 addition & 1 deletion releases/release-5.3.1.md
@@ -143,7 +143,7 @@ TiDB version: 5.3.1
- Fix the issue that `stopped` changefeeds resume automatically after a cluster upgrade [#3473](https://github.com/pingcap/tiflow/issues/3473)
- Fix the issue that default values cannot be replicated [#3793](https://github.com/pingcap/tiflow/issues/3793)
- Fix the issue of overly frequent warnings caused by MySQL sink deadlock [#2706](https://github.com/pingcap/tiflow/issues/2706)
- Fix the bug that the `enable-old-value` configuration item is not automatically set to `true` on Canal and Maxwell protocols [#3676](https://github.com/pingcap/tiflow/issues/3676)
- Fix the bug that the `enable-old-value` configuration item is not automatically set to `true` on the Canal protocol [#3676](https://github.com/pingcap/tiflow/issues/3676)
- Fix the issue that Avro sink does not support parsing JSON type columns [#3624](https://github.com/pingcap/tiflow/issues/3624)
- Fix the negative value error in the changefeed checkpoint lag [#3010](https://github.com/pingcap/tiflow/issues/3010)

2 changes: 1 addition & 1 deletion ticdc/ticdc-open-api-v2.md
@@ -310,7 +310,7 @@ The `sink` parameters are described as follows:
| `date_separator` | `STRING` type. Indicates the date separator type of the file directory. Value options are `none`, `year`, `month`, and `day`. `none` is the default value and means that the date is not separated. (Optional) |
| `dispatchers` | A configuration array for event dispatching. (Optional) |
| `encoder_concurrency` | `INT` type. The number of encoder threads in the MQ sink. The default value is `16`. (Optional) |
| `protocol` | `STRING` type. For MQ sinks, you can specify the protocol format of the message. The following protocols are currently supported: `canal-json`, `open-protocol`, `avro`, and `maxwell`. |
| `protocol` | `STRING` type. For MQ sinks, you can specify the protocol format of the message. The following protocols are currently supported: `canal-json`, `open-protocol`, `avro`, `debezium`, and `simple`. |
| `schema_registry` | `STRING` type. The schema registry address. (Optional) |
| `terminator` | `STRING` type. The terminator is used to separate two data change events. The default value is null, which means `"\r\n"` is used as the terminator. (Optional) |
| `transaction_atomicity` | `STRING` type. The atomicity level of the transaction. (Optional) |
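
As a point of reference, the sketch below shows how these `sink` parameters might be supplied when creating a changefeed through the Open API v2. The server address, topic, changefeed ID, and the exact nesting under `replica_config` are illustrative assumptions; check the API reference for your TiCDC version before relying on them.

```shell
# Hypothetical example: create a Kafka changefeed that uses the `simple`
# protocol via the Open API v2. Addresses, names, and field nesting are
# placeholders, not values taken from this commit.
curl -X POST http://127.0.0.1:8300/api/v2/changefeeds \
  -H "Content-Type: application/json" \
  -d '{
    "changefeed_id": "kafka-simple-demo",
    "sink_uri": "kafka://127.0.0.1:9092/demo-topic",
    "replica_config": {
      "sink": {
        "protocol": "simple",
        "encoder_concurrency": 16
      }
    }
  }'
```

The `protocol` value can also be carried as a query parameter in `sink_uri`; whichever form is used should match what the downstream consumer expects.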
2 changes: 1 addition & 1 deletion ticdc/ticdc-open-api.md
@@ -163,7 +163,7 @@ The configuration parameters of sink are as follows:

`matcher`: The matching syntax of matcher is the same as the filter rule syntax.

`protocol`: For the sink of MQ type, you can specify the protocol format of the message. Currently the following protocols are supported: `canal-json`, `open-protocol`, `avro`, and `maxwell`.
`protocol`: For the sink of MQ type, you can specify the protocol format of the message. Currently the following protocols are supported: `canal-json`, `open-protocol`, `avro`, `debezium`, and `simple`.
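
For illustration only, a request along the following lines could pass the `matcher` and `protocol` settings when creating a changefeed through the v1 API; the endpoint path, addresses, and the layout of the `sink` object are assumptions to verify against your TiCDC version, and the Example section that follows remains the reference.

```shell
# Hypothetical v1 request: dispatch test1.* and test2.* by ts and emit
# messages in the `simple` format. Verify the body schema for your version.
curl -X POST http://127.0.0.1:8300/api/v1/changefeeds \
  -H "Content-Type: application/json" \
  -d '{
    "changefeed_id": "kafka-simple-v1",
    "sink_uri": "kafka://127.0.0.1:9092/demo-topic",
    "sink": {
      "dispatchers": [
        { "matcher": ["test1.*", "test2.*"], "dispatcher": "ts" }
      ],
      "protocol": "simple"
    }
  }'
```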

### Example

2 changes: 1 addition & 1 deletion ticdc/ticdc-sink-to-kafka.md
@@ -80,7 +80,7 @@ The following are descriptions of sink URI parameters and values that can be con
| `replication-factor` | The number of Kafka message replicas that can be saved (optional, `1` by default). This value must be greater than or equal to the value of [`min.insync.replicas`](https://kafka.apache.org/33/documentation.html#brokerconfigs_min.insync.replicas) in Kafka. |
| `required-acks` | A parameter used in the `Produce` request, which notifies the broker of the number of replica acknowledgements it needs to receive before responding. Value options are `0` (`NoResponse`: no response, only `TCP ACK` is provided), `1` (`WaitForLocal`: responds only after local commits are submitted successfully), and `-1` (`WaitForAll`: responds after all replicated replicas are committed successfully. You can configure the minimum number of replicated replicas using the [`min.insync.replicas`](https://kafka.apache.org/33/documentation.html#brokerconfigs_min.insync.replicas) configuration item of the broker). (Optional, the default value is `-1`). |
| `compression` | The compression algorithm used when sending messages (value options are `none`, `lz4`, `gzip`, `snappy`, and `zstd`; `none` by default). Note that the Snappy compressed file must be in the [official Snappy format](https://github.com/google/snappy). Other variants of Snappy compression are not supported.|
| `protocol` | The protocol with which messages are output to Kafka. The value options are `canal-json`, `open-protocol`, `avro` and `maxwell`. |
| `protocol` | The protocol with which messages are output to Kafka. The value options are `canal-json`, `open-protocol`, `avro`, `debezium`, and `simple`. |
| `auto-create-topic` | Determines whether TiCDC creates the topic automatically when the `topic-name` passed in does not exist in the Kafka cluster (optional, `true` by default). |
| `enable-tidb-extension` | Optional. `false` by default. When the output protocol is `canal-json`, if the value is `true`, TiCDC sends [WATERMARK events](/ticdc/ticdc-canal-json.md#watermark-event) and adds the [TiDB extension field](/ticdc/ticdc-canal-json.md#tidb-extension-field) to Kafka messages. From v6.1.0, this parameter is also applicable to the `avro` protocol. If the value is `true`, TiCDC adds [three TiDB extension fields](/ticdc/ticdc-avro-protocol.md#tidb-extension-fields) to the Kafka message. |
| `max-batch-size` | New in v4.0.9. If the message protocol supports outputting multiple data changes to one Kafka message, this parameter specifies the maximum number of data changes in one Kafka message. It currently takes effect only when Kafka's `protocol` is `open-protocol` (optional, `16` by default). |
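
Putting a few of these parameters together, a changefeed creation command might look like the following sketch. The cluster addresses, topic, and changefeed ID are placeholders, and flag names can vary slightly between TiCDC releases.

```shell
# Hypothetical example: replicate to Kafka using the `simple` protocol,
# with acknowledgements from all in-sync replicas and lz4 compression.
cdc cli changefeed create \
  --server=http://127.0.0.1:8300 \
  --changefeed-id="kafka-simple-task" \
  --sink-uri="kafka://127.0.0.1:9092/demo-topic?protocol=simple&replication-factor=1&required-acks=-1&compression=lz4"
```

As the table above notes, `replication-factor` must be greater than or equal to Kafka's `min.insync.replicas` for `required-acks=-1` to be satisfied.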
