
Commit 4d591ad

Merge branch 'release-3.0' into automated-cherry-pick-of-pingcap#11047-release-3.0
2 parents b43af42 + e2426c3 commit 4d591ad


61 files changed (+1488 / -223 lines)

CHANGELOG.md (+142 lines)
@@ -1,6 +1,148 @@
# TiDB Changelog
All notable changes to this project will be documented in this file. See also [Release Notes](https://github.com/pingcap/docs/blob/master/releases/rn.md), [TiKV Changelog](https://github.com/tikv/tikv/blob/master/CHANGELOG.md) and [PD Changelog](https://github.com/pingcap/pd/blob/master/CHANGELOG.md).

## [3.0.0] 2019-06-28
## New Features
* Support Window Functions; compatible with all window functions in MySQL 8.0, including `NTILE`, `LEAD`, `LAG`, `PERCENT_RANK`, `NTH_VALUE`, `CUME_DIST`, `FIRST_VALUE`, `LAST_VALUE`, `RANK`, `DENSE_RANK`, and `ROW_NUMBER` (see the sketch after this list)
* Support Views (Experimental)
* Improve Table Partition
    - Support Range Partition
    - Support Hash Partition
* Add the plug-in framework, supporting plugins such as IP Whitelist (Enterprise feature) and Audit Log (Enterprise feature)
* Support the SQL Plan Management function to create SQL execution plan bindings to ensure query stability (Experimental)

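To make the window-function item above concrete, here is a minimal SQL sketch; the `emp` table and its columns are hypothetical and not part of this commit:

```sql
-- Hypothetical table, used only to illustrate the window functions listed above.
CREATE TABLE emp (
    id INT PRIMARY KEY,
    dept VARCHAR(20),
    salary DECIMAL(10, 2)
);

-- Rank employees within each department by salary using the newly supported
-- window functions.
SELECT
    dept,
    id,
    salary,
    RANK()       OVER (PARTITION BY dept ORDER BY salary DESC) AS salary_rank,
    ROW_NUMBER() OVER (PARTITION BY dept ORDER BY salary DESC) AS row_num
FROM emp;
```
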
## SQL Optimizer
* Optimize the `NOT EXISTS` subquery and convert it to `Anti Semi Join` to improve performance
* Optimize the constant propagation on the `Outer Join`, and add the optimization rule of `Outer Join` elimination to reduce non-effective computations and improve performance
* Optimize the `IN` subquery to execute `Inner Join` after aggregation to improve performance
* Optimize `Index Join` to adapt to more scenarios
* Improve the Partition Pruning optimization rule of Range Partition
* Optimize the query logic for `_tidb_rowid` to avoid full table scan and improve performance
* Match more prefix columns of the indexes when extracting access conditions of composite indexes if there are relevant columns in the filter to improve performance
* Improve the accuracy of cost estimates by using order correlation between columns
* Optimize `Join Reorder` based on the Greedy algorithm and the dynamic programming algorithm to improve accuracy for index selection using `Join`
* Support Skyline Pruning, with some rules to prevent the execution plan from relying too heavily on statistics, to improve query stability
* Improve the accuracy of row count estimation for single-column indexes with NULL values
* Support `FAST ANALYZE`, which randomly samples in each Region to avoid full table scan and improve the performance of statistics collection (see the sketch after this list)
* Support the incremental Analyze operation on monotonically increasing index columns to improve the performance of statistics collection
* Support using subqueries in the `DO` statement
* Support using `Index Join` in transactions
* Optimize `prepare`/`execute` to support DDL statements with no parameters
* Modify the system behavior to automatically load statistics when the `stats-lease` variable value is 0
* Support exporting historical statistics
* Support the `dump`/`load` correlation of histograms

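A sketch of how the statistics-collection items above might be exercised. The `tidb_enable_fast_analyze` variable name and the `ANALYZE INCREMENTAL` form are assumptions drawn from TiDB 3.0 documentation rather than from this commit, and the table and index names are hypothetical:

```sql
-- Assumed session switch for sampling-based statistics collection.
SET SESSION tidb_enable_fast_analyze = 1;

-- With fast analyze enabled, each Region is randomly sampled instead of
-- being scanned in full.
ANALYZE TABLE t;

-- Incremental analyze on a monotonically increasing index column
-- (hypothetical index name idx_created_at).
ANALYZE INCREMENTAL TABLE t INDEX idx_created_at;
```
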
## SQL Execution Engine
* Optimize log output: `EXECUTE` outputs user variables and `COMMIT` outputs slow query logs to facilitate troubleshooting
* Support the `EXPLAIN ANALYZE` function to improve SQL tuning usability
* Support the `admin show next_row_id` command to get the ID of the next row
* Add six built-in functions: `JSON_QUOTE`, `JSON_ARRAY_APPEND`, `JSON_MERGE_PRESERVE`, `BENCHMARK`, `COALESCE`, and `NAME_CONST`
* Optimize the control logic of the chunk size to dynamically adjust it based on the query context, reducing SQL execution time and resource consumption
* Support tracking and controlling memory usage in three operators: `TableReader`, `IndexReader`, and `IndexLookupReader`
* Optimize the Merge Join operator to support an empty `ON` condition
* Optimize write performance for single tables that contain many columns
* Improve the performance of `admin show ddl jobs` by supporting scanning data in reverse order
* Add the `split table region` statement to manually split the table Region to alleviate the hotspot issue (see the sketch after this list)
* Add the `split index region` statement to manually split the index Region to alleviate the hotspot issue
* Add a blacklist to prohibit pushing down expressions to Coprocessor
* Optimize the `Expensive Query` log to print the SQL query in the log when it exceeds the configured limit of execution time or memory usage

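A possible use of the new split statements, assuming a table `t` whose row IDs fall in [0, 1000000) and an index `idx_name`; the names and ranges are illustrative only:

```sql
-- Pre-split the data of table t into 16 Regions over the row-ID range
-- to spread the write load across TiKV stores.
SPLIT TABLE t BETWEEN (0) AND (1000000) REGIONS 16;

-- Pre-split the Regions of index idx_name by its key range.
SPLIT TABLE t INDEX idx_name BETWEEN ("a") AND ("z") REGIONS 16;
```
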
## DDL
* Support migrating from character set `utf8` to `utf8mb4`
* Change the default character set from `utf8` to `utf8mb4`
* Add the `alter schema` statement to modify the character set and the collation of the database
* Support the ALTER algorithms `INPLACE`/`INSTANT`
* Support `SHOW CREATE VIEW`
* Support `SHOW CREATE USER`
* Support fast recovery of mistakenly deleted tables
* Support dynamically adjusting the concurrency of `ADD INDEX`
* Add the `pre_split_regions` option that pre-allocates Regions when creating a table with the `CREATE TABLE` statement, to relieve write hotspots caused by heavy writes right after the table creation (see the sketch after this list)
* Support splitting Regions by the index and range of the table specified using SQL statements to relieve hotspot issues
* Add the `ddl_error_count_limit` global variable to limit the number of DDL task retries
* Add a feature to use `SHARD_ROW_ID_BITS` to scatter row IDs when the column contains an `AUTO_INCREMENT` attribute to relieve the hotspot issue
* Optimize the lifetime of invalid DDL metadata to speed up recovering the normal execution of DDL operations after upgrading the TiDB cluster

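A sketch combining `SHARD_ROW_ID_BITS`, `PRE_SPLIT_REGIONS`, and the database-level character set change; the table, column, and database names are made up for illustration, and the exact option semantics should be checked against the TiDB 3.0 documentation:

```sql
-- Scatter the hidden row IDs across 2^4 shards and pre-split 2^3 = 8 Regions
-- at table creation time (the table has no integer primary key, so rows are
-- keyed by the hidden _tidb_rowid).
CREATE TABLE access_log (
    id BIGINT NOT NULL AUTO_INCREMENT,
    msg VARCHAR(255),
    KEY (id)
) SHARD_ROW_ID_BITS = 4 PRE_SPLIT_REGIONS = 3;

-- Change the character set and collation of an existing (hypothetical) database.
ALTER DATABASE app_db CHARACTER SET utf8mb4 COLLATE utf8mb4_bin;
```
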
## Transactions
* Support the pessimistic transaction model (Experimental) (see the sketch after this list)
* Optimize transaction processing logic to adapt to more scenarios:
    - Change the default value of `tidb_disable_txn_auto_retry` to `on`, which means non-auto-committed transactions will not be retried
    - Add the `tidb_batch_commit` system variable to split a transaction into multiple ones to be executed concurrently
    - Add the `tidb_low_resolution_tso` system variable to control the number of TSOs obtained in batches and reduce the number of times that transactions request TSOs, to improve performance in scenarios with a relatively low consistency requirement
    - Add the `tidb_skip_isolation_level_check` variable to control whether to report errors when the isolation level is set to SERIALIZABLE
    - Modify the `tidb_disable_txn_auto_retry` system variable to make it work on all retryable errors

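A brief sketch of the pessimistic transaction model; it assumes pessimistic transactions have been enabled in the server configuration, and the `accounts` table is hypothetical:

```sql
-- Start an explicitly pessimistic transaction.
BEGIN PESSIMISTIC;

-- Row locks are acquired as each statement executes (MySQL/InnoDB style),
-- so conflicts surface immediately rather than at commit time.
UPDATE accounts SET balance = balance - 100 WHERE id = 1;
UPDATE accounts SET balance = balance + 100 WHERE id = 2;

COMMIT;
```
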
## Permission Management
* Perform permission checks on the `ANALYZE`, `USE`, `SET GLOBAL`, and `SHOW PROCESSLIST` statements
* Support Role Based Access Control (RBAC) (Experimental) (see the sketch after this list)

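A sketch of the experimental RBAC support using MySQL 8.0-style role statements; the role, user, and database names are hypothetical:

```sql
-- Create a role and grant privileges to it.
CREATE ROLE 'analyst';
GRANT SELECT ON app_db.* TO 'analyst';

-- Create a user and assign the role.
CREATE USER 'alice'@'%' IDENTIFIED BY 'secret';
GRANT 'analyst' TO 'alice'@'%';

-- Make the role active by default for the user.
SET DEFAULT ROLE ALL TO 'alice'@'%';
```
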
## Server
* Optimize slow query logs:
    - Restructure the log format
    - Optimize the log content
    - Optimize the log query method to support querying slow query logs using the `INFORMATION_SCHEMA.SLOW_QUERY` memory table and the `ADMIN SHOW SLOW` statement (see the sketch after this list)
* Develop a unified log format specification with a restructured log system to facilitate collection and analysis by tools
* Support using SQL statements to manage Binlog services, including querying the status, enabling Binlog, and maintaining and sending Binlog strategies
* Support using `unix_socket` to connect to the database
* Support `Trace` for SQL statements
* Support getting information for a TiDB instance via the `/debug/zip` HTTP interface to facilitate troubleshooting
* Optimize monitoring items to facilitate troubleshooting:
    - Add the `high_error_rate_feedback_total` monitoring item to monitor the difference between the actual data volume and the estimated data volume based on statistics
    - Add a QPS monitoring item in the database dimension
* Optimize the system initialization process to only allow the DDL owner to perform the initialization, which reduces the startup time for initialization or upgrading
* Optimize the execution logic of `kill query` to improve performance and ensure resources are released properly
* Add the `config-check` startup option to check the validity of the configuration file
* Add the `tidb_back_off_weight` system variable to control the backoff time of internal error retries
* Add the `wait_timeout` and `interactive_timeout` system variables to control how long idle connections are allowed to remain open
* Add a connection pool for TiKV to shorten connection establishment time

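A sketch of querying slow query logs through SQL, as mentioned in the slow-query item above; the column names are illustrative and should be checked against the actual `SLOW_QUERY` schema:

```sql
-- Inspect the ten slowest recent statements from the memory table.
SELECT query_time, query
FROM INFORMATION_SCHEMA.SLOW_QUERY
WHERE is_internal = 0
ORDER BY query_time DESC
LIMIT 10;

-- Alternatively, use the ADMIN statement form.
ADMIN SHOW SLOW TOP 10;
```
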
## Compatibility
* Support the `ALLOW_INVALID_DATES` SQL mode (see the sketch after this list)
* Support the MySQL 320 Handshake protocol
* Support manifesting unsigned BIGINT columns as auto-increment columns
* Support the `SHOW CREATE DATABASE IF NOT EXISTS` syntax
* Optimize the fault tolerance of `load data` for CSV files
* Abandon the predicate pushdown operation when the filtering condition contains a user variable to improve the compatibility with MySQL's behavior of using user variables to simulate Window Functions

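A minimal sketch of the `ALLOW_INVALID_DATES` SQL mode; the table is hypothetical:

```sql
-- Permit dates that only need a month in 1-12 and a day in 1-31,
-- such as February 30, instead of rejecting them under strict date checking.
SET SESSION sql_mode = 'ALLOW_INVALID_DATES';

CREATE TABLE d (dt DATE);
INSERT INTO d VALUES ('2019-02-30');
```
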
## [3.0.0-rc.3] 2019-06-21
## SQL Optimizer
* Remove the feature of collecting virtual generated column statistics [#10629](https://github.com/pingcap/tidb/pull/10629)
* Fix the issue that the primary key constant overflows during point queries [#10699](https://github.com/pingcap/tidb/pull/10699)
* Fix the issue that using uninitialized information in `fast analyze` causes panic [#10691](https://github.com/pingcap/tidb/pull/10691)
* Fix the issue that executing the `create view` statement using `prepare` causes panic because of wrong column information [#10713](https://github.com/pingcap/tidb/pull/10713)
* Fix the issue that the column information is not cloned when handling window functions [#10720](https://github.com/pingcap/tidb/pull/10720)
* Fix the wrong estimation of the selectivity rate of the inner table selection in index join [#10854](https://github.com/pingcap/tidb/pull/10854)
* Support automatically loading statistics when the `stats-lease` variable value is 0 [#10811](https://github.com/pingcap/tidb/pull/10811)

## Execution Engine
* Fix the issue that resources are not correctly released when calling the `Close` function in `StreamAggExec` [#10636](https://github.com/pingcap/tidb/pull/10636)
* Fix the issue that the order of `table_option` and `partition_options` is incorrect in the result of executing the `show create table` statement for partitioned tables [#10689](https://github.com/pingcap/tidb/pull/10689)
* Improve the performance of `admin show ddl jobs` by supporting scanning data in reverse order [#10687](https://github.com/pingcap/tidb/pull/10687)
* Fix the issue that the result of the `show grants` statement in RBAC is incompatible with that of MySQL when this statement has the `current_user` field [#10684](https://github.com/pingcap/tidb/pull/10684)
* Fix the issue that UUIDs might generate duplicate values on multiple nodes [#10712](https://github.com/pingcap/tidb/pull/10712)
* Fix the issue that the `show view` privilege is not considered in `explain` [#10635](https://github.com/pingcap/tidb/pull/10635)
* Add the `split table region` statement to manually split the table Region to alleviate the hotspot issue [#10765](https://github.com/pingcap/tidb/pull/10765)
* Add the `split index region` statement to manually split the index Region to alleviate the hotspot issue [#10764](https://github.com/pingcap/tidb/pull/10764)
* Fix the incorrect execution issue when you execute multiple statements such as `create user`, `grant`, or `revoke` consecutively [#10737](https://github.com/pingcap/tidb/pull/10737)
* Add a blacklist to prohibit pushing down expressions to Coprocessor [#10791](https://github.com/pingcap/tidb/pull/10791) (see the sketch after this list)
* Add the feature of printing the `expensive query` log when a query exceeds the memory configuration limit [#10849](https://github.com/pingcap/tidb/pull/10849)
* Add the `bind-info-lease` configuration item to control the update time of the modified binding execution plan [#10727](https://github.com/pingcap/tidb/pull/10727)
* Fix the OOM issue in highly concurrent scenarios caused by the failure to quickly release Coprocessor resources, resulting from the `execdetails.ExecDetails` pointer [#10832](https://github.com/pingcap/tidb/pull/10832)
* Fix the panic issue caused by the `kill` statement in some cases [#10876](https://github.com/pingcap/tidb/pull/10876)
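A sketch of the expression pushdown blacklist referenced above. The `mysql.expr_pushdown_blacklist` table and the `ADMIN RELOAD` statement are assumptions drawn from TiDB 3.0 documentation rather than from this commit:

```sql
-- Assumed mechanism: a row in the mysql.expr_pushdown_blacklist system table
-- stops the named expression from being pushed down to the Coprocessor.
INSERT INTO mysql.expr_pushdown_blacklist (name) VALUES ('date_add');

-- Reload the blacklist so the change takes effect on this TiDB instance.
ADMIN RELOAD expr_pushdown_blacklist;
```
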
## Server
* Fix the issue that goroutines might leak when repairing GC [#10683](https://github.com/pingcap/tidb/pull/10683)
* Support displaying the `host` information in slow queries [#10693](https://github.com/pingcap/tidb/pull/10693)
* Support reusing idle connections that interact with TiKV [#10632](https://github.com/pingcap/tidb/pull/10632)
* Fix the support for enabling the `skip-grant-table` option in RBAC [#10738](https://github.com/pingcap/tidb/pull/10738)
* Fix the issue that the `pessimistic-txn` configuration does not take effect [#10825](https://github.com/pingcap/tidb/pull/10825)
* Fix the issue that actively cancelled ticlient requests are still retried [#10850](https://github.com/pingcap/tidb/pull/10850)
* Improve performance in the case where pessimistic transactions conflict with optimistic transactions [#10881](https://github.com/pingcap/tidb/pull/10881)
## DDL
* Fix the issue that modifying the charset using `alter table` causes the `blob` type change [#10698](https://github.com/pingcap/tidb/pull/10698)
* Add a feature to use `SHARD_ROW_ID_BITS` to scatter row IDs when the column contains an `AUTO_INCREMENT` attribute to alleviate the hotspot issue [#10794](https://github.com/pingcap/tidb/pull/10794)
* Prohibit adding stored generated columns by using the `alter table` statement [#10808](https://github.com/pingcap/tidb/pull/10808)
* Optimize the invalid survival time of DDL metadata to shorten the period during which the DDL operation is slower after cluster upgrade [#10795](https://github.com/pingcap/tidb/pull/10795)

## [3.0.0-rc.2] 2019-05-28
### SQL Optimizer
* Support Index Join in more scenarios

cmd/explaintest/r/explain_easy.result (+32 lines)
@@ -192,6 +192,35 @@ HashAgg_18 24000.00 root group by:c1, funcs:firstrow(join_agg_0)
└─IndexReader_62 8000.00 root index:StreamAgg_53
└─StreamAgg_53 8000.00 cop group by:test.t2.c1, funcs:firstrow(test.t2.c1), firstrow(test.t2.c1)
└─IndexScan_60 10000.00 cop table:t2, index:c1, range:[NULL,+inf], keep order:true, stats:pseudo
explain select count(1) from (select count(1) from (select * from t1 where c3 = 100) k) k2;
id count task operator info
StreamAgg_13 1.00 root funcs:count(1)
└─StreamAgg_28 1.00 root funcs:firstrow(col_0)
└─TableReader_29 1.00 root data:StreamAgg_17
└─StreamAgg_17 1.00 cop funcs:firstrow(1)
└─Selection_27 10.00 cop eq(test.t1.c3, 100)
└─TableScan_26 10000.00 cop table:t1, range:[-inf,+inf], keep order:false, stats:pseudo
explain select 1 from (select count(c2), count(c3) from t1) k;
id count task operator info
Projection_5 1.00 root 1
└─StreamAgg_17 1.00 root funcs:firstrow(col_0)
└─TableReader_18 1.00 root data:StreamAgg_9
└─StreamAgg_9 1.00 cop funcs:firstrow(1)
└─TableScan_16 10000.00 cop table:t1, range:[-inf,+inf], keep order:false, stats:pseudo
explain select count(1) from (select max(c2), count(c3) as m from t1) k;
id count task operator info
StreamAgg_11 1.00 root funcs:count(1)
└─StreamAgg_23 1.00 root funcs:firstrow(col_0)
└─TableReader_24 1.00 root data:StreamAgg_15
└─StreamAgg_15 1.00 cop funcs:firstrow(1)
└─TableScan_22 10000.00 cop table:t1, range:[-inf,+inf], keep order:false, stats:pseudo
explain select count(1) from (select count(c2) from t1 group by c3) k;
id count task operator info
StreamAgg_11 1.00 root funcs:count(1)
└─HashAgg_23 8000.00 root group by:col_1, funcs:firstrow(col_0)
└─TableReader_24 8000.00 root data:HashAgg_20
└─HashAgg_20 8000.00 cop group by:test.t1.c3, funcs:firstrow(1)
└─TableScan_15 10000.00 cop table:t1, range:[-inf,+inf], keep order:false, stats:pseudo
set @@session.tidb_opt_insubq_to_join_and_agg=0;
explain select sum(t1.c1 in (select c1 from t2)) from t1;
id count task operator info
@@ -434,6 +463,9 @@ id count task operator info
Projection_3 10000.00 root or(NULL, gt(test.t.a, 1))
└─TableReader_5 10000.00 root data:TableScan_4
└─TableScan_4 10000.00 cop table:t, range:[-inf,+inf], keep order:false, stats:pseudo
explain select * from t where a = 1 for update;
id count task operator info
Point_Get_1 1.00 root table:t, handle:1
drop table if exists ta, tb;
create table ta (a varchar(20));
create table tb (a varchar(20));

cmd/explaintest/t/explain_easy.test (+7 lines)
@@ -35,6 +35,12 @@ explain select if(10, t1.c1, t1.c2) from t1;
explain select c1 from t2 union select c1 from t2 union all select c1 from t2;
explain select c1 from t2 union all select c1 from t2 union select c1 from t2;

# https://github.com/pingcap/tidb/issues/9125
explain select count(1) from (select count(1) from (select * from t1 where c3 = 100) k) k2;
explain select 1 from (select count(c2), count(c3) from t1) k;
explain select count(1) from (select max(c2), count(c3) as m from t1) k;
explain select count(1) from (select count(c2) from t1 group by c3) k;

set @@session.tidb_opt_insubq_to_join_and_agg=0;

explain select sum(t1.c1 in (select c1 from t2)) from t1;
@@ -79,6 +85,7 @@ drop table if exists t;
create table t(a bigint primary key);
explain select * from t where a = 1 and a = 2;
explain select null or a > 1 from t;
explain select * from t where a = 1 for update;

drop table if exists ta, tb;
create table ta (a varchar(20));
