ddl: fix reassigned partition id in truncate table does not take effect #8102

Closed. Wants to merge 41 commits.
Commits (41)
8823f12
session: set Sleep state for process info (#7826) (#7839)
coocood Oct 8, 2018
e6025cb
executor: refine the precision for avg (#7860) (#7874)
XuHuaiyu Oct 11, 2018
4323e84
domain: fix memory leak for stats (#7864) (#7873)
alivxxx Oct 11, 2018
1db4288
stats: fix combined index low-bound check (#7814) (#7856)
lysu Oct 11, 2018
fbdcf63
plan: exclude IsNull from constant propagation(cherry-pick #7835) (#7…
eurekaka Oct 11, 2018
3950070
util: refine chunk.SwapColumn to rebuild the column reference (#7841)…
XuHuaiyu Oct 11, 2018
deea24f
store/tikv,executor: redesign the latch scheduler (#7711) (#7859)
tiancaiamao Oct 12, 2018
a84cce1
executor: remove some useless code and avoid some redundancy check (#…
jackysp Oct 15, 2018
8fae90e
expression: make sysdate unfoldable (#7838) (#7895)
zz-jason Oct 15, 2018
fd5d666
expression: fix panic on substring_index (#7806) (#7897)
zz-jason Oct 15, 2018
62a6be9
*: update pd client vendor (#7905)
disksing Oct 16, 2018
c91290f
stats: fix panic caused by empty histogram (#7912) (#7928)
alivxxx Oct 17, 2018
52d5ee2
*: make `explain` support `explain analyze` (#7827)(#7888) (#7925)
lysu Oct 18, 2018
4021862
stats: fix histogram boundaries overflow error (#7883) (#7944)
alivxxx Oct 18, 2018
d9137e2
executor: fix a bug in point get (#7934) (#7943)
winoros Oct 18, 2018
0f587c3
planner, executor: refine ColumnPrune for LogicalUnionAll (#7930) (#7…
XuHuaiyu Oct 18, 2018
3e3b905
expression: maintain `DeferredExpr` in aggressive constant folding. (…
eurekaka Oct 18, 2018
e48149e
executor: add the slow log for commit (#7951) (#7964)
jackysp Oct 19, 2018
60364fe
ddl: fix invalid ddl job panic (#7940) (#7958)
crazycs520 Oct 20, 2018
f6d68e6
domain: close slow query channel after closing session pool (#7847) (…
tiancaiamao Oct 22, 2018
f5d9852
domain: close slow query channel after closing session pool (#7847) (…
tiancaiamao Oct 22, 2018
f4c18b6
domain: close slow query channel after closing session pool (#7847) (…
tiancaiamao Oct 22, 2018
07a9d52
domain: close slow query channel after closing session pool (#7847) (…
tiancaiamao Oct 22, 2018
7cb4fd8
admin: fix admin check table compare bug (#7818) (#7975)
crazycs520 Oct 22, 2018
65f77f7
domain: close slow query channel after closing session pool (#7847) (…
tiancaiamao Oct 23, 2018
c2c7d3d
stats: limit the length of sample values (#7931) (#7982)
alivxxx Oct 23, 2018
b47f6a0
executor: fix panic when limit is too large (#7936) (#8002)
winoros Oct 23, 2018
59d6a93
store/tikv: log more information when other err occurs (#7948) (#8006)
winoros Oct 23, 2018
096d2b2
parser: fix bug empty string in "ESCAPED BY" subclause of "FIELDS" ca…
lzmhhh123 Oct 23, 2018
4741e6b
executor, planner: clone proj schema for different children in buildP…
XuHuaiyu Oct 23, 2018
6cb942b
types: fix bug which Float type is not effective in AddDate & SubDate…
lzmhhh123 Oct 23, 2018
500c9bb
add changelog for 2.1.0 rc4 (#8020) (#8027)
zz-jason Oct 23, 2018
80e7584
stats: fix estimation for out of range point queries (#8015) (#8035)
alivxxx Oct 24, 2018
73692d1
executor: improve wide table insert & update performance (#7935) (#8024)
lysu Oct 24, 2018
7fb086f
executor: print arguments in execute statement in log files (#7684) (…
jackysp Oct 25, 2018
1b8102a
planner: fix a panic of a prepared statement with IndexScan when usin…
dbjoa Oct 25, 2018
5707a9b
server: add log for binary execute statement (#7987) (#8063)
jackysp Oct 26, 2018
75192d7
expression: refine built-in func truncate to support uint arg (#8000)…
yu34po Oct 26, 2018
423e9b6
*: fix the issue of executing DDL after executing SQL failure in txn …
zimulala Oct 29, 2018
ebebc2a
update pkg/errors && add pump client in vendor (#8093)
WangXiangUSTC Oct 29, 2018
5d8a48e
fix reassigned partition id in truncate table does not take effect
ciscoxll Oct 30, 2018
33 changes: 32 additions & 1 deletion CHANGELOG.md
@@ -1,6 +1,37 @@
# TiDB Changelog
All notable changes to this project will be documented in this file. See also [Release Notes](https://github.com/pingcap/docs/blob/master/releases/rn.md), [TiKV Changelog](https://github.com/tikv/tikv/blob/master/CHANGELOG.md) and [PD Changelog](https://github.com/pingcap/pd/blob/master/CHANGELOG.md).

## [2.1.0-rc.4] - 2018-10-23
### SQL Optimizer
* Fix the issue that column pruning of `UnionAll` is incorrect in some cases [#7941](https://github.com/pingcap/tidb/pull/7941)
* Fix the issue that the result of the `UnionAll` operator is incorrect in some cases [#8007](https://github.com/pingcap/tidb/pull/8007)
### SQL Execution Engine
* Fix the precision issue of the `AVG` function [#7874](https://github.com/pingcap/tidb/pull/7874)
* Support using the `EXPLAIN ANALYZE` statement to check the runtime statistics including the execution time and the number of returned rows of each operator during the query execution process [#7925](https://github.com/pingcap/tidb/pull/7925)
* Fix the panic issue of the `PointGet` operator when a column of a table appears multiple times in the result set [#7943](https://github.com/pingcap/tidb/pull/7943)
* Fix the panic issue caused by too large values in the `Limit` subclause [#8002](https://github.com/pingcap/tidb/pull/8002)
* Fix the panic issue during the execution process of the `AddDate`/`SubDate` statement in some cases [#8009](https://github.com/pingcap/tidb/pull/8009)
### Statistics
* Fix the issue of judging the prefix of the histogram low-bound of the combined index as out of range [#7856](https://github.com/pingcap/tidb/pull/7856)
* Fix the memory leak issue caused by statistics collecting [#7873](https://github.com/pingcap/tidb/pull/7873)
* Fix the panic issue when the histogram is empty [#7928](https://github.com/pingcap/tidb/pull/7928)
* Fix the issue that the histogram bound is out of range when the statistics is being uploaded [#7944](https://github.com/pingcap/tidb/pull/7944)
* Limit the maximum length of values in the statistics sampling process [#7982](https://github.com/pingcap/tidb/pull/7982)
### Server
* Refactor Latch to avoid misjudgment of transaction conflicts and improve the execution performance of concurrent transactions [#7711](https://github.com/pingcap/tidb/pull/7711)
* Fix the panic issue caused by collecting slow queries in some cases [#7847](https://github.com/pingcap/tidb/pull/7847)
* Fix the panic issue when `ESCAPED BY` is an empty string in the `LOAD DATA` statement [#8005](https://github.com/pingcap/tidb/pull/8005)
* Complete the “coprocessor error” log information [#8006](https://github.com/pingcap/tidb/pull/8006)
### Compatibility
* Set the `Command` field of the `SHOW PROCESSLIST` result to `Sleep` when the query is empty [#7839](https://github.com/pingcap/tidb/pull/7839)
### Expressions
* Fix the constant folding issue of the `SYSDATE` function [#7895](https://github.com/pingcap/tidb/pull/7895)
* Fix the issue that `SUBSTRING_INDEX` panics in some cases [#7897](https://github.com/pingcap/tidb/pull/7897)
### DDL
* Fix the stack overflow issue caused by throwing the `invalid ddl job type` error [#7958](https://github.com/pingcap/tidb/pull/7958)
* Fix the issue that the result of `ADMIN CHECK TABLE` is incorrect in some cases [#7975](https://github.com/pingcap/tidb/pull/7975)


## [2.1.0-rc.2] - 2018-09-14
### SQL Optimizer
* Put forward a proposal of the next generation Planner [#7543](https://github.com/pingcap/tidb/pull/7543)
@@ -15,7 +46,7 @@ All notable changes to this project will be documented in this file. See also [R
* Optimize the performance of Hash aggregate operators [#7541](https://github.com/pingcap/tidb/pull/7541)
* Optimize the performance of Join operators [#7493](https://github.com/pingcap/tidb/pull/7493), [#7433](https://github.com/pingcap/tidb/pull/7433)
* Fix the issue that the result of `UPDATE JOIN` is incorrect when the Join order is changed [#7571](https://github.com/pingcap/tidb/pull/7571)
* Improve the performance of Chunk’s iterator [#7585](https://github.com/pingcap/tidb/pull/7585)
* Improve the performance of Chunk’s iterator [#7585](https://github.com/pingcap/tidb/pull/7585)
### Statistics
* Fix the issue that the auto Analyze work repeatedly analyzes the statistics [#7550](https://github.com/pingcap/tidb/pull/7550)
* Fix the statistics update error that occurs when there is no statistics change [#7530](https://github.com/pingcap/tidb/pull/7530)
28 changes: 21 additions & 7 deletions Gopkg.lock

Some generated files are not rendered by default.

6 changes: 5 additions & 1 deletion Gopkg.toml
@@ -99,6 +99,10 @@ required = ["github.com/golang/protobuf/jsonpb"]

[[constraint]]
name = "github.com/pkg/errors"
version = "0.9.0"
version = "0.11.0"
source = "https://github.com/pingcap/errors.git"


[[constraint]]
name = "github.com/pingcap/tidb-tools"
revision = "5db58e3b7e6613456551c40d011806a346b2f44a"
13 changes: 11 additions & 2 deletions ast/misc.go
@@ -118,8 +118,9 @@ func (n *TraceStmt) Accept(v Visitor) (Node, bool) {
type ExplainStmt struct {
stmtNode

Stmt StmtNode
Format string
Stmt StmtNode
Format string
Analyze bool
}

// Accept implements Node Accept interface.
@@ -183,6 +184,14 @@ func (n *DeallocateStmt) Accept(v Visitor) (Node, bool) {
return v.Leave(n)
}

// Prepared represents a prepared statement.
type Prepared struct {
Stmt StmtNode
Params []*ParamMarkerExpr
SchemaVersion int64
UseCache bool
}

// ExecuteStmt is a statement to execute PreparedStmt.
// See https://dev.mysql.com/doc/refman/5.7/en/execute.html
type ExecuteStmt struct {
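The new `Analyze` field on `ExplainStmt` is what carries `EXPLAIN ANALYZE` (commit #7925 above) from the parser to the executor; the changelog entry earlier in this diff describes the user-visible feature. A minimal client-side sketch of using it, assuming a local TiDB listening on the default port 4000 and the go-sql-driver MySQL driver (both are assumptions, not part of this diff):

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql" // TiDB speaks the MySQL wire protocol
)

func main() {
	// Assumed DSN; adjust host, port and credentials for your deployment.
	db, err := sql.Open("mysql", "root@tcp(127.0.0.1:4000)/test")
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// EXPLAIN ANALYZE executes the statement and reports per-operator runtime
	// statistics (execution time, returned rows) alongside the usual plan columns.
	rows, err := db.Query("EXPLAIN ANALYZE SELECT COUNT(*) FROM mysql.user")
	if err != nil {
		log.Fatal(err)
	}
	defer rows.Close()

	cols, err := rows.Columns()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(cols) // the exact column set varies by TiDB version

	for rows.Next() {
		vals := make([]sql.RawBytes, len(cols))
		ptrs := make([]interface{}, len(cols))
		for i := range vals {
			ptrs[i] = &vals[i]
		}
		if err := rows.Scan(ptrs...); err != nil {
			log.Fatal(err)
		}
		fmt.Println(string(vals[0])) // operator id; later columns hold the runtime info
	}
	if err := rows.Err(); err != nil {
		log.Fatal(err)
	}
}
```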
10 changes: 5 additions & 5 deletions cmd/explaintest/r/explain_complex_stats.result
@@ -158,11 +158,11 @@ Projection_5 39.28 root test.st.cm, test.st.p1, test.st.p2, test.st.p3, test.st.
└─TableScan_14 160.23 cop table:st, keep order:false
explain select dt.id as id, dt.aid as aid, dt.pt as pt, dt.dic as dic, dt.cm as cm, rr.gid as gid, rr.acd as acd, rr.t as t,dt.p1 as p1, dt.p2 as p2, dt.p3 as p3, dt.p4 as p4, dt.p5 as p5, dt.p6_md5 as p6, dt.p7_md5 as p7 from dt dt join rr rr on (rr.pt = 'ios' and rr.t > 1478185592 and dt.aid = rr.aid and dt.dic = rr.dic) where dt.pt = 'ios' and dt.t > 1478185592 and dt.bm = 0 limit 2000;
id count task operator info
Projection_9 428.55 root dt.id, dt.aid, dt.pt, dt.dic, dt.cm, rr.gid, rr.acd, rr.t, dt.p1, dt.p2, dt.p3, dt.p4, dt.p5, dt.p6_md5, dt.p7_md5
└─Limit_12 428.55 root offset:0, count:2000
└─IndexJoin_18 428.55 root inner join, inner:IndexLookUp_17, outer key:dt.aid, dt.dic, inner key:rr.aid, rr.dic
├─TableReader_42 428.55 root data:Selection_41
│ └─Selection_41 428.55 cop eq(dt.bm, 0), eq(dt.pt, "ios"), gt(dt.t, 1478185592)
Projection_9 428.32 root dt.id, dt.aid, dt.pt, dt.dic, dt.cm, rr.gid, rr.acd, rr.t, dt.p1, dt.p2, dt.p3, dt.p4, dt.p5, dt.p6_md5, dt.p7_md5
└─Limit_12 428.32 root offset:0, count:2000
└─IndexJoin_18 428.32 root inner join, inner:IndexLookUp_17, outer key:dt.aid, dt.dic, inner key:rr.aid, rr.dic
├─TableReader_42 428.32 root data:Selection_41
│ └─Selection_41 428.32 cop eq(dt.bm, 0), eq(dt.pt, "ios"), gt(dt.t, 1478185592)
│ └─TableScan_40 2000.00 cop table:dt, range:[0,+inf], keep order:false
└─IndexLookUp_17 970.00 root
├─IndexScan_14 1.00 cop table:rr, index:aid, dic, range: decided by [dt.aid dt.dic], keep order:false
9 changes: 9 additions & 0 deletions cmd/explaintest/r/explain_easy.result
@@ -347,6 +347,15 @@ TableDual_5 0.00 root rows:0
explain select * from t where b = 1 and b = 2;
id count task operator info
TableDual_5 0.00 root rows:0
explain select * from t t1 join t t2 where t1.b = t2.b and t2.b is null;
id count task operator info
Projection_7 12.50 root t1.a, t1.b, t2.a, t2.b
└─HashRightJoin_9 12.50 root inner join, inner:TableReader_12, equal:[eq(t2.b, t1.b)]
├─TableReader_12 10.00 root data:Selection_11
│ └─Selection_11 10.00 cop isnull(t2.b)
│ └─TableScan_10 10000.00 cop table:t2, range:[-inf,+inf], keep order:false, stats:pseudo
└─TableReader_14 10000.00 root data:TableScan_13
└─TableScan_13 10000.00 cop table:t1, range:[-inf,+inf], keep order:false, stats:pseudo
drop table if exists t;
create table t(a bigint primary key);
explain select * from t where a = 1 and a = 2;
6 changes: 3 additions & 3 deletions cmd/explaintest/r/explain_easy_stats.result
@@ -47,10 +47,10 @@ explain select * from t1 left join t2 on t1.c2 = t2.c1 where t1.c1 > 1;
id count task operator info
Projection_6 2481.25 root test.t1.c1, test.t1.c2, test.t1.c3, test.t2.c1, test.t2.c2
└─MergeJoin_7 2481.25 root left outer join, left key:test.t1.c2, right key:test.t2.c1
├─IndexLookUp_17 1999.00 root
│ ├─Selection_16 1999.00 cop gt(test.t1.c1, 1)
├─IndexLookUp_17 1998.00 root
│ ├─Selection_16 1998.00 cop gt(test.t1.c1, 1)
│ │ └─IndexScan_14 1999.00 cop table:t1, index:c2, range:[NULL,+inf], keep order:true
│ └─TableScan_15 1999.00 cop table:t1, keep order:false
│ └─TableScan_15 1998.00 cop table:t1, keep order:false
└─IndexLookUp_21 1985.00 root
├─IndexScan_19 1985.00 cop table:t2, index:c1, range:[NULL,+inf], keep order:true
└─TableScan_20 1985.00 cop table:t2, keep order:false
4 changes: 4 additions & 0 deletions cmd/explaintest/r/select.result
@@ -328,3 +328,7 @@ Point_Get_1 1.00 root table:t, handle:1
desc select * from t where a = '1';
id count task operator info
Point_Get_1 1.00 root table:t, handle:1
desc select sysdate(), sleep(1), sysdate();
id count task operator info
Projection_3 1.00 root sysdate(), sleep(1), sysdate()
└─TableDual_4 1.00 root rows:1
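The new expected output above keeps both `sysdate()` calls in the projection because commit #7895 makes `SYSDATE` unfoldable: unlike `NOW()`, it is evaluated at the moment each call executes, so folding it into a constant would be wrong. A short sketch of the observable difference, under the same assumed local TiDB connection as the earlier example:

```go
package main

import (
	"database/sql"
	"fmt"
	"log"

	_ "github.com/go-sql-driver/mysql"
)

func main() {
	db, err := sql.Open("mysql", "root@tcp(127.0.0.1:4000)/test") // assumed local TiDB
	if err != nil {
		log.Fatal(err)
	}
	defer db.Close()

	// With SYSDATE left unfolded, the two calls are evaluated separately, and the
	// SLEEP(1) between them makes the results differ by roughly one second.
	var first, second string
	var slept int
	if err := db.QueryRow("SELECT SYSDATE(), SLEEP(1), SYSDATE()").Scan(&first, &slept, &second); err != nil {
		log.Fatal(err)
	}
	fmt.Println(first, second, first != second) // expect two different timestamps
}
```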
20 changes: 10 additions & 10 deletions cmd/explaintest/r/tpch.result
@@ -251,7 +251,7 @@ limit 10;
id count task operator info
Projection_14 10.00 root tpch.lineitem.l_orderkey, 7_col_0, tpch.orders.o_orderdate, tpch.orders.o_shippriority
└─TopN_17 10.00 root 7_col_0:desc, tpch.orders.o_orderdate:asc, offset:0, count:10
└─HashAgg_20 40256361.71 root group by:tpch.lineitem.l_orderkey, tpch.orders.o_orderdate, tpch.orders.o_shippriority, funcs:sum(mul(tpch.lineitem.l_extendedprice, minus(1, tpch.lineitem.l_discount))), firstrow(tpch.orders.o_orderdate), firstrow(tpch.orders.o_shippriority), firstrow(tpch.lineitem.l_orderkey)
└─HashAgg_20 40227041.09 root group by:tpch.lineitem.l_orderkey, tpch.orders.o_orderdate, tpch.orders.o_shippriority, funcs:sum(mul(tpch.lineitem.l_extendedprice, minus(1, tpch.lineitem.l_discount))), firstrow(tpch.orders.o_orderdate), firstrow(tpch.orders.o_shippriority), firstrow(tpch.lineitem.l_orderkey)
└─IndexJoin_26 91515927.49 root inner join, inner:IndexLookUp_25, outer key:tpch.orders.o_orderkey, inner key:tpch.lineitem.l_orderkey
├─HashRightJoin_46 22592975.51 root inner join, inner:TableReader_52, equal:[eq(tpch.customer.c_custkey, tpch.orders.o_custkey)]
│ ├─TableReader_52 1498236.00 root data:Selection_51
@@ -260,9 +260,9 @@ Projection_14 10.00 root tpch.lineitem.l_orderkey, 7_col_0, tpch.orders.o_orderd
│ └─TableReader_49 36870000.00 root data:Selection_48
│ └─Selection_48 36870000.00 cop lt(tpch.orders.o_orderdate, 1995-03-13 00:00:00.000000)
│ └─TableScan_47 75000000.00 cop table:orders, range:[-inf,+inf], keep order:false
└─IndexLookUp_25 163063881.42 root
└─IndexLookUp_25 162945114.27 root
├─IndexScan_22 1.00 cop table:lineitem, index:L_ORDERKEY, L_LINENUMBER, range: decided by [tpch.orders.o_orderkey], keep order:false
└─Selection_24 163063881.42 cop gt(tpch.lineitem.l_shipdate, 1995-03-13 00:00:00.000000)
└─Selection_24 162945114.27 cop gt(tpch.lineitem.l_shipdate, 1995-03-13 00:00:00.000000)
└─TableScan_23 1.00 cop table:lineitem, keep order:false
/*
Q4 Order Priority Checking Query
@@ -922,13 +922,13 @@ p_brand,
p_type,
p_size;
id count task operator info
Sort_13 15.00 root supplier_cnt:desc, tpch.part.p_brand:asc, tpch.part.p_type:asc, tpch.part.p_size:asc
└─Projection_14 15.00 root tpch.part.p_brand, tpch.part.p_type, tpch.part.p_size, 9_col_0
└─HashAgg_17 15.00 root group by:tpch.part.p_brand, tpch.part.p_size, tpch.part.p_type, funcs:count(distinct tpch.partsupp.ps_suppkey), firstrow(tpch.part.p_brand), firstrow(tpch.part.p_type), firstrow(tpch.part.p_size)
└─HashLeftJoin_22 4022816.68 root anti semi join, inner:TableReader_46, equal:[eq(tpch.partsupp.ps_suppkey, tpch.supplier.s_suppkey)]
├─IndexJoin_26 5028520.85 root inner join, inner:IndexReader_25, outer key:tpch.part.p_partkey, inner key:tpch.partsupp.ps_partkey
│ ├─TableReader_41 1249969.60 root data:Selection_40
│ │ └─Selection_40 1249969.60 cop in(tpch.part.p_size, 48, 19, 12, 4, 41, 7, 21, 39), ne(tpch.part.p_brand, "Brand#34"), not(like(tpch.part.p_type, "LARGE BRUSHED%", 92))
Sort_13 14.41 root supplier_cnt:desc, tpch.part.p_brand:asc, tpch.part.p_type:asc, tpch.part.p_size:asc
└─Projection_14 14.41 root tpch.part.p_brand, tpch.part.p_type, tpch.part.p_size, 9_col_0
└─HashAgg_17 14.41 root group by:tpch.part.p_brand, tpch.part.p_size, tpch.part.p_type, funcs:count(distinct tpch.partsupp.ps_suppkey), firstrow(tpch.part.p_brand), firstrow(tpch.part.p_type), firstrow(tpch.part.p_size)
└─HashLeftJoin_22 3863988.24 root anti semi join, inner:TableReader_46, equal:[eq(tpch.partsupp.ps_suppkey, tpch.supplier.s_suppkey)]
├─IndexJoin_26 4829985.30 root inner join, inner:IndexReader_25, outer key:tpch.part.p_partkey, inner key:tpch.partsupp.ps_partkey
│ ├─TableReader_41 1200618.43 root data:Selection_40
│ │ └─Selection_40 1200618.43 cop in(tpch.part.p_size, 48, 19, 12, 4, 41, 7, 21, 39), ne(tpch.part.p_brand, "Brand#34"), not(like(tpch.part.p_type, "LARGE BRUSHED%", 92))
│ │ └─TableScan_39 10000000.00 cop table:part, range:[-inf,+inf], keep order:false
│ └─IndexReader_25 1.00 root index:IndexScan_24
│ └─IndexScan_24 1.00 cop table:partsupp, index:PS_PARTKEY, PS_SUPPKEY, range: decided by [tpch.part.p_partkey], keep order:false
1 change: 1 addition & 0 deletions cmd/explaintest/t/explain_easy.test
@@ -66,6 +66,7 @@ explain select * from t where b in (1, 2) and b in (1, 3);
explain select * from t where a = 1 and a = 1;
explain select * from t where a = 1 and a = 2;
explain select * from t where b = 1 and b = 2;
explain select * from t t1 join t t2 where t1.b = t2.b and t2.b is null;

drop table if exists t;
create table t(a bigint primary key);
2 changes: 2 additions & 0 deletions cmd/explaintest/t/select.test
@@ -163,3 +163,5 @@ drop table if exists t;
create table t(a bigint primary key, b bigint);
desc select * from t where a = 1;
desc select * from t where a = '1';

desc select sysdate(), sleep(1), sysdate();
2 changes: 1 addition & 1 deletion ddl/db_change_test.go
@@ -296,7 +296,7 @@ func (t *testExecInfo) compileSQL(idx int) (err error) {
ctx := context.TODO()
se.PrepareTxnCtx(ctx)
sctx := se.(sessionctx.Context)
if err = executor.ResetStmtCtx(sctx, c.rawStmt); err != nil {
if err = executor.ResetContextOfStmt(sctx, c.rawStmt); err != nil {
return errors.Trace(err)
}
c.stmt, err = compiler.Compile(ctx, c.rawStmt)
26 changes: 26 additions & 0 deletions ddl/db_test.go
@@ -3008,6 +3008,32 @@ func (s *testDBSuite) TestTruncatePartitionAndDropTable(c *C) {
hasOldPartitionData = checkPartitionDelRangeDone(c, s, partitionPrefix)
c.Assert(hasOldPartitionData, IsFalse)
s.testErrorCode(c, "select * from t4;", tmysql.ErrNoSuchTable)

// Test truncate table partition reassign a new partitionIDs.
s.tk.MustExec("drop table if exists t5;")
s.tk.MustExec("set @@session.tidb_enable_table_partition=1;")
s.tk.MustExec(`create table t5(
id int, name varchar(50),
purchased date
)
partition by range( year(purchased) ) (
partition p0 values less than (1990),
partition p1 values less than (1995),
partition p2 values less than (2000),
partition p3 values less than (2005),
partition p4 values less than (2010),
partition p5 values less than (2015)
);`)
is = domain.GetDomain(ctx).InfoSchema()
oldTblInfo, err = is.TableByName(model.NewCIStr("test"), model.NewCIStr("t5"))
c.Assert(err, IsNil)
oldPID = oldTblInfo.Meta().Partition.Definitions[0].ID
s.tk.MustExec("truncate table t5;")
is = domain.GetDomain(ctx).InfoSchema()
c.Assert(err, IsNil)
newTblInfo, err := is.TableByName(model.NewCIStr("test"), model.NewCIStr("t5"))
newPID := newTblInfo.Meta().Partition.Definitions[0].ID
c.Assert(oldPID != newPID, IsTrue)
}

func (s *testDBSuite) TestPartitionUniqueKeyNeedAllFieldsInPf(c *C) {
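The test added above is the core of this PR: after `TRUNCATE TABLE` on a partitioned table, the reloaded table metadata must carry freshly allocated partition IDs, so that the recreated table no longer points at the old partitions' data. A compact sketch of the same check factored into a helper, using only the internal APIs already shown in this diff (the package paths assume the in-tree layout of this branch, and `exec` stands in for something like `s.tk.MustExec` from the test suite):

```go
// Sketch only: this mirrors the verification pattern of the new test case.
package ddltest // hypothetical package name

import (
	"github.com/pingcap/tidb/domain"
	"github.com/pingcap/tidb/model"
	"github.com/pingcap/tidb/sessionctx"
)

// truncateReassignsFirstPartitionID reports whether TRUNCATE TABLE gave the
// first partition of test.t5 a new physical ID.
func truncateReassignsFirstPartitionID(ctx sessionctx.Context, exec func(sql string)) (bool, error) {
	is := domain.GetDomain(ctx).InfoSchema()
	tbl, err := is.TableByName(model.NewCIStr("test"), model.NewCIStr("t5"))
	if err != nil {
		return false, err
	}
	oldPID := tbl.Meta().Partition.Definitions[0].ID

	exec("truncate table t5;")

	// TRUNCATE is a DDL, so reload the info schema to see the rebuilt metadata.
	is = domain.GetDomain(ctx).InfoSchema()
	tbl, err = is.TableByName(model.NewCIStr("test"), model.NewCIStr("t5"))
	if err != nil {
		return false, err
	}
	return tbl.Meta().Partition.Definitions[0].ID != oldPID, nil
}
```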
5 changes: 0 additions & 5 deletions ddl/ddl.go
@@ -476,11 +476,6 @@ func (d *ddl) asyncNotifyWorker(jobTp model.ActionType) {
}

func (d *ddl) doDDLJob(ctx sessionctx.Context, job *model.Job) error {
// For every DDL, we must commit current transaction.
if err := ctx.NewTxn(); err != nil {
return errors.Trace(err)
}

// Get a global job ID and put the DDL job in the queue.
err := d.addDDLJob(ctx, job)
if err != nil {
2 changes: 1 addition & 1 deletion ddl/ddl_worker.go
@@ -510,7 +510,7 @@ func (w *worker) runDDLJob(d *ddlCtx, t *meta.Meta, job *model.Job) (ver int64,
default:
// Invalid job, cancel it.
job.State = model.JobStateCancelled
err = errInvalidDDLJob.GenWithStack("invalid ddl job %v", job)
err = errInvalidDDLJob.GenWithStack("invalid ddl job type: %v", job.Type)
}

// Save errors in job, so that others can know errors happened.
18 changes: 18 additions & 0 deletions ddl/ddl_worker_test.go
@@ -136,6 +136,24 @@ func (s *testDDLSuite) TestTableError(c *C) {

}

func (s *testDDLSuite) TestInvalidDDLJob(c *C) {
store := testCreateStore(c, "test_invalid_ddl_job_type_error")
defer store.Close()
d := testNewDDL(context.Background(), nil, store, nil, nil, testLease)
defer d.Stop()
ctx := testNewContext(d)

job := &model.Job{
SchemaID: 0,
TableID: 0,
Type: model.ActionNone,
BinlogInfo: &model.HistoryInfo{},
Args: []interface{}{},
}
err := d.doDDLJob(ctx, job)
c.Assert(err.Error(), Equals, "[ddl:3]invalid ddl job type: none")
}

func (s *testDDLSuite) TestForeignKeyError(c *C) {
store := testCreateStore(c, "test_foreign_key_error")
defer store.Close()
4 changes: 3 additions & 1 deletion ddl/foreign_key_test.go
@@ -78,7 +78,9 @@ func (s *testForeighKeySuite) testCreateForeignKey(c *C, tblInfo *model.TableInf
BinlogInfo: &model.HistoryInfo{},
Args: []interface{}{fkInfo},
}
err := s.d.doDDLJob(s.ctx, job)
err := s.ctx.NewTxn()
c.Assert(err, IsNil)
err = s.d.doDDLJob(s.ctx, job)
c.Assert(err, IsNil)
return job
}
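The removal in ddl/ddl.go and the addition in ddl/foreign_key_test.go fit together: `doDDLJob` no longer calls `ctx.NewTxn()` on the caller's behalf, so call sites that build a `model.Job` by hand, such as `testCreateForeignKey` above, now commit the current transaction themselves before queueing the job. A minimal sketch of the adjusted calling convention, written against the identifiers that already appear in this diff (it would have to live in package `ddl`, since `doDDLJob` is unexported):

```go
package ddl // sketch: doDDLJob is unexported, so this belongs in the ddl package

import (
	"github.com/pingcap/tidb/model"
	"github.com/pingcap/tidb/sessionctx"
	"github.com/pkg/errors"
)

// enqueueJob illustrates the post-PR contract of doDDLJob: the caller commits
// the session's current transaction first, then queues the job.
func enqueueJob(d *ddl, ctx sessionctx.Context, job *model.Job) error {
	// doDDLJob no longer begins a fresh transaction for us.
	if err := ctx.NewTxn(); err != nil {
		return errors.Trace(err)
	}
	return errors.Trace(d.doDDLJob(ctx, job))
}
```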