
truncate table error, bug? #35284

Closed
lddlww opened this issue Jun 10, 2022 · 9 comments
Labels
component/pd severity/minor type/bug

Comments

@lddlww

lddlww commented Jun 10, 2022

Bug Report

Used tiup to scale out PD (three new PD nodes), then scaled in the old PD nodes (three old PD nodes).

Executing `truncate table` then throws an error:
[screenshot of the error output]

1. Minimal reproduce step (Required)

Use tiup to scale out new PD nodes, then scale in the old PD nodes.

Execute `truncate table`.

2. What did you expect to see? (Required)

The `truncate table` statement executes successfully.

3. What did you see instead (Required)

The statement fails with the error shown above.

4. What is your TiDB version? (Required)

5.3.0

@lddlww added the type/bug label Jun 10, 2022
@Defined2014
Contributor

I think it looks the same as #35268.

cc @JmPotato

@JmPotato
Member

> I think it looks the same as #35268.
>
> cc @JmPotato

The root cause is the same, but this bug seems to come from a different part of the code.

@JmPotato
Member

JmPotato commented Jun 13, 2022

After reading the code, it seems that this problem is caused by scaling in the old PD servers too early. TiDB sets the etcd client's AutoSyncInterval config to 30s so that the client updates its member list periodically. If you scale in the old PD servers before the next sync, the TiDB server will never learn the URLs of the newly scaled-out PD servers until it is rebooted. This problem can be prevented by transferring the PD leader to a new scaled-out PD server first and then waiting for at least 30s.
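
For illustration, here is a minimal sketch of how that auto-sync setting behaves at the etcd client level, assuming the go.etcd.io/etcd/client/v3 API and placeholder PD endpoints; it is not TiDB's actual code:

```go
package main

import (
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// The endpoints below are placeholders for the old PD servers' client URLs.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints: []string{"http://old-pd-1:2379", "http://old-pd-2:2379", "http://old-pd-3:2379"},
		// AutoSyncInterval makes the client refresh its member list from the
		// cluster periodically. Between refreshes the client only knows the
		// endpoints it was given at creation time, so if all of those members
		// are removed before the next sync, the client has no reachable URL left.
		AutoSyncInterval: 30 * time.Second,
		DialTimeout:      5 * time.Second,
	})
	if err != nil {
		fmt.Println("failed to create etcd client:", err)
		return
	}
	defer cli.Close()
}
```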

@lddlww
Author

lddlww commented Jun 13, 2022

> This problem can be prevented by transferring the PD leader to a new scaled-out PD server

When using tiup to scale in the old PD nodes, will it automatically transfer the PD leader to a new PD node?

@lddlww
Author

lddlww commented Jun 13, 2022

Oh, do you mean that I have to wait at least 30s before transferring the PD leader to the new PD node?

@JmPotato
Member

> > This problem can be prevented by transferring the PD leader to a new scaled-out PD server
>
> When using tiup to scale in the old PD nodes, will it automatically transfer the PD leader to a new PD node?

I am not sure about this; it is better to use pd-ctl to transfer the leader manually.

> Oh, do you mean that I have to wait at least 30s before transferring the PD leader to the new PD node?

No, I mean we should wait for at least 30s after the leader transfer.
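
For reference, the rough order of operations might look like the following, assuming pd-ctl and tiup are available; the cluster name, PD member name, and host addresses are placeholders for your own deployment:

```bash
# Transfer the PD leader to one of the newly scaled-out PD members first.
# <new-pd-host>, <new-pd-name>, <cluster-name>, and <old-pd-host> are placeholders.
pd-ctl -u http://<new-pd-host>:2379 member leader transfer <new-pd-name>

# Wait at least 30s so the TiDB server's etcd client can re-sync the PD member list.
sleep 30

# Only then scale in the old PD nodes.
tiup cluster scale-in <cluster-name> --node <old-pd-host>:2379
```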

@Defined2014 added the component/pd label and removed the sig/sql-infra label Jun 14, 2022
@lddlww
Author

lddlww commented Jun 14, 2022

Thank you very much.

@JmPotato
Member

/close

@ti-chi-bot
Member

@JmPotato: Closing this issue.

In response to this:

> /close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
