
Fix pv patch continuously problem #2769

Merged · 14 commits · Jun 19, 2020

Conversation


@Yisaer Yisaer commented Jun 18, 2020

What problem does this PR solve?

Currently, the PV would be continuously patched, which is not what we want. I found there are mainly two problems in the current logic:

  1. The selector for the TidbCluster PVCs selects all PVCs with a matching instance label, including the TidbMonitor's PVCs:

    case v1alpha1.TiDBClusterKind:
        l, err = label.New().Instance(instanceName).Selector()
        if err != nil {
            return err
        }
  2. If we didn't define the PersistentVolumeReclaimPolicy in the Spec, the comparison here can never be equal, so the PV is patched on every sync:

    if pv.Spec.PersistentVolumeReclaimPolicy == policy {
        continue
    }
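Why an unset value field loops forever, while a pointer field does not, can be sketched with simplified types (`oldSpec`, `newSpec`, and the `needsPatch*` helpers are hypothetical stand-ins, not the operator's real types):

```go
package main

import "fmt"

// ReclaimPolicy is a simplified stand-in for corev1.PersistentVolumeReclaimPolicy.
type ReclaimPolicy string

const Retain ReclaimPolicy = "Retain"

// oldSpec models the field as a plain value: when the user omits it,
// it is the empty string, which never equals the PV's actual policy.
type oldSpec struct {
	PVReclaimPolicy ReclaimPolicy
}

// newSpec models the fix: a nil pointer distinguishes "unset" from an
// explicit value.
type newSpec struct {
	PVReclaimPolicy *ReclaimPolicy
}

func needsPatchOld(s oldSpec, pvPolicy ReclaimPolicy) bool {
	return s.PVReclaimPolicy != pvPolicy
}

func needsPatchNew(s newSpec, pvPolicy ReclaimPolicy) bool {
	if s.PVReclaimPolicy == nil {
		return false // no policy requested: leave the PV alone
	}
	return *s.PVReclaimPolicy != pvPolicy
}

func main() {
	fmt.Println(needsPatchOld(oldSpec{}, Retain)) // true on every sync: "" != "Retain"
	fmt.Println(needsPatchNew(newSpec{}, Retain)) // false: nil means unset
}
```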

What is changed and how does it work?

Make the PersistentVolumeReclaimPolicy field a pointer and fix the selector logic.
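The pointer form also lets a defaulting step fill in a policy only when the user left it unset, which is the pattern the diff below applies. A minimal sketch, assuming hypothetical names (`clusterSpec`, `setSpecDefault`) in place of the real `v1alpha1.TidbClusterSpec` and `setTidbClusterSpecDefault`:

```go
package main

import "fmt"

// ReclaimPolicy is a simplified stand-in for corev1.PersistentVolumeReclaimPolicy.
type ReclaimPolicy string

const Retain ReclaimPolicy = "Retain"

// clusterSpec is a hypothetical stand-in for the spec after the field
// became a pointer.
type clusterSpec struct {
	PVReclaimPolicy *ReclaimPolicy
}

// setSpecDefault mirrors the defaulting pattern: fill the field only when
// the user left it unset, so an explicit choice is never overwritten.
func setSpecDefault(spec *clusterSpec) {
	if spec.PVReclaimPolicy == nil {
		retain := Retain
		spec.PVReclaimPolicy = &retain
	}
}

func main() {
	var spec clusterSpec
	setSpecDefault(&spec)
	fmt.Println(*spec.PVReclaimPolicy) // Retain
}
```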

Does this PR introduce a user-facing change?:

Fix the bug that the `PV`s managed by the Operator would be patched continuously

@Yisaer Yisaer changed the title Fix pv patch problem Fix pv patch continuously problem Jun 18, 2020
@@ -60,6 +60,10 @@ func setTidbClusterSpecDefault(tc *v1alpha1.TidbCluster) {
d := false
tc.Spec.EnablePVReclaim = &d
}
recyclePVP := corev1.PersistentVolumeReclaimRecycle
Contributor

Suggest using retain as the default policy. WDYT @cofyc
Retain is what we suggest in our docs, and only a few volume plugins support Recycle.

Contributor Author

updated.

l := label.Label(pvc.Labels)
switch kind {
case v1alpha1.TiDBClusterKind:
    if !l.IsPD() && !l.IsTiDB() && !l.IsTiKV() && !l.IsTiFlash() {
Contributor

Pump also uses PVs.

Contributor Author
@Yisaer Yisaer Jun 18, 2020

updated.

DanielZhangQD
DanielZhangQD previously approved these changes Jun 18, 2020
Contributor
@DanielZhangQD DanielZhangQD left a comment

LGTM

@Yisaer Yisaer requested a review from DanielZhangQD June 19, 2020 04:19
@Yisaer Yisaer added the area/monitor monitoring label Jun 19, 2020
@cofyc cofyc mentioned this pull request Jun 19, 2020
Contributor
@DanielZhangQD DanielZhangQD left a comment

LGTM

Contributor

cofyc commented Jun 19, 2020

/merge

@ti-srebot
Contributor

Your auto merge job has been accepted, waiting for:

  • 2750

@ti-srebot
Contributor

/run-all-tests

@ti-srebot
Contributor

/run-all-tests

@ti-srebot ti-srebot merged commit eafc4b9 into pingcap:master Jun 19, 2020
ti-srebot pushed a commit to ti-srebot/tidb-operator that referenced this pull request Jun 19, 2020
Signed-off-by: ti-srebot <ti-srebot@pingcap.com>
@ti-srebot
Contributor

cherry pick to release-1.1 in PR #2783

cofyc pushed a commit that referenced this pull request Jun 19, 2020
Signed-off-by: ti-srebot <ti-srebot@pingcap.com>

Co-authored-by: Song Gao <disxiaofei@163.com>