This repository has been archived by the owner on Jul 24, 2024. It is now read-only.

summary: log failure message to terminal #1033

Merged
merged 4 commits into from
Apr 20, 2021

Conversation

YuJuncen
Collaborator

What problem does this PR solve?

Fixed #1016.

What is changed and how it works?

diff --git a/pkg/summary/collector.go b/pkg/summary/collector.go
index 84dabaeb..3e3264d1 100644
--- a/pkg/summary/collector.go
+++ b/pkg/summary/collector.go
@@ -188,7 +188,7 @@ func (tc *logCollector) Summary(name string) {
                for unitName, reason := range tc.failureReasons {
                        logFields = append(logFields, zap.String("unit-name", unitName), zap.Error(reason))
                }
-               log.Info(name+" failed summary", logFields...)
+               tc.log(name+" failed summary", logFields...)
                return
        }

tc.log would tee events to the terminal.

BTW: should we replace this with something like logutil.InfoTerm, which would be more generic?

Check List

Tests

  • Manual test (add detailed scripts or steps below)

[screenshot of manual test output]

Release Note

  • Fix a bug where nothing would be printed to the terminal when a backup failed.

@ti-chi-bot ti-chi-bot added the status/LGT1 LGTM1 label Apr 19, 2021
@YuJuncen
Collaborator Author

/run-integration-tests

@overvenus
Member

Could you also log the error to the log file?

@YuJuncen
Collaborator Author

@overvenus tc.log did that.

) {
	logF := log.L().Info
	if hasLogFile {
		conf := new(log.Config)
		// Always duplicate summary to stdout.
		logger, _, err := log.InitLogger(conf)
		if err == nil {
			logF = func(msg string, fields ...zap.Field) {
				logger.Info(msg, fields...)
				log.Info(msg, fields...)
			}
		}
	}
	collector = NewLogCollector(logF)
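The control flow in the snippet above follows a fallback pattern: start with a safe default logger and only swap in the duplicating one when initialization succeeds. A minimal stdlib-only sketch of that pattern (newFancyLogger and chooseLogger are hypothetical names standing in for log.InitLogger and the surrounding setup):

```go
package main

import (
	"errors"
	"fmt"
)

// logFn formats a message; returning the string keeps the sketch testable.
type logFn func(msg string) string

// newFancyLogger stands in for log.InitLogger: it may fail, in which
// case the caller must keep using the default logger.
func newFancyLogger(ok bool) (logFn, error) {
	if !ok {
		return nil, errors.New("init failed")
	}
	return func(msg string) string { return "fancy: " + msg }, nil
}

// chooseLogger mirrors the PR's control flow: begin with a default
// and only replace it when the richer logger initializes cleanly.
func chooseLogger(ok bool) logFn {
	logF := func(msg string) string { return "default: " + msg }
	if fancy, err := newFancyLogger(ok); err == nil {
		logF = fancy
	}
	return logF
}

func main() {
	fmt.Println(chooseLogger(true)("summary"))  // fancy: summary
	fmt.Println(chooseLogger(false)("summary")) // default: summary
}
```

Ignoring the error (rather than propagating it) is a deliberate choice here: summary collection should degrade to the plain logger instead of failing the backup.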

@YuJuncen
Collaborator Author

YuJuncen commented Apr 20, 2021

Information about the failure of br_tikv_outage:

Seems the tikv.backoffer didn't back off as intended (20s) and exceeded the retry time limit too early...

Nope, PD always replies that the leader is store 1, but it seems that store has been killed?

[Leader="{\"id\":180,\"store_id\":1}"]

@kennytm
Collaborator

kennytm commented Apr 20, 2021

/lgtm

@ti-chi-bot
Member

[REVIEW NOTIFICATION]

This pull request has been approved by:

  • kennytm
  • lance6716

To complete the pull request process, please ask the reviewers in the list to review by filling /cc @reviewer in the comment.
After your PR has acquired the required number of LGTMs, you can assign this pull request to the committer in the list by filling /assign @committer in the comment to help you merge this pull request.

The full list of commands accepted by this bot can be found here.

Reviewer can indicate their review by writing /lgtm in a comment.
Reviewer can cancel approval by writing /lgtm cancel in a comment.

@ti-chi-bot ti-chi-bot added status/LGT2 LGTM2 and removed status/LGT1 LGTM1 labels Apr 20, 2021
@kennytm
Collaborator

kennytm commented Apr 20, 2021

/merge

@ti-chi-bot
Member

This pull request has been accepted and is ready to merge.

Commit hash: d2fdd23

@ti-chi-bot ti-chi-bot merged commit ac3a0e7 into pingcap:master Apr 20, 2021
ti-srebot pushed a commit to ti-srebot/br that referenced this pull request Apr 20, 2021
Signed-off-by: ti-srebot <ti-srebot@pingcap.com>
@ti-srebot
Contributor

cherry pick to release-5.0 in PR #1043


Successfully merging this pull request may close these issues.

When backing up an empty cluster, nothing will be output.
6 participants