
fix(detectors): reduce diag level on detectors failing to detect #2382

Merged

Conversation

johnli-developer
Contributor

Which problem is this PR solving?

Short description of the changes

  • change the log level from warn to debug when a detector fails to detect
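For illustration, the kind of change this PR makes can be sketched as follows. This is a minimal stand-alone sketch with a stubbed `diag` logger and a hypothetical `detectContainerId` helper; it is not the actual `@opentelemetry/api` object or the real detector code.

```typescript
// Stub logger that records the level of each call, so the
// warn -> debug change is observable (NOT the real @opentelemetry/api diag).
type DiagLevel = 'debug' | 'warn';

const calls: Array<{ level: DiagLevel; message: string }> = [];

const diag = {
  debug: (message: string, ..._args: unknown[]) => {
    calls.push({ level: 'debug', message });
  },
  warn: (message: string, ..._args: unknown[]) => {
    calls.push({ level: 'warn', message });
  },
};

// Hypothetical detector body: failing to detect is an expected outcome
// (e.g. the process is simply not running in a container), so the
// failure is logged at debug rather than warn.
function detectContainerId(readCgroup: () => string): string | undefined {
  try {
    return readCgroup();
  } catch (e) {
    // Previously: diag.warn('Detector failed to read container ID', e);
    diag.debug('Detector failed to read container ID', e);
    return undefined;
  }
}

const result = detectContainerId(() => {
  throw new Error('ENOENT: no such file or directory');
});
```

With this change, a user running outside a container no longer sees a warning; the message only appears when diagnostic logging is set to debug.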

@johnli-developer johnli-developer requested a review from a team August 15, 2024 02:13

linux-foundation-easycla bot commented Aug 15, 2024

CLA Signed

The committers listed above are authorized under a signed CLA.

@david-luna
Contributor

@johnli-developer thanks for contributing to the project :)

I see some other places in detectors where diag.warn is still used. Is this PR a work in progress?

@johnli-developer
Contributor Author

> @johnli-developer thanks for contributing to the project :)
>
> I see some other places in detectors where diag.warn is still used. Is this PR a work in progress?

This PR is complete; it changes the log level to debug for those detectors. Which other ones would you also like changed to debug?

@trentm
Contributor

trentm commented Aug 19, 2024

Here are some other ones:

% rg 'diag\.(info|warn)' detectors
detectors/node/opentelemetry-resource-detector-container/src/detectors/ContainerDetector.ts
65:      diag.info(
131:        diag.info(

detectors/node/opentelemetry-resource-detector-aws/src/detectors/AwsEcsDetectorSync.ts
140:      diag.warn('AwsEcsDetector failed to read container ID', e);

detectors/node/opentelemetry-resource-detector-aws/src/detectors/AwsEksDetectorSync.ts
98:      diag.warn('Process is not running on K8S', e);
141:      diag.warn('Cannot get cluster name on EKS', e);
157:      diag.warn('Unable to read Kubernetes client token.', e);
192:      diag.warn(`AwsEksDetector failed to read container ID: ${e.message}`);

That doesn't mean that this PR has to deal with all of them. But it won't "Fix: #2268" unless it gets all of them.

@trentm trentm changed the title fix: 2268 fix(detectors): reduce diag level on detectors failing to detect Aug 19, 2024

codecov bot commented Aug 19, 2024

Codecov Report

Attention: Patch coverage is 71.42857% with 2 lines in your changes missing coverage. Please review.

Project coverage is 90.52%. Comparing base (dfb2dff) to head (2e32496).
Report is 221 commits behind head on main.

Files Patch % Lines
...e-detector-aws/src/detectors/AwsEksDetectorSync.ts 75.00% 1 Missing ⚠️
...ector-container/src/detectors/ContainerDetector.ts 50.00% 1 Missing ⚠️
Additional details and impacted files
@@            Coverage Diff             @@
##             main    #2382      +/-   ##
==========================================
- Coverage   90.97%   90.52%   -0.45%     
==========================================
  Files         146      157      +11     
  Lines        7492     7622     +130     
  Branches     1502     1571      +69     
==========================================
+ Hits         6816     6900      +84     
- Misses        676      722      +46     
Files Coverage Δ
...e-detector-aws/src/detectors/AwsEcsDetectorSync.ts 97.02% <100.00%> (ø)
...e-detector-aws/src/detectors/AwsEksDetectorSync.ts 91.46% <75.00%> (ø)
...ector-container/src/detectors/ContainerDetector.ts 95.08% <50.00%> (+2.22%) ⬆️

... and 75 files with indirect coverage changes

@johnli-developer
Contributor Author

> Here are some other ones:
>
> % rg 'diag\.(info|warn)' detectors
> detectors/node/opentelemetry-resource-detector-container/src/detectors/ContainerDetector.ts
> 65:      diag.info(
> 131:        diag.info(
>
> detectors/node/opentelemetry-resource-detector-aws/src/detectors/AwsEcsDetectorSync.ts
> 140:      diag.warn('AwsEcsDetector failed to read container ID', e);
>
> detectors/node/opentelemetry-resource-detector-aws/src/detectors/AwsEksDetectorSync.ts
> 98:      diag.warn('Process is not running on K8S', e);
> 141:      diag.warn('Cannot get cluster name on EKS', e);
> 157:      diag.warn('Unable to read Kubernetes client token.', e);
> 192:      diag.warn(`AwsEksDetector failed to read container ID: ${e.message}`);
>
> That doesn't mean that this PR has to deal with all of them. But it won't "Fix: #2268" unless it gets all of them.

Sure, I will update them.

@johnli-developer
Contributor Author

johnli-developer commented Aug 22, 2024

> Here are some other ones:
>
> % rg 'diag\.(info|warn)' detectors
> detectors/node/opentelemetry-resource-detector-container/src/detectors/ContainerDetector.ts
> 65:      diag.info(
> 131:        diag.info(
>
> detectors/node/opentelemetry-resource-detector-aws/src/detectors/AwsEcsDetectorSync.ts
> 140:      diag.warn('AwsEcsDetector failed to read container ID', e);
>
> detectors/node/opentelemetry-resource-detector-aws/src/detectors/AwsEksDetectorSync.ts
> 98:      diag.warn('Process is not running on K8S', e);
> 141:      diag.warn('Cannot get cluster name on EKS', e);
> 157:      diag.warn('Unable to read Kubernetes client token.', e);
> 192:      diag.warn(`AwsEksDetector failed to read container ID: ${e.message}`);
>
> That doesn't mean that this PR has to deal with all of them. But it won't "Fix: #2268" unless it gets all of them.

@trentm They are updated now; could you help review the changes?

Contributor

@trentm trentm left a comment


Looks good. Thanks.
From the original issue:

So my proposal would be:

reduce the level to debug
rephrase the messages

This handles the first point: reducing to diag.debug.

I think the messages could benefit from some rephrasing to be less concerning, even for a user who has OTEL_LOG_LEVEL set to debug. However, that could be done in a separate PR.
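For illustration only, the rephrasing suggestion might look like the sketch below. The wording and the `detectionSkippedMessage` helper are hypothetical (they are not from this PR or from a follow-up); the idea is simply that a debug-level message should read as an expected outcome rather than a failure.

```typescript
// Hypothetical message formatter: leads with the benign interpretation,
// so a reader of debug logs sees an expected outcome, not an error.
//
// Before: diag.warn('Process is not running on K8S', e);
// After:  diag.debug(detectionSkippedMessage('AwsEksDetector',
//           'process does not appear to be running on EKS'), e);
function detectionSkippedMessage(detector: string, reason: string): string {
  return `${detector}: ${reason}; skipping resource detection`;
}

const msg = detectionSkippedMessage(
  'AwsEksDetector',
  'process does not appear to be running on EKS'
);
```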

@trentm trentm enabled auto-merge (squash) August 22, 2024 20:17
@trentm
Contributor

trentm commented Aug 22, 2024

@johnli-developer I've set this up to merge when the PR is updated to the base branch. It looks like your fork repo doesn't allow me access to do that. Can you update the base branch on this PR? I think the GitHub UI will show an "Update Branch" button for you.

auto-merge was automatically disabled August 23, 2024 01:24

Head branch was pushed to by a user without write access

@johnli-developer
Contributor Author

> @johnli-developer I've set this up to merge when the PR is updated to the base branch. It looks like your fork repo doesn't allow me access to do that. Can you update the base branch on this PR? I think the GitHub UI will show an "Update Branch" button for you.

@trentm I clicked that "Update Branch" button; is it OK now?

commit 1289068
Author: John Li <john.li@fmr.com>
Date:   Tue Aug 6 13:54:15 2024 +0800

    fix: update detector log level

    Signed-off-by: John Li <john.li@fmr.com>

@pichlermarc pichlermarc merged commit d7a5bd4 into open-telemetry:main Aug 27, 2024
20 of 21 checks passed
@dyladan dyladan mentioned this pull request Aug 27, 2024

Successfully merging this pull request may close these issues.

Reduce log level on detectors failing to detect
4 participants