Opentelemetry exporter #531
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files.
Approvers can indicate their approval by writing
Codecov Report

@@            Coverage Diff             @@
##              main     #531     +/-  ##
==========================================
- Coverage    66.84%   66.09%    -0.76%
==========================================
  Files           95      102        +7
  Lines         6802     7409      +607
==========================================
+ Hits          4547     4897      +350
- Misses        1988     2221      +233
- Partials       267      291       +24
Thanks @KalmanMeth! So perhaps, since the knowledge currently lies in the …
I added the otel option to add_kubernetes by setting the …
I have working counters and gauges for the OpenTelemetry metrics. I removed the code for OpenTelemetry histograms; that may have to wait for a future PR.
```go
func (e *EncodeOtlpTrace) Encode(metricRecord config.GenericMap) {
	log.Tracef("entering EncodeOtlpTrace. entry = %v", metricRecord)
	_, span := e.tracer.Start(e.ctx, flpEncodeSpanName)
	defer span.End()
	attributes := obtainAttributesFromEntry(metricRecord)
	span.SetAttributes(*attributes...)
}
```
I am not sure what the goal is here with traces, but I don't think we're doing anything useful here.
Normally, traces are provided by the workloads that are being traced, for instance when a workload A sends a request to service S, A would generate a span saying "I am sending a request to S".
I believe what we could possibly do, as an external observer, is to generate spans on behalf of the workloads & services as long as we see their IPs flowing. I'm not 100% sure how useful it is, but that's the best I can see. But it's not really what this is doing here: it creates spans for FLP without providing any real information about the network connections being observed in otel format.
```go
	case "nil":
		if found {
			return nil
		}
	case "!nil":
		if !found {
			return nil
		}
	default:
```
This code that came from encode_prom is obsolete, see https://github.com/netobserv/flowlogs-pipeline/pull/513/files#diff-30d11303a4ad1ec82e4f39e3ae6dc7d044c8dd370c536a98ba90febbfe2d425aR209
basically: these special keywords "nil" / "!nil" have been replaced with a more explicit API where you can have more advanced filters, e.g. using regexp
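To illustrate the idea, here is a minimal, self-contained sketch of such an explicit filter API. The `FilterRule` type, its field names, and the match-type strings are assumptions for illustration only, not the actual flowlogs-pipeline API: the point is that presence/absence become named match types and a regexp can be supplied instead of magic `"nil"` / `"!nil"` values.

```go
package main

import (
	"fmt"
	"regexp"
)

// FilterRule is a hypothetical explicit filter rule replacing the old
// "nil" / "!nil" keyword matching. Field and type names are illustrative.
type FilterRule struct {
	Key   string
	Type  string // "presence", "absence", or "match_regexp"
	Value string // regexp pattern; used only for "match_regexp"
}

// matches reports whether a generic flow entry satisfies the rule.
func (r FilterRule) matches(entry map[string]interface{}) bool {
	v, found := entry[r.Key]
	switch r.Type {
	case "presence": // was: "!nil"
		return found
	case "absence": // was: "nil"
		return !found
	case "match_regexp": // the more advanced filter the old API lacked
		if !found {
			return false
		}
		re, err := regexp.Compile(r.Value)
		if err != nil {
			return false
		}
		return re.MatchString(fmt.Sprintf("%v", v))
	}
	return false
}

func main() {
	entry := map[string]interface{}{"namespace": "netobserv-prod"}
	fmt.Println(FilterRule{Key: "namespace", Type: "presence"}.matches(entry))                      // true
	fmt.Println(FilterRule{Key: "namespace", Type: "match_regexp", Value: "-prod$"}.matches(entry)) // true
	fmt.Println(FilterRule{Key: "owner", Type: "absence"}.matches(entry))                           // true
}
```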
updated
Focusing strictly on code review rather than on whether it meets expectations, because I must say I am not quite sure about the use cases; but you @KalmanMeth apparently have one using Instana, so you're your own judge on that :-) I struggle a bit to see where netobserv can bring value on tracing (unless we captured HTTP headers with our agents, which we don't). So apart from this comment, this looks good to me. I guess there can be follow-ups to further improve compliance with the semantic conventions.
@jotak The trace capability now breaks a single log entry down into sub-entries, one for the source and one for the destination. I added instructions for testing the setup using the OpenTelemetry collector and the Jaeger tracing UI, which is fed with information from FLP. I would like to progress with merging this PR.
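The split described above can be sketched as follows. This is a simplified illustration, not the PR's actual code: the `Src`/`Dst` field-name prefixes and the rule that shared fields (bytes, packets, protocol) are copied to both sub-entries are assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

// GenericMap mirrors the generic flow-entry type used in the snippets above.
type GenericMap = map[string]interface{}

// splitFlow breaks one flow log entry into two sub-entries: one carrying
// the source-side fields, one the destination-side fields. Fields without
// a Src/Dst prefix are assumed shared and copied to both.
func splitFlow(flow GenericMap) (src, dst GenericMap) {
	src, dst = GenericMap{}, GenericMap{}
	for k, v := range flow {
		switch {
		case strings.HasPrefix(k, "Src"):
			src[k] = v
		case strings.HasPrefix(k, "Dst"):
			dst[k] = v
		default: // shared fields go to both sub-entries
			src[k] = v
			dst[k] = v
		}
	}
	return src, dst
}

func main() {
	flow := GenericMap{"SrcAddr": "10.0.0.1", "DstAddr": "10.0.0.2", "Bytes": 1500}
	src, dst := splitFlow(flow)
	fmt.Println(src["SrcAddr"], src["Bytes"]) // 10.0.0.1 1500
	fmt.Println(dst["DstAddr"], dst["Bytes"]) // 10.0.0.2 1500
}
```

Each sub-entry can then become its own span, as the comment above describes.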
lgtm
thanks @KalmanMeth !
/lgtm
Adding the breaking-change label due to the change in name type in the API
Description
Export logs, traces, and metrics to an OpenTelemetry collector.
The configuration allows specifying either the gRPC or HTTP protocol.
TLS details may be specified; if they are not, the TLS option is set to insecure.
Sample configuration:
Headers to be sent to the collector may be specified.
Field names should be transformed to an OpenTelemetry-conformant format using a transform_generic stage.
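As a rough illustration of the points above (protocol selection, insecure TLS default, headers, and a transform_generic stage renaming fields), a configuration might look like the sketch below. All stage and field names here are assumptions for illustration; they are not the exact schema added by this PR.

```yaml
# Illustrative only: stage and parameter names are hypothetical.
parameters:
  - name: transform
    transform:
      type: generic
      generic:
        rules:
          # rename flow fields to OpenTelemetry-conformant names
          - input: SrcAddr
            output: source.address
  - name: encode_otlp
    encode:
      type: otlplogs
      otlplogs:
        address: localhost
        port: 4317
        connectionType: grpc        # or "http"
        tls:
          insecureSkipVerify: true  # assumed default when TLS details are omitted
        headers:
          authorization: "Bearer <token>"
```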
Dependencies
n/a
Checklist
If you are not familiar with our processes or don't know what to answer in the list below, let us know in a comment: the maintainers will take care of that.