
Is there a metrics to log conversion connector? #29456

Closed · Ishmeet opened this issue Nov 23, 2023 · 19 comments

Ishmeet (Contributor) commented Nov 23, 2023

Component(s)

No response

What happened?

No response

Collector version

v0.89.0

Environment information

No response

OpenTelemetry Collector configuration

No response

Log output

No response

Additional context

No response

@Ishmeet added the bug and needs triage labels Nov 23, 2023
andrzej-stencel (Member) commented Nov 23, 2023

No, there isn't as of now. If you have a need for such a component, can you describe your use case?

@andrzej-stencel added the question label and removed the needs triage and bug labels Nov 23, 2023
Ishmeet (Contributor, Author) commented Nov 24, 2023

> No, there isn't as of now. If you have a need for such a component, can you describe your use case?

I want the same data from a metrics receiver to be exported to both a Thanos (metrics) and an Elasticsearch (logs) backend.
Then, in my project, I have two microservices reading from Thanos and Elasticsearch and processing the data.

andrzej-stencel (Member)

Still, I don't quite understand why you want the metrics exported to both a metrics backend (Thanos) and a logs backend (Elasticsearch). What kind of processing do you want to do with those logs-from-metrics, and why? What would you expect the format of the logs-from-metrics to be? Would it be Prometheus exposition format? OpenMetrics? OTLP/JSON? Something else?

github-actions (bot) commented Mar 13, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

github-actions bot added the Stale label Mar 13, 2024
mfyuce commented Apr 4, 2024

For a very large number of metrics this could be a lifesaver. For example, if InfluxDB cannot handle the write operations and conversion were possible, we could easily redirect the metrics to, e.g., Quickwit, Cassandra, or Elasticsearch. So this is a very much needed feature, I think.

github-actions bot removed the Stale label Apr 5, 2024
andrzej-stencel (Member)

Let me ask the question again:

> What would you expect the format of the logs-from-metrics to be? Would it be Prometheus exposition format? OpenMetrics? OTLP/JSON? Something else?

mfyuce commented Apr 12, 2024

Sorry for not seeing the previous question, and thank you for the answer; OTLP would be our go-to option.

You see, as far as we can gather, there is no way to send OTLP metrics to logs backends (it seems Elastic has something, but we do not want to use it). For example, logs backends like Quickwit accept only trace or log formats, but no metric format. If there were a way, we could easily redirect metrics to logs backends/receivers. Additional fields are (or should be) acceptable for us.

The other way around is writing OTLP logs directly from the programming languages (this is what we have done for now), or writing metrics to log files and reading those back in through receivers.

andrzej-stencel (Member) commented Apr 16, 2024

If OTLP format is fine for you, you could probably use the File exporter. It outputs telemetry (logs, metrics, traces) as text formatted in OTLP JSON (by default) into a file. Then you could read this file with the Filelog receiver, getting the metrics back as logs.

This is definitely a workaround, not the end solution, but it should work.
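For illustration, a minimal collector configuration sketch of that workaround might look like the following. The file path and the Elasticsearch exporter are assumptions; any logs exporter would do:

```yaml
receivers:
  otlp:
    protocols:
      grpc:
  filelog:
    # Reads back the OTLP JSON lines written by the file exporter.
    include: [/var/lib/otelcol/metrics.json]

exporters:
  file:
    # One OTLP/JSON-encoded batch of metrics per line.
    path: /var/lib/otelcol/metrics.json
  elasticsearch:
    endpoints: [https://elasticsearch.example.com:9200]

service:
  pipelines:
    metrics:
      receivers: [otlp]
      exporters: [file]
    logs:
      receivers: [filelog]
      exporters: [elasticsearch]
```

Each line the Filelog receiver picks up then carries one OTLP/JSON batch of metrics in its log body.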

github-actions (bot) commented Jun 17, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

github-actions bot added the Stale label Jun 17, 2024
pecigonzalo commented
Chiming in here: I think there is some movement toward using the same tools to process telemetry data, where logs, metrics, and traces are exported to the same destination and possibly queried with the same query language, whether for live debugging, analytics, or deeper-dive debugging later on.

I think the OTLP JSON format would be great to expose for converting and routing to logging exporters, in the same way it's exposed in the File exporter.
I'll take a look at the File exporter, as there might be an easy way to extract some of its code as a connector and re-use it for both goals, right?

github-actions bot removed the Stale label Jun 28, 2024
mfyuce commented Jul 1, 2024

> I think the OTLP JSON format would be great to expose for converting and routing to logging exporters, in the same way it's exposed in the File exporter. […]

That would be very nice, in fact. Thank you!

pecigonzalo commented Jul 2, 2024

I did some research, and it does not seem to be as simple as that. I even tried doing JSONMarshal -> buf and then buf -> JSONUnmarshal, but it's not working as expected. I believe I'll have to do something like:

```go
// Assumed imports:
//   "context"
//   "go.opentelemetry.io/collector/pdata/plog"
//   "go.opentelemetry.io/collector/pdata/pmetric"

func (c *connectorImp) ConsumeMetrics(ctx context.Context, md pmetric.Metrics) error {
	logs := plog.NewLogs()

	for i := 0; i < md.ResourceMetrics().Len(); i++ {
		resourceMetric := md.ResourceMetrics().At(i)
		logResource := logs.ResourceLogs().AppendEmpty()

		// Carry the resource attributes and schema URL over to the logs side.
		resourceMetric.Resource().Attributes().CopyTo(logResource.Resource().Attributes())
		logResource.SetSchemaUrl(resourceMetric.SchemaUrl())

		for j := 0; j < resourceMetric.ScopeMetrics().Len(); j++ {
			scopeMetric := resourceMetric.ScopeMetrics().At(j)
			scopeLog := logResource.ScopeLogs().AppendEmpty()

			scopeLog.SetSchemaUrl(scopeMetric.SchemaUrl())

			for k := 0; k < scopeMetric.Metrics().Len(); k++ {
				metric := scopeMetric.Metrics().At(k)
				l := scopeLog.LogRecords().AppendEmpty()

				l.SetSeverityText("INFO")

				// Per-type data point serialization still needs to be filled in.
				switch metric.Type() {
				case pmetric.MetricTypeSum:
				case pmetric.MetricTypeEmpty:
				case pmetric.MetricTypeExponentialHistogram:
				case pmetric.MetricTypeGauge:
				case pmetric.MetricTypeHistogram:
				case pmetric.MetricTypeSummary:
				default:
					// A collector component should not panic on unexpected input;
					// skip unknown metric types instead.
				}
			}
		}
	}

	return c.logsConsumer.ConsumeLogs(ctx, logs)
}
```

but I have to do much more research, as I'm not familiar with many of the internals required to do such a thing.
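For reference, here is a minimal sketch of the marshal-based variant mentioned above. This is not the author's branch: consumeMetricsAsJSON is a hypothetical helper, the one-log-record-per-batch design is an assumption, and it reuses the same connectorImp/logsConsumer shape as the snippet above. pmetric.JSONMarshaler is the pdata API that produces the same OTLP/JSON text the File exporter writes:

```go
import (
	"context"
	"time"

	"go.opentelemetry.io/collector/pdata/pcommon"
	"go.opentelemetry.io/collector/pdata/plog"
	"go.opentelemetry.io/collector/pdata/pmetric"
)

// consumeMetricsAsJSON (hypothetical) serializes an entire metrics batch to
// OTLP/JSON and forwards it as the body of a single log record.
func (c *connectorImp) consumeMetricsAsJSON(ctx context.Context, md pmetric.Metrics) error {
	marshaler := &pmetric.JSONMarshaler{}
	payload, err := marshaler.MarshalMetrics(md)
	if err != nil {
		return err
	}

	logs := plog.NewLogs()
	lr := logs.ResourceLogs().AppendEmpty().ScopeLogs().AppendEmpty().LogRecords().AppendEmpty()
	lr.SetObservedTimestamp(pcommon.NewTimestampFromTime(time.Now()))
	lr.SetSeverityText("INFO")
	// The body is the same OTLP/JSON text the File exporter would write to disk.
	lr.Body().SetStr(string(payload))

	return c.logsConsumer.ConsumeLogs(ctx, logs)
}
```

Whether one record per batch (rather than per metric or per data point) is appropriate depends on how the logs backend will query the data.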

mfyuce commented Jul 3, 2024

> I did some research, and it does not seem to be as simple as that. […]

That seems like very good progress in the right direction.

pecigonzalo commented
I pushed what I have here: https://github.com/pecigonzalo/opentelemetry-collector-contrib/tree/feature/metrics-to-log/connector/logsconnector

⚠️ It's super ugly and hacky at the moment. I was just trying to do a PoC and see what is possible, while also getting familiar with the codebase.

@andrzej-stencel I understand from the contrib docs that before I even send a PR, we should have a sponsor for the feature and a use-case definition. So I could use some help kickstarting that process, and likely some advice on the approach.

andrzej-stencel (Member) commented Jul 3, 2024

If you can join a Collector SIG meeting (there's one in about an hour), discussing this synchronously might help. Otherwise, create a new issue of type "New component proposal" and describe your use case in detail.

I want to emphasize that you don't need to add such a component to Contrib for it to be useful to you. You can create the component in your own repository and build a custom collector distribution that includes it using the Builder.
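For reference, building such a custom distribution with the OpenTelemetry Collector Builder (ocb) takes a small manifest along these lines. The connector module path and the version pins below are placeholders, not a published module:

```yaml
# builder-config.yaml, consumed by the OpenTelemetry Collector Builder (ocb).
dist:
  name: otelcol-custom
  description: Custom collector with a metrics-to-logs connector
  output_path: ./otelcol-custom

receivers:
  - gomod: go.opentelemetry.io/collector/receiver/otlpreceiver v0.104.0

exporters:
  - gomod: go.opentelemetry.io/collector/exporter/debugexporter v0.104.0

connectors:
  # Placeholder module path; point this at your own connector repository.
  - gomod: github.com/example/logsconnector v0.1.0
```

Running the builder against this manifest produces a collector binary that includes the custom connector alongside the upstream components.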

pecigonzalo commented
Thanks, I'll try to join the meeting.

> I want to emphasize that you don't need to add such a component to Contrib for it to be useful to you.

True, thanks for the reminder. I think this would be useful to others as well, so maybe Contrib is a good final target.

justinbwood commented Aug 23, 2024

Adding my own use case here: we're ingesting frontend traces using opentelemetry-js. The OTLP receiver has metadata enabled, and we upsert certain CloudFront viewer location headers from the metadata. Then, potentially, I'd like to somehow generate spanmetrics that carry the latitude/longitude so I can graph a geomap of frontend latency.

The issue I'd run into when shipping the metrics is the potential cardinality of a latitude/longitude label. Even when using Mimir, the cardinality would be tough to handle.

Grafana's Faro SDK overcomes this for web-vitals measurements by converting them to log lines instead of metrics, which are shipped to Loki. Example of a logfmt measurement for Cumulative Layout Shift:

timestamp="2024-08-23 15:07:02.039 +0000 UTC" kind=measurement type=web-vitals cls=0.435783 value_cls=0.4357828227896602 sdk_version=1.7.0 app_name=<redacted> app_version=1.0.0 session_id=<redacted> page_url="<redacted>" browser_name=Edge browser_version=127.0.0.0 browser_os="Mac OS 10.15.7" browser_mobile=false view_name=default

By converting the spanmetrics into logs, we can insert the latitude/longitude into the log line and ship it to Loki. Then, using LogQL metrics queries, we can query average latency summed by geographic location and graph that data on Grafana's Geomap panel.

github-actions (bot) commented Oct 23, 2024

This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping @open-telemetry/collector-contrib-triagers. If this issue is still relevant, please ping the code owners or leave a comment explaining why it is still relevant. Otherwise, please close it.

github-actions bot added the Stale label Oct 23, 2024
github-actions (bot) commented Dec 22, 2024

This issue has been closed as inactive because it has been stale for 120 days with no activity.

github-actions bot closed this as not planned Dec 22, 2024