Monitoring docs seem outdated #5038
Comments
I'll add this to my queue to investigate, but I might not be able to look into this for the next couple of weeks.
Thank you very much @jpkrohling!
I just realized that you are talking about a metric related to tracing (otelcol_processor_dropped_spans), but you only have metrics pipelines. I tried the simplest scenario that came to my mind and can confirm that the mentioned metrics are indeed available when a tracing pipeline is being used. Collector config (from examples/local/otel-config.yaml):

```yaml
extensions:
  memory_ballast:
    size_mib: 512
  zpages:
    endpoint: 0.0.0.0:55679

receivers:
  otlp:
    protocols:
      grpc:
      http:

processors:
  batch:
  memory_limiter:
    # 75% of maximum memory up to 4G
    limit_mib: 1536
    # 25% of limit up to 2G
    spike_limit_mib: 512
    check_interval: 5s

exporters:
  logging:
    loglevel: debug

service:
  pipelines:
    traces:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [logging]
    metrics:
      receivers: [otlp]
      processors: [memory_limiter, batch]
      exporters: [logging]
  extensions: [memory_ballast, zpages]
```

Then, I generated a few traces with:
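For reference, traces can be generated with the tracegen utility from opentelemetry-collector-contrib (mentioned later in this thread); the invocation below is only a sketch, and the endpoint and trace count are assumptions, not the command actually used here:

```sh
# Sketch: send a handful of traces to the collector's OTLP gRPC port (default 4317),
# without TLS. Flag values are illustrative assumptions.
tracegen -otlp-insecure -otlp-endpoint localhost:4317 -traces 10
```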
I'm closing this, but feel free to reopen if further clarification is needed.
Hey @jpkrohling, sorry, I missed your answer! The example I posted was just a working example; the real file does have a traces pipeline.
Please post a reproducer here and I'll reopen this issue. The reproducer would be a configuration file that demonstrates the issue, plus a client sending traces. If you can use tracegen for that, even better. Based on my previous test, I have reasons to believe that this is working, but if you give me a way to reproduce the problem, I'll gladly work on this.
I will create a better example @jpkrohling, sorry for that! Just to confirm the expected behavior: if I have a traces pipeline with traces going through, I should be able to see the metric being emitted even if it is 0, right?
From what I remember, metrics show up only after the first time they are reported: if you reported a gauge as 0, it will show up. If you never recorded a value for a given metric, it won't show up.
I know this is a pretty old issue, but I was looking at this. From my testing, it seems that @gfonseca-tc's example does not use the memory_limiter, while @jpkrohling's example does; I believe that's why the metric was available.
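As an aside, a quick way to check whether otelcol_processor_dropped_spans has actually been recorded is to query the collector's own Prometheus endpoint directly (assuming the default self-telemetry address of 0.0.0.0:8888):

```sh
# The collector exposes its internal metrics at :8888/metrics by default;
# grep for the dropped-spans counter to see whether it has been recorded yet.
curl -s http://localhost:8888/metrics | grep otelcol_processor_dropped_spans
```

If nothing matches, the metric has simply never been recorded, which is consistent with the behavior described above.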
Describe the bug
I'm trying to configure some monitoring around our collector gateway, but I can't see some of the metrics listed in these docs. I've configured the collector to scrape its own metrics and can see a list of them in my backend, but I can't see otelcol_processor_dropped_spans, for instance. Not sure if the docs are outdated or if there is any configuration missing.
Steps to reproduce
Configure the collector to scrape its own metrics using a prometheus receiver, as in these docs, and send them to your favorite backend.
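A minimal sketch of such a setup, assuming a distribution that includes the prometheus receiver and the collector's default self-telemetry address of 0.0.0.0:8888 (the job name and scrape interval are arbitrary):

```yaml
receivers:
  prometheus:
    config:
      scrape_configs:
        - job_name: otel-collector
          scrape_interval: 10s
          static_configs:
            - targets: ['0.0.0.0:8888']

exporters:
  logging:
    loglevel: debug

service:
  pipelines:
    metrics:
      receivers: [prometheus]
      exporters: [logging]
```

Replace the logging exporter with the exporter for your backend as appropriate.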
What did you expect to see?
I expected to see a list of the metrics exposed by the collector and an explanation of how to use them.
What did you see instead?
These recommendations show some metrics, but not all of them are being sent to my backend. I've also checked by sending the metrics to a logging exporter, and they are not being sent there either.
What version did you use?
Version: 0.46.0
What config did you use?
Config: