Kubernetes Attributes Processor adds wrong k8s.container.name value #34835
Hi @martinohansen! I am currently trying to reproduce this. Would you mind also posting the configuration of the prometheus receiver and the other components in the metrics pipeline?
Hi @bacherfl! Thanks for looking into this, I appreciate it. I will paste the full config at the end of my response.
Oops, I'm sorry about that, it's a typo and yet it isn't. For consistency on the backend we are renaming `k8s_` to `kube_`, and I forgot to normalize that in the results I posted. Sorry for the confusion. The rename is done by this transform processor:

```yaml
transform/rename-to-kube:
  error_mode: ignore
  metric_statements:
    - context: resource
      statements:
        - replace_all_patterns(attributes, "key", "k8s\\.(.*)", "kube.$$1")
```

Here is the entire config:

```yaml
# Collector
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: ${env:POD_IP}:4317
        max_recv_msg_size_mib: 64
      http:
        endpoint: ${env:POD_IP}:4318
processors:
  k8sattributes:
    extract:
      metadata:
        - k8s.container.name
        - k8s.namespace.name
        - k8s.pod.name
        - k8s.deployment.name
        - k8s.replicaset.name
        - k8s.node.name
        - k8s.daemonset.name
        - k8s.cronjob.name
        - k8s.job.name
        - k8s.statefulset.name
      labels:
        - tag_name: k8s.pod.label.app
          key: app
          from: pod
        - tag_name: k8s.pod.label.component
          key: component
          from: pod
        - tag_name: k8s.pod.label.zone
          key: zone
          from: pod
    pod_association:
      - sources:
          - from: resource_attribute
            name: k8s.pod.ip
      - sources:
          - from: resource_attribute
            name: k8s.pod.uid
      - sources:
          - from: connection
  transform/add-workload-label:
    metric_statements:
      - context: datapoint
        statements:
          - set(attributes["kube_workload_name"], resource.attributes["k8s.deployment.name"])
          - set(attributes["kube_workload_name"], resource.attributes["k8s.statefulset.name"])
          - set(attributes["kube_workload_type"], "deployment") where resource.attributes["k8s.deployment.name"] != nil
          - set(attributes["kube_workload_type"], "statefulset") where resource.attributes["k8s.statefulset.name"] != nil
  transform/rename-to-kube:
    error_mode: ignore
    metric_statements:
      - context: resource
        statements:
          - replace_all_patterns(attributes, "key", "k8s\\.(.*)", "kube.$$1")
exporters:
  otlphttp/pipeline-metrics:
    endpoint: ${env:OTLP_PIPELINE_METRICS_ENDPOINT}
    headers:
      Authorization: ${env:OTLP_PIPELINE_METRICS_TOKEN}
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors:
        - k8sattributes
        - transform/add-workload-label
        - transform/rename-to-kube
      exporters: [otlphttp/pipeline-metrics]
```
```yaml
# Agent
receivers:
  otlp:
    protocols:
      grpc:
        endpoint: ${env:POD_IP}:4317
      http:
        endpoint: ${env:POD_IP}:4318
  prometheus:
    config:
      scrape_configs:
        - job_name: k8s
          tls_config:
            insecure_skip_verify: true
          scrape_interval: 15s
          kubernetes_sd_configs:
            - role: pod
              selectors:
                - role: pod
                  field: spec.nodeName=${env:NODE_NAME}
          relabel_configs:
            - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
              regex: "true"
              action: keep
            - action: replace
              source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scheme]
              target_label: __scheme__
              regex: (https?)
            - action: replace
              source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
              target_label: __metrics_path__
              regex: (.+)
            - action: replace
              source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
              regex: ([^:]+)(?::\d+)?;(\d+)
              replacement: $$1:$$2
              target_label: __address__
            # Allow overriding the scrape timeout and interval from pod
            # annotation.
            - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape_timeout]
              regex: '(.+)'
              target_label: __scrape_timeout__
            - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape_interval]
              regex: '(.+)'
              target_label: __scrape_interval__
exporters:
  otlp:
    endpoint: "otel-collector.otel.svc.cluster.local:4317"
    tls:
      insecure: true
    retry_on_failure:
      enabled: true
processors:
  batch:
  k8sattributes:
    passthrough: true
service:
  pipelines:
    metrics:
      receivers: [otlp, prometheus]
      processors: [batch, k8sattributes]
      exporters: [otlp]
```

P.S. I removed some batching and memory limiter config for simplicity since it is unrelated.
Thank you for the config, @martinohansen! I will try to reproduce the issue and will get back to you when I have gained more insight into what could be causing this.
I did some tests now, and I discovered that using the
In the example I was testing this I have a
However, init containers usually do not have a port defined, so this rule does not catch them, and a separate target with the same endpoint is created. The same endpoint is therefore effectively called twice during each scrape, yielding the same set of metrics but with different attribute sets: one includes the name of the init container, the other the correct container name. The OTel resource has the same name in both cases, which might explain why the container name ends up being set incorrectly. As a potential workaround, an additional relabel config can exclude the targets created for init containers; the prometheus library internally sets this attribute:
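For illustration, a rule along these lines, using the `__meta_kubernetes_pod_container_init` meta label that Prometheus pod service discovery exposes for init containers, could be appended to the existing relabel_configs of the prometheus receiver in the agent config above (a sketch, not the exact snippet referred to in this comment):

```yaml
relabel_configs:
  # Drop scrape targets that were generated for init containers, so only the
  # regular containers of the pod are scraped.
  - source_labels: [__meta_kubernetes_pod_container_init]
    regex: "true"
    action: drop
```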
Would that be an option for you, @martinohansen?
This issue has been inactive for 60 days. It will be closed in 60 days if there is no activity. To ping code owners by adding a component label, see Adding Labels via Comments, or if you are unsure of which component this issue relates to, please ping the code owners.
Component(s)
processor/k8sattributes
What happened?
Description
The Kubernetes Attributes Processor (k8sattributes) adds the wrong container name to pods with init containers. I read the metrics using the Prometheus receiver.
Steps to Reproduce
Set up the processor to associate via pod IP, UID, and lastly connection details, and to extract k8s.container.name
Expose metrics from container foo with a pod spec like this:
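For illustration, a minimal pod spec along these lines shows the shape of the setup (the exact manifest is not reproduced here; names, image, and port are placeholders):

```yaml
# Hypothetical example: a pod with an init container plus the regular
# container "foo" that actually exposes the scraped metrics endpoint.
apiVersion: v1
kind: Pod
metadata:
  name: foo
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"
spec:
  initContainers:
    - name: init-config              # placeholder init container
      image: busybox:1.36
      command: ["sh", "-c", "echo init done"]
  containers:
    - name: foo
      image: example.com/foo:latest  # placeholder image
      ports:
        - name: metrics
          containerPort: 8080
```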
Expected Result
Actual Result
Collector version
v0.107.0