Is your feature request related to a problem? Please describe.
Currently, Loki does not provide a native mechanism to restrict or whitelist labels at ingestion. With larger single-tenant deployments, simple human error or a lack of knowledge about how Loki deals with labels can cause the stream limit to be hit. I am aware of promtail and fluent-bit solutions to control this, but not everything can be covered with such preprocessing pipelines.
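For reference, Promtail's labelallow pipeline stage is the kind of agent-side control I mean: it keeps only an explicit set of labels and drops everything else. A rough sketch (the job name and the allowed label names are only illustrative):

scrape_configs:
  - job_name: kubernetes-pods
    pipeline_stages:
      - labelallow:
          - cluster
          - namespace
          - instance

This only protects streams that pass through an agent we control, which is exactly the gap.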
Describe the solution you'd like
Add a configuration option in Loki’s limits_config to define a label whitelist, such as:
limits_config:
  allowed_labels:
    - cluster
    - namespace
    - instance
  denied_labels: # Optional, to explicitly reject certain labels
    - kubernetes_pod_name
    - some_unique_id
If any other labels are sent, they are rejected.
Additional context
We had a case where one of the engineers with full access to Loki used a Python script with direct logging to Loki and created multiple high cardinality labels. Such cases are tough to predict, and even with various ACLs, this could still happen.
I see there have already been some thoughts about this: #8239 (comment).
JStickler added the type/docs label on Jan 6, 2025.
We use those resource_attributes for ingestion from the OTel collector, and yes, it works great. But how can we limit this when logs are shipped from fluent-bit or, as mentioned, from a local Python script (which we have no control over)?
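For completeness, Fluent Bit's loki output can also pin the label set at the agent, e.g. via label_keys (a sketch in Fluent Bit's YAML config format; the host and label values are placeholders):

pipeline:
  outputs:
    - name: loki
      match: '*'
      host: loki.example.com
      port: 3100
      labels: job=fluent-bit
      label_keys: "$kubernetes['namespace_name']"

But again, that only covers agents we operate, not arbitrary clients pushing directly to the API.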
When you say rejected, are you referring to labels being dropped and not the log record, or would you expect these to be added as structured metadata?
I would say that since Loki relies so heavily on labels, it should not accept the logs at all until the required labels are added to the configuration.
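If something along these lines were implemented, I would expect it to also work with the existing per-tenant runtime overrides, roughly like this (purely a sketch; allowed_labels is the proposed option, not something that exists today):

overrides:
  tenant-a:
    allowed_labels:
      - cluster
      - namespace
      - instance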