I have enabled logs to be exported to Elasticsearch running in Kubernetes (AKS). Ever since I enabled that, traces stopped showing up in Zipkin/Grafana, because Zipkin also stores its trace data in Elasticsearch (same ES cluster, different index). Each ES node has 10Gi allocated on a persistent volume claim (PVC). It looks like the logs are filling up ES and causing it to crash; I see messages like this. After some time I eventually see an "all shards failed" error message in Grafana, and I get a 503 from my ES cluster.

Is there a best practice for handling data retention on ES? Is there a way to purge old data from ES?

Here is a snippet from my collector config:
Replies: 1 comment
- Elasticsearch ILM
- OpenSearch ISM

Depending on which you are using, you can create an index policy. Managing these backends and scaling them properly can be a lot of work. Good luck on your journey. :)
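For the Elasticsearch ILM route, here is a minimal sketch of a time-based retention policy created through the ES REST API. The Elasticsearch URL, the `logs-retention` policy name, and the `logs-*` index pattern are assumptions for illustration; replace them with the index names your collector's Elasticsearch exporter actually writes, and keep the pattern narrow so it does not also expire Zipkin's trace indices.

```python
# Minimal sketch: delete log indices ~7 days after creation using
# Elasticsearch ILM. ES_URL, the policy name, and the "logs-*" index
# pattern are assumptions for illustration only.
import requests

ES_URL = "http://elasticsearch:9200"  # assumed in-cluster service address

# 1) Create an ILM policy with a delete phase. With no rollover configured,
#    min_age is measured from index creation time.
policy = {
    "policy": {
        "phases": {
            "delete": {
                "min_age": "7d",
                "actions": {"delete": {}},
            }
        }
    }
}
resp = requests.put(f"{ES_URL}/_ilm/policy/logs-retention", json=policy)
resp.raise_for_status()

# 2) Attach the policy to newly created log indices via an index template.
#    Keep the pattern narrow so Zipkin's trace indices are not affected.
template = {
    "index_patterns": ["logs-*"],
    "template": {
        "settings": {"index.lifecycle.name": "logs-retention"}
    },
}
resp = requests.put(f"{ES_URL}/_index_template/logs-retention", json=template)
resp.raise_for_status()
```

Note that the template only affects indices created after it exists; for existing indices you can either set `index.lifecycle.name` on them directly or just delete the old indices outright (a plain `DELETE /<index-name>` against the cluster) to free up the PVCs right away. If you are on OpenSearch, ISM provides the equivalent behavior but through its own policy API, so the JSON above would need to be adapted.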