Question about recentFilter=1d for getting executions #68
Hi @phsmith, I'm integrating rundeck_exporter (thanks for this!) and while looking at the code I came across the use of 1d as a recentFilter when getting executions (rundeck_exporter/rundeck_exporter.py, lines 232 to 233 at commit 82b203a).

Can you explain the reasoning behind this value? I just want to make sure things will work as expected when Prometheus is scraping every minute. Will I be duplicating metrics?
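For context on what that call does: Rundeck's executions endpoint accepts a recentFilter (a relative time window, where 1d means executions within the last day) and a max page-size parameter. The snippet below is a minimal sketch of such a request against the public Rundeck API, not a reproduction of the exporter's code at those lines; the URL, token, API version, and project name are placeholder assumptions.

```python
import requests

# Placeholder values for illustration only; the exporter reads these from its own config.
RUNDECK_URL = "https://rundeck.example.com"
API_VERSION = 34
TOKEN = "***"
PROJECT = "my-project"

# Rundeck's executions endpoint takes recentFilter (a relative window,
# e.g. "1d" = executions within the last day) and max (page size; in the
# exporter this comes from project_executions_limit, 20 by default).
response = requests.get(
    f"{RUNDECK_URL}/api/{API_VERSION}/project/{PROJECT}/executions",
    params={"recentFilter": "1d", "max": 20},
    headers={"X-Rundeck-Auth-Token": TOKEN, "Accept": "application/json"},
)
executions = response.json()["executions"]
print(f"{len(executions)} executions matched the 1d window")
```

Every scrape that runs inside that window sees the same executions again, which is the crux of the discussion below.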
Comments

Hey @davidcpell! That point in the code retrieves the 20 most recent executions (max = project_executions_limit, 20 by default). For example, the […]

Don't know if I was clear enough, lemme know if not.
I'm still struggling to see how this works out in a case where we are scraping every minute but the exporter is looking back 24h for executions. For example, if there were 10 executions of a certain job in the last 24h and Prometheus is scraping every minute, would we not be adding 10 to the counter every time a scrape takes place, even if no more executions are happening (until 24h have gone by and the API query no longer finds those 10 executions)?

Or a gauge example: let's say we had 100 executions evenly spread out over the course of 24h, and the next day there are 0 executions the whole day. At noon the next day (halfway through the day), wouldn't we still be sending a value of 50 to Prometheus? And it would look like the value went from 100 to 50, when actually there are 0 executions for those times on the 2nd day?

Sorry if I'm misunderstanding something either in the exporter or Prom itself!
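To put numbers on that concern, here is a toy simulation (hypothetical code, not taken from the exporter) of a collector that adds whatever the rolling 24h query returns to a counter on every scrape:

```python
# Toy simulation of the double-counting concern described above.
# 10 executions ran once; every per-minute scrape re-queries the last 24h.

executions_in_last_24h = 10  # the same 10 executions keep matching recentFilter=1d
counter = 0

for scrape in range(3):  # three consecutive scrapes, no new executions in between
    counter += executions_in_last_24h  # naively add whatever the query returned
    print(f"scrape {scrape + 1}: counter = {counter}")

# Prints 10, 20, 30: the counter keeps growing although nothing new ran.
```

The gauge variant has the complementary problem: the value tracks "executions still inside the rolling 24h window," so it decays (100 -> 50 -> 0) as old executions age out, rather than reporting that nothing ran on the second day.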
@davidcpell You're absolutely right!