Missing documentation or integration between resource discovery and scraper #1185
Glad to see your enthusiasm for resource discovery! This will be released as part of v2.0, which is being tracked here. It will include documentation on how to use it, which is being added as part of this PR. Later on there will be more high-level docs on how they work together.
Can you elaborate a bit more on this please? You mean it's not scalable to define all the resources and you want to use resource discovery?
@tomkerkhove thanks for the immediate response :). Yes, I meant the same. So will it be ready in a couple of days?
I'm doing my best but can't commit to a hard deadline; the alpha version is already available on Docker Hub.
This should allow you to already give it a try based on the docs being added. Let me know what you think!
Thank you @tomkerkhove. After navigating the available docs and the Helm charts I was able to connect both agents. I am also able to get a test resource from Azure Monitor using the resource discovery agent, but somehow it is not getting scraped to the /metrics endpoint for utilization.
metrics-declaration.yaml
resource-discovery-declaration.yaml
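(For context: a resource discovery declaration pairs a named discovery group with an Azure resource type, which the scraper can then reference. A minimal sketch with placeholder values follows; this is not the actual file from this thread, and it is based on the Promitor v2 docs, so field names may differ per version.)

```yaml
version: v1
azureLandscape:
  tenantId: <tenant-id>
  subscriptions:
  - <subscription-id>
  cloud: Global
resourceDiscoveryGroups:
# The group name is what the scraper's metrics declaration refers to
- name: virtual-machines
  type: VirtualMachine
```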
Here lies the issue. You still have to tell Promitor Scraper what metric you are interested in and specify the name of the resource discovery group to use. See: https://promitor.io/configuration/v2.x/metrics/virtual-machine
@tomkerkhove thanks. It started working after adding the below.
Notice that we still need to provide the virtualMachineName, and this is what I was trying to eliminate.
If you check the link above, you'll see that you can use the resource discovery group that you have defined for discovery. In your case it would be:

```yaml
azureMetadata:
  subscriptionId: <some_real_value>
  tenantId: <some_real_value>
  resourceGroupName: <some_real_value>
metricDefaults:
  aggregation:
    interval: 00:05:00
  scraping:
    schedule: 0 * * ? * *
metrics:
- azureMetricConfiguration:
    aggregation:
      type: Average
    metricName: Percentage CPU
  description: Average percentage cpu usage on an Azure virtual machine
  name: azure_virtual_machine_percentage_cpu
  resourceType: VirtualMachine
  resourceDiscoveryGroups:
  - name: virtual-machines
version: v1
```

Once you do that, it will pull in all VMs across all RGs in your subscriptions as part of the Azure landscape.
@tomkerkhove Thanks. This is very helpful.
Let me know how it works, but I'll close the issue if that's ok?
@tomkerkhove curious about one more feature: is there a possibility to have all the aggregation types defined in one go?
No, that's not supported for now. I'm not sure if we would go there, given that one metric would represent different things in the other system, which can be misleading.
@tomkerkhove the filtering and processing of the metrics can be done at the client level, which could overcome this scenario.
The problem is that the more you want, the faster you will be throttled by Azure. For every metric and resource you need to do a few calls, and you are limited to 12k, which is not much, and you can't work around it. If we take it a step further and pull all aggregations, you will hit it even faster. I'm not saying it will never come, it's just not a priority for now. Feel free to open a separate issue for it.
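(A rough back-of-envelope sketch of the throttling concern above. The "one ARM read call per metric per resource per scrape" assumption is illustrative only, not Promitor's exact call pattern, and the 12k figure is the per-hour read limit mentioned in this thread.)

```python
# Estimate Azure Resource Manager (ARM) read calls consumed per hour.
# Assumption (illustrative): each metric on each resource costs
# `calls_per_scrape` ARM read calls per scrape.
ARM_READ_LIMIT_PER_HOUR = 12_000  # limit mentioned above

def arm_calls_per_hour(resources: int, metrics_per_resource: int,
                       scrapes_per_hour: int, calls_per_scrape: int = 1) -> int:
    """Rough hourly ARM read-call budget for a scraping setup."""
    return resources * metrics_per_resource * scrapes_per_hour * calls_per_scrape

# Example: 250 VMs, 5 metrics each, scraping every 5 minutes (12 scrapes/hour)
load = arm_calls_per_hour(250, 5, 12)
print(load, load > ARM_READ_LIMIT_PER_HOUR)  # 15000 True
```

Pulling every aggregation type would multiply this again, which is why requesting everything "in one go" hits the throttle faster.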
Thanks @tomkerkhove.
Not per se, we are relying on https://github.com/PrometheusClientNet/Prometheus.Client for this. In theory, if you restart the container the data will be removed, as we don't persist it.
Ok for you if this one gets closed?
Yes, thank you 🙂.
Hi @tomkerkhove,
I raised this request as I was not able to find any clear documentation around how to connect the resource discovery and scraper agents.
There are lots of open issues and many of them seem to be part of the product already. I am hoping the feature is already there :)
It would be great if you could help provide some guidelines around it.
As of now I am able to set up both agents properly and they seem to do a very good job.
The last thing I require is to connect them, so that I do not have to mention every metric definition for the scraper, as that is not scalable and is cumbersome.