Support configuring the Prometheus and Grafana address through PD configuration file #751
Comments
Does this mean that users need to manually maintain the location of the Prometheus / Grafana address? What if the pod is moved?
If users remove the Pods, they should also remove the configuration.
We can let users customize a Prometheus / Grafana address in a friendly way, to support users who run their own Prometheus / Grafana instances. However, I think the default behaviour (i.e. the deployment tool reporting the Prometheus and Grafana addresses automatically) should be retained, since it is not user friendly to ask users to enter these addresses twice (once in the deployment tool and once in the Dashboard UI) to make things work.
With TiDB Operator, users only need to configure the address once, which is in the Dashboard config in PD.
Anyway, when will this be supported?
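As an illustration of "configure the address once", the addresses could be declared in the PD section of a TidbCluster manifest, assuming PD gained the proposed config-file options. This is a sketch only: the `[dashboard]` keys `prometheus-addr` and `grafana-addr` are hypothetical and not part of the current PD configuration schema.

```yaml
# Hypothetical sketch: the prometheus-addr / grafana-addr keys are
# illustrative only and do not exist in PD's current config schema.
apiVersion: pingcap.com/v1alpha1
kind: TidbCluster
metadata:
  name: basic
spec:
  pd:
    config: |
      [dashboard]
      prometheus-addr = "http://basic-prometheus:9090"
      grafana-addr = "http://basic-grafana:3000"
```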
Even with configuration parameters, users do not have to enter them twice: TiUP can fill them in automatically from the first input.
Thanks for the reply! I'm not very familiar with how tidb-operator works. Does it ship with a built-in Grafana / Prometheus? If so, it looks like users would need to manually keep the configuration (for example, the addresses) in sync between tidb-operator and TiDB Dashboard?
I'm familiar with TiUP though. The story with TiUP is that users can change the topology from the TiUP command line, for example changing Grafana addresses, scaling in a Grafana node, scaling out a new Prometheus node, etc. With the automatic sync provided by TiUP, users don't need to enter or set the Grafana / Prometheus address in any UI after a topology change. It just works™.
Users do not need to sync the configuration for Prometheus and Grafana with TiDB Operator.
When will this be supported?
In this case, I would like to know why you want users to configure Prometheus and Grafana in TiDB Dashboard. It looks like, without TiDB Dashboard, users don't need to configure anything for observability to work: just deploy and then access it. In your proposal, however, when it comes to TiDB Dashboard, users have to configure observability manually to make this one component work.
It is planned but not started yet. Note that even when this feature is implemented, it is not intended to make users manually configure something that is built into TiDB Dashboard, or to force users to configure the same thing multiple times in different components. The core idea is simple: if it just works without TiDB Dashboard, then it should just work with TiDB Dashboard, without any extra effort.
I am not saying we should make users configure Prometheus and Grafana themselves; I am saying that TiDB Operator could configure them via the configuration file instead of writing to etcd directly.
Where would users provide their own Prometheus / Grafana addresses? Either way, there should be an entry point for them.
@DanielZhangQD I'm afraid that when the customizable Prometheus address feature is implemented (letting users modify it in the UI), it will also be stored as an etcd KV rather than a configuration file item, since our configuration files do not support dynamic changes.
Feature Request
Is your feature request related to a problem? Please describe:
Currently, users cannot configure the Prometheus and Grafana addresses for TiDB Dashboard manually; they have to rely on TiUP or TiDB Operator to set them via an etcd client, which is not friendly, especially for TiDB Operator.
TiDB Operator runs in sync loops and writes the configuration to etcd on every loop, even though this only needs to happen once. I would therefore like to request support for configuring these addresses through the PD configuration file, so that TiDB Operator does not need to keep syncing them; this would also reduce the load on etcd.
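Until a config-file option exists, the etcd pressure described above could also be reduced by making the sync idempotent, i.e. writing a key only when the stored value actually differs. A minimal sketch in Python, with a plain dict standing in for the etcd client; the key paths are made up for illustration and are not TiDB Dashboard's real etcd keys:

```python
# Sketch: idempotent sync of dashboard addresses into a KV store.
# A dict stands in for the etcd client; key paths are illustrative only.

DESIRED = {
    "/dashboard/prometheus-addr": "http://prometheus:9090",
    "/dashboard/grafana-addr": "http://grafana:3000",
}

def sync_once(kv: dict, desired: dict) -> int:
    """Write each desired key only if its value changed; return the write count."""
    writes = 0
    for key, value in desired.items():
        if kv.get(key) != value:
            kv[key] = value  # a real operator would call the etcd client here
            writes += 1
    return writes

kv_store = {}
first = sync_once(kv_store, DESIRED)   # initial reconcile performs writes
second = sync_once(kv_store, DESIRED)  # steady state performs none
print(first, second)  # 2 0
```

With this shape, repeated reconcile loops become no-ops once the desired state is in place, which is close to the "one-time deal" behaviour the request asks for.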
Describe the feature you'd like:
Describe alternatives you've considered:
Teachability, Documentation, Adoption, Migration Strategy: