prometheus is not enabled% #121
$ git branch
...
while I can get a json format metrics using ...
Hi @zhangzheyu2simple, I wasn't able to reproduce this bug with your given values file; the telemetry stanza appears correct for enabling prometheus. Since the json format metrics were accessible, it sounds like the config from your helm values isn't making it into the ConfigMap. I'd suggest double-checking which values are being used in the deployment.
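One way to check that (a sketch, assuming the chart was installed as a release named vault in the vault namespace, and that the ConfigMap follows the chart's <release>-config naming; adjust to your setup):

# values the release was actually installed with
helm get values vault -n vault

# config the chart rendered into the ConfigMap
kubectl -n vault get configmap vault-config -o yaml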
#215 should help you get up and running with Prometheus a little more easily.
Thanks @cablespaghetti, we'll take a look. Closing this issue for now; let us know if you run into further issues @zhangzheyu2simple.
Hello, your issue would probably be solved by adding ... Really hope that this will help with your issue, cheers!
@tvoran We're seeing similar behaviour; thought I'd not open a new issue since this one was closed quite recently.
The relevant bits from the ha section of our values:
enabled: true
replicas: 3
config: |
  ui = true
  log_format = "json"

  listener "tcp" {
    tls_disable = 1
    address = "[::]:8200"
    cluster_address = "[::]:8201"
  }

  telemetry {
    unauthenticated_metrics_access = true
    prometheus_retention_time = "24h"
    disable_hostname = true
  }

  storage "consul" {
    path = "vault"
    address = "vault-consul-server.vault.svc.cluster.local:8500"
  }

I've checked the ConfigMap and the container, and the configuration made it there OK:

/ $ cat /tmp/storageconfig.hcl
...
telemetry {
  unauthenticated_metrics_access = true
  prometheus_retention_time = "24h"
  disable_hostname = true
}
...
ps axf
...
9 vault 0:02 vault server -config=/tmp/storageconfig.hcl
...

However, when trying to scrape with Prometheus we get:

curl http://10.4.80.34:8200/v1/sys/metrics?format=prometheus
prometheus is not enabled
The tricky bit which took me a while to work out is that unauthenticated_metrics_access needs to be within your listener config.
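For example, adapting the ha config posted above (a sketch; addresses and tls_disable are just the values from that example), the listener gets its own telemetry block while the retention settings stay in the top-level telemetry stanza:

listener "tcp" {
  tls_disable = 1
  address = "[::]:8200"
  cluster_address = "[::]:8201"

  # unauthenticated_metrics_access is a listener-level setting
  telemetry {
    unauthenticated_metrics_access = true
  }
}

# retention and hostname settings remain in the top-level telemetry stanza
telemetry {
  prometheus_retention_time = "24h"
  disable_hostname = true
}

With that in place, /v1/sys/metrics?format=prometheus should respond without a token.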
@one1zero1one can you try `curl -X GET 'http://$YOUR_VAULT_INSTANCE/v1/sys/metrics?format=prometheus' -H "X-Vault-Token: $YOUR_TOKEN"`? Ditch the 8200 in the curl.
@cablespaghetti yeah, the docs are a little bit misleading in this case. As described here, the ...
@cablespaghetti thanks, that solved it.
@damianfedeczko thanks, but I didn't get a chance to try it, as my colleague rolled out the new config faster than I could check; I assume using the service and token would have worked. The issue was that we were aiming for an unauthenticated scrape from the get-go. +1 for ultimately having the ...
@one1zero1one cool, no worries - @cablespaghetti's answer nailed it.
Thanks, this is basically the answer: create a separate telemetry section for the listener as well as the top-level one. I had wasted almost 5 hours on this issue, thanks.
The above solution works. After adding the above configuration, if you are running Vault on Kubernetes you will have to restart the pods before Prometheus can scrape it.
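For reference, one way to force that restart (a sketch, assuming the chart's standard labels and a vault namespace; the recreated pods come back sealed and need to be unsealed again):

# delete the server pods so the StatefulSet recreates them with the new config
kubectl -n vault delete pod -l app.kubernetes.io/name=vault,app.kubernetes.io/instance=vault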
Since this post helped with the undocumented telemetry settings, I wanted to share this: I deployed the Prometheus Helm chart. This automatically imports the endpoints and sets them as targets in Prometheus.
Heads up for anyone else who comes across this: a restart is needed for the settings to take effect; a simple reload was not sufficient. Maybe someone more familiar with the code base can confirm. I was trying to avoid needing to unseal Vault again. Ah well.
@june07 The restart requirement is not specific to Vault; by default Kubernetes does not restart pods when a ConfigMap or Secret changes (the new data is mounted into the container, but most applications read their config only at startup). To bypass this limitation, have a look at https://github.com/stakater/Reloader.
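For reference, Reloader's documented usage is its auto annotation on the workload; a sketch of adding it to the Vault StatefulSet (the vault namespace and StatefulSet name are assumptions, and the restarted pods still need unsealing):

# a helm upgrade may overwrite a hand-added annotation;
# setting it via chart values is cleaner if your chart version supports it
kubectl -n vault annotate statefulset vault reloader.stakater.com/auto="true"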
Have translated the prom config to a PodMonitor:

---
apiVersion: monitoring.coreos.com/v1
kind: PodMonitor
metadata:
  name: vault
  namespace: secops
  labels:
    app.kubernetes.io/instance: vault
    app.kubernetes.io/managed-by: fluxcd
    app.kubernetes.io/name: vault
spec:
  namespaceSelector:
    matchNames:
      - secops
  selector:
    matchLabels:
      app.kubernetes.io/instance: vault
      app.kubernetes.io/name: vault
      vault-active: "true"
  podMetricsEndpoints:
    - path: /v1/sys/metrics
      params:
        format: ["prometheus"]
      port: http
      relabelings:
        - action: keep
          sourceLabels: ["__meta_kubernetes_namespace", "__meta_kubernetes_pod_container_port_number"]
          regex: secops;8200
Make sure your namespace matches when you copy it ;)
That worked for me, tks!
Also, if you are working with the Helm chart, there are 3 separate config sections (for the different modes/storage backends, i.e. standalone, ha, and raft). Make sure you set the entire config under the mode/storage that you are using, in case you got confused by the comments in the values.yaml like me.
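For reference, the rough shape of those sections in the chart's values.yaml (a trimmed sketch; names can differ between chart versions, so check your own values.yaml):

server:
  # standalone mode: used when ha is not enabled
  standalone:
    config: |
      ...
  # HA mode with external storage (e.g. Consul): used when ha.enabled=true
  ha:
    config: |
      ...
    # HA with integrated (Raft) storage: used when ha.raft.enabled=true
    raft:
      config: |
        ...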
After deploying HA Vault in a k8s cluster, I started trying to scrape Vault's Prometheus metrics following the regular guide,
but I get this error when running:

curl -X GET "http://localhost:8236/v1/sys/metrics?format=prometheus" -H "X-Vault-Token: <root_token>"
prometheus is not enabled%

You can reproduce this error with these steps:

helm install ./

then port-forward the svc to localhost:8236 (sketch below) and unseal Vault in the web UI;
then the curl for metrics returns:

prometheus is not enabled%
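For anyone reproducing this, the port-forward step looks roughly like the following (a sketch; the service name vault, the vault namespace, and listener port 8200 are assumptions based on the config discussed above):

kubectl -n vault port-forward svc/vault 8236:8200
curl "http://localhost:8236/v1/sys/metrics?format=prometheus" -H "X-Vault-Token: <root_token>"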