Utility for scraping Prometheus metrics from a Prometheus client endpoint and publishing them to CloudWatch
This project is part of our comprehensive "SweetOps" approach towards DevOps.
It's 100% Open Source and licensed under the APACHE2.
(Screenshot: kube-state-metrics metrics published to CloudWatch)
NOTE: The module accepts parameters as command-line arguments or as ENV variables (or any combination of command-line arguments and ENV vars). Command-line arguments take precedence over ENV vars.

| Command-line argument | ENV var | Description |
|---|---|---|
| `aws_access_key_id` | `AWS_ACCESS_KEY_ID` | AWS access key ID with permissions to publish CloudWatch metrics |
| `aws_secret_access_key` | `AWS_SECRET_ACCESS_KEY` | AWS secret access key with permissions to publish CloudWatch metrics |
| `cloudwatch_namespace` | `CLOUDWATCH_NAMESPACE` | CloudWatch Namespace |
| `cloudwatch_region` | `CLOUDWATCH_REGION` | CloudWatch AWS Region |
| `cloudwatch_publish_timeout` | `CLOUDWATCH_PUBLISH_TIMEOUT` | CloudWatch publish timeout in seconds |
| `prometheus_scrape_interval` | `PROMETHEUS_SCRAPE_INTERVAL` | Prometheus scrape interval in seconds |
| `prometheus_scrape_url` | `PROMETHEUS_SCRAPE_URL` | The URL to scrape Prometheus metrics from |
| `cert_path` | `CERT_PATH` | Path to SSL certificate file (when using SSL for `prometheus_scrape_url`) |
| `key_path` | `KEY_PATH` | Path to key file (when using SSL for `prometheus_scrape_url`) |
| `accept_invalid_cert` | `ACCEPT_INVALID_CERT` | Accept any certificate during TLS handshake. Insecure; use only for testing |
| `additional_dimension` | `ADDITIONAL_DIMENSION` | Additional dimension specified as NAME=VALUE |
| `replace_dimensions` | `REPLACE_DIMENSIONS` | Replace dimensions specified as NAME=VALUE,... |
| `include_metrics` | `INCLUDE_METRICS` | Only publish the specified metrics (comma-separated list of glob patterns) |
| `exclude_metrics` | `EXCLUDE_METRICS` | Never publish the specified metrics (comma-separated list of glob patterns) |
| `include_dimensions_for_metrics` | `INCLUDE_DIMENSIONS_FOR_METRICS` | Only publish the specified dimensions for metrics (semicolon-separated list of METRIC=dim1,dim2 entries, e.g. `flink_jobmanager=job_id`) |
| `exclude_dimensions_for_metrics` | `EXCLUDE_DIMENSIONS_FOR_METRICS` | Never publish the specified dimensions for metrics (semicolon-separated list of METRIC=dim1,dim2 entries, e.g. `flink_jobmanager=job,host;zk_up=host,pod`) |
| `force_high_res` | `FORCE_HIGH_RES` | Whether to publish all metrics to CloudWatch with high resolution, or only those labeled with `__cw_high_res` |
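For example, the same configuration can be passed as command-line arguments instead of ENV vars. A minimal sketch, assuming the standard Go single-dash flag syntax (the scrape URL is a placeholder):

```sh
# Same settings as the ENV var examples below, expressed as flags.
./prometheus-to-cloudwatch \
  -cloudwatch_namespace kube-state-metrics \
  -cloudwatch_region us-east-1 \
  -prometheus_scrape_interval 30 \
  -prometheus_scrape_url http://localhost:8080/metrics
```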
NOTE: If AWS credentials are not provided in the command-line arguments (`aws_access_key_id` and `aws_secret_access_key`) or ENV variables (`AWS_ACCESS_KEY_ID` and `AWS_SECRET_ACCESS_KEY`), the chain of credential providers will search for credentials in the shared credential file and EC2 Instance Roles. This is useful when deploying the module in AWS on Kubernetes with kube2iam, which provides IAM credentials to containers running inside a Kubernetes cluster, allowing the module to assume an IAM Role with permissions to publish metrics to CloudWatch.
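For reference, the role only needs permission to put metric data. A hypothetical policy attachment for such a role (the role and policy names below are made-up placeholders):

```sh
# Attach a minimal inline policy to a hypothetical role;
# cloudwatch:PutMetricData is the core action needed for publishing.
aws iam put-role-policy \
  --role-name prometheus-to-cloudwatch \
  --policy-name cloudwatch-put-metric-data \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": "cloudwatch:PutMetricData",
      "Resource": "*"
    }]
  }'
```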
Build the Go program locally:

```sh
go get
CGO_ENABLED=0 go build -v -o "./dist/bin/prometheus-to-cloudwatch" *.go
```
Then export the configuration and run the binary:

```sh
export AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXXXXX
export AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
export CLOUDWATCH_NAMESPACE=kube-state-metrics
export CLOUDWATCH_REGION=us-east-1
export CLOUDWATCH_PUBLISH_TIMEOUT=5
export PROMETHEUS_SCRAPE_INTERVAL=30
export PROMETHEUS_SCRAPE_URL=http://xxxxxxxxxxxx:8080/metrics
export CERT_PATH=""
export KEY_PATH=""
export ACCEPT_INVALID_CERT=true

# Optionally, restrict the subset of metrics to be exported to CloudWatch
# export INCLUDE_METRICS='jvm_*'
# export EXCLUDE_METRICS='jvm_memory_*,jvm_buffer_*'
# export INCLUDE_DIMENSIONS_FOR_METRICS='jvm_memory_*=pod_id'
# export EXCLUDE_DIMENSIONS_FOR_METRICS='jvm_memory_*=pod;jvm_buffer=job,pod'

./dist/bin/prometheus-to-cloudwatch
```
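If the scraper starts but publishes nothing, it can help to verify that the scrape target actually serves Prometheus text-format metrics (the URL below is a placeholder; substitute your `PROMETHEUS_SCRAPE_URL`):

```sh
# Expect plain-text lines like: metric_name{label="value"} 1.0
curl -s http://localhost:8080/metrics | head
```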
NOTE: the Docker build will download all Go dependencies and then build the program inside the container (see Dockerfile).
Build the Docker image:

```sh
docker build --tag prometheus-to-cloudwatch --no-cache=true .
```

Run the Docker image:

```sh
docker run -i --rm \
  -e AWS_ACCESS_KEY_ID=XXXXXXXXXXXXXXXXXXXXXXX \
  -e AWS_SECRET_ACCESS_KEY=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \
  -e CLOUDWATCH_NAMESPACE=kube-state-metrics \
  -e CLOUDWATCH_REGION=us-east-1 \
  -e CLOUDWATCH_PUBLISH_TIMEOUT=5 \
  -e PROMETHEUS_SCRAPE_INTERVAL=30 \
  -e PROMETHEUS_SCRAPE_URL=http://xxxxxxxxxxxx:8080/metrics \
  -e CERT_PATH="" \
  -e KEY_PATH="" \
  -e ACCEPT_INVALID_CERT=true \
  -e INCLUDE_METRICS="" \
  -e EXCLUDE_METRICS="" \
  -e INCLUDE_DIMENSIONS_FOR_METRICS="" \
  -e EXCLUDE_DIMENSIONS_FOR_METRICS="" \
  prometheus-to-cloudwatch
```
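Alternatively, since the credential chain can read the shared credential file (see the NOTE above), you can mount local AWS credentials instead of passing keys. A sketch, assuming the container resolves the shared credentials file under `/root/.aws` (this depends on the image; the profile name is a placeholder):

```sh
# Mount host credentials read-only and let the default AWS
# credential chain find them inside the container.
docker run -i --rm \
  -v "$HOME/.aws:/root/.aws:ro" \
  -e AWS_PROFILE=default \
  -e CLOUDWATCH_NAMESPACE=kube-state-metrics \
  -e CLOUDWATCH_REGION=us-east-1 \
  -e PROMETHEUS_SCRAPE_URL=http://xxxxxxxxxxxx:8080/metrics \
  prometheus-to-cloudwatch
```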
To run on Kubernetes, we will deploy two Helm charts:

- kube-state-metrics - generates metrics about the state of various objects inside the cluster, such as deployments, nodes and pods
- prometheus-to-cloudwatch - scrapes metrics from `kube-state-metrics` and publishes them to CloudWatch
Install the `kube-state-metrics` chart:

```sh
helm install stable/kube-state-metrics
```

Find the running services:

```sh
kubectl get services
```
Copy the name of the `kube-state-metrics` service (e.g. `gauche-turtle-kube-state-metrics`) into the ENV var `PROMETHEUS_SCRAPE_URL` in `values.yaml`. It should look like this:

```yaml
PROMETHEUS_SCRAPE_URL: "http://gauche-turtle-kube-state-metrics:8080/metrics"
```
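To double-check that the URL resolves inside the cluster, you can hit the endpoint from a throwaway pod. A sketch (the service name is the example from above, and `curlimages/curl` is just a convenient client image):

```sh
# Run a one-off curl pod, fetch the metrics endpoint, and clean up.
kubectl run curl-test --rm -i --image=curlimages/curl --restart=Never -- \
  curl -s http://gauche-turtle-kube-state-metrics:8080/metrics
```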
Deploy the `prometheus-to-cloudwatch` chart:

```sh
cd chart
helm install .
```
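Once the release is up, verify the scraper pod is running and check its logs (the pod name below is a placeholder; copy the real one from `kubectl get pods`):

```sh
kubectl get pods
kubectl logs <prometheus-to-cloudwatch-pod-name> --tail=20
```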
`prometheus-to-cloudwatch` will start scraping the `/metrics` endpoint of the `kube-state-metrics` service and send the Prometheus metrics to CloudWatch.
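To confirm the metrics are arriving, list the metrics in the namespace configured earlier (the namespace and region match the examples above):

```sh
aws cloudwatch list-metrics --namespace kube-state-metrics --region us-east-1
```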
Like this project? Please give it a ★ on our GitHub! (it helps us a lot)
Are you using this project or any of our other projects? Consider leaving a testimonial. =)
Check out these related projects.
- Prometheus Operator - Prometheus Operator creates/configures/manages Prometheus clusters atop Kubernetes
- terraform-aws-cloudwatch-logs - Terraform Module to Provide a CloudWatch Logs Endpoint
- terraform-aws-ecs-web-app - Terraform module that implements a web app on ECS and supports autoscaling, CI/CD, monitoring, ALB integration, and much more.
Got a question? We got answers.
File a GitHub issue, send us an email or join our Slack Community.
We are a DevOps Accelerator. We'll help you build your cloud infrastructure from the ground up so you can own it. Then we'll show you how to operate it and stick around for as long as you need us.
Work directly with our team of DevOps experts via email, slack, and video conferencing.
We deliver 10x the value for a fraction of the cost of a full-time engineer. Our track record is not even funny. If you want things done right and you need it done FAST, then we're your best bet.
- Reference Architecture. You'll get everything you need from the ground up built using 100% infrastructure as code.
- Release Engineering. You'll have end-to-end CI/CD with unlimited staging environments.
- Site Reliability Engineering. You'll have total visibility into your apps and microservices.
- Security Baseline. You'll have built-in governance with accountability and audit logs for all changes.
- GitOps. You'll be able to operate your infrastructure via Pull Requests.
- Training. You'll receive hands-on training so your team can operate what we build.
- Questions. You'll have a direct line of communication between our teams via a Shared Slack channel.
- Troubleshooting. You'll get help to triage when things aren't working.
- Code Reviews. You'll receive constructive feedback on Pull Requests.
- Bug Fixes. We'll rapidly work with you to fix any bugs in our projects.
Join our Open Source Community on Slack. It's FREE for everyone! Our "SweetOps" community is where you get to talk with others who share a similar vision for how to rollout and manage infrastructure. This is the best place to talk shop, ask questions, solicit feedback, and work together as a community to build totally sweet infrastructure.
Sign up for our newsletter that covers everything on our technology radar. Receive updates on what we're up to on GitHub as well as awesome new projects we discover.
Join us every Wednesday via Zoom for our weekly "Lunch & Learn" sessions. It's FREE for everyone!
Please use the issue tracker to report any bugs or file feature requests.
If you are interested in being a contributor and want to get involved in developing this project or help out with our other projects, we would love to hear from you! Shoot us an email.
In general, PRs are welcome. We follow the typical "fork-and-pull" Git workflow.
- Fork the repo on GitHub
- Clone the project to your own machine
- Commit changes to your own branch
- Push your work back up to your fork
- Submit a Pull Request so that we can review your changes
NOTE: Be sure to merge the latest changes from "upstream" before making a pull request!
Copyright © 2017-2020 Cloud Posse, LLC
See LICENSE for full details.
Licensed to the Apache Software Foundation (ASF) under one
or more contributor license agreements. See the NOTICE file
distributed with this work for additional information
regarding copyright ownership. The ASF licenses this file
to you under the Apache License, Version 2.0 (the
"License"); you may not use this file except in compliance
with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing,
software distributed under the License is distributed on an
"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied. See the License for the
specific language governing permissions and limitations
under the License.
All other trademarks referenced herein are the property of their respective owners.
This project is maintained and funded by Cloud Posse, LLC. Like it? Please let us know by leaving a testimonial!
We're a DevOps Professional Services company based in Los Angeles, CA. We ❤️ Open Source Software.
We offer paid support on all of our projects.
Check out our other projects, follow us on twitter, apply for a job, or hire us to help with your cloud strategy and implementation.
| Erik Osterman | Andriy Knysh | Igor Rodionov | yufukui-m | Satadru Biswas | Austin ce |
|---|---|---|---|---|---|