From 6c091220395818f7fdd2cd0642c1bbba214d2311 Mon Sep 17 00:00:00 2001
From: Antoine Pultier <45740+fungiboletus@users.noreply.github.com>
Date: Tue, 24 Sep 2024 23:34:58 +0200
Subject: [PATCH] =?UTF-8?q?feat:=20=F0=9F=93=9D=20Forgot=20to=20say=20some?=
 =?UTF-8?q?thing.?=
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

---
 docs/blog-post.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/blog-post.md b/docs/blog-post.md
index d08e649..b1dc119 100644
--- a/docs/blog-post.md
+++ b/docs/blog-post.md
@@ -201,7 +201,7 @@ The [2023 Survey of the Cloud Native Computing Foundation (CNCF)](https://www.cn
 
 [A Standford magazine article](https://stanfordmag.org/contents/carbon-and-the-cloud) estimates the environmental footprint of 100GB stored in the cloud at about 200kg of CO₂ per year.
 
-That's also a conservative number. We know that [CloudFlare has about 5 billion time series](https://blog.cloudflare.com/how-cloudflare-runs-prometheus-at-scale/), and the fix may save them about 50 gigabytes.
+That's also a conservative number. We know that [CloudFlare has about 5 billion time series](https://blog.cloudflare.com/how-cloudflare-runs-prometheus-at-scale/), and the fix may save them about 50 gigabytes. They are likely not the only ones running Prometheus at such a scale.
 
 It's also a wild guess with many assumptions and uncertainties. We didn't consider that many chunks are read at frequent intervals, potentially over the network. Estimating CO₂ emissions of software in the cloud [is a complex topic](https://www.cloudcarbonfootprint.org/docs/methodology).
 