From a436e67252044d72c514774b10ec47bcbd06faf1 Mon Sep 17 00:00:00 2001
From: timhard-splunk
Date: Tue, 2 Apr 2024 16:20:29 -0400
Subject: [PATCH] disabling draft: optimize cloud monitoring

---
 .../optimize_monitoring/1-getting-started/1-access-ec2.md       | 2 +-
 .../1-getting-started/2-deploy-application.md                   | 2 +-
 .../scenarios/optimize_monitoring/1-getting-started/_index.md   | 2 +-
 .../2-standardize-data-collection/1-what-are-tags.md            | 2 +-
 .../2-standardize-data-collection/2-adding-context-with-tags.md | 2 +-
 .../optimize_monitoring/2-standardize-data-collection/_index.md | 2 +-
 .../3-reuse-content-across-teams/1-infrastructure-dashboards.md | 2 +-
 .../3-reuse-content-across-teams/2-clone-dashboards.md          | 2 +-
 .../3-reuse-content-across-teams/3-mirror-dashboards.md         | 2 +-
 .../optimize_monitoring/3-reuse-content-across-teams/_index.md  | 2 +-
 .../4-correlate-metrics-logs/1-correlate-metrics-and-logs.md    | 2 +-
 .../4-correlate-metrics-logs/2-create-log-based-chart.md        | 2 +-
 .../optimize_monitoring/4-correlate-metrics-logs/_index.md      | 2 +-
 .../5-improve-alert-timeliness/1-create-custom-detector.md      | 2 +-
 .../optimize_monitoring/5-improve-alert-timeliness/_index.md    | 2 +-
 .../optimize_monitoring/6-workshop-conclusion/_index.md         | 2 +-
 content/en/scenarios/optimize_monitoring/_index.md              | 2 +-
 17 files changed, 17 insertions(+), 17 deletions(-)

diff --git a/content/en/scenarios/optimize_monitoring/1-getting-started/1-access-ec2.md b/content/en/scenarios/optimize_monitoring/1-getting-started/1-access-ec2.md
index 8009d0d572..00180b9962 100644
--- a/content/en/scenarios/optimize_monitoring/1-getting-started/1-access-ec2.md
+++ b/content/en/scenarios/optimize_monitoring/1-getting-started/1-access-ec2.md
@@ -4,7 +4,7 @@ linkTitle: 1.1 Access AWS/EC2 Instance
 weight: 2
 authors: ["Tim Hard"]
 time: 5 minutes
-draft: true
+draft: false
 ---
 
 1. How to retrieve the IP address of the AWS/EC2 instance assigned to you.
diff --git a/content/en/scenarios/optimize_monitoring/1-getting-started/2-deploy-application.md b/content/en/scenarios/optimize_monitoring/1-getting-started/2-deploy-application.md
index d2415f480a..363920f0fe 100644
--- a/content/en/scenarios/optimize_monitoring/1-getting-started/2-deploy-application.md
+++ b/content/en/scenarios/optimize_monitoring/1-getting-started/2-deploy-application.md
@@ -4,7 +4,7 @@ linkTitle: 1.2 Deploy OpenTelemetry Demo Application
 weight: 3
 authors: ["Tim Hard"]
 time: 10 minutes
-draft: true
+draft: false
 ---
 
 ## Introduction
diff --git a/content/en/scenarios/optimize_monitoring/1-getting-started/_index.md b/content/en/scenarios/optimize_monitoring/1-getting-started/_index.md
index ec2a4f567e..67a2b65a8c 100644
--- a/content/en/scenarios/optimize_monitoring/1-getting-started/_index.md
+++ b/content/en/scenarios/optimize_monitoring/1-getting-started/_index.md
@@ -4,7 +4,7 @@ linkTitle: 1. Getting Started
 weight: 1
 authors: ["Tim Hard"]
 time: 3 minutes
-draft: true
+draft: false
 ---
 
 During this _**technical**_ Optimize Cloud Monitoring Workshop, you will build out an environment based on a [lightweight](https://k3s.io/) Kubernetes[^1] cluster.
diff --git a/content/en/scenarios/optimize_monitoring/2-standardize-data-collection/1-what-are-tags.md b/content/en/scenarios/optimize_monitoring/2-standardize-data-collection/1-what-are-tags.md
index 5e404eef95..7fdf29d884 100644
--- a/content/en/scenarios/optimize_monitoring/2-standardize-data-collection/1-what-are-tags.md
+++ b/content/en/scenarios/optimize_monitoring/2-standardize-data-collection/1-what-are-tags.md
@@ -4,7 +4,7 @@ linkTitle: 2.1 What Are Tags?
 weight: 2
 authors: ["Tim Hard"]
 time: 3 minutes
-draft: true
+draft: false
 ---
 
 Tags are key-value pairs that provide additional metadata about metrics, spans in a trace, or logs allowing you to enrich the context of the data you send to **Splunk Observability Cloud**. There are many tags that are collected by default such as hostname or OS type. Custom tags can be used to provide environment or application specific context. Examples of custom tags include:
diff --git a/content/en/scenarios/optimize_monitoring/2-standardize-data-collection/2-adding-context-with-tags.md b/content/en/scenarios/optimize_monitoring/2-standardize-data-collection/2-adding-context-with-tags.md
index 8b34c3302a..67544447db 100644
--- a/content/en/scenarios/optimize_monitoring/2-standardize-data-collection/2-adding-context-with-tags.md
+++ b/content/en/scenarios/optimize_monitoring/2-standardize-data-collection/2-adding-context-with-tags.md
@@ -4,7 +4,7 @@ linkTitle: 2.2 Adding Context With Tags
 weight: 3
 authors: ["Tim Hard"]
 time: 3 minutes
-draft: true
+draft: false
 ---
 
 When you [deployed the OpenTelemetry Demo Application](../getting_started/2-deploy-application/) in the [Getting Started](../getting_started/) section of this workshop, you were asked to enter your favorite city. For this workshop, we'll be using that to show the value of custom tags.
diff --git a/content/en/scenarios/optimize_monitoring/2-standardize-data-collection/_index.md b/content/en/scenarios/optimize_monitoring/2-standardize-data-collection/_index.md
index eddb28def0..be212eb955 100644
--- a/content/en/scenarios/optimize_monitoring/2-standardize-data-collection/_index.md
+++ b/content/en/scenarios/optimize_monitoring/2-standardize-data-collection/_index.md
@@ -4,7 +4,7 @@ linkTitle: 2. Standardize Data Collection
 weight: 1
 authors: ["Tim Hard"]
 time: 2 minutes
-draft: true
+draft: false
 ---
 
 ## Why Standards Matter
diff --git a/content/en/scenarios/optimize_monitoring/3-reuse-content-across-teams/1-infrastructure-dashboards.md b/content/en/scenarios/optimize_monitoring/3-reuse-content-across-teams/1-infrastructure-dashboards.md
index 637fb3297d..10ad8fbfe4 100644
--- a/content/en/scenarios/optimize_monitoring/3-reuse-content-across-teams/1-infrastructure-dashboards.md
+++ b/content/en/scenarios/optimize_monitoring/3-reuse-content-across-teams/1-infrastructure-dashboards.md
@@ -4,7 +4,7 @@ linkTitle: 3.1 Infrastructure Navigators
 weight: 2
 authors: ["Tim Hard"]
 time: 5 minutes
-draft: true
+draft: false
 ---
 
 Splunk Infrastructure Monitoring (IM) is a market-leading monitoring and observability service for hybrid cloud environments. Built on a patented streaming architecture, it provides a **real-time** solution for engineering teams to visualize and analyze performance across infrastructure, services, and applications in a fraction of the time and with greater accuracy than traditional solutions.
diff --git a/content/en/scenarios/optimize_monitoring/3-reuse-content-across-teams/2-clone-dashboards.md b/content/en/scenarios/optimize_monitoring/3-reuse-content-across-teams/2-clone-dashboards.md
index 66e5e09620..bbf8deaa06 100644
--- a/content/en/scenarios/optimize_monitoring/3-reuse-content-across-teams/2-clone-dashboards.md
+++ b/content/en/scenarios/optimize_monitoring/3-reuse-content-across-teams/2-clone-dashboards.md
@@ -4,7 +4,7 @@ linkTitle: 3.2 Dashboard Cloning
 weight: 3
 authors: ["Tim Hard"]
 time: 5 minutes
-draft: true
+draft: false
 ---
 
 ITOps teams responsible for monitoring fleets of infrastructure frequently find themselves manually creating dashboards to visualize and analyze metrics, traces, and log data emanating from rapidly changing cloud-native workloads hosted in Kubernetes and serverless architectures, alongside existing on-premises systems. Moreover, due to the absence of a standardized troubleshooting workflow, teams often resort to creating numerous custom dashboards, each resembling the other in structure and content. As a result, administrative overhead skyrockets and MTTR slows.
diff --git a/content/en/scenarios/optimize_monitoring/3-reuse-content-across-teams/3-mirror-dashboards.md b/content/en/scenarios/optimize_monitoring/3-reuse-content-across-teams/3-mirror-dashboards.md
index bc78345a59..d68a1e021d 100644
--- a/content/en/scenarios/optimize_monitoring/3-reuse-content-across-teams/3-mirror-dashboards.md
+++ b/content/en/scenarios/optimize_monitoring/3-reuse-content-across-teams/3-mirror-dashboards.md
@@ -4,7 +4,7 @@ linkTitle: 3.3 Dashboard Mirroring
 weight: 3
 authors: ["Tim Hard"]
 time: 5 minutes
-draft: true
+draft: false
 ---
 
 Not only do the out-of-the-box dashboards provide rich visibility into the infrastructure that is being monitored, they can also be mirrored. This is important because it enables you to create standard dashboards for use by teams throughout your organization. This allows all teams to see any changes to the charts in the dashboard, and members of each team can set dashboard variables and filter customizations relevant to their requirements.
diff --git a/content/en/scenarios/optimize_monitoring/3-reuse-content-across-teams/_index.md b/content/en/scenarios/optimize_monitoring/3-reuse-content-across-teams/_index.md
index 0a1146bcd9..e66fce80f7 100644
--- a/content/en/scenarios/optimize_monitoring/3-reuse-content-across-teams/_index.md
+++ b/content/en/scenarios/optimize_monitoring/3-reuse-content-across-teams/_index.md
@@ -4,7 +4,7 @@ linkTitle: 3. Reuse Content Across Teams
 weight: 1
 authors: ["Tim Hard"]
 time: 3 minutes
-draft: true
+draft: false
 ---
 
 In today's rapidly evolving technological landscape, where hybrid and cloud environments are becoming the norm, the need for effective monitoring and troubleshooting solutions has never been more critical. However, managing the elasticity and complexity of these modern infrastructures poses a significant challenge for teams across various industries. One of the primary pain points encountered in this endeavor is the inadequacy of existing monitoring and troubleshooting experiences.
diff --git a/content/en/scenarios/optimize_monitoring/4-correlate-metrics-logs/1-correlate-metrics-and-logs.md b/content/en/scenarios/optimize_monitoring/4-correlate-metrics-logs/1-correlate-metrics-and-logs.md
index 0793c7347b..cece18e39e 100644
--- a/content/en/scenarios/optimize_monitoring/4-correlate-metrics-logs/1-correlate-metrics-and-logs.md
+++ b/content/en/scenarios/optimize_monitoring/4-correlate-metrics-logs/1-correlate-metrics-and-logs.md
@@ -4,7 +4,7 @@ linkTitle: 4.1 Correlate Metrics and Logs
 weight: 3
 authors: ["Tim Hard"]
 time: 5 minutes
-draft: true
+draft: false
 ---
 
 In this section, we'll dive into the seamless correlation of metrics and logs facilitated by the robust naming standards offered by **OpenTelemetry**. By harnessing the power of OpenTelemetry within **Splunk Observability Cloud**, we'll demonstrate how troubleshooting issues becomes significantly more efficient for Site Reliability Engineers (SREs) and operators. With this integration, contextualizing data across various telemetry sources no longer demands manual effort to correlate information. Instead, SREs and operators gain immediate access to the pertinent context they need, allowing them to swiftly pinpoint and resolve issues, improving system reliability and performance.
diff --git a/content/en/scenarios/optimize_monitoring/4-correlate-metrics-logs/2-create-log-based-chart.md b/content/en/scenarios/optimize_monitoring/4-correlate-metrics-logs/2-create-log-based-chart.md
index 1c5978c67b..23e0f15f6c 100644
--- a/content/en/scenarios/optimize_monitoring/4-correlate-metrics-logs/2-create-log-based-chart.md
+++ b/content/en/scenarios/optimize_monitoring/4-correlate-metrics-logs/2-create-log-based-chart.md
@@ -4,7 +4,7 @@ linkTitle: 4.2 Create Log-based Chart
 weight: 3
 authors: ["Tim Hard"]
 time: 5 minutes
-draft: true
+draft: false
 ---
 
 In Log Observer, you can perform codeless queries on logs to detect the source of problems in your systems. You can also extract fields from logs to set up log processing rules and transform your data as it arrives or send data to Infinite Logging S3 buckets for future use. See [What can I do with Log Observer?](https://docs.splunk.com/observability/en/logs/get-started-logs.html#logobserverfeatures) to learn more about Log Observer capabilities.
diff --git a/content/en/scenarios/optimize_monitoring/4-correlate-metrics-logs/_index.md b/content/en/scenarios/optimize_monitoring/4-correlate-metrics-logs/_index.md
index 88099355cc..8130671f95 100644
--- a/content/en/scenarios/optimize_monitoring/4-correlate-metrics-logs/_index.md
+++ b/content/en/scenarios/optimize_monitoring/4-correlate-metrics-logs/_index.md
@@ -4,7 +4,7 @@ linkTitle: 4. Correlate Metrics and Logs
 weight: 1
 authors: ["Tim Hard"]
 time: 1 minutes
-draft: true
+draft: false
 ---
 
 Correlating infrastructure metrics and logs is often a challenging task, primarily due to inconsistencies in naming conventions across various data sources, including hosts operating on different systems. However, leveraging the capabilities of **OpenTelemetry** can significantly simplify this process. With OpenTelemetry's robust framework, which offers rich metadata and attribution, metrics, traces, and logs can seamlessly correlate using standardized field names. This automated correlation not only alleviates the burden of manual effort but also enhances the overall observability of the system.
diff --git a/content/en/scenarios/optimize_monitoring/5-improve-alert-timeliness/1-create-custom-detector.md b/content/en/scenarios/optimize_monitoring/5-improve-alert-timeliness/1-create-custom-detector.md
index 41fd6bf6cd..adbb13e754 100644
--- a/content/en/scenarios/optimize_monitoring/5-improve-alert-timeliness/1-create-custom-detector.md
+++ b/content/en/scenarios/optimize_monitoring/5-improve-alert-timeliness/1-create-custom-detector.md
@@ -4,7 +4,7 @@ linkTitle: 5.1 Create Custom Detector
 weight: 5
 authors: ["Tim Hard"]
 time: 10 minutes
-draft: true
+draft: false
 ---
 
 Splunk Observability Cloud provides detectors, events, alerts, and notifications to keep you informed when certain criteria are met. There are a number of pre-built **AutoDetect Detectors** which automatically surface when common problem patterns occur, such as when an EC2 instance’s CPU utilization is expected to reach its limit. Additionally, you can also create custom detectors if you want something more optimized or specific. For example, you want a message sent to a Slack channel or to an email address for the Ops team that manages this kubernetes cluster when Memory Utilization on their pods has reached 85%.
diff --git a/content/en/scenarios/optimize_monitoring/5-improve-alert-timeliness/_index.md b/content/en/scenarios/optimize_monitoring/5-improve-alert-timeliness/_index.md
index 3a24312860..1cd72d1f94 100644
--- a/content/en/scenarios/optimize_monitoring/5-improve-alert-timeliness/_index.md
+++ b/content/en/scenarios/optimize_monitoring/5-improve-alert-timeliness/_index.md
@@ -4,7 +4,7 @@ linkTitle: 5. Improve Timeliness of Alerts
 weight: 1
 authors: ["Tim Hard"]
 time: 1 minutes
-draft: true
+draft: false
 ---
 
 When monitoring hybrid and cloud environments, ensuring timely alerts for critical infrastructure and applications poses a significant challenge. Typically, this involves crafting intricate queries, meticulously scheduling searches, and managing alerts across various monitoring solutions. Moreover, the proliferation of disparate alerts generated from identical data sources often results in unnecessary duplication, contributing to alert fatigue and noise within the monitoring ecosystem.
diff --git a/content/en/scenarios/optimize_monitoring/6-workshop-conclusion/_index.md b/content/en/scenarios/optimize_monitoring/6-workshop-conclusion/_index.md
index 4b8a4a4a61..172b707765 100644
--- a/content/en/scenarios/optimize_monitoring/6-workshop-conclusion/_index.md
+++ b/content/en/scenarios/optimize_monitoring/6-workshop-conclusion/_index.md
@@ -3,7 +3,7 @@ title: Conclusion
 linkTitle: 6. Conclusion
 weight: 1
 time: 1 minutes
-draft: true
+draft: false
 ---
 
 Today you’ve seen how Splunk Observability Cloud can help you overcome many of the challenges you face monitoring hybrid and cloud environments. You’ve demonstrated how **Splunk Observability Cloud** streamlines operations with standardized data collection and tags, ensuring consistency across all IT infrastructure. The Unified Service Telemetry has been a game-changer, providing in-context metrics, logs, and trace data that make troubleshooting swift and efficient. By enabling the reuse of content across teams, you’re minimizing technical debt and bolstering the performance of our monitoring systems.
diff --git a/content/en/scenarios/optimize_monitoring/_index.md b/content/en/scenarios/optimize_monitoring/_index.md
index 0080600dd3..9346544a73 100644
--- a/content/en/scenarios/optimize_monitoring/_index.md
+++ b/content/en/scenarios/optimize_monitoring/_index.md
@@ -5,7 +5,7 @@ weight: 1
 archetype: chapter
 authors: ["Tim Hard"]
 time: 3 minutes
-draft: true
+draft: false
 ---
 
 The elasticity of cloud architectures means that monitoring artifacts must scale elastically as well, breaking the paradigm of purpose-built monitoring assets. As a result, administrative overhead, visibility gaps, and tech debt skyrocket while MTTR slows. This typically happens for three reasons: