Merge pull request #80 from grafana/staging
jdbaldry authored Jul 25, 2024
2 parents 6311eca + 8797765 commit 6e79375
Showing 11 changed files with 74 additions and 46 deletions.
20 changes: 19 additions & 1 deletion .github/workflows/regenerate-tutorials.yml
@@ -8,12 +8,18 @@ jobs:
if: github.repository == 'grafana/killercoda'
runs-on: ubuntu-latest
steps:
# Check out all the repositories that contain documentation sources from which we generate tutorials.
- uses: actions/checkout@v4
with:
repository: grafana/loki
# Change to `main` after this branch is merged.
ref: jdb/2024-06-killercoda-migration
path: loki-alt
- uses: actions/checkout@v4
with:
repository: grafana/loki
path: loki

- uses: actions/checkout@v4
with:
path: killercoda
@@ -25,11 +31,23 @@ jobs:
- run: ./scripts/check-out-branch.bash
shell: bash
working-directory: killercoda
# Run the transformer on all documentation sources.
- run: >
./transformer
"${GITHUB_WORKSPACE}/loki/docs/sources/get-started/quick-start.md"
"${GITHUB_WORKSPACE}/loki-alt/docs/sources/get-started/quick-start.md"
"${GITHUB_WORKSPACE}/killercoda/loki/loki-quickstart"
working-directory: killercoda/tools/transformer
- run: >
./transformer
"${GITHUB_WORKSPACE}/loki/docs/sources/send-data/alloy/examples/alloy-kafka-logs.md"
"${GITHUB_WORKSPACE}/killercoda/loki/alloy-kafka-logs"
working-directory: killercoda/tools/transformer
- run: >
./transformer
"${GITHUB_WORKSPACE}/loki/docs/sources/send-data/alloy/examples/alloy-otel-logs.md"
"${GITHUB_WORKSPACE}/killercoda/loki/alloy-otel-logs"
working-directory: killercoda/tools/transformer
- run: ./scripts/manage-pr.bash
env:
GH_TOKEN: ${{ github.token }}
2 changes: 1 addition & 1 deletion loki/alloy-kafka-logs/finish.md
@@ -4,7 +4,7 @@ In this example, we configured Alloy to ingest logs via Kafka. We configured All

## Back to Docs

Head back to where you started from to continue with the Loki documentation: [Loki documentation](https://grafana.com/docs/loki/latest/send-data/alloy)

# Further reading

16 changes: 4 additions & 12 deletions loki/alloy-kafka-logs/intro.md
@@ -1,26 +1,18 @@
# Sending Logs to Loki via Kafka using Alloy

Alloy natively supports receiving logs via Kafka. In this example, we will configure Alloy to receive logs via Kafka using two different methods:

- [loki.source.kafka](https://grafana.com/docs/alloy/latest/reference/components/loki.source.kafka): reads messages from Kafka using a consumer group and forwards them to other `loki.*`{{copy}} components.

- [otelcol.receiver.kafka](https://grafana.com/docs/alloy/latest/reference/components/otelcol.receiver.kafka/): accepts telemetry data from a Kafka broker and forwards it to other `otelcol.*`{{copy}} components.

## Dependencies

Before you begin, ensure you have the following to run the demo:

- Docker

- Docker Compose

## Scenario

In this scenario, we have a microservices application called the Carnivorous Greenhouse. This application consists of the following services:

- **User Service:** Manages user data and authentication for the application, such as creating users and logging in.

- **Plant Service:** Manages the creation of new plants and updates other services when a new plant is created.

- **Simulation Service:** Generates sensor data for each plant.

2 changes: 1 addition & 1 deletion loki/alloy-kafka-logs/step1.md
@@ -16,7 +16,7 @@ In this step, we will set up our environment by cloning the repository that cont

This will spin up the following services:
```console
✔ Container loki-fundamentals-grafana-1 Started
✔ Container loki-fundamentals-loki-1 Started
✔ Container loki-fundamentals-alloy-1 Started
```

22 changes: 17 additions & 5 deletions loki/alloy-kafka-logs/step2.md
@@ -2,15 +2,27 @@

In this first step, we will configure Alloy to ingest raw Kafka logs. To do this, we will update the `config.alloy`{{copy}} file to include the Kafka logs configuration.

## Open your Code Editor and Locate the `config.alloy`{{copy}} file

Grafana Alloy requires a configuration file to define the components and their relationships. The configuration file is written using Alloy configuration syntax. We will build the entire observability pipeline within this configuration file. To start, we will open the `config.alloy`{{copy}} file in the code editor:

**Note: Killercoda has a built-in code editor that can be accessed via the `Editor`{{copy}} tab.**

1. Expand the `loki-fundamentals`{{copy}} directory in the file explorer of the `Editor`{{copy}} tab.

1. Locate the `config.alloy`{{copy}} file in the `loki-fundamentals`{{copy}} directory (top-level directory).

1. Click on the `config.alloy`{{copy}} file to open it in the code editor.

You will copy all three of the following configuration snippets into the `config.alloy`{{copy}} file.

## Source logs from Kafka

First, we will configure the Loki Kafka source. `loki.source.kafka`{{copy}} reads messages from Kafka using a consumer group and forwards them to other `loki.*`{{copy}} components.

The component starts a new Kafka consumer group for the given arguments and fans out incoming entries to the list of receivers in `forward_to`{{copy}}.

Add the following configuration to the `config.alloy`{{copy}} file:

```alloy
loki.source.kafka "raw" {
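  // A minimal sketch of the remaining arguments, which are truncated in this
  // diff view. The broker address, topic, and label value are assumptions,
  // not necessarily the tutorial's exact values.
  brokers       = ["kafka:9092"]
  topics        = ["loki"]
  labels        = {service_name = "raw_kafka"}
  relabel_rules = loki.relabel.kafka.rules
  forward_to    = [loki.write.http.receiver]
}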
```

@@ -43,7 +55,7 @@ For more information on the `loki.source.kafka`{{copy}} configuration, see the [

Next, we will configure the Loki relabel rules. The `loki.relabel`{{copy}} component rewrites the label set of each log entry passed to its receiver by applying one or more relabeling rules and forwards the results to the list of receivers in the component’s arguments. In our case we are directly calling the rule from the `loki.source.kafka`{{copy}} component.

Now add the following configuration to the `config.alloy`{{copy}} file:

```alloy
loki.relabel "kafka" {
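  // Sketch: forward relabeled entries to the loki.write component defined
  // later, and promote the Kafka topic metadata into a "topic" label.
  // The exact rules are assumptions, not necessarily the tutorial's.
  forward_to = [loki.write.http.receiver]

  rule {
    source_labels = ["__meta_kafka_topic"]
    target_label  = "topic"
  }
}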
```

@@ -67,7 +79,7 @@ For more information on the `loki.relabel`{{copy}} configuration, see the [Loki

Lastly, we will configure the Loki write component. `loki.write`{{copy}} receives log entries from other loki components and sends them over the network using the Loki logproto format.

And finally, add the following configuration to the `config.alloy`{{copy}} file:

```alloy
loki.write "http" {
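  // Sketch: push logs to the Loki instance over HTTP. The URL assumes the
  // tutorial's local Loki container; adjust it for your environment.
  endpoint {
    url = "http://loki:3100/loki/api/v1/push"
  }
}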
```

@@ -91,7 +103,7 @@ Once added, save the file. Then run the following command to request Alloy to reload the configuration:

```bash
curl -X POST http://localhost:12345/-/reload
```{{exec}}

The new configuration will be loaded. You can verify this by checking the Alloy UI: [http://localhost:12345]({{TRAFFIC_HOST1_12345}}).

# Stuck? Need help?
16 changes: 9 additions & 7 deletions loki/alloy-kafka-logs/step3.md
@@ -2,13 +2,15 @@

Next, we will configure Alloy to also ingest OpenTelemetry logs via Kafka. To do this, we need to update the Alloy configuration file once again. We will add the new components to the `config.alloy`{{copy}} file along with the existing components.

## Open your Code Editor and Locate the `config.alloy`{{copy}} file

Like before, we generate our next pipeline configuration within the same `config.alloy`{{copy}} file. You will add the following configuration snippets to the file **in addition** to the existing configuration. Essentially, we are configuring two pipelines within the same Alloy configuration file.

## Source OpenTelemetry logs from Kafka

First, we will configure the OpenTelemetry Kafka receiver. `otelcol.receiver.kafka`{{copy}} accepts telemetry data from a Kafka broker and forwards it to other `otelcol.*`{{copy}} components.

Now add the following configuration to the `config.alloy`{{copy}} file:

```alloy
otelcol.receiver.kafka "default" {
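  // A minimal sketch of the remaining arguments, which are truncated in this
  // diff view. The broker, topic, and encoding are assumptions, not
  // necessarily the tutorial's exact values.
  brokers          = ["kafka:9092"]
  protocol_version = "2.0.0"
  topic            = "otlp"
  encoding         = "otlp_proto"

  output {
    logs = [otelcol.processor.batch.default.input]
  }
}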
```

@@ -41,7 +43,7 @@ For more information on the `otelcol.receiver.kafka`{{copy}} configuration, see

Next, we will configure an OpenTelemetry processor. `otelcol.processor.batch`{{copy}} accepts telemetry data from other `otelcol`{{copy}} components and places them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. This processor supports both size- and time-based batching.

Now add the following configuration to the `config.alloy`{{copy}} file:

```alloy
otelcol.processor.batch "default" {
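  // Sketch: batch with the component defaults and forward logs to the
  // exporter defined below (the component label "default" is an assumption).
  output {
    logs = [otelcol.exporter.otlphttp.default.input]
  }
}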
```

@@ -61,7 +63,7 @@ For more information on the `otelcol.processor.batch`{{copy}} configuration, see

Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp`{{copy}} accepts telemetry data from other `otelcol`{{copy}} components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki’s native OTLP endpoint.

Finally, add the following configuration to the `config.alloy`{{copy}} file:

```alloy
otelcol.exporter.otlphttp "default" {
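  // Sketch: write to Loki's native OTLP endpoint. The URL assumes the
  // tutorial's local Loki container; adjust it for your environment.
  client {
    endpoint = "http://loki:3100/otlp"
  }
}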
```

@@ -85,11 +87,11 @@ Once added, save the file. Then run the following command to request Alloy to reload the configuration:

```bash
curl -X POST http://localhost:12345/-/reload
```{{exec}}

The new configuration will be loaded. You can verify this by checking the Alloy UI: [http://localhost:12345]({{TRAFFIC_HOST1_12345}}).

# Stuck? Need help (Full Configuration)?

If you get stuck or need help creating the configuration, you can copy and replace the entire `config.alloy`{{copy}}. This differs from the previous `Stuck? Need help`{{copy}} section: here we replace the entire configuration file with the completed configuration file, rather than just adding the first Loki raw pipeline configuration.

```bash
cp loki-fundamentals/completed/config.alloy loki-fundamentals/config.alloy
```

2 changes: 1 addition & 1 deletion loki/alloy-kafka-logs/step4.md
@@ -10,7 +10,7 @@ docker-compose -f loki-fundamentals/greenhouse/docker-compose-micro.yml up -d --

This will start the following services:
```console
✔ Container greenhouse-db-1 Started
✔ Container greenhouse-websocket_service-1 Started
✔ Container greenhouse-bug_service-1 Started
```

2 changes: 1 addition & 1 deletion loki/alloy-otel-logs/finish.md
@@ -4,7 +4,7 @@ In this example, we configured Alloy to ingest OpenTelemetry logs and send them

## Back to Docs

Head back to where you started from to continue with the Loki documentation: [Loki documentation](https://grafana.com/docs/loki/latest/send-data/alloy)

# Further reading

12 changes: 2 additions & 10 deletions loki/alloy-otel-logs/intro.md
@@ -8,21 +8,13 @@ Alloy natively supports receiving logs in the OpenTelemetry format. This allows

- **OpenTelemetry Exporter:** This component will accept telemetry data from other `otelcol.*`{{copy}} components and write them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki’s native OTLP endpoint.

## Dependencies

Before you begin, ensure you have the following to run the demo:

- Docker

- Docker Compose

## Scenario

In this scenario, we have a microservices application called the Carnivorous Greenhouse. This application consists of the following services:

- **User Service:** Manages user data and authentication for the application, such as creating users and logging in.

- **Plant Service:** Manages the creation of new plants and updates other services when a new plant is created.

- **Simulation Service:** Generates sensor data for each plant.

2 changes: 1 addition & 1 deletion loki/alloy-otel-logs/step1.md
@@ -16,7 +16,7 @@ In this step, we will set up our environment by cloning the repository that cont

This will spin up the following services:
```console
✔ Container loki-fundamentals-grafana-1 Started
✔ Container loki-fundamentals-loki-1 Started
✔ Container loki-fundamentals-alloy-1 Started
```

24 changes: 18 additions & 6 deletions loki/alloy-otel-logs/step2.md
@@ -2,13 +2,25 @@

To configure Alloy to ingest OpenTelemetry logs, we need to update the Alloy configuration file. To start, we will update the `config.alloy`{{copy}} file to include the OpenTelemetry logs configuration.

## Open your Code Editor and Locate the `config.alloy`{{copy}} file

Grafana Alloy requires a configuration file to define the components and their relationships. The configuration file is written using Alloy configuration syntax. We will build the entire observability pipeline within this configuration file. To start, we will open the `config.alloy`{{copy}} file in the code editor:

**Note: Killercoda has a built-in code editor that can be accessed via the `Editor`{{copy}} tab.**

1. Expand the `loki-fundamentals`{{copy}} directory in the file explorer of the `Editor`{{copy}} tab.

1. Locate the `config.alloy`{{copy}} file in the top-level directory, `loki-fundamentals`{{copy}}.

1. Click on the `config.alloy`{{copy}} file to open it in the code editor.

You will copy all three of the following configuration snippets into the `config.alloy`{{copy}} file.

## Receive OpenTelemetry logs via gRPC and HTTP

First, we will configure the OpenTelemetry receiver. `otelcol.receiver.otlp`{{copy}} accepts logs in the OpenTelemetry format via HTTP and gRPC. We will use this receiver to receive logs from the Carnivorous Greenhouse application.

Now add the following configuration to the `config.alloy`{{copy}} file:

```alloy
otelcol.receiver.otlp "default" {
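  // Sketch: accept OTLP logs over both gRPC and HTTP on the component's
  // default ports (4317 and 4318), then hand them to the batch processor.
  grpc {}
  http {}

  output {
    logs = [otelcol.processor.batch.default.input]
  }
}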
```

@@ -33,9 +45,9 @@ For more information on the `otelcol.receiver.otlp`{{copy}} configuration, see t

## Create batches of logs using an OpenTelemetry Processor

Next, we will configure an OpenTelemetry processor. `otelcol.processor.batch`{{copy}} accepts telemetry data from other `otelcol`{{copy}} components and places them into batches. Batching improves the compression of data and reduces the number of outgoing network requests required to transmit data. This processor supports both size- and time-based batching.

Now add the following configuration to the `config.alloy`{{copy}} file:

```alloy
otelcol.processor.batch "default" {
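  // Sketch: batch with the component defaults and forward logs to the
  // exporter defined below (the component label "default" is an assumption).
  output {
    logs = [otelcol.exporter.otlphttp.default.input]
  }
}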
```

@@ -53,9 +65,9 @@ For more information on the `otelcol.processor.batch`{{copy}} configuration, see

## Export logs to Loki using an OpenTelemetry Exporter

Lastly, we will configure the OpenTelemetry exporter. `otelcol.exporter.otlphttp`{{copy}} accepts telemetry data from other `otelcol`{{copy}} components and writes them over the network using the OTLP HTTP protocol. We will use this exporter to send the logs to Loki’s native OTLP endpoint.

Now add the following configuration to the `config.alloy`{{copy}} file:

```alloy
otelcol.exporter.otlphttp "default" {
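  // Sketch: write to Loki's native OTLP endpoint. The URL assumes the
  // tutorial's local Loki container; adjust it for your environment.
  client {
    endpoint = "http://loki:3100/otlp"
  }
}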
```

@@ -75,7 +87,7 @@ Once added, save the file. Then run the following command to request Alloy to reload the configuration:

```bash
curl -X POST http://localhost:12345/-/reload
```{{exec}}

The new configuration will be loaded. You can verify this by checking the Alloy UI: [http://localhost:12345]({{TRAFFIC_HOST1_12345}}).

# Stuck? Need help?
