
[kibana] Simplify ECS formatted logs ingest pipelines #4175

Merged
2 changes: 1 addition & 1 deletion packages/kibana/_dev/build/build.yml
@@ -1,3 +1,3 @@
dependencies:
  ecs:
-    reference: git@1.12
+    reference: git@8.4
2 changes: 1 addition & 1 deletion packages/kibana/_dev/build/docs/README.md
@@ -8,7 +8,7 @@ If the Kibana instance is using a basepath in its URL, you must set the `basepat

## Compatibility

-The `kibana` package works with Kibana 6.7.0 and later.
+The `kibana` package works with Kibana 8.5.0 and later.

## Usage for Stack Monitoring

3 changes: 3 additions & 0 deletions packages/kibana/_dev/deploy/docker/.env
@@ -0,0 +1,3 @@
ELASTIC_PASSWORD=changeme
ELASTIC_VERSION=8.5.0-SNAPSHOT
Contributor:

Nice. We should export this variable in the logstash/es packages as well; that'll make our life easier for #4013.

Contributor:

Logstash is done, and elasticsearch is pending: https://github.com/elastic/integrations/pull/4255/files

KIBANA_PASSWORD=changeme
6 changes: 6 additions & 0 deletions packages/kibana/_dev/deploy/docker/config/elasticsearch.yml
@@ -0,0 +1,6 @@
network.host: ""
transport.host: "127.0.0.1"
http.host: "0.0.0.0"
indices.id_field_data.enabled: true
xpack.license.self_generated.type: "trial"
xpack.security.enabled: true
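A quick way to confirm that security and the trial license took effect once the stack is running is to query the license API through the host port published by the compose file below (9201). This is a hedged sketch; the credentials assume the .env defaults.

# Hypothetical sanity check; assumes ELASTIC_PASSWORD is still "changeme".
curl -s -u elastic:changeme http://127.0.0.1:9201/_license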
30 changes: 30 additions & 0 deletions packages/kibana/_dev/deploy/docker/config/kibana.yml
@@ -0,0 +1,30 @@
server.name: kibana
Contributor:

Oh, I think I forgot to mention this (maybe it got lost in the GitHub review cycle somewhere), but I opened crespocarlos#2 with an idea for simplifying the setup here.

Contributor Author (@crespocarlos, Sep 15, 2022):

Aha, thanks. There's only one thing missing there: we need to reset the kibana_system user's password. It will still generate logs without that, though. Nevermind, it works :). I'll just try to get audit logs there.

Contributor Author:

I had to slightly complicate it in order to get audit logs there.

Contributor:

Kind of surprised audit logging would require use of a particular user, but I guess if it works it works :)

server.host: "0.0.0.0"
elasticsearch.hosts: ["http://elasticsearch:9200"]
elasticsearch.username: "kibana_system"
elasticsearch.password: "changeme"
elasticsearch.ssl.verificationMode: "none"

xpack.security.audit.enabled: true
xpack.security.audit.appender:
  type: rolling-file
  fileName: ./logs/audit.log
  policy:
    type: time-interval
    interval: 24h
  strategy:
    type: numeric
    max: 10
  layout:
    type: json

logging:
  root:
    level: all
    appenders: [default, file]
  appenders:
    file:
      type: file
      fileName: ./logs/kibana.log
      layout:
        type: json
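Both appenders emit JSON lines, so the output is easy to spot-check from the mounted logs directory. A minimal sketch, assuming jq is installed:

# Hypothetical spot check of the JSON log layout.
tail -n 5 logs/kibana.log | jq -r '[."@timestamp", .log.level, .message] | @tsv'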
48 changes: 48 additions & 0 deletions packages/kibana/_dev/deploy/docker/docker-compose.yml
@@ -0,0 +1,48 @@
version: "2.3"
services:
elasticsearch:
image: "docker.elastic.co/elasticsearch/elasticsearch:${ELASTIC_VERSION}"
environment:
- "ES_JAVA_OPTS=-Xms1g -Xmx1g"
- "ELASTIC_PASSWORD=${ELASTIC_PASSWORD}"
ports:
- "127.0.0.1:9201:9200"
volumes:
- "./config/elasticsearch.yml:/usr/share/elasticsearch/config/elasticsearch.yml"
healthcheck:
test: curl -sfo /dev/null -u elastic:${ELASTIC_PASSWORD} localhost:9200 || exit 1
retries: 300
interval: 1s
setup:
image: "alpine/curl:latest"
environment:
- "ES_SERVICE_HOST=http://elasticsearch:9200"
- "KIBANA_PASSWORD=changeme"
Contributor:

You can do this to have it use the .env value:

Suggested change
-      - "KIBANA_PASSWORD=changeme"
+      - KIBANA_PASSWORD

Contributor Author:

Good catch.

command: ["/bin/sh", "./setup.sh"]
volumes:
- ./scripts/setup.sh:/setup.sh
kibana:
image: "docker.elastic.co/kibana/kibana:${ELASTIC_VERSION}"
depends_on:
elasticsearch:
condition: service_healthy
volumes:
- ./config/kibana.yml:/usr/share/kibana/config/kibana.yml
- ${SERVICE_LOGS_DIR}:/usr/share/kibana/logs
ports:
- "127.0.0.1:5602:5601"
healthcheck:
test: curl -sfo /dev/null localhost:5601 || exit 1
retries: 600
interval: 1s
log_generation:
image: "alpine/curl:latest"
depends_on:
kibana:
condition: service_healthy
environment:
- "KIBANA_SERVICE_HOST=http://kibana:5601"
- "KIBANA_PASSWORD=changeme"
command: ["/bin/sh", "./generate-audit-logs.sh"]
volumes:
- ./scripts/generate-audit-logs.sh:/generate-audit-logs.sh
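elastic-package drives this compose file during system tests, but it can also be exercised by hand. A minimal sketch, assuming SERVICE_LOGS_DIR points at a writable directory:

# Hypothetical manual run of the deploy definition.
export SERVICE_LOGS_DIR="$(mktemp -d)"
docker compose up -d
docker compose ps  # elasticsearch must report healthy before kibana starts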
24 changes: 24 additions & 0 deletions packages/kibana/_dev/deploy/docker/scripts/generate-audit-logs.sh
@@ -0,0 +1,24 @@
#!/bin/sh

# Makes requests to the Kibana API so that audit logs are generated

set -e

until curl --request GET \
  --user "elastic:$KIBANA_PASSWORD" \
  --url "$KIBANA_SERVICE_HOST/login" \
  --header 'Content-Type: application/json'
do sleep 10;
done;

while true
do
  echo Generating audit logs

  curl --request GET \
    --user "elastic:$KIBANA_PASSWORD" \
    --url "$KIBANA_SERVICE_HOST/api/features" \
    --header 'Content-Type: application/json'

  sleep 10
done;
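Since the kibana service mounts ${SERVICE_LOGS_DIR} at /usr/share/kibana/logs, the events generated by this loop should appear on the host. A hedged check, assuming jq:

# Hypothetical check that audit events are being written.
tail -n 1 "${SERVICE_LOGS_DIR}/audit.log" | jq '.event.action'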
12 changes: 12 additions & 0 deletions packages/kibana/_dev/deploy/docker/scripts/setup.sh
@@ -0,0 +1,12 @@
#!/bin/sh

# Sets a password for the kibana_system user

set -e

until curl --request POST "$ES_SERVICE_HOST/_security/user/kibana_system/_password" \
  --user elastic:changeme \
  --header 'Content-Type: application/json' \
  --data "{\"password\":\"$KIBANA_PASSWORD\"}"
Contributor:

I went looking at how the ES docker docs did this to see if we should copy it: https://github.com/elastic/elasticsearch/blob/main/docs/reference/setup/install/docker/docker-compose.yml#L55-L56

But I think I like your rendition better :)

do sleep 10;
done;
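To confirm the password reset succeeded, you can authenticate as kibana_system directly; _security/_authenticate returns the user's details if the credentials work. A sketch reusing the script's environment variables:

# Hypothetical verification of the new kibana_system password.
curl -s -u "kibana_system:$KIBANA_PASSWORD" "$ES_SERVICE_HOST/_security/_authenticate"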
@@ -0,0 +1,2 @@
dynamic_fields:
  event.ingested: ".*"
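This tells the pipeline test runner to match event.ingested against a regex instead of a literal value, since it changes on every run. When the pipeline itself changes, the expected files can be regenerated rather than edited by hand; a sketch, assuming elastic-package is installed:

# Regenerate the *-expected.json files for the package's pipeline tests.
cd packages/kibana
elastic-package test pipeline --generate -v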
@@ -0,0 +1,3 @@
{"event":{"action":"http_request","category":["web"],"outcome":"unknown"},"http":{"request":{"method":"get"}},"url":{"domain":"kibana","path":"/api/fleet/enrollment_api_keys/e0a0d409-b22f-4cd0-b417-c934c621ce07","port":5601,"scheme":"https"},"user":{"name":"elastic","roles":["superuser"]},"kibana":{"space_id":"default"},"trace":{"id":"2eeefc09-26a2-4aea-840d-9170b0a9c95f"},"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-09-09T13:15:24.041+00:00","message":"User is requesting [/api/fleet/enrollment_api_keys/e0a0d409-b22f-4cd0-b417-c934c621ce07] endpoint","log":{"level":"INFO","logger":"plugins.security.audit.ecs"},"process":{"pid":7},"transaction":{"id":"858c45e94edb1814"}}
{"event":{"action":"user_login","category":["authentication"],"outcome":"success"},"user":{"name":"elastic","roles":["superuser"]},"kibana":{"session_id":"Bv1bSOJ7dMYoppdDmlCiPfP1v8Q7JHXDpc2mOrfoxJs=","authentication_provider":"basic","authentication_type":"basic","authentication_realm":"reserved","lookup_realm":"reserved"},"trace":{"id":"5233d304-16b6-479b-9e45-a906107a5f53"},"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-09-09T13:16:57.990+00:00","message":"User [elastic] has logged in using basic provider [name=basic]","log":{"level":"INFO","logger":"plugins.security.audit.ecs"},"process":{"pid":7},"transaction":{"id":"2fd40f9b4f4ca767"}}
{"event":{"action":"space_get","category":["database"],"type":["access"],"outcome":"success"},"kibana":{"space_id":"default","session_id":"Bv1bSOJ7dMYoppdDmlCiPfP1v8Q7JHXDpc2mOrfoxJs=","saved_object":{"type":"space","id":"default"}},"user":{"name":"elastic","roles":["superuser"]},"trace":{"id":"fbed6b3b-4e1c-4525-9597-391d5f718e89"},"service":{"node":{"roles":["background_tasks","ui"]}},"ecs":{"version":"8.4.0"},"@timestamp":"2022-09-09T13:16:58.044+00:00","message":"User has accessed space [id=default]","log":{"level":"INFO","logger":"plugins.security.audit.ecs"},"process":{"pid":7},"transaction":{"id":"256b899bfffe8b3f"}}
@@ -0,0 +1,168 @@
{
"expected": [
{
"@timestamp": "2022-09-09T13:15:24.041+00:00",
"ecs": {
"version": "8.4.0"
},
"event": {
"action": "http_request",
"category": [
"web"
],
"created": "2022-09-09T13:15:24.041+00:00",
"ingested": "2022-09-09T13:44:23.397431381Z",
"kind": "event",
"outcome": "unknown"
},
"http": {
"request": {
"method": "get"
}
},
"kibana": {
"space_id": "default"
},
"log": {
"level": "INFO",
"logger": "plugins.security.audit.ecs"
},
"message": "User is requesting [/api/fleet/enrollment_api_keys/e0a0d409-b22f-4cd0-b417-c934c621ce07] endpoint",
"process": {
"pid": 7
},
"service": {
"node": {
"roles": [
"background_tasks",
"ui"
]
}
},
"trace": {
"id": "2eeefc09-26a2-4aea-840d-9170b0a9c95f"
},
"transaction": {
"id": "858c45e94edb1814"
},
"url": {
"domain": "kibana",
"path": "/api/fleet/enrollment_api_keys/e0a0d409-b22f-4cd0-b417-c934c621ce07",
"port": 5601,
"scheme": "https"
},
"user": {
"name": "elastic",
"roles": [
"superuser"
]
}
},
{
"@timestamp": "2022-09-09T13:16:57.990+00:00",
"ecs": {
"version": "8.4.0"
},
"event": {
"action": "user_login",
"category": [
"authentication"
],
"created": "2022-09-09T13:16:57.990+00:00",
"ingested": "2022-09-09T13:44:23.397439256Z",
"kind": "event",
"outcome": "success"
},
"kibana": {
"authentication_provider": "basic",
"authentication_realm": "reserved",
"authentication_type": "basic",
"lookup_realm": "reserved",
"session_id": "Bv1bSOJ7dMYoppdDmlCiPfP1v8Q7JHXDpc2mOrfoxJs="
},
"log": {
"level": "INFO",
"logger": "plugins.security.audit.ecs"
},
"message": "User [elastic] has logged in using basic provider [name=basic]",
"process": {
"pid": 7
},
"service": {
"node": {
"roles": [
"background_tasks",
"ui"
]
}
},
"trace": {
"id": "5233d304-16b6-479b-9e45-a906107a5f53"
},
"transaction": {
"id": "2fd40f9b4f4ca767"
},
"user": {
"name": "elastic",
"roles": [
"superuser"
]
}
},
{
"@timestamp": "2022-09-09T13:16:58.044+00:00",
"ecs": {
"version": "8.4.0"
},
"event": {
"action": "space_get",
"category": [
"database"
],
"created": "2022-09-09T13:16:58.044+00:00",
"ingested": "2022-09-09T13:44:23.397440547Z",
"kind": "event",
"outcome": "success",
"type": [
"access"
]
},
"kibana": {
"saved_object": {
"id": "default",
"type": "space"
},
"session_id": "Bv1bSOJ7dMYoppdDmlCiPfP1v8Q7JHXDpc2mOrfoxJs=",
"space_id": "default"
},
"log": {
"level": "INFO",
"logger": "plugins.security.audit.ecs"
},
"message": "User has accessed space [id=default]",
"process": {
"pid": 7
},
"service": {
"node": {
"roles": [
"background_tasks",
"ui"
]
}
},
"trace": {
"id": "fbed6b3b-4e1c-4525-9597-391d5f718e89"
},
"transaction": {
"id": "256b899bfffe8b3f"
},
"user": {
"name": "elastic",
"roles": [
"superuser"
]
}
}
]
}
11 changes: 3 additions & 8 deletions packages/kibana/data_stream/audit/agent/stream/log.yml.hbs
@@ -3,12 +3,7 @@ paths:
- {{path}}
{{/each}}
exclude_files: [".gz$"]
{{#if processors}}
processors:
- add_locale: ~
- add_fields:
    target: ''
    fields:
      ecs.version: 1.10.0
- decode_json_fields:
    fields: [message]
    target: kibana._audit_temp
{{processors}}
{{/if}}
@@ -1,22 +1,24 @@
---
description: Pipeline for parsing Kibana audit logs
processors:
  - set:
      field: event.ingested
      value: '{{_ingest.timestamp}}'
  - rename:
      field: '@timestamp'
      target_field: event.created
  - pipeline:
      name: '{{ IngestPipeline "pipeline-json" }}'
  - set:
      field: event.kind
      value: event
  - append:
      field: related.user
      value: "{{user.name}}"
      if: "ctx?.user?.name != null"
  - pipeline:
      name: '{{ IngestPipeline "pipeline-json" }}'
      if: |-
Contributor Author:

This conditional was copied from the platform-observability package. It does seem like overkill.

Contributor:

Or if there's a reason for it, that's fine. But it'd be good to be consistent about how we say "this is a JSON message".

Contributor Author:

Now looking at both approaches: here we need 4 processors in total to achieve what this does with 1. Besides, in the elasticsearch pipeline the document is apparently dropped if the message is not JSON, whereas here the document is still ingested.

        def message = ctx.message;
        return message != null
        && message.startsWith('{')
        && message.endsWith('}')
        && message.contains('"@timestamp"')
  - set:
      copy_from: "@timestamp"
      field: event.created
  - set:
      field: event.ingested
      value: "{{_ingest.timestamp}}"
  - set:
      field: event.kind
      value: event
on_failure:
  - set:
      field: error.message
      value: '{{ _ingest.on_failure_message }}'
  - set:
      field: error.message
      value: "{{ _ingest.on_failure_message }}"