Commit bfb333f

Merge pull request #1387 from gravwell/merge/main_to_next-patch

chore: Merge main into next-patch

ashnwade authored Jan 29, 2025
2 parents 43d0603 + f89d155 commit bfb333f
Showing 66 changed files with 5,009 additions and 4,176 deletions.
8 changes: 6 additions & 2 deletions _static/versions.json
@@ -1,10 +1,14 @@
[
{
"name": "v5.7.0 (latest)",
"version": "v5.7.0",
"url": "/",
"preferred": true
},
{
"version": "v5.6.10",
"url": "/v5.6.10/"
},
{
"version": "v5.6.9",
"url": "/v5.6.9/"
1 change: 1 addition & 0 deletions architecture/architecture.md
@@ -29,6 +29,7 @@ Compression </configuration/compression>
Data Ageout </configuration/ageout>
Cloud Archive </configuration/archive>
Data Replication </configuration/replication>
Ditto Data Duplication </configuration/ditto>
Gravwell Searchagent </scripting/searchagent>
Gravwell Accelerators </configuration/accelerators>
Performance Tuning </tuning/tuning>
89 changes: 45 additions & 44 deletions cbac/cbac.md
@@ -39,65 +39,66 @@ In practice, it is less common to grant capabilities to individual users; instead

| Capability Name | Description |
|--------|-------|
| AttachSearch | Load a search by search ID. |
| BackgroundSearch | Execute a search in the background. |
| DashboardRead | View and launch a dashboard. |
| DashboardWrite | Create and edit a dashboard's searches and settings. |
| Download | Download search results. |
| ExtractorRead | View an extractor and use it in a query. |
| ExtractorWrite | Create and edit an extractor. |
| Ingest | Ingest data. |
| KitBuild | Build a kit. |
| KitDownload | Download a kit. |
| KitRead | View a kit and its contents. |
| KitWrite | Create and edit a kit. |
| LibraryRead | View and execute a saved query. |
| LibraryWrite | Create and edit a saved query. |
| LicenseRead | View the license. |
| ListGroupMembers | View the members of a group. |
| ListGroups | View the list of groups. |
| ListUsers | View the list of users. |
| MacroRead | View a macro and use it in a query. |
| MacroWrite | Create and edit a macro. |
| NotificationRead | View notifications. |
| NotificationWrite | Create and edit notifications. |
| PivotRead | View and click on an actionable. |
| PivotWrite | Create and edit an actionable. |
| PlaybookRead | View a playbook. |
| PlaybookWrite | Create and edit a playbook. |
| RemoteAIService | Interact with remotely hosted Logbot AI. |
| ResourceRead | View a resource and use it in a query. |
| ResourceWrite | Create and edit a resource. |
| SOAREmail | Send an email in a script or a flow. |
| SOARLibs | Import an external library into a script. |
| SaveSearch | Save a search and add notes. |
| ScheduleRead | View a flow, script, or scheduled search and its results. |
| ScheduleWrite | Create and edit a flow, script, or scheduled search. |
| Search | Search data and execute queries. |
| SearchAllHistory | View search history of items the user has access to. |
| SearchGroupHistory | View group search history. |
| SearchHistory | View search history of the authenticated user. |
| SecretRead | Read and access secrets. |
| SecretWrite | Create, update, and delete secrets. |
| SetSearchGroup | Assign a default group to searches. |
| Stats | View health statistics. |
| SystemInfoRead | View systems info. |
| TemplateRead | View and execute a template. |
| TemplateWrite | Create and edit a template. |
| TokenRead | View API tokens. |
| TokenWrite | Create and edit an API token. |
| UserFileRead | View a file. |
| UserFileWrite | Create and edit a file. |

### Determining a CBAC Grant

31 changes: 31 additions & 0 deletions changelog/5.7.0.md
@@ -0,0 +1,31 @@
# Changelog for version 5.7.0

## Released 29 January 2025

## Gravwell

### Additions

* Added [AI](/search/ai/ai) to help users explain search entries.
* Added a new streaming migration tool called [Ditto](/configuration/ditto) to move from one Gravwell instance to another.
* Added 24-hour clock format to [Preferences](#preferences).
* Added support for switch statements in [eval](/search/eval/eval).
* Added a "Run as" field to [Alert Consumers](#define-a-consumer) to make it more obvious which permissions will be used when the consumer is triggered.

### Improvements

* Improved logging for automation executions missed due to backfill being disabled.
* Improved logging for time-based ageout to include shard size and working time.
* Improved the order of timeframe selection options in the picker.
* Improved the right side pane in Query Studio to better manage history, query, library, and notes. These changes also support the addition of the AI Logbot chat.

### Bug Fixes

* Fixed an issue with grep providing hints on more than one inner word.
* Fixed an issue where forgotten ingesters would not be immediately removed from the list.
* Fixed an issue where storage-based ageout would migrate shards to cold storage even when the shard was already outside cold's retention period; such shards are now deleted directly instead.
* Fixed an issue where a single indexer could block all ingest.
* Fixed an issue where an ingest muxer sync could stall indefinitely.
* Fixed an issue where intermediate tag translation would quietly roll over to tag 0.
* Fixed an issue with dangling GIDs that could cause problems with group sharing.

3 changes: 2 additions & 1 deletion changelog/list.md
@@ -7,7 +7,7 @@
maxdepth: 1
caption: Current Release
---
5.7.0 <5.7.0>
```

## Previous Versions
@@ -18,6 +18,7 @@ maxdepth: 1
caption: Previous Releases
---
5.6.10 <5.6.10>
5.6.9 <5.6.9>
5.6.8 <5.6.8>
5.6.7 <5.6.7>
2 changes: 1 addition & 1 deletion conf.py
@@ -22,7 +22,7 @@
project = "Gravwell"
copyright = f"Gravwell, Inc. {date.today().year}"
author = "Gravwell, Inc."
release = "v5.7.0"

# Default to localhost:8000, so the version switcher looks OK on livehtml
version_list_url = os.environ.get(
78 changes: 78 additions & 0 deletions configuration/ditto.md
@@ -0,0 +1,78 @@
# Ditto Indexer Mirroring

Ditto is Gravwell's system for mirroring data from one cluster to another. Unlike replication, it copies data on the source system directly to the destination (or "target") system's *live* storage. This is useful when migrating to an entirely new cluster, or as a means to duplicate some data from one cluster to another.

Data is duplicated at the well level. A given well can be duplicated to one or more destinations. Entries are read from the source well and shipped to the destination, which files them into the appropriate well or wells depending on its own well configuration.

The following terminology will be used in this document:

* **Source cluster**: The indexer or indexers from which data will be cloned.
* **Target cluster**: The indexer or indexers to which data will be cloned.

In order to configure Ditto, you must first define one or more targets, then configure the desired wells to clone their data to the targets.

```{warning}
Entries from the source system will be incorporated directly into the destination/target system's live storage. Once the entries arrive on the destination, they will be indistinguishable from entries ingested to that system in the usual fashion, and it is essentially not possible to "undo" a ditto cloning without deleting the entire well on the destination side.
```

## Target Configuration

Ditto targets are defined in `/opt/gravwell/etc/gravwell.conf` (or a file in `/opt/gravwell/etc/gravwell.conf.d`). A Ditto target uses the same basic configuration block as an [ingester](ingesters_global_configuration_parameters), specifying indexer targets and ingest secrets. There are a few Ditto-specific options, too:

* **Start-Time**: If set to a timestamp (we recommend Unix epoch timestamps or RFC3339 format), this Ditto target will only be sent data from after that timestamp. Specifically, we will find the shard containing that timestamp and duplicate that shard and all following shards.
* **Unresolvable-Tag-Destination**: In some rare cases, the Ditto system may find entries in a shard whose tags do not correspond to any known tag (this can happen if you manually edited `tags.dat`, which is highly discouraged!). By default, these entries are dropped, but if `Unresolvable-Tag-Destination` is set, they will instead be re-tagged with the specified tag.

Here's an example of a simple target definition:

```
[Ditto-Target "new-cluster"]
Encrypted-Backend-Target=newidx1.example.org
Encrypted-Backend-Target=newidx2.example.org
Ingest-Secret=xyzzy
Start-Time="2024-01-01T00:00:00"
```
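Since `Start-Time` accepts either a Unix epoch timestamp or an RFC3339 timestamp, the epoch form of a desired cutoff can be computed with a few lines of Python (the date below is purely illustrative):

```python
from datetime import datetime, timezone

# Epoch equivalent of the RFC3339 timestamp "2024-01-01T00:00:00Z".
cutoff = datetime(2024, 1, 1, tzinfo=timezone.utc)
print(int(cutoff.timestamp()))  # 1704067200
```

Keep in mind the cutoff is shard-granular: Gravwell duplicates the whole shard containing the timestamp, so some entries slightly older than `Start-Time` may still be cloned.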
```{note}
If you are cloning data across a public network or any metered network connection, we highly suggest enabling transport compression by setting `Enable-Compression=true` inside the `Ditto-Target` configuration block.
```

## Well Configuration

To enable Ditto duplication for a given well, add the `Ditto-Target` parameter to the well's config block, e.g.:

```
[Default-Well]
Location=/opt/gravwell/storage/default/
Cold-Location=/opt/gravwell/cold_storage/default/
Hot-Duration=7d
Ditto-Target="new-cluster"
```

## Worker Configuration

By default, Ditto will only work on one well at a time. If you wish to duplicate multiple wells in parallel, set the `Ditto-Max-Workers` parameter in the `[Global]` section of your `gravwell.conf`. For example, to duplicate up to 4 wells at a time:

```
[Global]
Ditto-Max-Workers=4
```

## Ditto Stats

The Ditto subsystem will periodically emit stats messages into the `gravwell` tag. You can find these stats by running the following query:

```
tag=gravwell syslog Message=="ditto client stats"
```

Each message contains statistics about data transferred for a particular well to a particular Ditto target cluster. The following fields are populated:

* `well`: The well to which the stats apply.
* `entries`: The number of entries transferred for this well since the last stats update.
* `bytes`: The number of bytes transferred for this well since the last stats update.
* `duration`: The elapsed time since the last stats update.
* `Bps`: The approximate transfer rate, in *bytes* per second, over the duration.
* `target-name`: The name of the target cluster which received the data.

You can monitor your transfer rates with a query like this:

```
tag=gravwell syslog Message=="ditto client stats" Bps well "target-name" as target | stats mean(Bps) by well target | chart mean by well target
```
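The `Bps` field is approximately `bytes` divided by `duration` in seconds; a small sketch with made-up field values shows how to interpret it:

```python
# Hypothetical field values from a single ditto stats message.
bytes_transferred = 52_428_800  # 50 MiB moved since the last stats update
duration_seconds = 10.0         # elapsed time since the last stats update

bps = bytes_transferred / duration_seconds
print(bps)          # 5242880.0 bytes per second
print(bps / 2**20)  # 5.0 MiB per second
```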
76 changes: 68 additions & 8 deletions configuration/parameters.md
@@ -677,17 +677,42 @@ Example: `Disable-Indexer-Overload-Warning=true`
Description: If this parameter is set, the indexer will not send notifications when it considers itself 'overloaded'.

### **MFA-Required**
Applies to: Webserver
Default Value: false
Example: `MFA-Required=true`
Description: If set to true, all local (non-SSO) users will be required to configure and use MFA for authentication.

### **MFA-Issuer-Name**
Applies to: Webserver
Default Value: Gravwell
Example: `MFA-Issuer-Name="BigCo Gravwell Cluster"`
Description: Sets the "issuer" field for TOTP MFA authentication. This controls the name which will appear in your authentication application. The default, "Gravwell", is suitable for most cases.

### **Ditto-Max-Workers**
Applies to: Indexer
Default Value: 1
Example: `Ditto-Max-Workers=8`
Description: Sets the number of parallel worker processes for [Ditto](/configuration/ditto) transfers.

## AI

The `[AI]` configuration section controls [Logbot AI](/search/ai/ai).

```
[AI]
Enable=true
```

### **Enable**
Default Value: false
Example: `Enable=true`
Description: Enable the Logbot AI system.

### **AI-Server-URL**
Default Value: https://api.gravwell.ai/
Example: `AI-Server-URL="https://ai.example.com"`
Description: Alternate path for remote AI requests.
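Putting the two options together, a hypothetical `[AI]` block that enables Logbot AI and points it at an alternate endpoint (the URL here is illustrative, not a real service) might look like:

```
[AI]
Enable=true
AI-Server-URL="https://ai.example.com"
```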

## Password Control

The `[Password-Control]` configuration section can be used to enforce password complexity rules when users are created or passwords are changed. Options set in this block apply only to webservers. These complexity configuration rules do not apply when using Single Sign On.
@@ -846,7 +871,7 @@ Example: `Accelerator-Name=json`
Description: Setting the `Accelerator-Name` parameter (and the `Accelerator-Args` parameter) enables acceleration on the well. See [the acceleration documentation](/configuration/accelerators) for more information.

#### **Accelerator-Args**
Default Value:
Example: `Accelerator-Args="username hostname \"strange-field.with.specials\".subfield"`
Description: Setting the `Accelerator-Args` parameter (and the `Accelerator-Name` parameter) enables acceleration on the well. See [the acceleration documentation](/configuration/accelerators) for more information.

@@ -872,6 +897,41 @@ Default Value: false
Example: `Enable-Quarantine-Corrupted-Shards=true`
Description: If set, corrupted shards which cannot be recovered will be copied to a quarantine location for later analysis. By default, badly corrupted shards may be deleted.

#### **Ditto-Target**
Default Value:
Example: `Ditto-Target=new-cluster`
Description: Specifies that this well should be duplicated to the given [Ditto](/configuration/ditto) target. Can be specified multiple times to duplicate to multiple targets.
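Because `Ditto-Target` can be repeated, a single well can be cloned to several clusters at once. A sketch, where `new-cluster` and `offsite-copy` are hypothetical names that must match `[Ditto-Target "..."]` blocks defined elsewhere in the configuration:

```
[Default-Well]
Location=/opt/gravwell/storage/default/
Ditto-Target="new-cluster"
Ditto-Target="offsite-copy"
```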

## Ditto Target Configuration

The `[Ditto-Target]` section configures [Ditto](/configuration/ditto) target clusters, Gravwell clusters which will receive data duplicated from the local indexers' wells. This section is only applicable to indexers.

Here's an example of a simple target definition:

```
[Ditto-Target "new-cluster"]
Encrypted-Backend-Target=newidx1.example.org
Encrypted-Backend-Target=newidx2.example.org
Ingest-Secret=xyzzy
Start-Time="2024-01-01T00:00:00"
```

A Ditto target uses the same basic configuration block as an [ingester](ingesters_global_configuration_parameters), specifying indexer targets and ingest secrets.

The following parameters are specific to Ditto.

### **Start-Time**
Default Value:
Example: `Start-Time=2025-01-01T00:00:00`
Example: `Start-Time=1738100446`
Description: If set to a timestamp (we recommend Unix epoch timestamps or RFC3339 format), this Ditto target will only be sent data from after that timestamp. Specifically, we will find the shard containing that timestamp and duplicate that shard and all following shards.

### **Unresolvable-Tag-Destination**
Default Value:
Example: `Unresolvable-Tag-Destination=unresolvable_ditto_tag`
Description: In some rare cases, the Ditto system may find entries in a shard whose tags do not correspond to any known tag (this can happen if you manually edited `tags.dat`, which is highly discouraged!). By default, these entries are dropped, but if `Unresolvable-Tag-Destination` is set, they will instead be re-tagged with the specified tag.
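For illustration, a target block that preserves otherwise-unresolvable entries under a catch-all tag (the target name, host, and tag are placeholders):

```
[Ditto-Target "new-cluster"]
Encrypted-Backend-Target=newidx1.example.org
Ingest-Secret=xyzzy
Unresolvable-Tag-Destination=unresolvable_ditto_tag
```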


## Replication Configuration

The `[Replication]` section configures [Gravwell's replication capability](/configuration/replication). An example configuration might look like this: