Merge pull request #1310 from ashnwade/patch-to-main
Release/v5.6.8: Merge next-patch to main
kris-watts-gravwell authored Dec 16, 2024
2 parents 02d8f43 + 8228632 commit e262c0d
Showing 16 changed files with 595 additions and 31 deletions.
8 changes: 6 additions & 2 deletions _static/versions.json
@@ -1,10 +1,14 @@
 [
   {
-    "name": "v5.6.7 (latest)",
-    "version": "v5.6.7",
+    "name": "v5.6.8 (latest)",
+    "version": "v5.6.8",
     "url": "/",
     "preferred": true
   },
+  {
+    "version": "v5.6.7",
+    "url": "/v5.6.7/"
+  },
   {
     "version": "v5.6.6",
     "url": "/v5.6.6/"
16 changes: 16 additions & 0 deletions changelog/5.6.8.md
@@ -0,0 +1,16 @@
+# Changelog for version 5.6.8
+
+## Released 16 December 2024
+
+## Gravwell
+
+### Additions
+* Added hotkeys to support auto-closing pairs for `"`, `(`, `[`, and `{` in the query editor.
+* Added strict transport security header when running in TLS mode.
+* Added support for start/end constraints in inner queries when using compound queries.
+
+### Bug Fixes
+
+* Fixed an issue where JavaScript returning `undefined` could improperly halt execution of a Flow.
+* Fixed an issue with macro expansion with invalid macros.
+* Improved timestamp processing to truncate subsecond precision when using start/end constraints.
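The strict transport security addition above means the webserver emits the standard HSTS response header when TLS mode is enabled. A minimal sketch of that header follows; the `max-age` value is an assumed placeholder, not Gravwell's documented setting:

```
# Standard HSTS header form; the max-age shown here is an assumption, not Gravwell's documented value
Strict-Transport-Security: max-age=31536000
```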
3 changes: 2 additions & 1 deletion changelog/list.md
@@ -7,7 +7,7 @@
maxdepth: 1
caption: Current Release
---
-5.6.7 <5.6.7>
+5.6.8 <5.6.8>
```

## Previous Versions
@@ -18,6 +18,7 @@
maxdepth: 1
caption: Previous Releases
---
+5.6.7 <5.6.7>
5.6.6 <5.6.6>
5.6.5 <5.6.5>
5.6.4 <5.6.4>
2 changes: 1 addition & 1 deletion conf.py
@@ -21,7 +21,7 @@
project = "Gravwell"
copyright = f"Gravwell, Inc. {date.today().year}"
author = "Gravwell, Inc."
release = "v5.6.7"
release = "v5.6.8"

# Default to localhost:8000, so the version switcher looks OK on livehtml
version_list_url = os.environ.get(
6 changes: 5 additions & 1 deletion configuration/accelerators.md
@@ -249,6 +249,11 @@ Note that the tag `zeekconn` can be matched against both accelerators, however t
Tags=foo*
```

+(intrinsic-acceleration-target)=
+## Acceleration with Intrinsic Enumerated Values
+
+When acceleration is enabled, [intrinsic enumerated values](#attach-target) will always be accelerated with the fulltext engine. This enables queries using the [intrinsic](/search/intrinsic/intrinsic) module to be accelerated. No specific configuration is required for acceleration with intrinsic EVs other than having acceleration enabled.
+
## Fulltext

The fulltext accelerator is designed to index words within text logs and is considered the most flexible acceleration option. Many of the other search modules support invoking the fulltext accelerator when executing queries. However, the primary search module for engaging with the fulltext accelerator is the [grep](/search/grep/grep) module with the `-w` flag. Much like the Unix grep utility, `grep -w` specifies that the provided filter is expected to be a word, rather than a subset of bytes. Running a search with `words foo bar baz` will look for the words foo, bar, and baz and engage the fulltext accelerator.
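As a minimal illustration of the above, both of the following queries engage the fulltext accelerator when it is enabled for the well; `default` is a placeholder tag:

```
tag=default words foo bar baz
tag=default grep -w foo
```

The first uses the words module to match whole words, the second performs the equivalent word-bounded filter through `grep -w`.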
@@ -686,7 +691,6 @@ The results show why fulltext may often be worth the storage and ingest penalty:
| fulltextindex | 2.99s | 12.49X |
| fulltextbloom | 3.40s | 12.49X |


#### Query AX modules

The AX definition file for all four tags is below, see the [AX](/configuration/autoextractors) documentation for more information:
2 changes: 1 addition & 1 deletion configuration/parameters.md
@@ -902,7 +902,7 @@ Description: Sets the storage location for data replicated from other Gravwell i
### **Max-Replicated-Data-GB**
Default Value:
Example: `Max-Replicated-Data-GB=100`
-Description: Sets, in gigabytes, the maximum amount of replicated data to store. When this is exceeded, the indexer will begin walking the replicated data to clean up; it will first remove any shards which have been deleted on the original indexer, then it will begin deleting the oldest shards. Once the storage size is below the limit, deletion will stop.
+Description: Sets, in gigabytes, the maximum amount of replicated data to store. When this is exceeded, the indexer will begin walking the replicated data to clean up; it will first remove any shards which have been deleted on the original indexer, then cold shards, and finally the oldest shards. Once the storage size is below the limit, deletion will stop.

### **Replication-Secret-Override**
Default Value:
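A minimal sketch of how `Max-Replicated-Data-GB` fits into an indexer's replication configuration, using only parameters that appear on this page; the `[Replication]` section name is an assumption based on the replication documentation below, and a real deployment also needs peer and storage-location settings that are omitted here:

```
[Replication]
	# Cap replicated storage at 100GB; cleanup removes deleted shards first, then cold shards, then the oldest shards
	Max-Replicated-Data-GB=100
	# Only needed when indexer certificates are not signed by a valid CA (see the replication documentation below)
	Insecure-Skip-TLS-Verify=true
```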
2 changes: 1 addition & 1 deletion configuration/replication.md
@@ -11,7 +11,7 @@ The replication system is logically separated into "Clients" and "Peers", with e

Replication connections are encrypted by default and require that indexers have functioning X509 certificates. If the certificates are not signed by a valid certificate authority (CA) then `Insecure-Skip-TLS-Verify=true` must be added to the Replication configuration section.

-Replication storage nodes (nodes which receive replicated data) are allotted a specific amount of storage and will not delete data until that storage is exhausted. If a remote client node deletes data as part of normal ageout, the data shard is marked as deleted and prioritized for deletion when the replication node hits its storage limit. The replication system prioritizes deleted shards first, cold shards second, and oldest shards last. All replicated data is compressed; if a cold storage location is provided it is usually recommended that the replication storage location have the same storage capacity as the cold and hot storage combined.
+Replication storage nodes (nodes which receive replicated data) are allotted a specific amount of storage and will not delete data unless the `Max-Replicated-Data-GB` parameter is set. Even with `Max-Replicated-Data-GB` set, the replication system will not delete replicated shards until the storage limit has been reached. If a remote client node deletes data as part of normal ageout, the data shard is marked as deleted and prioritized for deletion when the replication node hits its storage limit. The replication system prioritizes deleted shards first, cold shards second, and oldest shards last. All replicated data is compressed; if a cold storage location is provided it is usually recommended that the replication storage location have the same storage capacity as the cold and hot storage combined.

```{note}
By default, the replication engine uses port 9406.
6 changes: 3 additions & 3 deletions eula.md
@@ -89,9 +89,9 @@ Embedded in, or bundled with, this product are open source software (OSS) compo
You may receive a copy of, distribute, and/or modify any open source code for the OSS component under the terms of their respective licenses, which may be Apache License Version 2.0, the modified BSD license and the MIT license. In the event of conflicts between Gravwell license conditions and the Open Source Software license conditions, the Open Source Software conditions shall prevail with respect to the Open Source Software portions of the software.
On written request within three years from the date of product purchase and against payment of our expenses, Gravwell will supply source code for any OSS component identified below in line with the terms of the applicable license. For this, please contact us at:

-Gravwell Inc
+Gravwell, Inc.
OSS Component Division
-P.O. box 2819
-Coeur d’Alene, ID 83814-2819
+PO Box 51534
+Idaho Falls, ID 83405-1534

Generally, the identified OSS components are distributed in the hope that they will be useful, but WITHOUT ANY WARRANTY, without even implied warranty such as for MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE, and without liability for any Gravwell entity other than as explicitly documented in your purchase contract.
4 changes: 4 additions & 0 deletions gui/queries/queries.md
@@ -43,6 +43,10 @@ The queries stored in the query library are also available through the right-han
(timeframe_selector)=
## Selecting a Timeframe

+```{note}
+Timeframes are always aligned to one-second boundaries. Sub-second timeframes will be automatically rounded down to the second.
+```
+
By default, queries run over the last hour of data. This is easily changed by clicking on the calendar icon or timeframe above the query and selecting a timeframe from the dropdown:

![](timeframe-icon.png)
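A small worked example of the note above, with arbitrary times: sub-second start/end values are rounded down to whole seconds before the query runs.

```
requested timeframe:  2024-12-16 10:15:30.750  to  2024-12-16 11:45:12.250
effective timeframe:  2024-12-16 10:15:30      to  2024-12-16 11:45:12
```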
1 change: 1 addition & 0 deletions ingesters/ingesters.md
@@ -326,6 +326,7 @@ Log-Source-Override=DEAD:BEEF::FEED:FEBE
Log-Source-Override=::1
```

+(attach-target)=
### Attach

All ingesters support the `Attach` global configuration stanza, which allows [intrinsic enumerated values](intrinsic_enumerated_values) to be attached to entries during ingest. Intrinsic enumerated values can later be accessed with the [intrinsic](/search/intrinsic/intrinsic) search module.
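A rough sketch of an `Attach` block is shown below. The `[Attach]` stanza name and the simple `key=value` form are assumptions drawn from this paragraph, and the keys are placeholders; consult the ingester documentation for the exact syntax.

```
[Attach]
	# Hypothetical key=value pairs; each becomes an intrinsic enumerated value on every ingested entry
	office=5th-floor
	datacenter=us-east
```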
2 changes: 1 addition & 1 deletion quickstart/quickstart.md
@@ -19,7 +19,7 @@ This guide is suitable for Community Edition users as well as users with a paid

You may find the [installation checklist](checklist) and the [glossary](/glossary/glossary) useful companions to this document.

-If you are interested in a complete training package, please see the [complete training PDF](https://github.com/gravwell/training/releases/download/v5.6.7/gravwell_training_v5.6.7.pdf). The Gravwell training PDF is the complete training manual which is paired with labs and exercises. The exercises are built from the open source [Gravwell Training](https://github.com/gravwell/training) repository.
+If you are interested in a complete training package, please see the [complete training PDF](https://github.com/gravwell/training/releases/download/v5.6.8/gravwell_training_v5.6.8.pdf). The Gravwell training PDF is the complete training manual which is paired with labs and exercises. The exercises are built from the open source [Gravwell Training](https://github.com/gravwell/training) repository.

```{note}
Community Edition users will need to obtain their own license from [https://www.gravwell.io/download](https://www.gravwell.io/download) before beginning installation. Paid users should already have received a license file via email.
2 changes: 1 addition & 1 deletion search/eval/eval.md
@@ -868,7 +868,7 @@ Returns the square root of x.

function math_trunc(x float) float

-Returns the integer value of x.
+Returns the integer value of x by dropping decimal digits. For example, `3.1` and `3.9` will both return `3`.

#### round

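Spelled out, math_trunc simply drops the fractional part. The positive cases below come straight from the description above; the negative case is an assumption based on Go's math.Trunc (truncation toward zero) and is not stated in the documentation text.

```
math_trunc(3.1)  == 3
math_trunc(3.9)  == 3
math_trunc(-3.9) == -3    # assumed: truncation toward zero rather than rounding down
```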
2 changes: 1 addition & 1 deletion search/maclookup/maclookup.md
@@ -4,7 +4,7 @@ The maclookup module uses a custom MAC prefix database to extract Manufacturer,

## Setting Up Databases

-Before using the maclookup module, you must install a [resource](/resources/resources) containing the macdb database.
+Before using the maclookup module, you must have the mac_prefixes database as a Resource in your Gravwell instance. The mac_prefixes resource is included in the Gravwell Network Enrichment Kit, which you can find by browsing the available kits in the Kits section of the UI.

By default, the maclookup module expects the macdb database to be in a resource named "macdb". This will allow you to do extractions without specifying the resource name explicitly.
