From 02496a65c9a23ec862563d4f34e6b5b7390c5bbd Mon Sep 17 00:00:00 2001 From: Anton Okolnychyi Date: Thu, 5 Oct 2023 23:19:08 -0700 Subject: [PATCH 01/17] Common docs for 1.4.0 (#276) --- landing-page/content/common/spec.md | 35 ++++++++++++ landing-page/content/common/view-spec.md | 72 +++++++++++------------- 2 files changed, 69 insertions(+), 38 deletions(-) diff --git a/landing-page/content/common/spec.md b/landing-page/content/common/spec.md index 58cfc2291..60c0f99c3 100644 --- a/landing-page/content/common/spec.md +++ b/landing-page/content/common/spec.md @@ -1128,6 +1128,41 @@ Example ] } ] ``` +### Content File (Data and Delete) Serialization + +Content file (data or delete) is serialized as a JSON object according to the following table. + +| Metadata field |JSON representation|Example| +|--------------------------|--- |--- | +| **`spec-id`** |`JSON int`|`1`| +| **`content`** |`JSON string`|`DATA`, `POSITION_DELETES`, `EQUALITY_DELETES`| +| **`file-path`** |`JSON string`|`"s3://b/wh/data.db/table"`| +| **`file-format`** |`JSON string`|`AVRO`, `ORC`, `PARQUET`| +| **`partition`** |`JSON object: Partition data tuple using partition field ids for the struct field ids`|`{"1000":1}`| +| **`record-count`** |`JSON long`|`1`| +| **`file-size-in-bytes`** |`JSON long`|`1024`| +| **`column-sizes`** |`JSON object: Map from column id to the total size on disk of all regions that store the column.`|`{"keys":[3,4],"values":[100,200]}`| +| **`value-counts`** |`JSON object: Map from column id to number of values in the column (including null and NaN values)`|`{"keys":[3,4],"values":[90,180]}`| +| **`null-value-counts`** |`JSON object: Map from column id to number of null values in the column`|`{"keys":[3,4],"values":[10,20]}`| +| **`nan-value-counts`** |`JSON object: Map from column id to number of NaN values in the column`|`{"keys":[3,4],"values":[0,0]}`| +| **`lower-bounds`** |`JSON object: Map from column id to lower bound binary in the column serialized as hexadecimal 
string`|`{"keys":[3,4],"values":["01000000","02000000"]}`| +| **`upper-bounds`** |`JSON object: Map from column id to upper bound binary in the column serialized as hexadecimal string`|`{"keys":[3,4],"values":["05000000","0A000000"]}`| +| **`key-metadata`** |`JSON string: Encryption key metadata binary serialized as hexadecimal string`|`00000000000000000000000000000000`| +| **`split-offsets`** |`JSON list of long: Split offsets for the data file`|`[128,256]`| +| **`equality-ids`** |`JSON list of int: Field ids used to determine row equality in equality delete files`|`[1]`| +| **`sort-order-id`** |`JSON int`|`1`| + +### File Scan Task Serialization + +File scan task is serialized as a JSON object according to the following table. + +| Metadata field |JSON representation|Example| +|--------------------------|--- |--- | +| **`schema`** |`JSON object`|`See above, read schemas instead`| +| **`spec`** |`JSON object`|`See above, read partition specs instead`| +| **`data-file`** |`JSON object`|`See above, read content file instead`| +| **`delete-files`** |`JSON list of objects`|`See above, read content file instead`| +| **`residual-filter`** |`JSON object: residual filter expression`|`{"type":"eq","term":"id","value":1}`| ## Appendix D: Single-value serialization diff --git a/landing-page/content/common/view-spec.md b/landing-page/content/common/view-spec.md index a9826a32c..26313193a 100644 --- a/landing-page/content/common/view-spec.md +++ b/landing-page/content/common/view-spec.md @@ -58,9 +58,9 @@ The view version metadata file has the following fields: | Requirement | Field name | Description | |-------------|----------------------|-------------| +| _required_ | `view-uuid` | A UUID that identifies the view, generated when the view is created. 
Implementations must throw an exception if a view's UUID does not match the expected UUID after refreshing metadata | | _required_ | `format-version` | An integer version number for the view format; must be 1 | | _required_ | `location` | The view's base location; used to create metadata file locations | -| _required_ | `current-schema-id` | ID of the current schema of the view, if known | | _required_ | `schemas` | A list of known schemas | | _required_ | `current-version-id` | ID of the current version of the view (`version-id`) | | _required_ | `versions` | A list of known [versions](#versions) of the view [1] | @@ -75,13 +75,17 @@ Notes: Each version in `versions` is a struct with the following fields: -| Requirement | Field name | Description | -|-------------|-------------------|--------------------------------------------------------------------------| -| _required_ | `version-id` | ID for the version | -| _required_ | `schema-id` | ID of the schema for the view version | -| _required_ | `timestamp-ms` | Timestamp when the version was created (ms from epoch) | -| _required_ | `summary` | A string to string map of [summary metadata](#summary) about the version | -| _required_ | `representations` | A list of [representations](#representations) for the view definition | +| Requirement | Field name | Description | +|-------------|---------------------|-------------------------------------------------------------------------------| +| _required_ | `version-id` | ID for the version | +| _required_ | `schema-id` | ID of the schema for the view version | +| _required_ | `timestamp-ms` | Timestamp when the version was created (ms from epoch) | +| _required_ | `summary` | A string to string map of [summary metadata](#summary) about the version | +| _required_ | `representations` | A list of [representations](#representations) for the view definition | +| _optional_ | `default-catalog` | Catalog name to use when a reference in the SELECT does not contain a catalog | +| 
_required_ | `default-namespace` | Namespace to use when a reference in the SELECT is a single identifier | + +When `default-catalog` is `null` or not set, the catalog in which the view is stored must be used as the default catalog. #### Summary @@ -117,10 +121,6 @@ A view version can have multiple SQL representations of different dialects, but | _required_ | `type` | `string` | Must be `sql` | | _required_ | `sql` | `string` | A SQL SELECT statement | | _required_ | `dialect` | `string` | The dialect of the `sql` SELECT statement (e.g., "trino" or "spark") | -| _optional_ | `default-catalog` | `string` | Catalog name to use when a reference in the SELECT does not contain a catalog | -| _optional_ | `default-namespace` | `list` | Namespace to use when a reference in the SELECT is a single identifier | -| _optional_ | `field-aliases` | `list` | Column names optionally specified in the create statement | -| _optional_ | `field-comments` | `list` | Column descriptions (COMMENT) optionally specified in the create statement | For example: @@ -144,13 +144,11 @@ This create statement would produce the following `sql` representation metadata: | `type` | `"sql"` | | `sql` | `"SELECT\n COUNT(1), CAST(event_ts AS DATE)\nFROM events\nGROUP BY 2"` | | `dialect` | `"spark"` | -| `default-catalog` | `"prod"` | -| `default-namespace` | `["default"]` | -| `field-aliases` | `["event_count", "event_date"]` | -| `field-comments` | `["Count of events", null]` | If a create statement does not include column names or comments before `AS`, the fields should be omitted. +The `event_count` (with the `Count of events` comment) and `event_date` field aliases must be part of the view version's `schema`. + #### Version log The version log tracks changes to the view's current version. This is the view's history and allows reconstructing what version of the view would have been used at some point in time. 
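The `default-catalog` / `default-namespace` resolution rules above can be sketched in a few lines (an illustrative Python sketch, not part of the spec; the function name and the simplified handling of already-qualified references are assumptions — a real engine must also decide whether the leading part of a multi-part reference names a catalog):

```python
def resolve_table_reference(parts, default_catalog, default_namespace, view_catalog):
    """Qualify a table reference found in a view's SQL.

    parts: identifier parts as written in the SELECT, e.g. ["events"].
    """
    # When default-catalog is null or unset, the catalog in which the
    # view is stored must be used as the default catalog.
    catalog = default_catalog if default_catalog is not None else view_catalog
    if len(parts) == 1:
        # A single identifier is resolved against the default namespace.
        return [catalog, *default_namespace, parts[0]]
    # Already-qualified references are left as written; a real engine would
    # additionally check whether the leading part names a known catalog.
    return list(parts)
```

For example, with `default-catalog` set to `prod` and `default-namespace` set to `["default"]`, the reference `events` resolves to `prod.default.events`; with no `default-catalog`, the view's own catalog is used instead.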
@@ -195,6 +193,7 @@ s3://bucket/warehouse/default.db/event_agg/metadata/00001-(uuid).metadata.json ``` ``` { + "view-uuid": "fa6506c3-7681-40c8-86dc-e36561f83385", "format-version" : 1, "location" : "s3://bucket/warehouse/default.db/event_agg", "current-version-id" : 1, @@ -205,6 +204,8 @@ s3://bucket/warehouse/default.db/event_agg/metadata/00001-(uuid).metadata.json "version-id" : 1, "timestamp-ms" : 1573518431292, "schema-id" : 1, + "default-catalog" : "prod", + "default-namespace" : [ "default" ], "summary" : { "operation" : "create", "engine-name" : "Spark", @@ -213,25 +214,21 @@ s3://bucket/warehouse/default.db/event_agg/metadata/00001-(uuid).metadata.json "representations" : [ { "type" : "sql", "sql" : "SELECT\n COUNT(1), CAST(event_ts AS DATE)\nFROM events\nGROUP BY 2", - "dialect" : "spark", - "default-catalog" : "prod", - "default-namespace" : [ "default" ], - "field-aliases" : ["event_count", "event_date"], - "field-comments" : ["Count of events", null] + "dialect" : "spark" } ] } ], - "current-schema-id": 1, "schemas": [ { "schema-id": 1, "type" : "struct", "fields" : [ { "id" : 1, - "name" : "col1", + "name" : "event_count", "required" : false, - "type" : "int" + "type" : "int", + "doc" : "Count of events" }, { "id" : 2, - "name" : "col2", + "name" : "event_date", "required" : false, "type" : "date" } ] @@ -264,6 +261,7 @@ s3://bucket/warehouse/default.db/event_agg/metadata/00002-(uuid).metadata.json ``` ``` { + "view-uuid": "fa6506c3-7681-40c8-86dc-e36561f83385", "format-version" : 1, "location" : "s3://bucket/warehouse/default.db/event_agg", "current-version-id" : 1, @@ -274,6 +272,8 @@ s3://bucket/warehouse/default.db/event_agg/metadata/00002-(uuid).metadata.json "version-id" : 1, "timestamp-ms" : 1573518431292, "schema-id" : 1, + "default-catalog" : "prod", + "default-namespace" : [ "default" ], "summary" : { "operation" : "create", "engine-name" : "Spark", @@ -282,15 +282,14 @@ 
s3://bucket/warehouse/default.db/event_agg/metadata/00002-(uuid).metadata.json "representations" : [ { "type" : "sql", "sql" : "SELECT\n COUNT(1), CAST(event_ts AS DATE)\nFROM events\nGROUP BY 2", - "dialect" : "spark", - "default-catalog" : "prod", - "default-namespace" : [ "default" ], - "field-aliases" : ["event_count", "event_date"], - "field-comments" : ["Count of events", null] + "dialect" : "spark" } ] }, { "version-id" : 2, "timestamp-ms" : 1573518981593, + "schema-id" : 1, + "default-catalog" : "prod", + "default-namespace" : [ "default" ], "summary" : { "operation" : "create", "engine-name" : "Spark", @@ -299,24 +298,21 @@ s3://bucket/warehouse/default.db/event_agg/metadata/00002-(uuid).metadata.json "representations" : [ { "type" : "sql", "sql" : "SELECT\n COUNT(1), CAST(event_ts AS DATE)\nFROM prod.default.events\nGROUP BY 2", - "dialect" : "spark", - "default-catalog" : "prod", - "default-namespace" : [ "default" ], - "field-aliases" : ["event_count", "event_date"] + "dialect" : "spark" } ] } ], - "current-schema-id": 1, "schemas": [ { "schema-id": 1, "type" : "struct", "fields" : [ { "id" : 1, - "name" : "col1", + "name" : "event_count", "required" : false, - "type" : "int" + "type" : "int", + "doc" : "Count of events" }, { "id" : 2, - "name" : "col2", + "name" : "event_date", "required" : false, "type" : "date" } ] From 7724c28425ab68fc11b68b655c558226aca8cb20 Mon Sep 17 00:00:00 2001 From: Anton Okolnychyi Date: Fri, 6 Oct 2023 09:56:47 -0700 Subject: [PATCH 02/17] Add full docs for 1.4.0 (#278) --- docs/config.toml | 5 +- landing-page/config.toml | 3 +- .../content/common/multi-engine-support.md | 5 +- landing-page/content/common/release-notes.md | 108 +++++++++++++++++- 4 files changed, 112 insertions(+), 9 deletions(-) diff --git a/docs/config.toml b/docs/config.toml index 906e0dff1..be2f37f19 100644 --- a/docs/config.toml +++ b/docs/config.toml @@ -9,8 +9,8 @@ theme= "iceberg-theme" siteType = "docs" search = true versions.iceberg = "" # This is 
populated by the github deploy workflow and is equal to the branch name - versions.nessie = "0.59.0" - latestVersions.iceberg = "1.3.1" # This is used for the version badge on the "latest" site version + versions.nessie = "0.71.0" + latestVersions.iceberg = "1.4.0" # This is used for the version badge on the "latest" site version BookSection='docs' # This determines which directory will inform the left navigation menu disableHome=true @@ -24,6 +24,7 @@ home = [ "HTML", "RSS", "SearchIndex" ] [menu] versions = [ { name = "latest", pre = "relative", url = "../latest", weight = 1 }, + { name = "1.4.0", pre = "relative", url = "../1.4.0", weight = 988 }, { name = "1.3.1", pre = "relative", url = "../1.3.1", weight = 989 }, { name = "1.3.0", pre = "relative", url = "../1.3.0", weight = 990 }, { name = "1.2.1", pre = "relative", url = "../1.2.1", weight = 991 }, diff --git a/landing-page/config.toml b/landing-page/config.toml index 6f858c1c9..3ed460fe7 100644 --- a/landing-page/config.toml +++ b/landing-page/config.toml @@ -8,7 +8,7 @@ sectionPagesMenu = "main" siteType = "landing-page" search = true description = "The open table format for analytic datasets." 
- latestVersions.iceberg = "1.3.1" + latestVersions.iceberg = "1.4.0" docsBaseURL = "" [[params.social]] @@ -34,6 +34,7 @@ home = [ "HTML", "RSS", "SearchIndex" ] [menu] versions = [ { name = "latest", url = "/docs/latest", weight = 1 }, + { name = "1.4.0", url = "/docs/1.4.0", weight = 988 }, { name = "1.3.1", url = "/docs/1.3.1", weight = 989 }, { name = "1.3.0", url = "/docs/1.3.0", weight = 990 }, { name = "1.2.1", url = "/docs/1.2.1", weight = 991 }, diff --git a/landing-page/content/common/multi-engine-support.md b/landing-page/content/common/multi-engine-support.md index 3cc2206af..a094d995f 100644 --- a/landing-page/content/common/multi-engine-support.md +++ b/landing-page/content/common/multi-engine-support.md @@ -66,10 +66,11 @@ Each engine version undergoes the following lifecycle stages: | ---------- | ------------------ | ----------------------- |------------------------| ------------------ | | 2.4 | End of Life | 0.7.0-incubating | 1.2.1 | [iceberg-spark-runtime-2.4](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-2.4/1.2.1/iceberg-spark-runtime-2.4-1.2.1.jar) | | 3.0 | End of Life | 0.9.0 | 1.0.0 | [iceberg-spark-runtime-3.0_2.12](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-3.0_2.12/1.0.0/iceberg-spark-runtime-3.0_2.12-1.0.0.jar) | -| 3.1 | Deprecated | 0.12.0 | {{% icebergVersion %}} | [iceberg-spark-runtime-3.1_2.12](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-3.1_2.12/{{% icebergVersion %}}/iceberg-spark-runtime-3.1_2.12-{{% icebergVersion %}}.jar) [1] | -| 3.2 | Maintained | 0.13.0 | {{% icebergVersion %}} | [iceberg-spark-runtime-3.2_2.12](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-3.2_2.12/{{% icebergVersion %}}/iceberg-spark-runtime-3.2_2.12-{{% icebergVersion %}}.jar) | +| 3.1 | End of Life | 0.12.0 | 1.3.1 | 
[iceberg-spark-runtime-3.1_2.12](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-3.1_2.12/1.3.1/iceberg-spark-runtime-3.1_2.12-1.3.1.jar) [1] | +| 3.2 | Deprecated | 0.13.0 | {{% icebergVersion %}} | [iceberg-spark-runtime-3.2_2.12](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-3.2_2.12/{{% icebergVersion %}}/iceberg-spark-runtime-3.2_2.12-{{% icebergVersion %}}.jar) | | 3.3 | Maintained | 0.14.0 | {{% icebergVersion %}} | [iceberg-spark-runtime-3.3_2.12](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-3.3_2.12/{{% icebergVersion %}}/iceberg-spark-runtime-3.3_2.12-{{% icebergVersion %}}.jar) | | 3.4 | Maintained | 1.3.0 | {{% icebergVersion %}} | [iceberg-spark-runtime-3.4_2.12](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-3.4_2.12/{{% icebergVersion %}}/iceberg-spark-runtime-3.4_2.12-{{% icebergVersion %}}.jar) | +| 3.5 | Maintained | 1.4.0 | {{% icebergVersion %}} | [iceberg-spark-runtime-3.5_2.12](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-3.5_2.12/{{% icebergVersion %}}/iceberg-spark-runtime-3.5_2.12-{{% icebergVersion %}}.jar) | * [1] Spark 3.1 shares the same runtime jar `iceberg-spark3-runtime` with Spark 3.0 before Iceberg 0.13.0 diff --git a/landing-page/content/common/release-notes.md b/landing-page/content/common/release-notes.md index 5977129a8..e2e03077a 100644 --- a/landing-page/content/common/release-notes.md +++ b/landing-page/content/common/release-notes.md @@ -26,14 +26,17 @@ disableSidebar: true The latest version of Iceberg is [{{% icebergVersion %}}](https://github.com/apache/iceberg/releases/tag/apache-iceberg-{{% icebergVersion %}}). 
* [{{% icebergVersion %}} source tar.gz](https://www.apache.org/dyn/closer.cgi/iceberg/apache-iceberg-{{% icebergVersion %}}/apache-iceberg-{{% icebergVersion %}}.tar.gz) -- [signature](https://downloads.apache.org/iceberg/apache-iceberg-{{% icebergVersion %}}/apache-iceberg-{{% icebergVersion %}}.tar.gz.asc) -- [sha512](https://downloads.apache.org/iceberg/apache-iceberg-{{% icebergVersion %}}/apache-iceberg-{{% icebergVersion %}}.tar.gz.sha512) +* [{{% icebergVersion %}} Spark 3.5\_2.12 runtime Jar](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-3.5_2.12/{{% icebergVersion %}}/iceberg-spark-runtime-3.5_2.12-{{% icebergVersion %}}.jar) -- [3.5\_2.13](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-3.5_2.13/{{% icebergVersion %}}/iceberg-spark-runtime-3.5_2.13-{{% icebergVersion %}}.jar) * [{{% icebergVersion %}} Spark 3.4\_2.12 runtime Jar](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-3.4_2.12/{{% icebergVersion %}}/iceberg-spark-runtime-3.4_2.12-{{% icebergVersion %}}.jar) -- [3.4\_2.13](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-3.4_2.13/{{% icebergVersion %}}/iceberg-spark-runtime-3.4_2.13-{{% icebergVersion %}}.jar) * [{{% icebergVersion %}} Spark 3.3\_2.12 runtime Jar](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-3.3_2.12/{{% icebergVersion %}}/iceberg-spark-runtime-3.3_2.12-{{% icebergVersion %}}.jar) -- [3.3\_2.13](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-3.3_2.13/{{% icebergVersion %}}/iceberg-spark-runtime-3.3_2.13-{{% icebergVersion %}}.jar) * [{{% icebergVersion %}} Spark 3.2\_2.12 runtime Jar](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-3.2_2.12/{{% icebergVersion %}}/iceberg-spark-runtime-3.2_2.12-{{% icebergVersion %}}.jar) -- 
[3.2\_2.13](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-3.2_2.13/{{% icebergVersion %}}/iceberg-spark-runtime-3.2_2.13-{{% icebergVersion %}}.jar) -* [{{% icebergVersion %}} Spark 3.1 runtime Jar](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-spark-runtime-3.1_2.12/{{% icebergVersion %}}/iceberg-spark-runtime-3.1_2.12-{{% icebergVersion %}}.jar) * [{{% icebergVersion %}} Flink 1.17 runtime Jar](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-flink-runtime-1.17/{{% icebergVersion %}}/iceberg-flink-runtime-1.17-{{% icebergVersion %}}.jar) * [{{% icebergVersion %}} Flink 1.16 runtime Jar](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-flink-runtime-1.16/{{% icebergVersion %}}/iceberg-flink-runtime-1.16-{{% icebergVersion %}}.jar) * [{{% icebergVersion %}} Flink 1.15 runtime Jar](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-flink-runtime-1.15/{{% icebergVersion %}}/iceberg-flink-runtime-1.15-{{% icebergVersion %}}.jar) * [{{% icebergVersion %}} Hive runtime Jar](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-hive-runtime/{{% icebergVersion %}}/iceberg-hive-runtime-{{% icebergVersion %}}.jar) +* [{{% icebergVersion %}} aws-bundle Jar](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-aws-bundle/{{% icebergVersion %}}/iceberg-aws-bundle-{{% icebergVersion %}}.jar) +* [{{% icebergVersion %}} gcp-bundle Jar](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-gcp-bundle/{{% icebergVersion %}}/iceberg-gcp-bundle-{{% icebergVersion %}}.jar) +* [{{% icebergVersion %}} azure-bundle Jar](https://search.maven.org/remotecontent?filepath=org/apache/iceberg/iceberg-azure-bundle/{{% icebergVersion %}}/iceberg-azure-bundle-{{% icebergVersion %}}.jar) To use Iceberg in Spark or Flink, download the runtime JAR for your engine version and add it to the 
jars folder of your installation. @@ -67,7 +70,106 @@ To add a dependency on Iceberg in Maven, add the following to your `pom.xml`: ``` -## 1.3.1 release +### 1.4.0 release + +Apache Iceberg 1.4.0 was released on October 4, 2023. +The 1.4.0 release adds a variety of new features and bug fixes. + +* API + - Implement bound expression sanitization ([\#8149](https://github.com/apache/iceberg/pull/8149)) + - Remove overflow checks in `DefaultCounter` causing performance issues ([\#8297](https://github.com/apache/iceberg/pull/8297)) + - Support incremental scanning with branch ([\#5984](https://github.com/apache/iceberg/pull/5984)) + - Add a validation API to `DeleteFiles` which validates files exist ([\#8525](https://github.com/apache/iceberg/pull/8525)) +* Core + - Use V2 format by default in new tables ([\#8381](https://github.com/apache/iceberg/pull/8381)) + - Use `zstd` compression for Parquet by default in new tables ([\#8593](https://github.com/apache/iceberg/pull/8593)) + - Add strict metadata cleanup mode and enable it by default ([\#8397](https://github.com/apache/iceberg/pull/8397)) ([\#8599](https://github.com/apache/iceberg/pull/8599)) + - Avoid generating huge manifests during commits ([\#6335](https://github.com/apache/iceberg/pull/6335)) + - Add a writer for unordered position deletes ([\#7692](https://github.com/apache/iceberg/pull/7692)) + - Optimize `DeleteFileIndex` ([\#8157](https://github.com/apache/iceberg/pull/8157)) + - Optimize lookup in `DeleteFileIndex` without useful bounds ([\#8278](https://github.com/apache/iceberg/pull/8278)) + - Optimize split offsets handling ([\#8336](https://github.com/apache/iceberg/pull/8336)) + - Optimize computing user-facing state in data tasks ([\#8346](https://github.com/apache/iceberg/pull/8346)) + - Don't persist useless file and position bounds for deletes ([\#8360](https://github.com/apache/iceberg/pull/8360)) + - Don't persist counts for paths and positions in position delete files 
([\#8590](https://github.com/apache/iceberg/pull/8590)) + - Support setting system-level properties via environmental variables ([\#5659](https://github.com/apache/iceberg/pull/5659)) + - Add JSON parser for `ContentFile` and `FileScanTask` ([\#6934](https://github.com/apache/iceberg/pull/6934)) + - Add REST spec and request for commits to multiple tables ([\#7741](https://github.com/apache/iceberg/pull/7741)) + - Add REST API for committing changes against multiple tables ([\#7569](https://github.com/apache/iceberg/pull/7569)) + - Default to exponential retry strategy in REST client ([\#8366](https://github.com/apache/iceberg/pull/8366)) + - Support registering tables with REST session catalog ([\#6512](https://github.com/apache/iceberg/pull/6512)) + - Add last updated timestamp and snapshot ID to partitions metadata table ([\#7581](https://github.com/apache/iceberg/pull/7581)) + - Add total data size to partitions metadata table ([\#7920](https://github.com/apache/iceberg/pull/7920)) + - Extend `ResolvingFileIO` to support bulk operations ([\#7976](https://github.com/apache/iceberg/pull/7976)) + - Key metadata in Avro format ([\#6450](https://github.com/apache/iceberg/pull/6450)) + - Add AES GCM encryption stream ([\#3231](https://github.com/apache/iceberg/pull/3231)) + - Fix a connection leak in streaming delete filters ([\#8132](https://github.com/apache/iceberg/pull/8132)) + - Fix lazy snapshot loading history ([\#8470](https://github.com/apache/iceberg/pull/8470)) + - Fix unicode handling in HTTPClient ([\#8046](https://github.com/apache/iceberg/pull/8046)) + - Fix paths for unpartitioned specs in writers ([\#7685](https://github.com/apache/iceberg/pull/7685)) + - Fix OOM caused by Avro decoder caching ([\#7791](https://github.com/apache/iceberg/pull/7791)) +* Spark + - Added support for Spark 3.5 + - Code for DELETE, UPDATE, and MERGE commands has moved to Spark, and all related extensions have been dropped from Iceberg. 
+ - Support for WHEN NOT MATCHED BY SOURCE clause in MERGE. + - Column pruning in merge-on-read operations. + - Ability to request a bigger advisory partition size for the final write to produce well-sized output files without harming the job parallelism. + - Dropped support for Spark 3.1 + - Deprecated support for Spark 3.2 + - Support vectorized reads for merge-on-read operations in Spark 3.4 and 3.5 ([\#8466](https://github.com/apache/iceberg/pull/8466)) + - Increase default advisory partition size for writes in Spark 3.5 ([\#8660](https://github.com/apache/iceberg/pull/8660)) + - Support distributed planning in Spark 3.4 and 3.5 ([\#8123](https://github.com/apache/iceberg/pull/8123)) + - Support pushing down system functions by V2 filters in Spark 3.4 and 3.5 ([\#7886](https://github.com/apache/iceberg/pull/7886)) + - Support fanout position delta writers in Spark 3.4 and 3.5 ([\#7703](https://github.com/apache/iceberg/pull/7703)) + - Use fanout writers for unsorted tables by default in Spark 3.5 ([\#8621](https://github.com/apache/iceberg/pull/8621)) + - Support multiple shuffle partitions per file in compaction in Spark 3.4 and 3.5 ([\#7897](https://github.com/apache/iceberg/pull/7897)) + - Output net changes across snapshots for carryover rows in CDC ([\#7326](https://github.com/apache/iceberg/pull/7326)) + - Display read metrics on Spark SQL UI ([\#7447](https://github.com/apache/iceberg/pull/7447)) ([\#8445](https://github.com/apache/iceberg/pull/8445)) + - Adjust split size to benefit from cluster parallelism in Spark 3.4 and 3.5 ([\#7714](https://github.com/apache/iceberg/pull/7714)) + - Add `fast_forward` procedure ([\#8081](https://github.com/apache/iceberg/pull/8081)) + - Support filters when rewriting position deletes ([\#7582](https://github.com/apache/iceberg/pull/7582)) + - Support setting current snapshot with ref ([\#8163](https://github.com/apache/iceberg/pull/8163)) + - Make backup table name configurable during migration 
([\#8227](https://github.com/apache/iceberg/pull/8227))
+  - Add write and SQL options to override compression config ([\#8313](https://github.com/apache/iceberg/pull/8313))
+  - Correct partition transform functions to match the spec ([\#8192](https://github.com/apache/iceberg/pull/8192))
+  - Enable extra commit properties with metadata delete ([\#7649](https://github.com/apache/iceberg/pull/7649))
+* Flink
+  - Add possibility of ordering the splits based on the file sequence number ([\#7661](https://github.com/apache/iceberg/pull/7661))
+  - Fix serialization in `TableSink` with anonymous object ([\#7866](https://github.com/apache/iceberg/pull/7866))
+  - Switch to `FileScanTaskParser` for JSON serialization of `IcebergSourceSplit` ([\#7978](https://github.com/apache/iceberg/pull/7978))
+  - Custom partitioner for bucket partitions ([\#7161](https://github.com/apache/iceberg/pull/7161))
+  - Implement data statistics coordinator to aggregate data statistics from operator subtasks ([\#7360](https://github.com/apache/iceberg/pull/7360))
+  - Support alter table column ([\#7628](https://github.com/apache/iceberg/pull/7628))
+* Parquet
+  - Add encryption config to read and write builders ([\#2639](https://github.com/apache/iceberg/pull/2639))
+  - Skip writing bloom filters for deletes ([\#7617](https://github.com/apache/iceberg/pull/7617))
+  - Cache codecs by name and level ([\#8182](https://github.com/apache/iceberg/pull/8182))
+  - Fix decimal data reading from `ParquetAvroValueReaders` ([\#8246](https://github.com/apache/iceberg/pull/8246))
+  - Handle filters with transforms by assuming data must be scanned ([\#8243](https://github.com/apache/iceberg/pull/8243))
+* ORC
+  - Handle filters with transforms by assuming the filter matches ([\#8244](https://github.com/apache/iceberg/pull/8244))
+* Vendor Integrations
+  - GCP: Fix single byte read in `GCSInputStream` ([\#8071](https://github.com/apache/iceberg/pull/8071))
+  - GCP: Add properties for OAuth2 and update library 
([\#8073](https://github.com/apache/iceberg/pull/8073)) + - GCP: Add prefix and bulk operations to `GCSFileIO` ([\#8168](https://github.com/apache/iceberg/pull/8168)) + - GCP: Add bundle jar for GCP-related dependencies ([\#8231](https://github.com/apache/iceberg/pull/8231)) + - GCP: Add range reads to `GCSInputStream` ([\#8301](https://github.com/apache/iceberg/pull/8301)) + - AWS: Add bundle jar for AWS-related dependencies ([\#8261](https://github.com/apache/iceberg/pull/8261)) + - AWS: support config storage class for `S3FileIO` ([\#8154](https://github.com/apache/iceberg/pull/8154)) + - AWS: Add `FileIO` tracker/closer to Glue catalog ([\#8315](https://github.com/apache/iceberg/pull/8315)) + - AWS: Update S3 signer spec to allow an optional string body in `S3SignRequest` ([\#8361](https://github.com/apache/iceberg/pull/8361)) + - Azure: Add `FileIO` that supports ADLSv2 storage ([\#8303](https://github.com/apache/iceberg/pull/8303)) + - Azure: Make `ADLSFileIO` implement `DelegateFileIO` ([\#8563](https://github.com/apache/iceberg/pull/8563)) + - Nessie: Provide better commit message on table registration ([\#8385](https://github.com/apache/iceberg/pull/8385)) +* Dependencies + - Bump Nessie to 0.71.0 + - Bump ORC to 1.9.1 + - Bump Arrow to 12.0.1 + - Bump AWS Java SDK to 2.20.131 + +## Past releases + +### 1.3.1 release Apache Iceberg 1.3.1 was released on July 25, 2023. The 1.3.1 release addresses various issues identified in the 1.3.0 release. @@ -83,8 +185,6 @@ The 1.3.1 release addresses various issues identified in the 1.3.0 release. * Flink - FlinkCatalog creation no longer creates the default database ([\#8039](https://github.com/apache/iceberg/pull/8039)) -## Past releases - ### 1.3.0 release Apache Iceberg 1.3.0 was released on May 30th, 2023. From 8cf690ce6d24b8a513fc5dd39da72d54c29e7698 Mon Sep 17 00:00:00 2001 From: Ayush Saxena Date: Thu, 12 Oct 2023 18:05:34 +0530 Subject: [PATCH 03/17] Add Blogs Related to Hive & Iceberg. 
--- landing-page/content/common/blogs.md | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/landing-page/content/common/blogs.md b/landing-page/content/common/blogs.md index 2e00b2062..80a7222c0 100644 --- a/landing-page/content/common/blogs.md +++ b/landing-page/content/common/blogs.md @@ -24,6 +24,15 @@ disableSidebar: true Here is a list of company blogs that talk about Iceberg. The blogs are ordered from most recent to oldest. +### [Apache Hive-4.x with Iceberg Branches & Tags](https://medium.com/@ayushtkn/apache-hive-4-x-with-iceberg-branches-tags-3d52293ac0bf/) +**Date**: October 12th, 2023, **Company**: Cloudera + +**Authors**: [Ayush Saxena](https://www.linkedin.com/in/ayush151/) + +### [Apache Hive 4.x With Apache Iceberg](https://medium.com/@ayushtkn/apache-hive-4-x-with-apache-iceberg-part-i-355e7a380725/) +**Date**: October 12th, 2023, **Company**: Cloudera + +**Authors**: [Ayush Saxena](https://www.linkedin.com/in/ayush151/) ### [From Hive Tables to Iceberg Tables: Hassle-Free](https://blog.cloudera.com/from-hive-tables-to-iceberg-tables-hassle-free/) **Date**: July 14th, 2023, **Company**: Cloudera From c7e0593a2382c9471b9bbea9fa79dca828acb598 Mon Sep 17 00:00:00 2001 From: Fokko Driesprong Date: Fri, 20 Oct 2023 10:10:23 +0200 Subject: [PATCH 04/17] Add section on Github releases (#280) --- landing-page/content/common/how-to-release.md | 5 +++-- 1 file changed, 3 insertions(+), 2 deletions(-) diff --git a/landing-page/content/common/how-to-release.md b/landing-page/content/common/how-to-release.md index 9c445ccab..7418dde59 100644 --- a/landing-page/content/common/how-to-release.md +++ b/landing-page/content/common/how-to-release.md @@ -306,9 +306,10 @@ Thanks to everyone for contributing! Create a PR in the `iceberg` repo to make revapi run on the new release. For an example see [this PR](https://github.com/apache/iceberg/pull/6275). 
-#### Update github issue template +#### Update GitHub -Create a PR in the `iceberg` repo to add the new version to the github issue template. For an example see [this PR](https://github.com/apache/iceberg/pull/6287). +- Create a PR in the `iceberg` repository to add the new version to the GitHub issue template. For an example see [this PR](https://github.com/apache/iceberg/pull/6287). +- Draft [a new release to update GitHub](https://github.com/apache/iceberg/releases/new) to show the latest release. A changelog can be generated automatically using GitHub. ### Documentation Release From 004f82b17e47c283b5901956a15d535399417607 Mon Sep 17 00:00:00 2001 From: Fokko Driesprong Date: Fri, 20 Oct 2023 10:47:54 +0200 Subject: [PATCH 05/17] Update Slack URL (#286) --- iceberg-theme/layouts/partials/header.html | 2 +- landing-page/config.toml | 2 +- landing-page/content/common/join.md | 2 +- 3 files changed, 3 insertions(+), 3 deletions(-) diff --git a/iceberg-theme/layouts/partials/header.html b/iceberg-theme/layouts/partials/header.html index 16fa91ad2..a735a04ac 100644 --- a/iceberg-theme/layouts/partials/header.html +++ b/iceberg-theme/layouts/partials/header.html @@ -67,7 +67,7 @@ diff --git a/landing-page/config.toml b/landing-page/config.toml index 3ed460fe7..fa5117fc5 100644 --- a/landing-page/config.toml +++ b/landing-page/config.toml @@ -22,7 +22,7 @@ sectionPagesMenu = "main" [[params.social]] title = "slack" icon = "slack" - url = "https://join.slack.com/t/apache-iceberg/shared_invite/zt-1jyaasx2a-TxE4z_ubxDkTFS7UFDHnjw" + url = "https://join.slack.com/t/apache-iceberg/shared_invite/zt-1znkcg5zm-7_FE~pcox347XwZE3GNfPg" [outputFormats.SearchIndex] baseName = "landingpagesearch" diff --git a/landing-page/content/common/join.md b/landing-page/content/common/join.md index e09f7dc10..fd556abd7 100644 --- a/landing-page/content/common/join.md +++ b/landing-page/content/common/join.md @@ -42,7 +42,7 @@ Issues are tracked in GitHub: ## Slack -We use the [Apache
Iceberg workspace](https://apache-iceberg.slack.com/) on Slack. To be invited, follow [this invite link](https://join.slack.com/t/apache-iceberg/shared_invite/zt-1znkcg5zm-7_FE~pcox347XwZE3GNfPg). +We use the [Apache Iceberg workspace](https://apache-iceberg.slack.com/) on Slack. To be invited, follow [this invite link](https://join.slack.com/t/apache-iceberg/shared_invite/zt-2561tq9qr-UtISlHgsdY3Virs3Z2_btQ). Please note that this link may occasionally break when Slack does an upgrade. If you encounter problems using it, please let us know by sending an email to . From 4b98185aab72c3d437e4e3a925fda400ef67df9c Mon Sep 17 00:00:00 2001 From: Renjie Liu Date: Mon, 23 Oct 2023 17:17:45 +0800 Subject: [PATCH 06/17] Docs: Add catalog page (#284) --- docs/config.toml | 2 ++ iceberg-theme/static/css/iceberg-theme.css | 2 +- landing-page/config.toml | 2 ++ landing-page/content/common/catalog.md | 2 +- 4 files changed, 6 insertions(+), 2 deletions(-) diff --git a/docs/config.toml b/docs/config.toml index be2f37f19..214ea09d8 100644 --- a/docs/config.toml +++ b/docs/config.toml @@ -55,6 +55,8 @@ home = [ "HTML", "RSS", "SearchIndex" ] { name = "Multi-Engine Support", parent = "Project", pre = "relative", url = "../../multi-engine-support", weight = 450 }, { name = "How To Release", parent = "Project", pre = "relative", url = "../../how-to-release", weight = 500 }, { name = "Terms", parent = "Project", pre = "relative", url = "../../terms", weight = 600 }, + { name = "Concepts", weight = 1150 }, + { name = "Catalogs", parent = "Concepts", pre = "relative", url = "../../catalog" }, { name = "ASF", weight = 1200 }, { name = "License", identifier = "_license", parent = "ASF", url = "https://www.apache.org/licenses/" }, { name = "Security", identifier = "_security", parent = "ASF", url = "https://www.apache.org/security/" }, diff --git a/iceberg-theme/static/css/iceberg-theme.css b/iceberg-theme/static/css/iceberg-theme.css index ec58c7fc8..9a31db317 100644 --- 
a/iceberg-theme/static/css/iceberg-theme.css +++ b/iceberg-theme/static/css/iceberg-theme.css @@ -439,7 +439,7 @@ i.fa.fa-chevron-down { .navbar-pages-group { justify-content: end; margin-left: auto; - max-width: 720px; + max-width: 1080px; position: relative; } diff --git a/landing-page/config.toml b/landing-page/config.toml index fa5117fc5..7907d7e52 100644 --- a/landing-page/config.toml +++ b/landing-page/config.toml @@ -66,6 +66,8 @@ home = [ "HTML", "RSS", "SearchIndex" ] { name = "Multi-Engine Support", url = "/multi-engine-support", parent = "Project", weight = 450 }, { name = "How To Release", parent = "Project", url = "/how-to-release", weight = 500 }, { name = "Terms", url = "/terms", parent = "Project", weight = 600 }, + { name = "Concepts", weight = 1150 }, + { name = "Catalogs", parent = "Concepts", pre = "relative", url = "/catalog" }, { name = "ASF", weight = 1200 }, { name = "License", identifier = "_license", parent = "ASF", url = "https://www.apache.org/licenses/" }, { name = "Security", identifier = "_security", parent = "ASF", url = "https://www.apache.org/security/" }, diff --git a/landing-page/content/common/catalog.md b/landing-page/content/common/catalog.md index c900479e4..e573ede1b 100644 --- a/landing-page/content/common/catalog.md +++ b/landing-page/content/common/catalog.md @@ -1,6 +1,6 @@ --- title: "Iceberg Catalogs" -url: concepts/catalog +url: catalog disableSidebar: true --- + +# Metrics Reporting + +As of 1.1.0 Iceberg supports the [`MetricsReporter`](../../../javadoc/{{% icebergVersion %}}/org/apache/iceberg/metrics/MetricsReporter.html) and the [`MetricsReport`](../../../javadoc/{{% icebergVersion %}}/org/apache/iceberg/metrics/MetricsReport.html) APIs. These two APIs allow expressing different metrics reports while supporting a pluggable way of reporting these reports. 
+ +## Type of Reports + +### ScanReport +A [`ScanReport`](../../../javadoc/{{% icebergVersion %}}/org/apache/iceberg/metrics/ScanReport.html) carries metrics being collected during scan planning against a given table. Amongst some general information about the involved table, such as the snapshot id or the table name, it includes metrics like: +* total scan planning duration +* number of data/delete files included in the result +* number of data/delete manifests scanned/skipped +* number of data/delete files scanned/skipped +* number of equality/positional delete files scanned + + +### CommitReport +A [`CommitReport`](../../../javadoc/{{% icebergVersion %}}/org/apache/iceberg/metrics/CommitReport.html) carries metrics being collected after committing changes to a table (aka producing a snapshot). Amongst some general information about the involved table, such as the snapshot id or the table name, it includes metrics like: +* total duration +* number of attempts required for the commit to succeed +* number of added/removed data/delete files +* number of added/removed equality/positional delete files +* number of added/removed equality/positional deletes + + +## Available Metrics Reporters + +### [`LoggingMetricsReporter`](../../../javadoc/{{% icebergVersion %}}/org/apache/iceberg/metrics/LoggingMetricsReporter.html) + +This is the default metrics reporter when nothing else is configured and its purpose is to log results to the log file. 
Example output would look as shown below: + +``` +INFO org.apache.iceberg.metrics.LoggingMetricsReporter - Received metrics report: +ScanReport{ + tableName=scan-planning-with-eq-and-pos-delete-files, + snapshotId=2, + filter=ref(name="data") == "(hash-27fa7cc0)", + schemaId=0, + projectedFieldIds=[1, 2], + projectedFieldNames=[id, data], + scanMetrics=ScanMetricsResult{ + totalPlanningDuration=TimerResult{timeUnit=NANOSECONDS, totalDuration=PT0.026569404S, count=1}, + resultDataFiles=CounterResult{unit=COUNT, value=1}, + resultDeleteFiles=CounterResult{unit=COUNT, value=2}, + totalDataManifests=CounterResult{unit=COUNT, value=1}, + totalDeleteManifests=CounterResult{unit=COUNT, value=1}, + scannedDataManifests=CounterResult{unit=COUNT, value=1}, + skippedDataManifests=CounterResult{unit=COUNT, value=0}, + totalFileSizeInBytes=CounterResult{unit=BYTES, value=10}, + totalDeleteFileSizeInBytes=CounterResult{unit=BYTES, value=20}, + skippedDataFiles=CounterResult{unit=COUNT, value=0}, + skippedDeleteFiles=CounterResult{unit=COUNT, value=0}, + scannedDeleteManifests=CounterResult{unit=COUNT, value=1}, + skippedDeleteManifests=CounterResult{unit=COUNT, value=0}, + indexedDeleteFiles=CounterResult{unit=COUNT, value=2}, + equalityDeleteFiles=CounterResult{unit=COUNT, value=1}, + positionalDeleteFiles=CounterResult{unit=COUNT, value=1}}, + metadata={ + iceberg-version=Apache Iceberg 1.4.0-SNAPSHOT (commit 4868d2823004c8c256a50ea7c25cff94314cc135)}} +``` + +``` +INFO org.apache.iceberg.metrics.LoggingMetricsReporter - Received metrics report: +CommitReport{ + tableName=scan-planning-with-eq-and-pos-delete-files, + snapshotId=1, + sequenceNumber=1, + operation=append, + commitMetrics=CommitMetricsResult{ + totalDuration=TimerResult{timeUnit=NANOSECONDS, totalDuration=PT0.098429626S, count=1}, + attempts=CounterResult{unit=COUNT, value=1}, + addedDataFiles=CounterResult{unit=COUNT, value=1}, + removedDataFiles=null, + totalDataFiles=CounterResult{unit=COUNT, value=1}, + 
addedDeleteFiles=null, + addedEqualityDeleteFiles=null, + addedPositionalDeleteFiles=null, + removedDeleteFiles=null, + removedEqualityDeleteFiles=null, + removedPositionalDeleteFiles=null, + totalDeleteFiles=CounterResult{unit=COUNT, value=0}, + addedRecords=CounterResult{unit=COUNT, value=1}, + removedRecords=null, + totalRecords=CounterResult{unit=COUNT, value=1}, + addedFilesSizeInBytes=CounterResult{unit=BYTES, value=10}, + removedFilesSizeInBytes=null, + totalFilesSizeInBytes=CounterResult{unit=BYTES, value=10}, + addedPositionalDeletes=null, + removedPositionalDeletes=null, + totalPositionalDeletes=CounterResult{unit=COUNT, value=0}, + addedEqualityDeletes=null, + removedEqualityDeletes=null, + totalEqualityDeletes=CounterResult{unit=COUNT, value=0}}, + metadata={ + iceberg-version=Apache Iceberg 1.4.0-SNAPSHOT (commit 4868d2823004c8c256a50ea7c25cff94314cc135)}} +``` + + +### [`RESTMetricsReporter`](../../../javadoc/{{% icebergVersion %}}/org/apache/iceberg/rest/RESTMetricsReporter.html) + +This is the default when using the [`RESTCatalog`](../../../javadoc/{{% icebergVersion %}}/org/apache/iceberg/rest/RESTCatalog.html) and its purpose is to send metrics to a REST server at the `/v1/{prefix}/namespaces/{namespace}/tables/{table}/metrics` endpoint as defined in the [REST OpenAPI spec](https://github.com/apache/iceberg/blob/master/open-api/rest-catalog-open-api.yaml). + +Sending metrics via REST can be controlled with the `rest-metrics-reporting-enabled` (defaults to `true`) property. + + +## Implementing a custom Metrics Reporter + +Implementing the [`MetricsReporter`](../../../javadoc/{{% icebergVersion %}}/org/apache/iceberg/metrics/MetricsReporter.html) API gives full flexibility in dealing with incoming [`MetricsReport`](../../../javadoc/{{% icebergVersion %}}/org/apache/iceberg/metrics/MetricsReport.html) instances. For example, it would be possible to send results to a Prometheus endpoint or any other observability framework/system. 
+ +Below is a short example illustrating an `InMemoryMetricsReporter` that stores reports in a list and makes them available: +```java +public class InMemoryMetricsReporter implements MetricsReporter { + + private List<MetricsReport> metricsReports = Lists.newArrayList(); + + @Override + public void report(MetricsReport report) { + metricsReports.add(report); + } + + public List<MetricsReport> reports() { + return metricsReports; + } +} +``` + +## Registering a custom Metrics Reporter + +### Via Catalog Configuration + +The [catalog property](../configuration#catalog-properties) `metrics-reporter-impl` allows registering a given [`MetricsReporter`](../../../javadoc/{{% icebergVersion %}}/org/apache/iceberg/metrics/MetricsReporter.html) by specifying its fully-qualified class name, e.g. `metrics-reporter-impl=org.apache.iceberg.metrics.InMemoryMetricsReporter`. + +### Via the Java API during Scan planning + +Independently of the [`MetricsReporter`](../../../javadoc/{{% icebergVersion %}}/org/apache/iceberg/metrics/MetricsReporter.html) being registered at the catalog level via the `metrics-reporter-impl` property, it is also possible to supply additional reporters during scan planning as shown below: + +```java +TableScan tableScan = + table + .newScan() + .metricsReporter(customReporterOne) + .metricsReporter(customReporterTwo); + +try (CloseableIterable<FileScanTask> fileScanTasks = tableScan.planFiles()) { + // ... +} +``` \ No newline at end of file diff --git a/docs/content/nessie.md b/docs/content/nessie.md index b64847f72..47b91c891 100644 --- a/docs/content/nessie.md +++ b/docs/content/nessie.md @@ -4,6 +4,7 @@ url: nessie menu: main: parent: Integrations + identifier: nessie_integration weight: 0 ---
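The metrics docs added in the patch above describe a fan-out pattern: each finished operation produces one report, which is delivered to every registered reporter (catalog-level and scan-level alike). The flow can be sketched without any Iceberg dependencies; the class and method names below are illustrative stand-ins for the real `MetricsReporter`/`MetricsReport` API, not the actual Iceberg types:

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// A dependency-free sketch of the reporter pattern (names are stand-ins,
// not the real Iceberg classes): the engine emits one report per operation
// and every registered reporter receives it.
public class ReporterSketch {

  interface Report {}                             // stands in for MetricsReport
  interface Reporter { void report(Report r); }   // stands in for MetricsReporter

  // A tiny report type carrying two of the scan metrics mentioned above.
  static final class ScanSummary implements Report {
    final long planningNanos;
    final int resultDataFiles;
    ScanSummary(long planningNanos, int resultDataFiles) {
      this.planningNanos = planningNanos;
      this.resultDataFiles = resultDataFiles;
    }
  }

  // Mirrors the InMemoryMetricsReporter idea: collect reports in a list.
  static final class InMemoryReporter implements Reporter {
    private final List<Report> received = new ArrayList<>();
    @Override public void report(Report r) { received.add(r); }
    List<Report> reports() { return received; }
  }

  public static void main(String[] args) {
    InMemoryReporter catalogLevel = new InMemoryReporter();
    InMemoryReporter scanLevel = new InMemoryReporter();
    List<Reporter> reporters = Arrays.asList(catalogLevel, scanLevel);

    // Simulate the end of scan planning: fan the report out to all reporters.
    Report report = new ScanSummary(26_569_404L, 1);
    for (Reporter r : reporters) {
      r.report(report);
    }

    System.out.println(catalogLevel.reports().size()); // prints 1
    System.out.println(scanLevel.reports().size());    // prints 1
  }
}
```

This is why supplying extra reporters via `metricsReporter(...)` composes cleanly with a catalog-configured one: each is just another listener on the same report.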