Commit

Merge branch 'master' into fix/dns_network_query
kibanamachine committed Nov 6, 2020
2 parents d0b282b + d831676 commit 79f6917
Showing 120 changed files with 2,244 additions and 1,139 deletions.
4 changes: 1 addition & 3 deletions docs/developer/plugin-list.asciidoc
@@ -28,9 +28,7 @@ allowing users to configure their advanced settings, also known
as uiSettings within the code.
|{kib-repo}blob/{branch}/src/plugins/apm_oss[apmOss]
|WARNING: Missing README.
|{kib-repo}blob/{branch}/src/plugins/apm_oss/README.asciidoc[apmOss]
|{kib-repo}blob/{branch}/src/plugins/bfetch/README.md[bfetch]
|bfetch allows batching HTTP requests and streaming responses back.
Binary file not shown.
83 changes: 19 additions & 64 deletions docs/management/managing-fields.asciidoc
@@ -1,70 +1,29 @@
[[managing-fields]]
== Index patterns and fields
== Field management

The *Index patterns* UI helps you create and manage
the index patterns that retrieve your data from {es}.
Whenever possible,
{kib} uses the same field type for display as {es}. However, a few field types
{es} supports are not available in {kib}. Use field formatters to customize how your
fields are displayed in Kibana, regardless of how they are stored in {es}.

[role="screenshot"]
image::images/management-index-patterns.png[]

[float]
=== Required permissions

The `Index Pattern Management` {kib} privilege is required to access the *Index patterns* UI.

To add the privilege, open the menu, then click *Stack Management > Roles*.

[float]
=== Create an index pattern

An index pattern is the glue that connects {kib} to your {es} data. Create an
index pattern whenever you load your own data into {kib}. To get started,
click *Create index pattern*, and then follow the guided steps. Refer to
<<index-patterns, Creating an index pattern>> for the types of index patterns
that you can create.

[float]
=== Manage your index pattern

To view the fields and associated data types in an index pattern, click its name in
the *Index patterns* overview.

[role="screenshot"]
image::management/index-patterns/images/new-index-pattern.png["Index files and data types"]

Use the icons to perform the following actions:
Kibana provides these field formatters:

* [[set-default-pattern]]*Set the default index pattern.* {kib} uses a badge to make users
aware of which index pattern is the default. The first pattern
you create is automatically designated as the default pattern. The default
index pattern is loaded when you open *Discover*.
* <<field-formatters-string, Strings>>
* <<field-formatters-date, Dates>>
* <<field-formatters-geopoint, Geopoints>>
* <<field-formatters-numeric, Numbers>>

* *Refresh the index fields list.* You can refresh the index fields list to
pick up any newly-added fields. Doing so also resets the {kib} popularity counters
for the fields. The popularity counters are used in *Discover* to sort fields in lists.
To format a field:

* [[delete-pattern]]*Delete the index pattern.* This action removes the pattern from the list of
Saved Objects in {kib}. You will not be able to recover field formatters,
scripted fields, source filters, and field popularity data associated with the index pattern.
Deleting an index pattern does
not remove any indices or data documents from {es}.
. Open the main menu, and click *Stack Management > Index Patterns*.
. Click the index pattern that contains the field you want to format.
. Find the field you want to format and click the edit icon (image:management/index-patterns/images/edit_icon.png[]).
. Select a format and fill in the details.
+
WARNING: Deleting an index pattern breaks all visualizations, saved searches, and
other saved objects that reference the pattern.

[float]
=== Edit a field

To edit a field's properties, click the edit icon
image:management/index-patterns/images/edit_icon.png[] in the detail view.
You can set the field's format and popularity value.
[role="screenshot"]
image:management/index-patterns/images/edit-field-format.png["Edit field format"]

Kibana has field formatters for the following field types:

* <<field-formatters-string, Strings>>
* <<field-formatters-date, Dates>>
* <<field-formatters-geopoint, Geopoints>>
* <<field-formatters-numeric, Numbers>>

[[field-formatters-string]]
=== String field formatters
@@ -121,12 +80,8 @@ WARNING: Computing data on the fly with scripted fields can be very resource intensive and can have a direct impact on
{kib} performance. Keep in mind that there's no built-in validation of a scripted field. If your scripts are
buggy, you'll get exceptions whenever you try to view the dynamically generated data.

When you define a scripted field in {kib}, you have a choice of scripting languages. In 5.0 and later, the default
options are {ref}/modules-scripting-expression.html[Lucene expressions] and {ref}/modules-scripting-painless.html[Painless].
While you can use other scripting languages if you enable dynamic scripting for them in {es}, this is not recommended
because they cannot be sufficiently {ref}/modules-scripting-security.html[sandboxed].

WARNING: In 5.0 and later, Groovy, JavaScript, and Python scripting are deprecated and unsupported.
When you define a scripted field in {kib}, you have a choice of the {ref}/modules-scripting-expression.html[Lucene expressions] or the
{ref}/modules-scripting-painless.html[Painless] scripting language.

You can reference any single value numeric field in your expressions, for example:
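
A minimal sketch, assuming the index pattern contains a single-value numeric field named `machine.ram` (substitute any numeric field from your own indices):

----
doc['machine.ram'].value / 1024 / 1024
----

The expression reads each document's value at query time and converts bytes to megabytes.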

2 changes: 1 addition & 1 deletion package.json
@@ -567,7 +567,7 @@
"@types/zen-observable": "^0.8.0",
"@typescript-eslint/eslint-plugin": "^3.10.0",
"@typescript-eslint/parser": "^3.10.0",
"@welldone-software/why-did-you-render": "^4.0.0",
"@welldone-software/why-did-you-render": "^5.0.0",
"@yarnpkg/lockfile": "^1.1.0",
"abab": "^1.0.4",
"angular-aria": "^1.8.0",
53 changes: 41 additions & 12 deletions rfcs/text/0013_saved_object_migrations.md
@@ -212,39 +212,68 @@ Note:
If none of the aliases exists, this is a new Elasticsearch cluster and no
migrations are necessary. Create the `.kibana_7.10.0_001` index with the
following aliases: `.kibana_current` and `.kibana_7.10.0`.
2. If `.kibana_current` and `.kibana_7.10.0` both exist and are pointing to the same index, this version's migration has already been completed.
2. If the source is a < v6.5 `.kibana` index or < 7.4 `.kibana_task_manager`
index, prepare the legacy index for a migration:
1. Mark the legacy index as read-only and wait for all in-flight operations to drain (requires https://github.com/elastic/elasticsearch/pull/58094). This prevents any further writes from outdated nodes. Assuming this API is similar to the existing `/<index>/_close` API, we expect to receive `"acknowledged" : true` and `"shards_acknowledged" : true`. If all shards don’t acknowledge within the timeout, retry the operation until it succeeds.
2. Clone the legacy index into a new index which has writes enabled. Use a fixed index name, i.e. `.kibana_pre6.5.0_001` or `.kibana_task_manager_pre7.4.0_001`. `POST /.kibana/_clone/.kibana_pre6.5.0_001?wait_for_active_shards=all {"settings": {"index.blocks.write": false}}`. Ignore errors if the clone already exists. Ignore errors if the legacy source doesn't exist.
3. Wait for the cloning to complete `GET /_cluster/health/.kibana_pre6.5.0_001?wait_for_status=green&timeout=60s` If cloning doesn’t complete within the 60s timeout, log a warning for visibility and poll again.
4. Apply the `convertToAlias` script if defined `POST /.kibana_pre6.5.0_001/_update_by_query?conflicts=proceed {"script": {...}}`. The `convertToAlias` script will have to be idempotent, preferably setting `ctx.op="noop"` on subsequent runs to avoid unnecessary writes (see the sketch after this list).
5. Delete the legacy index and replace it with an alias of the same name
```
POST /_aliases
{
"actions" : [
{ "add": { "index": ".kibana_pre6.5.0_001", "alias": ".kibana" } },
{ "remove_index": { "index": ".kibana" } }
]
}
```
Unlike the delete index API, the `remove_index` action will fail if
provided with an _alias_. Ignore "The provided expression [.kibana]
matches an alias, specify the corresponding concrete indices instead."
or "index_not_found_exception" errors. These actions are applied
atomically so that other Kibana instances will always see either a
`.kibana` index or an alias, but never neither.
6. Use the cloned `.kibana_pre6.5.0_001` as the source for the rest of the migration algorithm.
3. If `.kibana_current` and `.kibana_7.10.0` both exist and are pointing to the same index, this version's migration has already been completed.
1. Because the same version can have plugins enabled at any point in time,
perform the mappings update in step (6) and migrate outdated documents
with step (7).
2. Skip to step (9) to start serving traffic.
3. Fail the migration if:
4. Fail the migration if:
1. `.kibana_current` is pointing to an index that belongs to a later version of Kibana, e.g. `.kibana_7.12.0_001`
2. (Only in 8.x) The source index contains documents that belong to an unknown Saved Object type (from a disabled plugin). Log an error explaining that the plugin that created these documents needs to be enabled again or that these objects should be deleted. See section (4.2.1.4).
4. Mark the source index as read-only and wait for all in-flight operations to drain (requires https://github.com/elastic/elasticsearch/pull/58094). This prevents any further writes from outdated nodes. Assuming this API is similar to the existing `/<index>/_close` API, we expect to receive `"acknowledged" : true` and `"shards_acknowledged" : true`. If all shards don’t acknowledge within the timeout, retry the operation until it succeeds.
5. Clone the source index into a new target index which has writes enabled. All nodes on the same version will use the same fixed index name e.g. `.kibana_7.10.0_001`. The `001` postfix isn't used by Kibana, but allows for re-indexing an index should this be required by an Elasticsearch upgrade. E.g. re-index `.kibana_7.10.0_001` into `.kibana_7.10.0_002` and point the `.kibana_7.10.0` alias to `.kibana_7.10.0_002`.
5. Mark the source index as read-only and wait for all in-flight operations to drain (requires https://github.com/elastic/elasticsearch/pull/58094). This prevents any further writes from outdated nodes. Assuming this API is similar to the existing `/<index>/_close` API, we expect to receive `"acknowledged" : true` and `"shards_acknowledged" : true`. If all shards don’t acknowledge within the timeout, retry the operation until it succeeds.
6. Clone the source index into a new target index which has writes enabled. All nodes on the same version will use the same fixed index name e.g. `.kibana_7.10.0_001`. The `001` postfix isn't used by Kibana, but allows for re-indexing an index should this be required by an Elasticsearch upgrade. E.g. re-index `.kibana_7.10.0_001` into `.kibana_7.10.0_002` and point the `.kibana_7.10.0` alias to `.kibana_7.10.0_002`.
1. `POST /.kibana_n/_clone/.kibana_7.10.0_001?wait_for_active_shards=all {"settings": {"index.blocks.write": false}}`. Ignore errors if the clone already exists.
2. Wait for the cloning to complete `GET /_cluster/health/.kibana_7.10.0_001?wait_for_status=green&timeout=60s` If cloning doesn’t complete within the 60s timeout, log a warning for visibility and poll again.
6. Update the mappings of the target index
7. Update the mappings of the target index
1. Retrieve the existing mappings including the `migrationMappingPropertyHashes` metadata.
2. Update the mappings with `PUT /.kibana_7.10.0_001/_mapping` (see the sketch after this list). The API deeply merges any updates so this won't remove the mappings of any plugins that were enabled in a previous version but are now disabled.
3. Ensure that fields are correctly indexed using the target index's latest mappings `POST /.kibana_7.10.0_001/_update_by_query?conflicts=proceed`. In the future we could optimize this query by only targeting documents:
1. That belong to a known saved object type.
2. Which don't have outdated migrationVersion numbers since these will be transformed anyway.
3. That belong to a type whose mappings were changed by comparing the `migrationMappingPropertyHashes`. (Metadata, unlike the mappings, isn't commutative, so there is a small chance that the metadata hashes do not accurately reflect the latest mappings; however, this will just result in a less efficient query.)
7. Transform documents by reading batches of outdated documents from the target index then transforming and updating them with optimistic concurrency control.
8. Transform documents by reading batches of outdated documents from the target index, then transforming and updating them with optimistic concurrency control (see the sketch after this list).
1. Ignore any version conflict errors.
2. If a document transform throws an exception, add the document to a failure list and continue trying to transform all other documents. If any failures occurred, log the complete list of documents that failed to transform. Fail the migration.
8. Mark the migration as complete by doing a single atomic operation (requires https://github.com/elastic/elasticsearch/pull/58100) that:
1. Checks that `.kibana-current` alias is still pointing to the source index
2. Points the `.kibana-7.10.0` and `.kibana_current` aliases to the target index.
3. If this fails with a "required alias [.kibana_current] does not exist" error fetch `.kibana_current` again:
9. Mark the migration as complete by doing a single atomic operation (requires https://github.com/elastic/elasticsearch/pull/58100; see the sketch after this list) that:
1. Checks that `.kibana_current` alias is still pointing to the source index
2. Points the `.kibana_7.10.0` and `.kibana_current` aliases to the target index.
3. If this fails with a "required alias [.kibana_current] does not exist" error fetch `.kibana_current` again:
1. If `.kibana_current` is _not_ pointing to our target index fail the migration.
2. If `.kibana_current` is pointing to our target index, the migration has succeeded and we can proceed to step (10).
9. Start serving traffic.
10. Start serving traffic.
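
For step (2.4), a minimal sketch of an idempotent `convertToAlias` script. The `migrated` marker field and its condition are hypothetical; the point is that repeat runs set `ctx.op = "noop"` instead of issuing a write:

```
POST /.kibana_pre6.5.0_001/_update_by_query?conflicts=proceed
{
  "script": {
    "lang": "painless",
    "source": "if (ctx._source.containsKey('migrated')) { ctx.op = 'noop' } else { ctx._source.migrated = true }"
  }
}
```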
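
For step (7), a sketch of the deep-merged mappings update. The single property and the hash value are hypothetical; `_meta.migrationMappingPropertyHashes` is what a later version compares against to detect which types' mappings changed:

```
PUT /.kibana_7.10.0_001/_mapping
{
  "properties": {
    "dashboard": {
      "properties": { "title": { "type": "text" } }
    }
  },
  "_meta": {
    "migrationMappingPropertyHashes": {
      "dashboard": "abc123"
    }
  }
}
```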
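
For step (8), a sketch of one optimistic-concurrency-control update. The document ID and body are hypothetical; the write is conditioned on the `_seq_no` and `_primary_term` returned when the outdated document was read, so a competing write from another node surfaces as a version conflict that can safely be ignored:

```
PUT /.kibana_7.10.0_001/_doc/dashboard:example?if_seq_no=5&if_primary_term=1
{
  "type": "dashboard",
  "migrationVersion": { "dashboard": "7.10.0" },
  "dashboard": { "title": "example (transformed)" }
}
```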
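
For step (9), a sketch of the atomic alias swap, assuming https://github.com/elastic/elasticsearch/pull/58100 lands as a `must_exist` check on the `remove` action (index names are illustrative):

```
POST /_aliases
{
  "actions": [
    { "remove": { "index": ".kibana_7.9.0_001", "alias": ".kibana_current", "must_exist": true } },
    { "add": { "index": ".kibana_7.10.0_001", "alias": ".kibana_current" } },
    { "add": { "index": ".kibana_7.10.0_001", "alias": ".kibana_7.10.0" } }
  ]
}
```

Because alias actions are applied atomically, either all three succeed or the request fails as a whole, which is what triggers the re-fetch of `.kibana_current` described above.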
This algorithm shares a weakness with our existing migration algorithm
(since v7.4). When the task manager index gets reindexed, a reindex script is
applied. Because we delete the original task manager index, there is no way to
roll back a failed task manager migration without a snapshot.

Together with the limitations, this algorithm ensures that migrations are
idempotent. If two nodes are started simultaneously, both of them will start
transforming documents in that version's target index, but because migrations
are idempotent, it doesn’t matter which node’s writes win.
<details>
<summary>In the future, this algorithm could enable (2.6) "read-only functionality during the downtime window" but this is outside of the scope of this RFC.</summary>
5 changes: 5 additions & 0 deletions src/plugins/apm_oss/README.asciidoc
@@ -0,0 +1,5 @@
# APM OSS plugin

OSS plugin for APM. Includes index configuration and tutorial resources.

See <<../../x-pack/plugins/apm/readme.md,the X-Pack APM plugin README>> for information about the main APM plugin.
@@ -134,19 +134,15 @@ test('Add to library is not compatible when embeddable is not in a dashboard container',
expect(await action.isCompatible({ embeddable: orphanContactCard })).toBe(false);
});

test('Add to library replaces embeddableId but retains panel count', async () => {
test('Add to library replaces embeddableId and retains panel count', async () => {
const dashboard = embeddable.getRoot() as IContainer;
const originalPanelCount = Object.keys(dashboard.getInput().panels).length;
const originalPanelKeySet = new Set(Object.keys(dashboard.getInput().panels));

const action = new AddToLibraryAction({ toasts: coreStart.notifications.toasts });
await action.execute({ embeddable });
expect(Object.keys(container.getInput().panels).length).toEqual(originalPanelCount);

const newPanelId = Object.keys(container.getInput().panels).find(
(key) => !originalPanelKeySet.has(key)
);
expect(newPanelId).toBeDefined();
const newPanel = container.getInput().panels[newPanelId!];
expect(Object.keys(container.getInput().panels)).toContain(embeddable.id);
const newPanel = container.getInput().panels[embeddable.id!];
expect(newPanel.type).toEqual(embeddable.type);
});

@@ -162,15 +158,10 @@ test('Add to library returns reference type input', async () => {
mockedByReferenceInput: { savedObjectId: 'testSavedObjectId', id: embeddable.id },
mockedByValueInput: { attributes: complicatedAttributes, id: embeddable.id } as EmbeddableInput,
});
const dashboard = embeddable.getRoot() as IContainer;
const originalPanelKeySet = new Set(Object.keys(dashboard.getInput().panels));
const action = new AddToLibraryAction({ toasts: coreStart.notifications.toasts });
await action.execute({ embeddable });
const newPanelId = Object.keys(container.getInput().panels).find(
(key) => !originalPanelKeySet.has(key)
);
expect(newPanelId).toBeDefined();
const newPanel = container.getInput().panels[newPanelId!];
expect(Object.keys(container.getInput().panels)).toContain(embeddable.id);
const newPanel = container.getInput().panels[embeddable.id!];
expect(newPanel.type).toEqual(embeddable.type);
expect(newPanel.explicitInput.attributes).toBeUndefined();
expect(newPanel.explicitInput.savedObjectId).toBe('testSavedObjectId');
@@ -108,7 +108,12 @@ test('Clone adds a new embeddable', async () => {
);
expect(newPanelId).toBeDefined();
const newPanel = container.getInput().panels[newPanelId!];
expect(newPanel.type).toEqual(embeddable.type);
expect(newPanel.type).toEqual('placeholder');
// let the placeholder load
await dashboard.untilEmbeddableLoaded(newPanelId!);
// now wait for the full embeddable to replace it
const loadedPanel = await dashboard.untilEmbeddableLoaded(newPanelId!);
expect(loadedPanel.type).toEqual(embeddable.type);
});

test('Clones an embeddable without a saved object ID', async () => {
@@ -132,19 +132,14 @@ test('Unlink is not compatible when embeddable is not in a dashboard container',
expect(await action.isCompatible({ embeddable: orphanContactCard })).toBe(false);
});

test('Unlink replaces embeddableId but retains panel count', async () => {
test('Unlink replaces embeddableId and retains panel count', async () => {
const dashboard = embeddable.getRoot() as IContainer;
const originalPanelCount = Object.keys(dashboard.getInput().panels).length;
const originalPanelKeySet = new Set(Object.keys(dashboard.getInput().panels));
const action = new UnlinkFromLibraryAction({ toasts: coreStart.notifications.toasts });
await action.execute({ embeddable });
expect(Object.keys(container.getInput().panels).length).toEqual(originalPanelCount);

const newPanelId = Object.keys(container.getInput().panels).find(
(key) => !originalPanelKeySet.has(key)
);
expect(newPanelId).toBeDefined();
const newPanel = container.getInput().panels[newPanelId!];
expect(Object.keys(container.getInput().panels)).toContain(embeddable.id);
const newPanel = container.getInput().panels[embeddable.id!];
expect(newPanel.type).toEqual(embeddable.type);
});

@@ -164,15 +159,10 @@ test('Unlink unwraps all attributes from savedObject', async () => {
mockedByReferenceInput: { savedObjectId: 'testSavedObjectId', id: embeddable.id },
mockedByValueInput: { attributes: complicatedAttributes, id: embeddable.id },
});
const dashboard = embeddable.getRoot() as IContainer;
const originalPanelKeySet = new Set(Object.keys(dashboard.getInput().panels));
const action = new UnlinkFromLibraryAction({ toasts: coreStart.notifications.toasts });
await action.execute({ embeddable });
const newPanelId = Object.keys(container.getInput().panels).find(
(key) => !originalPanelKeySet.has(key)
);
expect(newPanelId).toBeDefined();
const newPanel = container.getInput().panels[newPanelId!];
expect(Object.keys(container.getInput().panels)).toContain(embeddable.id);
const newPanel = container.getInput().panels[embeddable.id!];
expect(newPanel.type).toEqual(embeddable.type);
expect(newPanel.explicitInput.attributes).toEqual(complicatedAttributes);
});
