
Sharing saved objects, phase 2.5 #89344

Merged: 36 commits, Feb 13, 2021

Conversation

@jportner (Contributor) commented Jan 26, 2021

Resolves: #85791

Overview

"Share-capable" objects

For the 8.0 release, we want to regenerate single-namespace object IDs without necessarily making them shareable. I call this making the objects "share-capable".

The easiest and best way is to convert the objects using the code we already built: that removes the namespace prefix from the serialized object and prevents any additional objects from being created with the same ID in a different space.
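As a rough illustration of what the conversion changes at the serialization level, here is a sketch of the raw document ID formats (the helper names are hypothetical; Kibana's real logic lives in core's saved objects serializer):

```typescript
// Hypothetical sketch of the raw document ID change when a single-namespace
// type is converted to a multi-namespace type. These helper names are
// illustrative only, not Kibana's actual serializer API.

// Single-namespace raw _id: "<namespace>:<type>:<id>" (the default space has no prefix).
function singleNamespaceRawId(namespace: string | undefined, type: string, id: string): string {
  return namespace ? `${namespace}:${type}:${id}` : `${type}:${id}`;
}

// Multi-namespace raw _id: "<type>:<id>" in every space, so the same ID
// cannot exist independently in two different spaces.
function multiNamespaceRawId(type: string, id: string): string {
  return `${type}:${id}`;
}
```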

However, we also want to provide a way for consumers (the Spaces UI) to differentiate between objects that are shareable and those that are not. And, IMO, we want to guarantee to plugin owners that Kibana's APIs won't allow their objects to be shared before they are ready to support it.

My approach to this is to actually have a fourth namespaceType, 'multiple-isolated', that is an intermediate between 'single' and 'multiple'. This is treated like a multi-namespace type by Core (serialization, queries, repository methods) but behaves like a single-namespace type (cannot be shared or unshared, either from the UI or by using an API).

When a plugin owner is ready to make their object type shareable -- either in the 8.0 release, or in a subsequent minor -- they can "flip a switch" by changing its namespaceType to 'multiple', at which point the object will be fully shareable.
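Conceptually, the distinction between the two multi-namespace flavors boils down to two predicates. This is an illustrative sketch, not Kibana's actual API surface:

```typescript
// Illustrative predicates only -- not Kibana's actual API.
type SavedObjectsNamespaceType = 'single' | 'multiple' | 'multiple-isolated' | 'agnostic';

// Treated as multi-namespace by Core (serialization, queries, repository methods):
function isMultiNamespace(t: SavedObjectsNamespaceType): boolean {
  return t === 'multiple' || t === 'multiple-isolated';
}

// Shareable via the UI or APIs only after the owner flips the switch to 'multiple':
function isShareable(t: SavedObjectsNamespaceType): boolean {
  return t === 'multiple';
}
```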

Reusable UI components

The Spaces plugin needs to provide some reusable UI components to plugin owners to use when their objects are shareable. For example, when dashboards are shareable, you may want to see the list of other spaces the current dashboard is shared to, and you might want to be able to click a button to change that. With these reusable UI components, users can accomplish all that without leaving their dashboard.

Primary Changes

Core

  • f2cb1a4 - Add new saved object namespaceType: 'multiple-isolated'. This will be used to convert object types in the 8.0 release to become "share-capable", regenerating their IDs and creating legacy URL aliases without making the object types actually shareable
  • f5c1001 - Update DocumentMigrator, add some fields to migration context to support ESO plugin

EncryptedSavedObjects

  • Update ESO plugin to support objects that are converted to multi-namespace types

Spaces

  • Add new SpacesContext component, which creates a React Context that allows components in the Spaces plugin to avoid fetching Space-related data multiple times
  • Updated the ShareToSpaceFlyout component to make it reusable. It now depends on the SpacesContext and offers several new options to customize its appearance and behavior.
  • Created a new reusable SpaceList component, based on the existing ShareToSpaceSavedObjectsManagementColumn. It depends on the SpacesContext, renders the list of spaces as avatars instead of full badges, and includes a few options to customize its appearance and behavior.
  • Created new reusable LegacyUrlConflict component. It renders a callout that informs the user there are two objects with the same URL.
  • Created redirectLegacyUrl function. It redirects a user to a new URL, and displays a toast that informs the user they used a legacy URL.
  • Updated the public SpacesApi contract to expose the components above to other plugins.
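To make the legacy URL alias idea concrete, here is a minimal, hypothetical model of alias resolution (illustrative names, not the actual Spaces plugin types):

```typescript
// Illustrative model of a legacy URL alias: when an object's ID is
// regenerated during conversion, an alias maps the old ID to the new one
// so that bookmarked "legacy" URLs can still be resolved.
interface LegacyUrlAlias {
  sourceId: string; // the object's original, pre-conversion ID
  targetId: string; // the regenerated ID
}

// Resolve a requested ID to its current ID, falling back to the requested ID.
function resolveLegacyId(aliases: LegacyUrlAlias[], requestedId: string): string {
  const alias = aliases.find((a) => a.sourceId === requestedId);
  return alias ? alias.targetId : requestedId;
}
```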

Machine Learning

  • Update ML plugin to utilize new reusable UI components.

Screenshots

Click to expand

Environment:

  • There are 26 spaces added to Kibana (Alpha, Bravo, Charlie, Delta, ...)
  • The user has 'Read' access to the Alpha space
  • The user does not have access to the Bravo space
  • The user has 'All' access in the rest of the spaces
  • The ML plugin is disabled in the Charlie space

Anywhere you see 'object', it is the default objectNoun value, which the consumer can replace with anything else ('dashboard', 'visualization', etc.)

SpaceList component

Screenshot 1: Saved Objects Management page

(screenshot: spaces-list)

Screenshot 2: Machine Learning Jobs management page

The job is shared to Alpha, Bravo, Charlie, and Delta. The avatar for Charlie is at the end (because ML is disabled in Charlie), and Bravo is not shown (because the user does not have any privileges in the Bravo space).
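The ordering described above can be sketched as a small filter-and-sort (an illustrative model with hypothetical names, not the actual SpaceList implementation):

```typescript
interface Space {
  id: string;
  isAuthorized: boolean; // the user has privileges in this space
  isFeatureDisabled: boolean; // e.g. ML is disabled in this space
}

// Hide unauthorized spaces and move feature-disabled spaces to the end,
// otherwise preserving the original order (Array.prototype.sort is stable).
function orderSpacesForList(spaces: Space[]): string[] {
  return spaces
    .filter((s) => s.isAuthorized)
    .sort((a, b) => Number(a.isFeatureDisabled) - Number(b.isFeatureDisabled))
    .map((s) => s.id);
}
```

With the example above (Alpha, Bravo, Charlie, Delta; Bravo unauthorized; ML disabled in Charlie), this yields Alpha, Delta, Charlie.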

(screenshot)

ShareToSpaceFlyout component

Screenshot 1: Saved Objects Management page, object that is shared to all spaces

(screenshot)

Screenshot 2: Saved Objects Management page, object that is shared to explicit spaces

(screenshot)

Screenshot 3: Machine Learning Jobs management page, job that is shared to explicit spaces

(screenshot: share-ml)

LegacyUrlConflict component

(screenshot)

redirectLegacyUrl toast

(screenshot)

Conversion Examples

These examples demonstrate usage of the new 'multiple-isolated' namespaceType.

Example 1: Convert an existing regular object type

A consumer has an existing single-namespace saved object type in 7.12 that needs to be converted to become "share-capable" in 8.0, and fully shareable in 8.1.

Click to see code

Example of a single-namespace type in 7.12:

core.savedObjects.registerType({
  name: 'foo',
  hidden: false,
  namespaceType: 'single',
  mappings: {...}
});

Example after converting to a multi-namespace (isolated) type in 8.0:

core.savedObjects.registerType({
  name: 'foo',
  hidden: false,
  namespaceType: 'multiple-isolated',
  mappings: {...},
  convertToMultiNamespaceTypeVersion: '8.0.0'
});

Example after converting to a multi-namespace (shareable) type in 8.1:

core.savedObjects.registerType({
  name: 'foo',
  hidden: false,
  namespaceType: 'multiple',
  mappings: {...},
  convertToMultiNamespaceTypeVersion: '8.0.0'
});

Example 2: Convert an existing encrypted object type

A consumer has an existing single-namespace encrypted saved object type in 7.12 that needs to be converted to become "share-capable" in 8.0, and fully shareable in 8.1. To accomplish this, the consumer needs to define an ESO migration in 8.0.0 as well.

Click to see code

Example of a single-namespace encrypted type in 7.12:

encryptedSavedObjects.registerType({
  type: 'foo',
  attributesToEncrypt: new Set(['bar']),
});

core.savedObjects.registerType({
  name: 'foo',
  hidden: false,
  namespaceType: 'single',
  mappings: {...}
});

Example after converting to a multi-namespace (isolated) type in 8.0:

encryptedSavedObjects.registerType({
  type: 'foo',
  attributesToEncrypt: new Set(['bar']),
});

const migration800 = encryptedSavedObjects.createMigration<Foo, Foo>(
  function shouldBeMigrated(doc): doc is SavedObjectUnsanitizedDoc<Foo> {
    return true;
  },
  (doc: SavedObjectUnsanitizedDoc<Foo>): SavedObjectUnsanitizedDoc<Foo> => {
    return doc; // no-op
  }
);

core.savedObjects.registerType({
  name: 'foo',
  hidden: false,
  namespaceType: 'multiple-isolated',
  mappings: {...},
  migrations: {
    '8.0.0': migration800,
  },
  convertToMultiNamespaceTypeVersion: '8.0.0'
});

Example after converting to a multi-namespace (shareable) type in 8.1:

encryptedSavedObjects.registerType({
  type: 'foo',
  attributesToEncrypt: new Set(['bar']),
});

const migration800 = encryptedSavedObjects.createMigration<Foo, Foo>(
  function shouldBeMigrated(doc): doc is SavedObjectUnsanitizedDoc<Foo> {
    return true;
  },
  (doc: SavedObjectUnsanitizedDoc<Foo>): SavedObjectUnsanitizedDoc<Foo> => {
    return doc; // no-op
  }
);

core.savedObjects.registerType({
  name: 'foo',
  hidden: false,
  namespaceType: 'multiple',
  mappings: {...},
  migrations: {
    '8.0.0': migration800,
  },
  convertToMultiNamespaceTypeVersion: '8.0.0'
});

@jportner (Contributor Author)

Note for reviewers: the UI for the Share to Space flyout can only be accessed if an object's namespaceType is 'multiple'. This PR will not change that, and the existing behavior is intended. Objects with a namespaceType of 'multiple-isolated' should not be treated as shareable in the UI. Relevant code is here:

available: (object: SavedObjectsManagementRecord) => {
  const hasCapability =
    !this.actionContext ||
    !!this.actionContext.capabilities.savedObjectsManagement.shareIntoSpace;
  return object.meta.namespaceType === 'multiple' && hasCapability;
},

@jportner force-pushed the issue-85791-ssobjects-phase-2.5 branch from e9a48bd to 7b2415e on January 28, 2021
@jportner force-pushed the issue-85791-ssobjects-phase-2.5 branch 2 times, most recently from c3506dd to bfd5d02, on February 4, 2021
@jportner force-pushed the issue-85791-ssobjects-phase-2.5 branch 2 times, most recently from ba29e31 to 5cc7b56, on February 8, 2021
This will be used to convert saved objects in the 8.0 release. It will allow us to regenerate object IDs, create aliases, and force objects to use unique IDs across namespaces. However, objects of this type are "share-capable" but not shareable across multiple namespaces.

ESO uses object "descriptors" as part of additionally authenticated data (AAD) when encrypting and decrypting objects. Historically, the descriptors for single-namespace objects have included the object's namespace, but in a world where saved objects can be shared across spaces, that no longer makes sense. This commit allows consumers to define an ESO migration that allows for flexible decryption of a saved object using a legacy descriptor that includes a namespace, then encrypts the object with a new descriptor that omits the object's namespace.

The saved object migration context now describes which migration version is currently being run, and the object type's registered `convertToMultiNamespaceTypeVersion` field (if it exists). This allows the ESO migration function to make smarter decisions about how to handle object descriptors for additionally authenticated data (AAD).

The existing component is now called ShareToSpaceFlyoutInternal, which signals that it should not be used by external plugins.

The ShareToSpaceFlyout depended on NotificationsSetup, even though it already had access to the notifications service via the KibanaReactContextProvider that it uses.

Includes changes to labels and i18n. Also adds configurable options for whether or not to display the "create new copy" callout and/or the "create new space" link text, and adds new test cases accordingly.

If the user cannot change the object's spaces, a warning callout is displayed in addition to the tooltip. Also added unit tests to exercise this functionality and the ShareModeControl in general.

This will allow the flyout to behave in a space-agnostic manner (instead of the default, which is space-aware). In other words, it will no longer treat the active space differently, allowing the user to freely deselect the active space if they desire. This will be useful for ML, and for the Saved Objects Management page in the future when we eventually show objects from all spaces.

This React context fetches Spaces data one time, allowing any children to consume it without re-fetching. The first children to use the SpacesContext are the ShareToSpaceFlyout and the ShareToSpaceAction.

Previously it rendered spaces as badges with their full names. Now it renders them as SpaceAvatar components. It also allows consumers to change the limit on the number of spaces that are displayed, and to enable space-agnostic behavior (e.g., render the active space).

When a feature ID is specified on a SpacesContext, other Spaces UI components will behave accordingly when the feature is disabled in a given space. In SpaceList, the affected spaces will be moved to the end of the list. In ShareToSpaceFlyout, the affected spaces will only be shown if the object already exists in those spaces, and will be differentiated with a tooltip explaining why.
@jportner force-pushed the issue-85791-ssobjects-phase-2.5 branch from 5cc7b56 to 68018a7 on February 8, 2021
@jportner (Contributor Author) commented Feb 9, 2021

@elastic/kibana-core I'm leaving this PR in draft state for the moment because it's not complete and I don't want the other affected codeowners to get pinged yet... but this is ready for you to review now. See the PR description for changes; you'll want to look at f2cb1a4 and f5c1001.

@jportner jportner requested a review from a team February 9, 2021 03:39
@pgayvallet (Contributor) left a comment


LGTM on a technical level regarding core changes, just a few NITs

Now, sorry if this was already answered, but what's the goal of this intermediary stage 😅 ?

@jportner (Contributor Author) commented Feb 9, 2021

LGTM on a technical level regarding core changes, just a few NITs

Great! I addressed your feedback in 9d1e02e 👍

Now, sorry if this was already answered, but what's the goal of this intermediary stage 😅 ?

Sorry about that, I updated the PR description and added an "Overview" at the top to better describe why we want to make this change.

@legrego legrego self-requested a review February 11, 2021 15:37
@azasypkin (Member) left a comment


That's an impressive amount of work, LGTM! I tested locally as many use cases as I could think of and haven't noticed anything obviously wrong.


The feature/API surface is very large, though. I think once the UI and API stabilize and we release sharing capabilities, our team will need to gradually cover this functionality with UI tests (thanks for the unit/api-integration tests you've already added!).

@legrego (Member) left a comment


LGTM - reviewed almost everything, and did cursory testing. I defer to Aleh as the primary reviewer here, and ML's review of the integration.

Nice work, Joe! 🎉

@gchaps (Contributor) left a comment


UI copy LGTM

width: '90px',
});
}
// Note: this code path is commented because it is currently unreachable, it will need to be refactored to use the SpacesApi
@jgowdyelastic (Member) commented Feb 12, 2021


looks like this still needs changing. It's not currently possible to assign DFA jobs to spaces.

@jportner (Contributor Author)


Whoops, I don't know why I thought the DataFrameAnalyticsList was only used in one place, it's actually used in two places. That's what screwed me up.

Fixed in 265b85c and tested to make sure I could create + share a DFA job.

It's worth noting that I couldn't keep the SpacesContext at the JobsListPage level where it used to be, because that component re-renders a few times and somehow that winds up causing an infinite re-rendering loop when the SpacesContext is introduced there. So, instead, I added the SpacesContext wrapper within each tab section. Works like a charm, and the Spaces get reloaded (along with all the job data) when you switch tabs.

@jgowdyelastic (Member) commented Feb 12, 2021


There's an odd issue now on the DFA jobs list in the management page, where the search bar loses focus after each keypress. This is the cause of the failing functional test.

@jportner (Contributor Author)


Thanks for the pointer, that helped me figure out what was going on. The SpacesContext wrapper (along with all other components on this page) gets re-rendered multiple times with each keypress. I discovered this by adding a console log inside of it 😄 As a side effect, the SpacesContext wrapper was creating a brand-new context object each time it was rendered, which caused the wonky behavior of losing focus.

This was not a problem on the Saved Objects Management page, which is a bit simpler and only renders the SpacesContext wrapper once.

In d28bbda I changed the SpacesContext wrapper a bit to be more resilient, and because of that I was able to move it back to the top of the JobsListPage where it used to be. So I think it's all working perfectly now, you can take another look!

But FYI I think the JobsListPage has some problems that cause unnecessary re-rendering of components, might want to make a mental note to look into that in the future.

The wrapper would recreate the underlying context object each time it was re-rendered. That was not a problem for the Saved Objects Management page, which only rendered it once -- but it turned out to be a problem for the Machine Learning Jobs management page, which re-renders all of its children multiple times.
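The underlying referential-identity problem can be sketched without React at all (illustrative names, not the actual Spaces plugin code):

```typescript
// Illustrative sketch of the re-render bug: building the context value inline
// produces a new object reference on every render, so every consumer sees a
// "changed" value and re-renders; caching keeps the reference stable.
interface ContextValue {
  spacesDataPromise: Promise<string[]>;
}

function makeUnstableValue(): ContextValue {
  return { spacesDataPromise: Promise.resolve([]) }; // fresh reference each call
}

function makeStableValueFactory(): () => ContextValue {
  let cached: ContextValue | undefined;
  return () => {
    if (!cached) {
      cached = { spacesDataPromise: Promise.resolve([]) };
    }
    return cached; // same reference every call
  };
}
```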
@kibanamachine (Contributor)

💛 Build succeeded, but was flaky


Test Failures

Kibana Pipeline / general / X-Pack API Integration Tests.x-pack/test/api_integration/apis/security_solution/tls·ts.apis SecuritySolution Endpoints Tls Test with Packetbeat Tls Test "before all" hook for "Ensure data is returned for FlowTarget.Source"

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has failed 2 times on tracked branches: https://github.com/elastic/kibana/issues/91181

[00:00:00]       │
[00:00:00]         └-: apis
[00:00:00]           └-> "before all" hook in "apis"
[00:05:35]           └-: SecuritySolution Endpoints
[00:05:35]             └-> "before all" hook in "SecuritySolution Endpoints"
[00:06:49]             └-: Tls Test with Packetbeat
[00:06:49]               └-> "before all" hook in "Tls Test with Packetbeat"
[00:06:49]               └-: Tls Test
[00:06:49]                 └-> "before all" hook for "Ensure data is returned for FlowTarget.Source"
[00:06:49]                 └-> "before all" hook for "Ensure data is returned for FlowTarget.Source"
[00:06:49]                   │ info [packetbeat/tls] Loading "mappings.json"
[00:06:49]                   │ info [packetbeat/tls] Loading "data.json.gz"
[00:06:50]                   │ info Heap dump file created [718848674 bytes in 1.428 secs]
[00:06:50]                   │ info [o.e.b.ElasticsearchUncaughtExceptionHandler] [kibana-ci-immutable-ubuntu-16-tests-xxl-1613163559398603818] fatal error in thread [elasticsearch[kibana-ci-immutable-ubuntu-16-tests-xxl-1613163559398603818][system_write][T#2]], exiting
[00:06:50]                   │      java.lang.OutOfMemoryError: Java heap space
[00:06:50]                   │      	at org.apache.lucene.store.ByteBuffersDataOutput.toArrayCopy(ByteBuffersDataOutput.java:271) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
[00:06:50]                   │      	at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.flush(CompressingStoredFieldsWriter.java:239) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
[00:06:50]                   │      	at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.finishDocument(CompressingStoredFieldsWriter.java:169) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
[00:06:50]                   │      	at org.apache.lucene.index.StoredFieldsConsumer.finishDocument(StoredFieldsConsumer.java:68) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
[00:06:50]                   │      	at org.apache.lucene.index.DefaultIndexingChain.finishStoredFields(DefaultIndexingChain.java:460) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
[00:06:50]                   │      	at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:496) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
[00:06:50]                   │      	at org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:208) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
[00:06:50]                   │      	at org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:415) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
[00:06:50]                   │      	at org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1471) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
[00:06:50]                   │      	at org.apache.lucene.index.IndexWriter.softUpdateDocument(IndexWriter.java:1799) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
[00:06:50]                   │      	at org.elasticsearch.index.engine.InternalEngine.updateDocs(InternalEngine.java:1243) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.index.engine.InternalEngine.indexIntoLucene(InternalEngine.java:1073) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:904) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:867) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.index.shard.IndexShard.applyIndexOperation(IndexShard.java:839) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.index.shard.IndexShard.applyIndexOperationOnPrimary(IndexShard.java:804) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:284) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.action.bulk.TransportShardBulkAction$2.doRun(TransportShardBulkAction.java:173) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.action.bulk.TransportShardBulkAction.performOnPrimary(TransportShardBulkAction.java:219) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.action.bulk.TransportShardBulkAction.dispatchedShardOperationOnPrimary(TransportShardBulkAction.java:117) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.action.bulk.TransportShardBulkAction.dispatchedShardOperationOnPrimary(TransportShardBulkAction.java:75) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.action.support.replication.TransportWriteAction$1.doRun(TransportWriteAction.java:177) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:728) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) ~[?:?]
[00:06:50]                   │      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) ~[?:?]
[00:06:50]                   │      	at java.lang.Thread.run(Thread.java:832) [?:?]
[00:06:50]                   │ERROR fatal error in thread [elasticsearch[kibana-ci-immutable-ubuntu-16-tests-xxl-1613163559398603818][system_write][T#2]], exiting
[00:06:50]                   │      java.lang.OutOfMemoryError: Java heap space
[00:06:50]                   │      	at org.apache.lucene.store.ByteBuffersDataOutput.toArrayCopy(ByteBuffersDataOutput.java:271)
[00:06:50]                   │      	at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.flush(CompressingStoredFieldsWriter.java:239)
[00:06:50]                   │      	at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.finishDocument(CompressingStoredFieldsWriter.java:169)
[00:06:50]                   │      	at org.apache.lucene.index.StoredFieldsConsumer.finishDocument(StoredFieldsConsumer.java:68)
[00:06:50]                   │      	at org.apache.lucene.index.DefaultIndexingChain.finishStoredFields(DefaultIndexingChain.java:460)
[00:06:50]                   │      	at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:496)
[00:06:50]                   │      	at org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:208)
[00:06:50]                   │      	at org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:415)
[00:06:50]                   │      	at org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1471)
[00:06:50]                   │      	at org.apache.lucene.index.IndexWriter.softUpdateDocument(IndexWriter.java:1799)
[00:06:50]                   │      	at org.elasticsearch.index.engine.InternalEngine.updateDocs(InternalEngine.java:1243)
[00:06:50]                   │      	at org.elasticsearch.index.engine.InternalEngine.indexIntoLucene(InternalEngine.java:1073)
[00:06:50]                   │      	at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:904)
[00:06:50]                   │      	at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:867)
[00:06:50]                   │      	at org.elasticsearch.index.shard.IndexShard.applyIndexOperation(IndexShard.java:839)
[00:06:50]                   │      	at org.elasticsearch.index.shard.IndexShard.applyIndexOperationOnPrimary(IndexShard.java:804)
[00:06:50]                   │      	at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:284)
[00:06:50]                   │      	at org.elasticsearch.action.bulk.TransportShardBulkAction$2.doRun(TransportShardBulkAction.java:173)
[00:06:50]                   │      	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
[00:06:50]                   │      	at org.elasticsearch.action.bulk.TransportShardBulkAction.performOnPrimary(TransportShardBulkAction.java:219)
[00:06:50]                   │      	at org.elasticsearch.action.bulk.TransportShardBulkAction.dispatchedShardOperationOnPrimary(TransportShardBulkAction.java:117)
[00:06:50]                   │      	at org.elasticsearch.action.bulk.TransportShardBulkAction.dispatchedShardOperationOnPrimary(TransportShardBulkAction.java:75)
[00:06:50]                   │      	at org.elasticsearch.action.support.replication.TransportWriteAction$1.doRun(TransportWriteAction.java:177)
[00:06:50]                   │      	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:728)
[00:06:50]                   │      	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
[00:06:50]                   │      	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
[00:06:50]                   │      	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
[00:06:50]                   │      	at java.base/java.lang.Thread.run(Thread.java:832)
[00:06:50]                   │      
[00:06:50]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1613163559398603818] [packetbeat-7.6.0-2020.03.03-000001] creating index, cause [api], templates [], shards [1]/[1]
[00:06:51]                   └- ✖ fail: apis SecuritySolution Endpoints Tls Test with Packetbeat Tls Test "before all" hook for "Ensure data is returned for FlowTarget.Source"
[00:06:51]                   │      ConnectionError: connect ECONNREFUSED 127.0.0.1:6112
[00:06:51]                   │       at ClientRequest.onError (/dev/shm/workspace/kibana/node_modules/@elastic/elasticsearch/lib/Connection.js:115:16)
[00:06:51]                   │       at Socket.socketErrorListener (_http_client.js:469:9)
[00:06:51]                   │       at emitErrorNT (internal/streams/destroy.js:106:8)
[00:06:51]                   │       at emitErrorCloseNT (internal/streams/destroy.js:74:3)
[00:06:51]                   │       at processTicksAndRejections (internal/process/task_queues.js:80:21)
[00:06:51]                   │ 
[00:06:51]                   │ 

Stack Trace

ConnectionError: connect ECONNREFUSED 127.0.0.1:6112
    at ClientRequest.onError (/dev/shm/workspace/kibana/node_modules/@elastic/elasticsearch/lib/Connection.js:115:16)
    at Socket.socketErrorListener (_http_client.js:469:9)
    at emitErrorNT (internal/streams/destroy.js:106:8)
    at emitErrorCloseNT (internal/streams/destroy.js:74:3)
    at processTicksAndRejections (internal/process/task_queues.js:80:21) {
  meta: {
    body: null,
    statusCode: null,
    headers: null,
    meta: {
      context: null,
      request: [Object],
      name: 'elasticsearch-js',
      connection: [Object],
      attempts: 3,
      aborted: false
    }
  }
}

Kibana Pipeline / general / X-Pack API Integration Tests.x-pack/test/api_integration/apis/security_solution/tls·ts.apis SecuritySolution Endpoints Tls Test with Packetbeat Tls Test "after all" hook for "Ensure data is returned for FlowTarget.Destination"

Link to Jenkins

Standard Out

Failed Tests Reporter:
  - Test has failed 2 times on tracked branches: https://github.com/elastic/kibana/issues/91182

[00:06:50]                   │      	at org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1471) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
[00:06:50]                   │      	at org.apache.lucene.index.IndexWriter.softUpdateDocument(IndexWriter.java:1799) ~[lucene-core-8.8.0.jar:8.8.0 b10659f0fc18b58b90929cfdadde94544d202c4a - noble - 2021-01-25 19:07:45]
[00:06:50]                   │      	at org.elasticsearch.index.engine.InternalEngine.updateDocs(InternalEngine.java:1243) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.index.engine.InternalEngine.indexIntoLucene(InternalEngine.java:1073) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:904) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:867) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.index.shard.IndexShard.applyIndexOperation(IndexShard.java:839) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.index.shard.IndexShard.applyIndexOperationOnPrimary(IndexShard.java:804) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:284) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.action.bulk.TransportShardBulkAction$2.doRun(TransportShardBulkAction.java:173) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.action.bulk.TransportShardBulkAction.performOnPrimary(TransportShardBulkAction.java:219) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.action.bulk.TransportShardBulkAction.dispatchedShardOperationOnPrimary(TransportShardBulkAction.java:117) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.action.bulk.TransportShardBulkAction.dispatchedShardOperationOnPrimary(TransportShardBulkAction.java:75) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.action.support.replication.TransportWriteAction$1.doRun(TransportWriteAction.java:177) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:728) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26) ~[elasticsearch-8.0.0-SNAPSHOT.jar:8.0.0-SNAPSHOT]
[00:06:50]                   │      	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130) ~[?:?]
[00:06:50]                   │      	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630) ~[?:?]
[00:06:50]                   │      	at java.lang.Thread.run(Thread.java:832) [?:?]
[00:06:50]                   │ERROR fatal error in thread [elasticsearch[kibana-ci-immutable-ubuntu-16-tests-xxl-1613163559398603818][system_write][T#2]], exiting
[00:06:50]                   │      java.lang.OutOfMemoryError: Java heap space
[00:06:50]                   │      	at org.apache.lucene.store.ByteBuffersDataOutput.toArrayCopy(ByteBuffersDataOutput.java:271)
[00:06:50]                   │      	at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.flush(CompressingStoredFieldsWriter.java:239)
[00:06:50]                   │      	at org.apache.lucene.codecs.compressing.CompressingStoredFieldsWriter.finishDocument(CompressingStoredFieldsWriter.java:169)
[00:06:50]                   │      	at org.apache.lucene.index.StoredFieldsConsumer.finishDocument(StoredFieldsConsumer.java:68)
[00:06:50]                   │      	at org.apache.lucene.index.DefaultIndexingChain.finishStoredFields(DefaultIndexingChain.java:460)
[00:06:50]                   │      	at org.apache.lucene.index.DefaultIndexingChain.processDocument(DefaultIndexingChain.java:496)
[00:06:50]                   │      	at org.apache.lucene.index.DocumentsWriterPerThread.updateDocuments(DocumentsWriterPerThread.java:208)
[00:06:50]                   │      	at org.apache.lucene.index.DocumentsWriter.updateDocuments(DocumentsWriter.java:415)
[00:06:50]                   │      	at org.apache.lucene.index.IndexWriter.updateDocuments(IndexWriter.java:1471)
[00:06:50]                   │      	at org.apache.lucene.index.IndexWriter.softUpdateDocument(IndexWriter.java:1799)
[00:06:50]                   │      	at org.elasticsearch.index.engine.InternalEngine.updateDocs(InternalEngine.java:1243)
[00:06:50]                   │      	at org.elasticsearch.index.engine.InternalEngine.indexIntoLucene(InternalEngine.java:1073)
[00:06:50]                   │      	at org.elasticsearch.index.engine.InternalEngine.index(InternalEngine.java:904)
[00:06:50]                   │      	at org.elasticsearch.index.shard.IndexShard.index(IndexShard.java:867)
[00:06:50]                   │      	at org.elasticsearch.index.shard.IndexShard.applyIndexOperation(IndexShard.java:839)
[00:06:50]                   │      	at org.elasticsearch.index.shard.IndexShard.applyIndexOperationOnPrimary(IndexShard.java:804)
[00:06:50]                   │      	at org.elasticsearch.action.bulk.TransportShardBulkAction.executeBulkItemRequest(TransportShardBulkAction.java:284)
[00:06:50]                   │      	at org.elasticsearch.action.bulk.TransportShardBulkAction$2.doRun(TransportShardBulkAction.java:173)
[00:06:50]                   │      	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
[00:06:50]                   │      	at org.elasticsearch.action.bulk.TransportShardBulkAction.performOnPrimary(TransportShardBulkAction.java:219)
[00:06:50]                   │      	at org.elasticsearch.action.bulk.TransportShardBulkAction.dispatchedShardOperationOnPrimary(TransportShardBulkAction.java:117)
[00:06:50]                   │      	at org.elasticsearch.action.bulk.TransportShardBulkAction.dispatchedShardOperationOnPrimary(TransportShardBulkAction.java:75)
[00:06:50]                   │      	at org.elasticsearch.action.support.replication.TransportWriteAction$1.doRun(TransportWriteAction.java:177)
[00:06:50]                   │      	at org.elasticsearch.common.util.concurrent.ThreadContext$ContextPreservingAbstractRunnable.doRun(ThreadContext.java:728)
[00:06:50]                   │      	at org.elasticsearch.common.util.concurrent.AbstractRunnable.run(AbstractRunnable.java:26)
[00:06:50]                   │      	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1130)
[00:06:50]                   │      	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:630)
[00:06:50]                   │      	at java.base/java.lang.Thread.run(Thread.java:832)
[00:06:50]                   │      
[00:06:50]                   │ info [o.e.c.m.MetadataCreateIndexService] [kibana-ci-immutable-ubuntu-16-tests-xxl-1613163559398603818] [packetbeat-7.6.0-2020.03.03-000001] creating index, cause [api], templates [], shards [1]/[1]
[00:06:51]                   └- ✖ fail: apis SecuritySolution Endpoints Tls Test with Packetbeat Tls Test "before all" hook for "Ensure data is returned for FlowTarget.Source"
[00:06:51]                   │      ConnectionError: connect ECONNREFUSED 127.0.0.1:6112
[00:06:51]                   │       at ClientRequest.onError (/dev/shm/workspace/kibana/node_modules/@elastic/elasticsearch/lib/Connection.js:115:16)
[00:06:51]                   │       at Socket.socketErrorListener (_http_client.js:469:9)
[00:06:51]                   │       at emitErrorNT (internal/streams/destroy.js:106:8)
[00:06:51]                   │       at emitErrorCloseNT (internal/streams/destroy.js:74:3)
[00:06:51]                   │       at processTicksAndRejections (internal/process/task_queues.js:80:21)
[00:06:51]                   │ 
[00:06:51]                   │ 
[00:06:51]                   └-> "after all" hook for "Ensure data is returned for FlowTarget.Destination"
[00:06:51]                     │ proc [kibana]   log   [21:58:31.032] [error][savedobjects-service] Unable to retrieve version information from Elasticsearch nodes.
[00:06:51]                     │ info [packetbeat/tls] Unloading indices from "mappings.json"
[00:06:51]                     └- ✖ fail: apis SecuritySolution Endpoints Tls Test with Packetbeat Tls Test "after all" hook for "Ensure data is returned for FlowTarget.Destination"
[00:06:51]                     │      ConnectionError: connect ECONNREFUSED 127.0.0.1:6112
[00:06:51]                     │       at ClientRequest.onError (/dev/shm/workspace/kibana/node_modules/@elastic/elasticsearch/lib/Connection.js:115:16)
[00:06:51]                     │       at Socket.socketErrorListener (_http_client.js:469:9)
[00:06:51]                     │       at emitErrorNT (internal/streams/destroy.js:106:8)
[00:06:51]                     │       at emitErrorCloseNT (internal/streams/destroy.js:74:3)
[00:06:51]                     │       at processTicksAndRejections (internal/process/task_queues.js:80:21)
[00:06:51]                     │ 
[00:06:51]                     │ 

Stack Trace

ConnectionError: connect ECONNREFUSED 127.0.0.1:6112
    at ClientRequest.onError (/dev/shm/workspace/kibana/node_modules/@elastic/elasticsearch/lib/Connection.js:115:16)
    at Socket.socketErrorListener (_http_client.js:469:9)
    at emitErrorNT (internal/streams/destroy.js:106:8)
    at emitErrorCloseNT (internal/streams/destroy.js:74:3)
    at processTicksAndRejections (internal/process/task_queues.js:80:21) {
  meta: {
    body: null,
    statusCode: null,
    headers: null,
    meta: {
      context: null,
      request: [Object],
      name: 'elasticsearch-js',
      connection: [Object],
      attempts: 3,
      aborted: false
    }
  }
}

Metrics [docs]

Module Count

Fewer modules lead to a faster build time

id       before   after   diff
ml       1750     1735    -15
spaces   236      270     +34
total                     +19

Async chunks

Total size of all lazy-loaded chunks that will be downloaded as the user navigates the app

id                      before    after     diff
ml                      6.4MB     6.3MB     -14.4KB
savedObjectsManagement  163.7KB   164.1KB   +356.0B
total                                       -14.1KB

Page load bundle

Size of the bundles that are downloaded on every page load. Target size is below 100kb

id         before    after     diff
ml         68.4KB    68.2KB    -233.0B
spaces     231.3KB   274.9KB   +43.6KB
spacesOss  3.5KB     4.5KB     +1.0KB
total                          +44.5KB

History

To update your PR or re-run it, just comment with:
@elasticmachine merge upstream

@jgowdyelastic (Member) left a comment:

ML changes LGTM!

@jportner jportner added the auto-backport Deprecated - use backport:version if exact versions are needed label Feb 13, 2021
@jportner jportner merged commit 5c3c3ef into elastic:master Feb 13, 2021
@jportner jportner deleted the issue-85791-ssobjects-phase-2.5 branch February 13, 2021 09:28
@kibanamachine (Contributor)

Backport result

{"level":"info","message":"POST https://api.github.com/graphql (status: 200)"}
{"level":"info","message":"POST https://api.github.com/graphql (status: 200)"}
{"meta":{"labels":["auto-backport","release_note:skip","v7.12.0","v8.0.0"],"branchLabelMapping":{"^v8.0.0$":"master","^v7.12.0$":"7.x","^v(\\d+).(\\d+).\\d+$":"$1.$2"},"existingTargetPullRequests":[]},"level":"info","message":"Inputs when calculating target branches:"}
{"meta":["7.x"],"level":"info","message":"Target branches inferred from labels:"}
{"meta":{"killed":false,"code":2,"signal":null,"cmd":"git remote rm kibanamachine","stdout":"","stderr":"error: No such remote: 'kibanamachine'\n"},"level":"info","message":"exec error 'git remote rm kibanamachine':"}
{"meta":{"killed":false,"code":2,"signal":null,"cmd":"git remote rm elastic","stdout":"","stderr":"error: No such remote: 'elastic'\n"},"level":"info","message":"exec error 'git remote rm elastic':"}
{"level":"info","message":"Backporting [{\"sourceBranch\":\"master\",\"targetBranchesFromLabels\":[\"7.x\"],\"sha\":\"5c3c3efdd87089fb1a326854c83397a7253bd7c6\",\"formattedMessage\":\"Sharing saved objects, phase 2.5 (#89344)\",\"originalMessage\":\"Sharing saved objects, phase 2.5 (#89344)\",\"pullNumber\":89344,\"existingTargetPullRequests\":[]}] to 7.x"}

Backporting to 7.x:
{"level":"info","message":"Backporting via filesystem"}
{"meta":{"killed":false,"code":1,"signal":null,"cmd":"git cherry-pick 5c3c3efdd87089fb1a326854c83397a7253bd7c6","stdout":"Auto-merging x-pack/test/saved_object_api_integration/common/fixtures/es_archiver/saved_objects/spaces/mappings.json\nCONFLICT (content): Merge conflict in x-pack/test/saved_object_api_integration/common/fixtures/es_archiver/saved_objects/spaces/mappings.json\nAuto-merging x-pack/plugins/translations/translations/zh-CN.json\nAuto-merging x-pack/plugins/translations/translations/ja-JP.json\nRemoving x-pack/plugins/spaces/public/share_saved_objects_to_space/share_saved_objects_to_space_column.test.tsx\nRemoving x-pack/plugins/spaces/public/share_saved_objects_to_space/components/share_to_space_flyout.test.tsx\nRemoving x-pack/plugins/spaces/public/share_saved_objects_to_space/components/context_wrapper.tsx\nAuto-merging x-pack/plugins/ml/public/application/jobs/jobs_list/components/jobs_list/jobs_list.js\nRemoving x-pack/plugins/ml/public/application/contexts/spaces/spaces_context.ts\nRemoving x-pack/plugins/ml/public/application/components/job_spaces_selector/spaces_selectors.tsx\nRemoving x-pack/plugins/ml/public/application/components/job_spaces_selector/spaces_selector.scss\nRemoving x-pack/plugins/ml/public/application/components/job_spaces_selector/jobs_spaces_flyout.tsx\nRemoving x-pack/plugins/ml/public/application/components/job_spaces_selector/cannot_edit_callout.tsx\nAuto-merging src/core/server/server.api.md\nAuto-merging src/core/server/saved_objects/service/lib/repository.ts\nAuto-merging src/core/server/saved_objects/service/lib/repository.test.js\nAuto-merging src/core/public/public.api.md\n","stderr":"Performing inexact rename detection:  56% (52628/93692)\rPerforming inexact rename detection:  58% (54988/93692)\rPerforming inexact rename detection:  59% (55696/93692)\rPerforming inexact rename detection:  61% (57348/93692)\rPerforming inexact rename detection:  62% (58292/93692)\rPerforming inexact rename detection:  63% 
(59236/93692)\rPerforming inexact rename detection:  64% (60652/93692)\rPerforming inexact rename detection:  65% (61124/93692)\rPerforming inexact rename detection:  66% (62068/93692)\rPerforming inexact rename detection:  67% (62776/93692)\rPerforming inexact rename detection:  68% (63720/93692)\rPerforming inexact rename detection:  69% (64664/93692)\rPerforming inexact rename detection:  70% (65608/93692)\rPerforming inexact rename detection:  71% (66552/93692)\rPerforming inexact rename detection:  72% (67496/93692)\rPerforming inexact rename detection:  73% (68440/93692)\rPerforming inexact rename detection:  74% (69384/93692)\rPerforming inexact rename detection:  75% (70328/93692)\rPerforming inexact rename detection:  76% (71272/93692)\rPerforming inexact rename detection:  77% (72216/93692)\rPerforming inexact rename detection:  78% (73160/93692)\rPerforming inexact rename detection:  79% (74104/93692)\rPerforming inexact rename detection:  80% (75048/93692)\rPerforming inexact rename detection:  81% (75992/93692)\rPerforming inexact rename detection:  82% (76936/93692)\rPerforming inexact rename detection:  83% (77880/93692)\rPerforming inexact rename detection:  84% (78824/93692)\rPerforming inexact rename detection:  85% (79768/93692)\rPerforming inexact rename detection:  86% (80712/93692)\rPerforming inexact rename detection:  87% (81656/93692)\rPerforming inexact rename detection:  88% (82600/93692)\rPerforming inexact rename detection:  89% (83544/93692)\rPerforming inexact rename detection:  90% (84488/93692)\rPerforming inexact rename detection:  91% (85432/93692)\rPerforming inexact rename detection:  92% (86376/93692)\rPerforming inexact rename detection:  93% (87320/93692)\rPerforming inexact rename detection:  94% (88264/93692)\rPerforming inexact rename detection:  95% (89208/93692)\rPerforming inexact rename detection:  96% (90152/93692)\rPerforming inexact rename detection:  97% (91096/93692)\rPerforming inexact rename detection:  98% 
(92040/93692)\rPerforming inexact rename detection:  99% (92984/93692)\rPerforming inexact rename detection: 100% (93692/93692)\rPerforming inexact rename detection: 100% (93692/93692), done.\nerror: could not apply 5c3c3efdd87... Sharing saved objects, phase 2.5 (#89344)\nhint: after resolving the conflicts, mark the corrected paths\nhint: with 'git add <paths>' or 'git rm <paths>'\nhint: and commit the result with 'git commit'\n"},"level":"info","message":"exec error 'git cherry-pick 5c3c3efdd87089fb1a326854c83397a7253bd7c6':"}
{"meta":{"killed":false,"code":2,"signal":null,"cmd":"git --no-pager diff --check","stdout":"x-pack/test/saved_object_api_integration/common/fixtures/es_archiver/saved_objects/spaces/mappings.json:267: leftover conflict marker\nx-pack/test/saved_object_api_integration/common/fixtures/es_archiver/saved_objects/spaces/mappings.json:276: leftover conflict marker\nx-pack/test/saved_object_api_integration/common/fixtures/es_archiver/saved_objects/spaces/mappings.json:315: leftover conflict marker\n","stderr":""},"level":"info","message":"exec error 'git --no-pager diff --check':"}
Commit could not be cherrypicked due to conflicts
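The error hints in the log above describe the manual recovery path: resolve the conflicting paths, mark them with `git add`, and commit the result. As an illustrative sketch only (the repo, branch names, file name, and commit content below are hypothetical stand-ins for the real `master`/`7.x` backport and its `mappings.json` conflict):

```shell
# Hypothetical reproduction of a cherry-pick conflict and the manual
# resolve-and-commit flow the hints describe. All names are illustrative.
set -e
workdir=$(mktemp -d)
cd "$workdir"
git init -q repo
cd repo
git config user.email "ci@example.com"
git config user.name "ci"
git checkout -qb master                  # name the unborn branch explicitly

echo "base mapping" > mappings.json      # shared ancestor
git add mappings.json
git commit -qm "base"

git checkout -qb 7.x                     # the backport target branch
echo "7.x-only change" > mappings.json
git commit -qam "7.x change"

git checkout -q master                   # the commit to backport
echo "master change" > mappings.json
git commit -qam "master change"
sha=$(git rev-parse HEAD)

git checkout -q 7.x
# This fails with a content conflict, as in the log above.
git cherry-pick "$sha" 2>/dev/null || true
# Resolve by taking the version from the picked commit ("theirs"
# during a cherry-pick), then follow the hints: add and commit.
git checkout --theirs -- mappings.json
git add mappings.json
git commit -q --no-edit                  # completes the cherry-pick
git log --oneline -1
```

After the commit, the backported change sits on `7.x` and can be pushed as the backport PR the bot could not open automatically.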

@legrego (Member) commented Feb 13, 2021:

Commit could not be cherrypicked due to conflicts

🙁 you did your best, KibanaMachine

Labels
backported release_note:skip Skip the PR/issue when compiling release notes v7.12.0 v8.0.0
Development

Successfully merging this pull request may close these issues.

Sharing saved-objects in multiple spaces: phase 2.5