Adding 161249 to known issues for 8.8.x (elastic#161313)
Adding elastic#161249 (Kibana can run out
of memory during an upgrade when there are many Fleet agent policies in
place) to known issues for 8.8.x.

---------

Co-authored-by: David Kilfoyle <41695641+kilfoyle@users.noreply.github.com>
ppf2 and kilfoyle committed Jul 12, 2023
1 parent 19f00ec commit d8323c3
Showing 1 changed file with 83 additions and 0 deletions.
docs/CHANGELOG.asciidoc
@@ -49,6 +49,36 @@ Review important information about the {kib} 8.x releases.

Review the following information about the {kib} 8.8.2 release.

[float]
[[known-issues-8.8.2]]
=== Known issues

// tag::known-issue-161249[]
[discrete]
.Kibana can run out of memory during an upgrade when there are many {fleet} agent policies.
[%collapsible]
====
*Details* +
Due to a schema version update, all agent policies are queried and deployed during {fleet} setup in 8.8.x.
This triggers many requests to the Elastic Package Registry (EPR) to fetch integration packages and, as a result,
increases Kibana's resident set size (RSS) memory usage.

*Impact* +
The default batch size of `100` for the schema version upgrade of {fleet} agent policies is too high and can
cause Kibana to run out of memory during an upgrade. For example, we have observed 1GB Kibana instances run
out of memory during an upgrade with 20 agent policies that each contained 5 integrations.

*Workaround* +
Two workaround options are available:

* Increase the Kibana instance size to 2GB. So far, we have not been able to reproduce the issue with 2GB instances.
* Set `xpack.fleet.setup.agentPolicySchemaUpgradeBatchSize` to `2` in `kibana.yml` and restart the Kibana instance(s) (see the configuration example below).

In 8.9.0, we are addressing this by changing the default batch size to `2`.
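For reference, a minimal `kibana.yml` sketch of the second workaround (the setting name and value are the ones described above; the comments are illustrative only):

[source,yaml]
----
# kibana.yml
# Lower the Fleet agent policy schema upgrade batch size so that fewer
# policies are processed at once during the 8.8.x upgrade (default: 100).
xpack.fleet.setup.agentPolicySchemaUpgradeBatchSize: 2
----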
====
// end::known-issue-161249[]

[float]
[[fixes-v8.8.2]]
=== Bug Fixes
@@ -106,6 +136,32 @@ Review the following information about the {kib} 8.8.1 release.
[[known-issues-8.8.1]]
=== Known issues

// tag::known-issue-161249[]
[discrete]
.Kibana can run out of memory during an upgrade when there are many {fleet} agent policies.
[%collapsible]
====
*Details* +
Due to a schema version update, all agent policies are queried and deployed during {fleet} setup in 8.8.x.
This triggers many requests to the Elastic Package Registry (EPR) to fetch integration packages and, as a result,
increases Kibana's resident set size (RSS) memory usage.

*Impact* +
The default batch size of `100` for the schema version upgrade of {fleet} agent policies is too high and can
cause Kibana to run out of memory during an upgrade. For example, we have observed 1GB Kibana instances run
out of memory during an upgrade with 20 agent policies that each contained 5 integrations.

*Workaround* +
Two workaround options are available:

* Increase the Kibana instance size to 2GB. So far, we have not been able to reproduce the issue with 2GB instances.
* Set `xpack.fleet.setup.agentPolicySchemaUpgradeBatchSize` to `2` in `kibana.yml` and restart the Kibana instance(s) (see the configuration example below).

In 8.9.0, we are addressing this by changing the default batch size to `2`.
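For reference, a minimal `kibana.yml` sketch of the second workaround (the setting name and value are the ones described above; the comments are illustrative only):

[source,yaml]
----
# kibana.yml
# Lower the Fleet agent policy schema upgrade batch size so that fewer
# policies are processed at once during the 8.8.x upgrade (default: 100).
xpack.fleet.setup.agentPolicySchemaUpgradeBatchSize: 2
----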
====
// end::known-issue-161249[]

// tag::known-issue-159807[]
[discrete]
.Memory leak in {fleet} audit logging.
@@ -198,6 +254,32 @@ Review the following information about the {kib} 8.8.0 release.
[[known-issues-8.8.0]]
=== Known issues

// tag::known-issue-161249[]
[discrete]
.Kibana can run out of memory during an upgrade when there are many {fleet} agent policies.
[%collapsible]
====
*Details* +
Due to a schema version update, all agent policies are queried and deployed during {fleet} setup in 8.8.x.
This triggers many requests to the Elastic Package Registry (EPR) to fetch integration packages and, as a result,
increases Kibana's resident set size (RSS) memory usage.

*Impact* +
The default batch size of `100` for the schema version upgrade of {fleet} agent policies is too high and can
cause Kibana to run out of memory during an upgrade. For example, we have observed 1GB Kibana instances run
out of memory during an upgrade with 20 agent policies that each contained 5 integrations.

*Workaround* +
Two workaround options are available:

* Increase the Kibana instance size to 2GB. So far, we have not been able to reproduce the issue with 2GB instances.
* Set `xpack.fleet.setup.agentPolicySchemaUpgradeBatchSize` to `2` in `kibana.yml` and restart the Kibana instance(s) (see the configuration example below).

In 8.9.0, we are addressing this by changing the default batch size to `2`.
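For reference, a minimal `kibana.yml` sketch of the second workaround (the setting name and value are the ones described above; the comments are illustrative only):

[source,yaml]
----
# kibana.yml
# Lower the Fleet agent policy schema upgrade batch size so that fewer
# policies are processed at once during the 8.8.x upgrade (default: 100).
xpack.fleet.setup.agentPolicySchemaUpgradeBatchSize: 2
----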
====
// end::known-issue-161249[]

// tag::known-issue-158940[]
[discrete]
.Failed upgrades to 8.8.0 can cause bootlooping and data loss
@@ -221,6 +303,7 @@ The 8.8.1 release includes in {kibana-pull}158940[a fix] for this problem. Custo
*Details* +
{fleet} introduced audit logging for various CRUD (create, read, update, and delete) operations in version 8.8.0.
While audit logging is not enabled by default, we have identified an off-heap memory leak in the implementation of {fleet} audit logging that can result in poor {kib} performance, and in some cases {kib} instances being terminated by the OS kernel's oom-killer. This memory leak can occur even when {kib} audit logging is not explicitly enabled (regardless of whether `xpack.security.audit.enabled` is set in the `kibana.yml` settings file).

*Impact* +
The 8.8.2 release includes {kibana-pull}159807[a fix] for this problem. If you are using {fleet} integrations
and {kib} audit logging in version 8.8.0 or 8.8.1, upgrade to 8.8.2 or later to obtain the fix.
