Handle Elasticsearch health status changes/restarts more gracefully during Kibana index migration #26049
Labels: Team:Core (Core services & architecture: plugins, logging, config, saved objects, http, ES client, i18n, etc.), triage_needed
Kibana version: 6.5.1
During a Kibana index migration, an ES node restarted.
See the initial "No Living connections" messages in the logs below. Kibana was then able to reconnect and issued an index creation request for the next incremental upgrade index (.kibana_7):
Shortly after, however, it failed with "Error registering Kibana Privileges". Here are the corresponding logs on the Elasticsearch side (timestamps below are US Pacific).
You will see that server101 (the node that was restarted) rejoined the cluster, followed by the corresponding .kibana_7 index creation request, and the cluster then turned green (from yellow). While the cluster was yellow, at least one copy of the .security-6 index should still have been available, so it seems like Kibana had trouble determining the actual status of the indices in the cluster.
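For context on that last point: a yellow cluster status means every primary shard is allocated and only some replicas are unassigned, so reads and writes against the .security-6 primaries should still have succeeded. Below is a minimal sketch of checking per-index health via the _cluster/health API, written against the @elastic/elasticsearch client; the client setup and the helper are illustrative, not Kibana's actual migration code:

```ts
import { Client } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

// Ask the cluster health API for per-index detail. A "yellow" index still
// has all of its primary shards allocated, so it remains usable.
async function indexIsUsable(index: string): Promise<boolean> {
  const health = await client.cluster.health({ index, level: 'indices' });
  const status = health.indices?.[index]?.status;
  return status === 'green' || status === 'yellow';
}

indexIsUsable('.security-6').then((ok) =>
  console.log(ok ? '.security-6 is usable' : '.security-6 has unassigned primaries')
);
```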
It would be nice if Kibana could handle these situations more gracefully, for example by retrying failed requests, or by preventing the next incremental migration step from starting if it detects a cluster health status change during the migration.
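To make the suggestion concrete, here is one way such a guard might look, again sketched with the @elastic/elasticsearch client: gate each migration step on the cluster reaching at least yellow health, and retry the step a few times on connection-level errors instead of failing fast. This is only an illustration of the idea, not how Kibana's migrator is actually structured; the helper names, retry counts, and backoff are made up:

```ts
import { Client, errors } from '@elastic/elasticsearch';

const client = new Client({ node: 'http://localhost:9200' });

// Hypothetical guard: block until the cluster reports at least yellow health
// (all primary shards allocated) before the next migration step runs.
async function waitForYellow(timeout = '30s'): Promise<void> {
  await client.cluster.health({ wait_for_status: 'yellow', timeout });
}

// Hypothetical wrapper: retry a migration step on connection-level failures
// (e.g. the "No Living connections" situation above) rather than aborting
// the whole migration on the first error.
async function runMigrationStep<T>(step: () => Promise<T>, retries = 3): Promise<T> {
  for (let attempt = 1; ; attempt++) {
    await waitForYellow();
    try {
      return await step();
    } catch (e) {
      if (attempt > retries || !(e instanceof errors.ConnectionError)) throw e;
      await new Promise((r) => setTimeout(r, 1000 * attempt)); // linear backoff
    }
  }
}

// Example: only create the next incremental upgrade index once the cluster
// is healthy, retrying if a node happens to be mid-restart.
runMigrationStep(() => client.indices.create({ index: '.kibana_7' })).catch((e) =>
  console.error('migration step failed after retries', e)
);
```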