Include size of snapshot in snapshot metadata #18543
It seems that the snapshot status endpoint exposes this information in the stats.total_size field: GET /_snapshot/my_backup/snapshot_1/_status?human
@luizgpsantos that's different, that's just the size of the increment. @tlrx and I looked at the code and think this should be simple to compute and add to the stats.
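For context, the status call quoted above can be issued as follows. This is a minimal sketch assuming a cluster at localhost:9200 and the Python requests library; the repository and snapshot names are taken from the example request, and are placeholders:

```python
import requests

# Query the snapshot status endpoint discussed above.
resp = requests.get(
    "http://localhost:9200/_snapshot/my_backup/snapshot_1/_status",
    params={"human": "true"},
)
resp.raise_for_status()

# The response carries a "snapshots" array; each entry has a "stats" object.
for snap in resp.json()["snapshots"]:
    print(snap["snapshot"], snap["stats"]["total_size"])
```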
There are properties total_size_in_bytes and processed_size_in_bytes in snapshot stats. In fact, total_size_in_bytes shows the size of the difference between the previous snapshot and the current one, while processed_size_in_bytes shows the progress of making the snapshot, so it is in the range between 0 and total_size_in_bytes. If we add another property, e.g. accumulated_size_in_bytes, it will confuse usage even more. Instead, I'd like to change the semantics of total_size_in_bytes: it will reflect the true total size of the snapshot, while another property, diff_size, will reflect the size of the difference between the previous snapshot and the current one (processed_size_in_bytes will be in the range between 0 and diff_size). Another question: is it worth aligning number_of_files to be consistent with total_size_in_bytes and diff_size? It would require adding one more property like diff_number_of_files.
Adds the difference in the number of files (and file sizes) between the previous and current snapshot. The total number/size reflects the total number/size of files in the snapshot. Closes elastic#18543
To summarize the change, here is the "stats" object for an operation, before and after.

Before:

    "stats": {
      "number_of_files": 8,
      "processed_files": 8,
      "total_size": "4.6kb",
      "total_size_in_bytes": 4797,
      "processed_size": "4.6kb",
      "processed_size_in_bytes": 4797,
      "start_time_in_millis": 1523882347327,
      "time": "225ms",
      "time_in_millis": 225
    }

After:

    "stats": {
      "incremental_file_count": 8,
      "total_file_count": 11,
      "processed_file_count": 8,
      "incremental_size": "4.6kb",
      "incremental_size_in_bytes": 4797,
      "total_size": "8kb",
      "total_size_in_bytes": 8276,
      "processed_size": "4.6kb",
      "processed_size_in_bytes": 4797,
      "start_time_in_millis": 1523882347327,
      "time": "225ms",
      "time_in_millis": 225
    }
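With the renamed fields, progress can be reported against the incremental work rather than the full snapshot size. Here is a sketch under the same assumptions as above (localhost cluster, Python requests library, placeholder repository/snapshot names), using the field names from the "after" output:

```python
import requests

resp = requests.get("http://localhost:9200/_snapshot/my_backup/snapshot_1/_status")
resp.raise_for_status()

for snap in resp.json()["snapshots"]:
    stats = snap["stats"]
    incremental = stats["incremental_size_in_bytes"]  # bytes this snapshot must actually copy
    total = stats["total_size_in_bytes"]              # full size of all files referenced by the snapshot
    processed = stats["processed_size_in_bytes"]      # ranges from 0 to incremental
    pct = 100.0 * processed / incremental if incremental else 100.0
    print(f"{snap['snapshot']}: {pct:.1f}% of {incremental} incremental bytes copied "
          f"({total} bytes total in snapshot)")
```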
…" and "total" naming confusion
* master: silence InstallPluginCommandTests, see #30900 Remove left-over comment Fix double semicolon in import statement [TEST] Fix minor random bug from #30794 Include size of snapshot in snapshot metadata #18543, bwc clean up (#30890) Enabling testing against an external cluster (#30885) Add public key header/footer (#30877) SQL: Remove the last remaining server dependencies from jdbc (#30771) Include size of snapshot in snapshot metadata (#29602) Do not serialize basic license exp in x-pack info (#30848) Change BWC version for VerifyRepositoryResponse (#30796) [DOCS] Document index name limitations (#30826) Harmonize include_defaults tests (#30700)
* 6.x: [DOCS] Fixes kibana security file location Fix synced flush docs REST high-level client: add synced flush API (2) (#30650) stable filemode for zip distributions (#30854) Fix index_prefixes cross-type compatibility check (#30956) Add deprecation notice for missing option Add missing_bucket option in the composite agg (#29465) Amend skip version for snapshot REST test Relates #18543 Reintroduces SSL tests (#30947) Fsync state file before exposing it (#30929) Rename index_prefix to index_prefixes (#30932)
Feature: add snapshot size to snapshot metadata.
While the on-disk size of a snapshot may not have much relevance on its own, because of the incremental nature of snapshots (segments may already exist in the repository, so the data transferred may be less than the total, and deleting a snapshot may not release all of its disk space because some segments may still be referenced by other snapshots), it should be easy to capture the total size of the snapshot as it would be restored. That total is the disk space the snapshot will require on restore, which makes it very useful for determining, before executing a restore, whether the target cluster has enough disk space.
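As an illustration of that pre-restore check, here is a hedged sketch (assuming the target cluster is reachable at localhost:9200 and placeholder repository/snapshot names; the _cat/allocation API reports per-node disk figures, and bytes=b makes it emit raw byte counts):

```python
import requests

BASE = "http://localhost:9200"

# Total size the snapshot would occupy once restored.
status = requests.get(f"{BASE}/_snapshot/my_backup/snapshot_1/_status").json()
needed = sum(s["stats"]["total_size_in_bytes"] for s in status["snapshots"])

# Free disk across data nodes, in raw bytes (unassigned rows have no disk.avail).
alloc = requests.get(
    f"{BASE}/_cat/allocation", params={"format": "json", "bytes": "b"}
).json()
available = sum(int(row["disk.avail"]) for row in alloc if row.get("disk.avail"))

print(f"snapshot needs {needed} bytes; cluster has {available} bytes free")
if needed > available:
    print("not enough disk space to restore this snapshot")
```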