Compaction limits #8348

Merged
merged 10 commits into from May 4, 2017
Conversation

@jwilder (Contributor) commented May 2, 2017

Required for all non-trivial PRs
  • Rebased/mergable
  • Tests pass
  • CHANGELOG.md updated
  • Sign CLA (if not already signed)

This change adds a new limit to control how many concurrent full/level compactions can run at one time. It also stops background compaction goroutines for cold shards.

  • max-concurrent-compactions can be set to limit the number of full and level compactions that run concurrently. This limit does not apply to snapshot compactions, as they must run to avoid filling the cache. A value of 0 (the default) sets the limit to runtime.GOMAXPROCS(0); any value greater than 0 limits compactions to that number. Compactions that are scheduled to run when the limit is met will block until one completes. This limit can be used to throttle CPU usage due to many concurrent compactions (see the semaphore sketch after this list).
  • The startup process has changed to disable compactions until all shards have opened. Previously, a shard would start compactions as soon as it was opened even if other shards were still opening. This could slow down startup due to many compactions kicking in and consuming CPU/IO while other shards are loading.
  • Shards have about 5 goroutines used for level/full compactions. When there are thousands of shards, the number of goroutines can be high. While this hasn't been an issue per se, it is inefficient considering most of these goroutines are not doing anything once a shard goes cold and is fully compacted. These goroutines are now stopped once the shard goes cold, and they are restarted after new writes/deletes to the shard occur.
  • Compaction planning runs independently for each level, and the planning can sometimes assign the same file to different plans. This was handled by the Compactor, which keeps track of which files are currently being compacted; if a duplicate file was seen, the second compaction would be aborted, leading to the log messages described in #7425 (move some compaction logs to trace level). The planner was updated to keep track of which files have been assigned to plans, so that plans are never returned with overlapping files. #7425 should not occur now, but the safeguards in the Compactor are still in place.
  • There was a bug where tmp files would not be cleaned up when an error occurred during a compaction. These are now removed.
  • The per-shard monitor goroutine has been removed; its work is now handled by a single goroutine for all shards on the store.
  • Cache.Size didn't include the snapshot size, which caused the size to be misreported. It now does.
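To illustrate the limiting behavior, here is a minimal sketch of such a limit as a counting semaphore built from a buffered channel. The `limiter` type and its methods are hypothetical illustrations, not the PR's actual implementation:

```go
package main

import (
	"fmt"
	"runtime"
	"sync"
	"time"
)

// limiter is a counting semaphore built from a buffered channel.
type limiter chan struct{}

func newLimiter(n int) limiter {
	if n <= 0 {
		// 0 means "use the number of schedulable CPUs", mirroring
		// the max-concurrent-compactions default described above.
		n = runtime.GOMAXPROCS(0)
	}
	return make(limiter, n)
}

func (l limiter) Take()    { l <- struct{}{} } // blocks while the limit is met
func (l limiter) Release() { <-l }

func main() {
	lim := newLimiter(2) // e.g. max-concurrent-compactions = 2

	var wg sync.WaitGroup
	for i := 0; i < 5; i++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			lim.Take() // waits here until a slot frees up
			defer lim.Release()
			fmt.Printf("compaction %d running\n", id)
			time.Sleep(100 * time.Millisecond) // stand-in for the real work
		}(i)
	}
	wg.Wait()
}
```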

Fixes #7425 #8276 #8123

@jwilder jwilder added the review label May 2, 2017
@jwilder jwilder requested a review from benbjohnson May 2, 2017 17:24
@jwilder jwilder force-pushed the jw-tsm-compaction-limit branch from b969673 to 7c90c78 on May 2, 2017 19:11
@jwilder (Contributor, Author) commented May 3, 2017

@benbjohnson Found a bug while testing and pushed up a fix. I also reworked the monitor goroutine so there is no longer one per shard.

@jwilder jwilder force-pushed the jw-tsm-compaction-limit branch from dc4c31f to 053064d on May 3, 2017 19:24
@benbjohnson (Contributor) commented

@jwilder 👍

jwilder added 8 commits May 3, 2017 16:31
Compactions are enabled as soon as the shard is opened.  This can
slow down startup or cause the system to spike in CPU usage at startup
if many shards need to be compacted.

This now delays compactions until after all shards are loaded.
This limit allows the number of concurrent level and full compactions
to be throttled.  Snapshot compactions are not affected by this limit
as they need to run continuously.

This limit can be used to control how much CPU is consumed by compactions.
The default is to limit to the number of CPUs available.
The compactor prevents the same file from being compacted by different
compaction runs, but it can result in confusing warning messages in
the logs.

This adds compaction plan tracking to the planner so that files are
only part of one plan at a given time.
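A minimal sketch of this kind of plan tracking follows; the names are hypothetical and the real tsm1 planner is considerably more involved:

```go
package main

import (
	"fmt"
	"sync"
)

// planner hands out compaction plans and remembers which files are
// already assigned, so no two plans ever share a file.
type planner struct {
	mu    sync.Mutex
	inUse map[string]bool
}

func newPlanner() *planner {
	return &planner{inUse: make(map[string]bool)}
}

// plan returns the candidate files not already part of another plan
// and marks the returned files as assigned.
func (p *planner) plan(candidates []string) []string {
	p.mu.Lock()
	defer p.mu.Unlock()
	var files []string
	for _, f := range candidates {
		if p.inUse[f] {
			continue // already owned by a running plan
		}
		p.inUse[f] = true
		files = append(files, f)
	}
	return files
}

// release frees files once their compaction completes or aborts.
func (p *planner) release(files []string) {
	p.mu.Lock()
	defer p.mu.Unlock()
	for _, f := range files {
		delete(p.inUse, f)
	}
}

func main() {
	p := newPlanner()
	fmt.Println(p.plan([]string{"a.tsm", "b.tsm"})) // [a.tsm b.tsm]
	fmt.Println(p.plan([]string{"b.tsm", "c.tsm"})) // [c.tsm]: b.tsm is taken
	p.release([]string{"b.tsm"})
	fmt.Println(p.plan([]string{"b.tsm"})) // [b.tsm] again after release
}
```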
Each shard has a number of goroutines for compacting different levels
of TSM files.  When a shard goes cold and is fully compacted, these
goroutines are still running.

This change will stop background shard goroutines when the shard goes
cold and start them back up if new writes arrive.
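One way to sketch this stop/restart pattern is a done channel guarded by a mutex; the type and method names here are hypothetical, not the PR's code:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// shard runs a background compaction loop that can be stopped when
// the shard goes cold and restarted when new writes arrive.
type shard struct {
	mu   sync.Mutex
	done chan struct{}
	wg   sync.WaitGroup
}

// enableCompactions starts the background loop if it isn't running.
func (s *shard) enableCompactions() {
	s.mu.Lock()
	defer s.mu.Unlock()
	if s.done != nil {
		return // already running
	}
	s.done = make(chan struct{})
	s.wg.Add(1)
	go s.compactLoop(s.done)
}

// disableCompactions signals the loop to exit and waits for it.
func (s *shard) disableCompactions() {
	s.mu.Lock()
	done := s.done
	s.done = nil
	s.mu.Unlock()
	if done == nil {
		return
	}
	close(done)
	s.wg.Wait()
}

func (s *shard) compactLoop(done chan struct{}) {
	defer s.wg.Done()
	t := time.NewTicker(50 * time.Millisecond)
	defer t.Stop()
	for {
		select {
		case <-done:
			return // shard went cold: stop consuming a goroutine
		case <-t.C:
			fmt.Println("checking for compaction work")
		}
	}
}

func main() {
	s := &shard{}
	s.enableCompactions()
	time.Sleep(120 * time.Millisecond)
	s.disableCompactions() // e.g. shard went cold and is fully compacted
}
```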
The monitor goroutine ran for each shard and updated disk stats
as well as logged cardinality warnings.  This goroutine has been
removed by making the disk stats more lightweight and callable
directly from Statistics, and by moving the logging to the
tsdb.Store.  The latter allows one goroutine to handle all shards.
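The store-level loop can be sketched like this (hypothetical names; the real monitor updates disk stats and logs cardinality warnings per shard):

```go
package main

import (
	"fmt"
	"time"
)

// Instead of one monitor goroutine per shard, a single store-level
// goroutine periodically walks every shard.
type store struct {
	shards map[uint64]string // id -> path; stand-in for real shard objects
}

func (s *store) monitor(done <-chan struct{}) {
	t := time.NewTicker(100 * time.Millisecond)
	defer t.Stop()
	for {
		select {
		case <-done:
			return
		case <-t.C:
			for id := range s.shards {
				// update disk stats / check cardinality here
				fmt.Printf("monitoring shard %d\n", id)
			}
		}
	}
}

func main() {
	st := &store{shards: map[uint64]string{1: "/var/lib/influxdb/1", 2: "/var/lib/influxdb/2"}}
	done := make(chan struct{})
	go st.monitor(done)
	time.Sleep(250 * time.Millisecond)
	close(done)
}
```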
This was causing a shard to appear idle when in fact a snapshot compaction
was running.  If the timing was right, compactions would be disabled and
the snapshot compaction would be aborted.
Index.ForEachMeasurementTagKey held an RLock while calling the fn.
If the fn made another call into the index that acquired an RLock
after another goroutine had tried to acquire a Lock, it would deadlock.
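For context, Go's sync.RWMutex blocks new readers once a writer is waiting, which is why re-acquiring an RLock from inside the fn can deadlock. Below is a minimal sketch of one way to avoid the pattern, by copying the data under the lock before calling fn; the names are hypothetical and this is not necessarily the exact fix used in this PR:

```go
package main

import (
	"fmt"
	"sync"
)

// index sketches the hazard: the iterator used to hold an RLock while
// invoking fn. If fn re-entered the index (taking RLock again) while
// another goroutine was blocked in Lock, the pending writer blocked
// the new reader and the existing reader blocked the writer: deadlock.
type index struct {
	mu   sync.RWMutex
	keys []string
}

// forEachTagKey avoids the deadlock by copying the keys under RLock,
// releasing the lock, and only then calling fn.
func (i *index) forEachTagKey(fn func(key string) error) error {
	i.mu.RLock()
	keys := make([]string, len(i.keys))
	copy(keys, i.keys)
	i.mu.RUnlock()

	for _, k := range keys {
		if err := fn(k); err != nil { // fn may now safely re-enter the index
			return err
		}
	}
	return nil
}

func main() {
	idx := &index{keys: []string{"host", "region"}}
	_ = idx.forEachTagKey(func(k string) error {
		fmt.Println("tag key:", k)
		return nil
	})
}
```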
@jwilder jwilder force-pushed the jw-tsm-compaction-limit branch from 053064d to 7371f10 on May 4, 2017 15:25
jwilder added 2 commits May 4, 2017 09:56
Since this is called more frequently now, the cleanup func was invoked
quite often, which makes several syscalls per shard.  It should only
be called the first time compactions are disabled.
Avoids some extra allocations.
@mahaveer1707 commented Sep 19, 2019

#7640 (comment)

I am still facing the issue. I went through the whole thread in #7640. The only suggestion I could find was to change the duration of my shards, which I already did, but there has been little change so far.

Please advise. I can provide any details needed.
I have put some details in a comment on issue #7640: #7640 (comment)

Successfully merging this pull request may close these issues:

  • move some compaction logs to trace level (#7425)