Fix write path lock contention #6168

Merged
merged 4 commits into master from jw-locks on Mar 31, 2016

Conversation

jwilder
Contributor

@jwilder jwilder commented Mar 31, 2016

Required for all non-trivial PRs
  • Rebased/mergable
  • Tests pass
  • CHANGELOG.md updated
  • Sign CLA (if not already signed)

This fixes two performance regressions in the write path.

cc @mark-rushakoff

Fixes #6131

@jwilder jwilder added this to the 0.12.0 milestone Mar 31, 2016
@mark-rushakoff
Contributor

Good profiling work here. We'll be fine for now without those stats. Choosing those speedups over the stats is a no-brainer.

@@ -1311,6 +1291,7 @@ func (a Measurements) union(other Measurements) Measurements {

// Series belong to a Measurement and represent unique time series in a database
type Series struct {
	mu sync.RWMutex

Contributor

Can you move shardIDs to under mu since it's now being protected by mu?

Contributor Author

mu is protecting everything right now.

Contributor

Ah the space between Tags and id threw me. Can you close that up if mu is protecting everything?

Contributor Author

Done
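
For illustration only, a minimal sketch of how the struct might read once the fields guarded by mu sit directly beneath it. Only mu, Tags, id, and shardIDs are mentioned in this thread; the other names are assumptions, not the actual tsdb source.

    package tsdb

    import "sync"

    // Sketch only: approximates the layout discussed above.
    type Series struct {
        mu       sync.RWMutex      // guards all of the fields below
        Key      string            // assumed field
        Tags     map[string]string
        id       uint64
        shardIDs map[uint64]bool   // shards that contain this series
    }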

@e-dard
Contributor

e-dard commented Mar 31, 2016

Aside from moving the location of shardIDs in the struct, LGTM 👍

@@ -1360,11 +1349,16 @@ func (s *Series) UnmarshalBinary(buf []byte) error {

// InitializeShards initializes the list of shards.
func (s *Series) InitializeShards() {
	s.mu.Lock()
	defer s.mu.Unlock()

Contributor

The unlock doesn't need to be deferred; s.mu.Unlock() can just be called directly after s.shardIDs is assigned.

Contributor Author

Good point. Fixed.
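
A minimal sketch of the two forms under discussion; the method names below are invented purely to show both variants side by side (the real method is InitializeShards).

    // Deferred unlock, as in the diff above: correct, but the defer is unnecessary
    // when the only work under the lock is a single assignment.
    func (s *Series) initializeShardsDeferred() {
        s.mu.Lock()
        defer s.mu.Unlock()
        s.shardIDs = make(map[uint64]bool)
    }

    // Direct unlock, as suggested: release the lock right after the assignment.
    func (s *Series) initializeShardsDirect() {
        s.mu.Lock()
        s.shardIDs = make(map[uint64]bool)
        s.mu.Unlock()
    }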

@@ -1360,11 +1349,16 @@ func (s *Series) UnmarshalBinary(buf []byte) error {

// InitializeShards initializes the list of shards.
func (s *Series) InitializeShards() {
	s.mu.Lock()
	defer s.mu.Unlock()
	s.shardIDs = make(map[uint64]bool)
}

// match returns true if all tags match the series' tags.
func (s *Series) match(tags map[string]string) bool {

Contributor

I can't find where this function is called, and since it isn't exported, we should just remove it.

Contributor Author

Done
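
For context, the diff above shows only the signature and doc comment of the helper being removed; a plausible body (an assumption, not the original source) would have been:

    // match returns true if all tags match the series' tags.
    func (s *Series) match(tags map[string]string) bool {
        for k, v := range tags {
            if s.Tags[k] != v {
                return false
            }
        }
        return true
    }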

@joelegasse
Contributor

Needs a rebase (changelog, I'm guessing), but otherwise looks good to me 👍

@@ -1329,11 +1309,16 @@ func NewSeries(key string, tags map[string]string) *Series {
}

Contributor

Is there a lot of read contention on Series.shardIDs? If there's not, it would be more efficient to use a sync.Mutex.

Contributor Author

I'm going to leave this as a sync.RWMutex because I've had to change almost all of our uses of sync.Mutex to sync.RWMutex due to lock contention issues that appear under different workloads.
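
A sketch of the trade-off: with sync.RWMutex, concurrent readers of shardIDs don't block each other, while writers still take the lock exclusively. The accessor names below are hypothetical, not the actual influxdb methods.

    // Hypothetical read path: many goroutines may hold the read lock at once.
    func (s *Series) assignedToShard(id uint64) bool {
        s.mu.RLock()
        ok := s.shardIDs[id]
        s.mu.RUnlock()
        return ok
    }

    // Hypothetical write path: exclusive, just as it would be with a plain sync.Mutex.
    func (s *Series) assignShard(id uint64) {
        s.mu.Lock()
        s.shardIDs[id] = true
        s.mu.Unlock()
    }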

jwilder added 4 commits March 31, 2016 10:19
The stats setup ends up creating a lot of lock contention which significantly impacts write throughput when a large number of measurements are used.

Fixes #6131
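
Purely to illustrate the kind of regression described in the commit message (this is not the influxdb code): when every write increments shared statistics under a single mutex, that mutex serializes otherwise independent writers, and the effect grows with the number of measurements being tracked.

    // Illustrative only: one mutex guarding shared per-measurement counters
    // becomes a serialization point on the write path.
    type writeStats struct {
        mu     sync.Mutex
        counts map[string]int64 // per-measurement write counts
    }

    func (w *writeStats) incr(measurement string) {
        w.mu.Lock()
        w.counts[measurement]++ // every writer from every goroutine contends here
        w.mu.Unlock()
    }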
@jwilder jwilder merged commit 319a2d9 into master Mar 31, 2016
@mark-rushakoff mark-rushakoff deleted the jw-locks branch March 31, 2016 16:44
@jwilder jwilder modified the milestones: 0.11.1, 0.12.0 Mar 31, 2016
Successfully merging this pull request may close these issues.

Performance degradation on influx 0.11 and memory saturation on high volume.
5 participants