FixedBufPool::next #199
The `Arc` is not necessary for the single-threaded pool, but I'm going to reuse this for the thread-safe implementation. `Notify` contains a mutex and uses atomic pointers for tracking waiters, so it's going to be slow regardless when we hit the waiting branch in `next`. Performance of `next` in the depleted-pool case is not high on my current priority list for micro-optimizations.
But we are left without a simple, fast pool that doesn't allocate for the current-thread case, which is the case I care about. But all is not lost once the API allows us to provide our own pool.
I'd just like to clarify my understanding here: once a `Notify` has been registered for a capacity, then every time a buffer of that capacity is committed, `notify_one()` is called (because the `notify_next_by_cap` entry is never removed)?
I think this could be improved by keeping a `(usize, Notify)` instead, to explicitly keep a count of the number of waiting tasks, and removing the entry from the hashset once the count reaches zero. That way, you only pay the `SeqCst` cost of calling `notify()` when you need to.
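A minimal sketch of this counted-waiters idea, using a plain `HashMap<usize, usize>` in place of the real `(usize, Notify)` pair (the `Notify` itself is elided, and all names here are hypothetical, not taken from the PR):

```rust
use std::collections::HashMap;

/// Hypothetical sketch: track how many tasks are waiting per capacity,
/// so the commit path only pays for notification when a waiter exists.
#[derive(Default)]
struct WaiterRegistry {
    // capacity -> number of tasks currently waiting in `next()`
    waiters: HashMap<usize, usize>,
}

impl WaiterRegistry {
    /// Called when `try_next()` fails and a task starts waiting.
    fn register(&mut self, cap: usize) {
        *self.waiters.entry(cap).or_insert(0) += 1;
    }

    /// Called when the waiting future completes or is dropped.
    /// The entry is removed once the count reaches zero.
    fn unregister(&mut self, cap: usize) {
        if let Some(count) = self.waiters.get_mut(&cap) {
            *count -= 1;
            if *count == 0 {
                self.waiters.remove(&cap);
            }
        }
    }

    /// Called when a buffer is committed back to the pool: returns
    /// whether `notify_one()` (and its SeqCst barrier) should be paid.
    fn should_notify(&self, cap: usize) -> bool {
        self.waiters.contains_key(&cap)
    }
}
```

With this shape, a quiescent pool never touches the `Notify` on commit; the barrier is only issued while at least one task is registered for that capacity.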
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Not saying this is needed; I will let @mzabaluev set the path on this one. But I would prefer something that had a little hysteresis to it, like removing the `Notify` only once the pool size reached something like 10 buffers again. (But I have no idea why I would pick 10; just something larger than 0.) But I haven't tried to design anything around this idea, so feel free to disregard entirely.
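To make the hysteresis idea concrete, a tiny sketch of the removal rule (the threshold and names are invented for illustration; nothing here is from the PR):

```rust
/// Hypothetical hysteresis rule: once a Notify entry exists for a
/// capacity, keep it until the pool has recovered to THRESHOLD free
/// buffers of that capacity, rather than dropping it at zero waiters.
const THRESHOLD: usize = 10; // arbitrary, as the comment itself notes

/// Returns true if the Notify entry should now be removed.
fn should_remove_notify(has_entry: bool, free_buffers: usize) -> bool {
    has_entry && free_buffers >= THRESHOLD
}
```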
My concern is that with the current design, once `try_next()` has failed once in the `next()` method, every check from then on for that buffer size is going to issue a `SeqCst` memory barrier with the call to `notify_one()`. That's quite a penalty. It's unavoidable with the current `Notify`, which is designed for `Send` operations, but we could at least limit it to when there is actually a consumer waiting.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
@ollie-etl With your idea, I don't see how the `usize` field is decremented properly. In the case of a future being cancelled, there is no future to do the work of updating the count (from my understanding).
And along the lines of my idea of not keeping a count but removing the entry once there are some number of buffers back in the pool, I don't see an elegant way of knowing whether the waiters linked list is empty or not. There is no public API.
You'd need to return a `struct Next(usize, Notify)` wrapper, implement `Future`, and decrement on drop.
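A rough sketch of such a wrapper, using an `Rc<Cell<usize>>` to stand in for the shared count (the real pool would hold the counter and the `Notify`; everything below is illustrative, not the PR's code):

```rust
use std::cell::Cell;
use std::future::Future;
use std::pin::Pin;
use std::rc::Rc;
use std::task::{Context, Poll};

/// Hypothetical `Next` wrapper: increments the shared waiter count on
/// creation and decrements it on drop, so cancellation (dropping the
/// future before completion) still keeps the count accurate.
struct Next {
    count: Rc<Cell<usize>>,
    // The real implementation would also hold the Notify (or its
    // `Notified` future) here and poll it below.
}

impl Next {
    fn new(count: Rc<Cell<usize>>) -> Self {
        count.set(count.get() + 1);
        Next { count }
    }
}

impl Future for Next {
    type Output = ();

    fn poll(self: Pin<&mut Self>, _cx: &mut Context<'_>) -> Poll<()> {
        // Placeholder: the real future would poll the Notify here.
        Poll::Pending
    }
}

impl Drop for Next {
    fn drop(&mut self) {
        // Runs whether the future completed or was cancelled.
        self.count.set(self.count.get() - 1);
    }
}
```

The `Drop` impl is the point: it answers the cancellation concern above, because dropping the future is what decrements the count, with no cooperation needed from the cancelled task.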
Oh, that's much more like @mzabaluev's original idea, I think: returning a future that does work when it is polled and does work when it is dropped, but using `Notify`.