Refactor `CallbackStack` based on new "pack locking" strategy #3943

As explained above, it's not strictly necessary, but if we did read `allowedToPack` then at least some of the `pack`ing would be guaranteed to be visible. But I'm not sure if this additional synchronization is worth it: the modifications may already be visible fortuitously, and we still need to be prepared for concurrent `pack`ing anyway.
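
To make the discussion concrete, here is a minimal, self-contained sketch of the kind of structure being talked about. It is not the actual `CallbackStack` implementation; the names (`PackLockedStack`, `Node`, `clearHandle`) and the exact shape are assumptions. It shows where an optional volatile read of `allowedToPack` in `apply` would sit, and why it can only guarantee visibility of packing that has already completed.

```scala
import java.util.concurrent.atomic.{AtomicBoolean, AtomicReference}

// Illustrative only; not cats-effect's actual CallbackStack.
final class PackLockedStack[A <: AnyRef] {

  // `callback` and `next` are plain fields: `pack` rewrites them without synchronization.
  final class Node(var callback: A, var next: Node) {
    // "Cancelling" a handle just nulls the callback; `pack` unlinks it later.
    def clearHandle(): Unit = callback = null.asInstanceOf[A]
  }

  private[this] val head = new AtomicReference[Node](null)
  private[this] val allowedToPack = new AtomicBoolean(true)

  // Pushing contends only on `head`.
  def push(cb: A): Node = {
    val node = new Node(cb, null)
    var done = false
    while (!done) {
      val h = head.get()
      node.next = h
      done = head.compareAndSet(h, node)
    }
    node
  }

  // Unlink cleared nodes while holding the pack "lock"; the plain writes to `next`
  // are published (at the latest) by the volatile write in the `finally`.
  def pack(): Unit =
    if (allowedToPack.compareAndSet(true, false)) {
      try {
        var prev: Node = null
        var cur = head.get()
        while (cur ne null) {
          if ((cur.callback eq null) && (prev ne null))
            prev.next = cur.next // skip the cleared node (head unlinking elided for brevity)
          else
            prev = cur
          cur = cur.next
        }
      } finally allowedToPack.set(true) // release
    }

  // The optional volatile read of `allowedToPack` acquires whatever the last completed
  // `pack` released, so at least that packing is visible to this traversal. A `pack`
  // running concurrently with the traversal must still be tolerated.
  def apply(f: A => Unit): Unit = {
    allowedToPack.get()
    var cur = head.get()
    while (cur ne null) {
      val cb = cur.callback
      if (cb ne null) f(cb)
      cur = cur.next
    }
  }

  // Drops the whole list; the later comments discuss whether this should stay separate.
  def clear(): Unit = head.set(null)
}
```

In this shape, the `Handle` returned to a canceller would simply be the `Node` from `push`: clearing it nulls the callback, and a later `pack` unlinks the node.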

Good point, I didn't consider concurrent `pack` and `apply`. If there were any concerns regarding correctness (it boils down to: is it OK for `pack` to rewrite the edges pointing to already-published data without synchronization?), we could even consider spinning on the `allowedToPack` lock and owning the list during `apply`. That might even amortize, because `apply` wouldn't take the packed "detour"?

Sam and I considered this idea but decided against it. We could be spinning for an unbounded time while we wait for `pack` to complete iterating over a very long list. Or what if we lose the race and another `pack` acquires the lock? For sure, these are worst-case pathologies, but since owning the lock is not required to safely iterate the list, spinning until we acquire it doesn't seem worth the risk.

I'd say yes, read `allowedToPack` here, because: what will we do here? We'll call all the `cb`s in the list, right? Which are conts' `resume`s, so when we call them, each will do 1-2 CASes. If we can avoid calling some of them (the cleared ones) by doing a volatile read, that seems like a win. (Of course we might already not call them, because we might already see that they're null... so I'm not sure actually...)

That's a really interesting point :) it's essentially gambling:

The cleared ones are nulled out though, so we shouldn't have any penalty there besides the node read, or am I missing something?

Yes, but currently there is no guarantee that the nulling is published either: they are ordinary writes/reads.

I wonder if we need to be more aggressive about iterating the list and `null`ing all the callbacks and `next`s. The concern is that if someone holds onto a `Handle` for cancelation, they could actually be keeping the entire structure in memory.
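
In terms of the `PackLockedStack` sketch above, the leak and a more aggressive fix could look roughly like this (hypothetical, not the real code): if the `Handle` is the node itself, nulling only `callback` still leaves `next` pointing at the rest of a possibly huge list, so a retained handle retains everything behind it; clearing both fields during the final traversal breaks that chain.

```scala
// Hypothetical, more aggressive traversal for the PackLockedStack sketch above:
// null out both `callback` and `next` as we go, so a Handle (node) that someone
// kept around for cancelation no longer transitively retains the whole structure.
def applyAndClear(f: A => Unit): Unit = {
  var cur = head.getAndSet(null) // detach the list in one shot
  while (cur ne null) {
    val cb = cur.callback
    val next = cur.next
    cur.callback = null.asInstanceOf[A] // drop the callback reference
    cur.next = null                     // unlink, so this node retains nothing
    if (cb ne null) f(cb)
    cur = next
  }
}
```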

I think that's a valid concern. Maybe we could do that during `apply`? It's a little strange, because `apply` and `clear` are separate, but a call to `clear` always follows `apply`, doesn't it?
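
If that invariant holds, the call site presumably has roughly this shape (a hedged illustration in terms of the sketch above, not the actual `IOFiber` code), which is also why folding the clearing into `apply` looks attractive:

```scala
object DeliverExample {
  // Illustration only: "clear always follows apply", using the PackLockedStack sketch above.
  def deliverOutcome(callbacks: PackLockedStack[Int => Unit], outcome: Int): Unit =
    try callbacks(cb => cb(outcome)) // apply: run every surviving callback
    finally callbacks.clear()        // clear unconditionally follows, even if apply throws
}
```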

I agree. I'd be happy to see `clear` go away and we do the clearing as part of `apply`.

Having a separate `clear` seems important here. This code surprised me:

cats-effect/core/shared/src/main/scala/cats/effect/IOFiber.scala, lines 1075 to 1084 in 0207637

I think the only callback that can throw is one installed at construction, e.g. via `IO#unsafeRunAsync`. But it seems better to `try`/`catch` where the callback is defined, and only install non-throwing callbacks in `IOFiber`. It especially strikes me as odd that the other callbacks are not evaluated because the first one threw.

However, these changes seem involved enough to warrant their own PR ...