std::sync::mpsc::SharedSender #1299
Conversation
```rust
impl<T: Send> Sender<T> {
    pub fn shared(self) -> SharedSender<T> {
```
I would personally prefer this to a second channel constructor function, but discoverability might be hard?
Er, I forgot to add a description of its downside: if you want a shared sender from the start, going from oneshot -> shared is just wasted work, and it bumps the internal counter by one only for the original Sender to be dropped and decrement the counter again.
I'm not terribly familiar with the internals of
@BurntSushi Reasons:
I think that it's probably worth exploring adding
@alexcrichton huh, I didn't consider that it'd be possible to detect concurrent access. I suppose with a
Yeah, I imagine something along those lines could be done. Some extra synchronization will have to happen no matter what (and it may be quite difficult to bolt onto the already large amount going on), but in theory, if concurrent access could be detected in a lightweight manner, a CAS-like operation could upgrade the channel: the first thread to succeed creates the new shared channel, while all other threads fail the CAS and then reuse the shared channel from the initial thread. Again, though, this implementation would be significantly more complicated than the outline you've got in this RFC. The benefit, however, is that we don't need another type of sender, which would be nice!
@alexcrichton it might be nice not having an additional
@seanmonstar Thanks for the explanation! That was helpful. Two more questions (again, please keep in mind that my familiarity with the
@BurntSushi I tried to make a couple of queues with as few operations as possible, to make a benchmark. I can't say if the benchmark is sane; if it is, it gave me these results on my crappy machine:
I think that this will be really confusing for people because of the naming. If you asked me which channel implements
@TyOverby I agree that the naming of
OK, so let me see if I've got the situation right. I think we have some variation of the following four choices:

Is there a choice I've left out? None of them look particularly appealing to me. I guess the next thing to look at is use cases for making channels
The original motivation I had for writing this RFC was because it's a common stumbling block when people try to use hyper's Server. The Server takes a handler that must be Sync. So the following situation happens a lot:

```rust
let (tx, rx) = channel();
Server::http(addr).listen(move |req, res| {
    tx.send("request!"); // tx does not implement Sync
});
```

With the unfortunate solution being:

```rust
let (tx, rx) = channel();
let tx = Mutex::new(tx);
Server::http(addr).listen(move |req, res| {
    tx.lock().unwrap().send("request!");
});
```
I am personally pretty ambivalent here, and as such, I'm not sure exactly how to move it forward. Part of me thinks a tiny perf hit might be acceptable, especially if we get a more sane API, because wrapping channels in a mutex is a bummer. I will say though that @alexcrichton thinks this may be tricky to implement!
Likewise, I don't strongly prefer either option over the other, I just want to stop wrapping senders in mutexes.
🔔 This RFC is now entering its week-long final comment period 🔔
I ran into this issue the other day with a client that spins up a thread. I don't feel strongly about the particular implementation; any
The libs team discussed this during the triage meeting today and the decision was to close. This unfortunately doesn't interact well with code that already takes a

Thanks for the RFC, though, @seanmonstar!
Add a `SharedSender` to `std::sync::mpsc` that implements `Sync`. Rendered