
Concurrent calls to Stream.Read can return different data #128

Closed
lattwood opened this issue Jun 17, 2024 · 2 comments · Fixed by #136
@lattwood (Contributor):

Background

Read can be called concurrently because the documentation for net.Conn says: "Multiple goroutines may invoke methods on a Conn simultaneously."

The specific behaviour is undefined, but it is reasonable to assume that calls to Read made at T+0 will receive data before calls to Read made at T+X, since that is roughly the behaviour a fully synchronous implementation would have.

Bug

If a Stream has an empty recvBuf and Read is being called concurrently, the order and data returned by the Read calls is undefined.

Because multiple goroutines would be blocked here (yamux/stream.go, line 146 at commit 6034404):

    case <-s.recvNotifyCh:

when readData uses recvNotifyCh to notify waiting/blocked readers here (yamux/stream.go, line 506 at commit 6034404):

    asyncNotify(s.recvNotifyCh)

there is no guarantee that the first Read call will be woken up first. See this Go playground for an example: https://go.dev/play/p/4QR3HvLoGyz

Impact

The impact depends entirely on whether the application yamux is embedded in can tolerate out-of-order data from concurrent Read calls.

@lattwood (Contributor, issue author):

This library is used by the Go Plugin System over RPC, which is used by Vault for plugins.

Having just realized this, my apologies for not going through security@hashicorp.com.

@tgross (Member) commented Jul 1, 2024:

Hi @lattwood! I had a look at this and a chat with some folks internally and this is working as expected, but definitely not documented well. Concurrent operations on the stream are "safe" inasmuch as they don't cause a data race. That is, calling close concurrently with a read won't panic, you can write and read concurrently, and reading from two different goroutines shouldn't result in short reads. But there's no intended guarantee that the reads are semantically meaningful if you're reading concurrently.

In practice, most consumers of yamux at HashiCorp (and I'm only saying "most" because I haven't examined 100% of them) use one goroutine per stream. For example, the RPC abstraction used by products like Consul and Nomad (net-rpc-msgpack) doesn't even allow sharing streams between goroutines.

I do appreciate you noticing this was potentially a security issue, but I think we'll want to close this issue out via improving the documentation rather than changing the behavior. I'll have a go at that as soon as I have a free moment.

@tgross tgross self-assigned this Jul 2, 2024
tgross added a commit that referenced this issue Oct 1, 2024
Concurrent operations on the stream are "safe" inasmuch as they don't cause a
data race. That is, calling close concurrently with a read won't panic, you can
write and read concurrently, and reading from two different goroutines shouldn't
result in short reads. But there's no intended guarantee that the reads are
semantically meaningful if you're reading concurrently.

Fixes: #128
tgross added a commit that referenced this issue Oct 2, 2024