Mutex/Once/OnceLock aren't guaranteed to have atomic acquire/release semantics #126239
Comments
I don't think we need to explicitly say "It does an atomic operation". We can just say
That sounds good. However, do we have a place that says that synchronizes-with implies happens-before? I only see it implicitly in https://doc.rust-lang.org/std/sync/atomic/fn.fence.html:

Whereas Ordering::Release and Ordering::Acquire only say that indirectly. It seems like if we want to use the term synchronizes-with, we should state somewhere that it implies happens-before. https://preshing.com/20130823/the-synchronizes-with-relation/#other-ways-to-achieve-synchronizes-with and some other places also mention that spawning a thread synchronizes-with the start of the new thread, I believe.
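For reference, the synchronizes-with / happens-before interaction being discussed can be demonstrated with a minimal sketch (the names and values here are illustrative, not from the thread):

```rust
use std::sync::atomic::{AtomicBool, AtomicU32, Ordering};
use std::thread;

static DATA: AtomicU32 = AtomicU32::new(0);
static READY: AtomicBool = AtomicBool::new(false);

// Returns the value of DATA as observed after the acquire load sees `true`.
fn run() -> u32 {
    let producer = thread::spawn(|| {
        DATA.store(42, Ordering::Relaxed); // plain (relaxed) write
        READY.store(true, Ordering::Release); // release store
    });
    let seen = thread::spawn(|| {
        // The acquire load that reads `true` synchronizes-with the release
        // store above; synchronizes-with implies happens-before, so the
        // relaxed write to DATA is guaranteed to be visible afterwards.
        while !READY.load(Ordering::Acquire) {
            std::hint::spin_loop();
        }
        DATA.load(Ordering::Relaxed)
    })
    .join()
    .unwrap();
    producer.join().unwrap();
    seen
}

fn main() {
    assert_eq!(run(), 42);
}
```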
@rustbot label -needs-triage +A-atomic
Yes, the C++ Standard does.
Mutex explicitly documented as having what amounts to "side effects" (a.k.a. spooky action at a distance) w.r.t. random atomics does sound somewhat inappropriate for Rust in general, especially in environments that might have the machinery to support a mutex, but not necessarily the side effects that you find on e.g. an ARM implementation of it. Specifically, if you want to enforce a particular ordering w.r.t. a particular atomic, maybe that is what you need to state in the code using an appropriate fence call? Expecting a mutex to do that for you implicitly essentially relies on the mutex's internal implementation details. I might be missing something very important here, of course, but it seems that exposing these internal details in the docs may not be the best idea for future portability of Mutex to newer architectures, unless someone can certify that it is impossible to implement a mutex in any other way.
I don't think any other implementation exists that is sound, no. Being a fence would be an overly restrictive specification, but saying synchronizes-with is, IMO, required for the API to be sound. Edit: Yes, Mutex could probably use dependency-ordered-before instead, depending on what exactly the preconditions are.
That does indeed sound like "the Rust way". However, from what I can see in the C++ specs, some of them claim that memory_order_consume was in fact re-added in C++20, and its former definition was discouraged since C++17. It is all a bit wonky in there, so I'd just ignore the C++ spec entirely for the purpose of this discussion, especially since LLVM has no implementation for consume anyway. What truly matters is what CPUs can or cannot do in principle.

With respect to what consume ordering was supposed to do: as far as I can see, it was intended to support precisely use cases such as Rust's Mutex. As a former hardware engineer, I can envision mechanisms that would enable hardware-level simplifications in cases where the Mutex-protected area is very small (e.g. one cache line), as I could plausibly do less work than in the case of a full "happens-before" synchronization. Arguably, the main benefits would be seen on NUMA machines and GPUs, and we should look at what is done e.g. in CUDA to be certain. On the other hand, I'd imagine this sort of aggressive optimization, even if we find a way to implement consume "correctly", would break horribly around unsafe code. The compiler would need a far better understanding of what depends on what to not mess this up, and unsafe/interior-mutability code will not make that easy at all.

As a solution to the doc problem, a generic parameter could be provided on std's Mutex that toggles whether it uses acquire-release or acquire-consume ordering, with acquire-release being the default. This way we are explicit about the side effects that may occur, or the relations that can be relied upon, and can document it as such. If/when memory_order_consume becomes a thing, users can switch to it without breaking the world.
Consume relies on a notion of "dependency", and there is no credible proposal for how to define "dependency" in a language that is aggressively optimized (other than by introducing some form of provenance on integers, but that's not going to happen). What the hardware does is largely irrelevant here (as is often the case when discussing Rust semantics); whatever semantics we want has to first pass the test of fire of being compatible with all sorts of analyses and optimizations -- and then we can ask whether it can be efficiently mapped to hardware. Adding "consume" to Rust is definitely out of scope for this issue. In the Rust memory model (basically, C++ without "consume"), there's not really any way to implement a Mutex that doesn't establish synchronization between release and future acquire calls. Furthermore, using Mutex<()> to guard data "outside" the Mutex is, to my knowledge, a generally accepted-to-be-correct pattern. So I don't think there is a realistic alternative to adding such a guarantee.
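The "Mutex<()> to guard data outside the Mutex" pattern mentioned here can be sketched as follows; the wrapper type is illustrative, and the sketch is only sound if unlocking is an atomic release and locking an atomic acquire, which is exactly the guarantee under discussion:

```rust
use std::cell::UnsafeCell;
use std::sync::Mutex;
use std::thread;

// Illustrative wrapper: a cell whose accesses are protected by GUARD below.
struct LockProtected(UnsafeCell<u32>);
// SAFETY: every access to the inner cell happens while holding GUARD.
unsafe impl Sync for LockProtected {}

static DATA: LockProtected = LockProtected(UnsafeCell::new(0));
static GUARD: Mutex<()> = Mutex::new(());

fn run() -> u32 {
    let handles: Vec<_> = (0..4)
        .map(|_| {
            thread::spawn(|| {
                let _g = GUARD.lock().unwrap();
                // SAFETY: we hold the lock. This non-atomic increment is only
                // data-race-free if the mutex establishes synchronization
                // between its unlock and later lock operations.
                unsafe { *DATA.0.get() += 1 };
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    let _g = GUARD.lock().unwrap();
    // SAFETY: we hold the lock.
    unsafe { *DATA.0.get() }
}

fn main() {
    assert_eq!(run(), 4);
}
```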
You make a good point; in this case the docs indeed should acknowledge this. If someone ever implements hardware that can do better, then we can worry about it. :)
(sent by email in reply to Ralf Jung's comment of 12.6.2024)
Considering that this is also Rust's specification (for atomics / the multithreaded memory model), I don't think we can.
Hardware does support consume, FWIW. ARMv8's "default" memory order is dependency-ordered, a.k.a. consume. However, no optimizing compiler I'm aware of supports consume (opting to promote it to acquire in all cases).
Does anyone have a functional example of a mutex built around consume memory ordering?
I think it would be very cool if Rust could get a useful (higher-performing) consume-based Mutex someday. (Acquire-release is usually introduced conceptually as working "like acquiring and releasing a mutex", so it would be humorous, but confusing, if Rust's actual Mutex didn't have acquire-release semantics.)
I've done this before. In my emulator, I do this in my implementation of physical memory:

```rust
#[derive(Debug)]
pub struct SysMemory {
    page_limit: u32,
    // SAFETY:
    // page_count is synchronized by the `pages` lock despite not being directly covered by it.
    // It is sound to perform read accesses to this cell from a thread that owns a shared read
    // lock on `pages` or the exclusive write lock on `pages`.
    // It is sound to perform write accesses to this cell from a thread that owns the exclusive
    // write lock on `pages`.
    page_count: SyncUnsafeCell<u32>,
    pages: RwLock<HashMap<LeU32, RwLock<Box<Page>>>>,
}
```

(Yes, that's a nested RwLock; I don't want reading a page to conflict with writing to an already-allocated page.)
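Since `SyncUnsafeCell` is nightly-only, a stable approximation of the same layout might look like the sketch below. The `Page` and `LeU32` types are placeholders, since the emulator's real definitions aren't shown, and the two methods are hypothetical illustrations of the locking discipline described in the comments:

```rust
use std::cell::UnsafeCell;
use std::collections::HashMap;
use std::sync::RwLock;

// Stable stand-in for the nightly `SyncUnsafeCell`.
#[repr(transparent)]
struct SyncUnsafeCell<T>(UnsafeCell<T>);
// SAFETY: callers must externally synchronize all access, as documented below.
unsafe impl<T: Sync> Sync for SyncUnsafeCell<T> {}

type Page = [u8; 4096]; // placeholder: the emulator's real page type isn't shown
type LeU32 = u32; // placeholder for the little-endian integer wrapper

pub struct SysMemory {
    page_limit: u32,
    // page_count is synchronized by the `pages` lock despite not being covered
    // by it: reads require at least the shared read lock, writes the exclusive
    // write lock.
    page_count: SyncUnsafeCell<u32>,
    pages: RwLock<HashMap<LeU32, RwLock<Box<Page>>>>,
}

impl SysMemory {
    pub fn new(page_limit: u32) -> Self {
        SysMemory {
            page_limit,
            page_count: SyncUnsafeCell(UnsafeCell::new(0)),
            pages: RwLock::new(HashMap::new()),
        }
    }

    pub fn alloc_page(&self, addr: LeU32) -> bool {
        let mut pages = self.pages.write().unwrap(); // exclusive lock
        // SAFETY: we hold the write lock on `pages`.
        let count = unsafe { &mut *self.page_count.0.get() };
        if *count >= self.page_limit {
            return false;
        }
        *count += 1;
        pages.insert(addr, RwLock::new(Box::new([0; 4096])));
        true
    }

    pub fn page_count(&self) -> u32 {
        let _pages = self.pages.read().unwrap(); // shared lock suffices for reads
        // SAFETY: we hold at least the read lock on `pages`.
        unsafe { *self.page_count.0.get() }
    }
}

fn main() {
    let mem = SysMemory::new(2);
    assert!(mem.alloc_page(0));
    assert!(mem.alloc_page(4096));
    assert!(!mem.alloc_page(8192)); // page_limit reached
    assert_eq!(mem.page_count(), 2);
}
```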
#117485 is an instance of a similar issue: people were relying on non-guaranteed acquire/release semantics there as well.
Location
https://doc.rust-lang.org/std/sync/struct.Mutex.html, including at least:
https://doc.rust-lang.org/std/sync/struct.Mutex.html#method.lock
https://doc.rust-lang.org/std/sync/struct.Mutex.html#method.try_lock
https://doc.rust-lang.org/std/sync/struct.MutexGuard.html
Summary
See https://marabos.nl/atomics/memory-ordering.html#example-locking, emphasis mine:
Many people are relying on this guarantee, but the documentation for `Mutex` does not state this guarantee. We should explicitly guarantee this in the documentation for `Mutex`.

In theory, a mutex could have "consume" semantics instead of acquire/release semantics, such that reads/writes to the "contents" of a mutex (the `T` in `Mutex<T>`), and pointers/references contained therein, are treated as "consume" loads/stores (even if Rust's atomics API doesn't support consume ordering). In practice, people have already written much code that relies on the acquire/release semantics.

Or, in theory, a mutex could have `SeqCst` semantics during lock and unlock, but I think we don't want people to rely on that (though I have already seen code that does, accidentally).

Why does this matter? Consider:

We want some official reassurance of the "obvious" fact that the `FD.store(fd, Ordering::Relaxed)` will be sequenced before the unlocking of the mutex, i.e. that the mutex does an atomic release when unlocking, so that the side effect of storing to `FD` will be seen by another thread after it locks the mutex. That is only guaranteed if locking the mutex does an atomic acquire that synchronizes with that atomic release.

This would also clarify the more basic fact that in a `Mutex<T>`, there is nothing special about what is "contained" in the `Mutex`, as far as synchronization is concerned. (This is unique to Rust; C/C++/POSIX mutexes don't have the concept of the mutex containing a value.)
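The code snippet that "Consider:" refers to did not survive extraction. A hypothetical reconstruction of the described pattern (a relaxed store to a static `FD`, ordered only by the mutex) could look like this; the value `3` and the thread structure are illustrative:

```rust
use std::sync::atomic::{AtomicI32, Ordering};
use std::sync::Mutex;
use std::thread;

static FD: AtomicI32 = AtomicI32::new(-1);
static LOCK: Mutex<()> = Mutex::new(());

fn run() -> i32 {
    let init = thread::spawn(|| {
        let _guard = LOCK.lock().unwrap();
        let fd = 3; // stand-in for, e.g., the result of opening a file
        FD.store(fd, Ordering::Relaxed);
        // Dropping the guard unlocks the mutex; for the relaxed store to be
        // reliably visible to the reader below, this unlock must be an
        // atomic release.
    });
    let reader = thread::spawn(|| loop {
        // Locking must be an atomic acquire that synchronizes with the
        // release above; that ordering is what makes the relaxed load
        // below well-defined once the store has been observed.
        let _guard = LOCK.lock().unwrap();
        let fd = FD.load(Ordering::Relaxed);
        if fd != -1 {
            return fd;
        }
    });
    init.join().unwrap();
    reader.join().unwrap()
}

fn main() {
    assert_eq!(run(), 3);
}
```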