unsafe::reinterpret_cast can cause valgrind errors #1175
```
fn get_compile_unit_metadata(cache: metadata_cache, full_path: str)
    -> @metadata<compile_unit_md> unsafe {
    ret alt *cache[0] {
        compile_unit_metadata(md) { unsafe::reinterpret_cast(md) }
    };
}
```

Looks like the `unsafe::reinterpret_cast` here is the catalyst. This version of `get_compile_unit_metadata` displays the valgrind errors, and the one that just returns `md` without the cast does not.
```
import std::unsafe;

use std;

tag test {
    a(@int);
}

fn main() unsafe {
    let val = a(@1);
    log_err alt val {
        a(i) { unsafe::reinterpret_cast::<@int, @int>(i) }
    };
}
```

> 13:06 < brson> jdm: the problem is that when you reinterpret cast that box, you now have two typed pointers to a box, but you never bumped the refcount, so you end up dropping the ref twice
Brian's suggestion resolved the valgrind messages.
This changes the wasm simd intrinsics which deal with memory to match clang, where they are all emitted with an alignment of 1. This is not expected to impact performance, since wasm engines generally ignore alignment as it's just a hint. It can also slightly increase safety when used from Rust, since previously an unaligned pointer passed in could result in UB on the LLVM side. This means the intrinsics are slightly more usable in more situations than before.

It's expected that if higher alignment is desired, programs will not use these intrinsics but rather their component parts. For example, instead of `v128_load` you'd just load the pointer itself (and loading from a pointer in Rust automatically assumes correct alignment). For `v128_load64_splat` you'd do a load followed by a splat operation, which LLVM should optimize into a `v128.load64_splat` instruction with the desired alignment. LLVM doesn't yet fully support some of these optimizations (such as building `v128.load16_lane` from its component parts), but that's expected to be a temporary issue. Additionally, we don't have a way of configuring the alignment on operations that can't otherwise be decomposed into their component parts (such as `i64x2_load_extend_u32x2`), but we can cross that bridge if anyone ever needs the alignment configured there.
Sorry for the vague summary, but I've reduced this testcase about as far as I can.
When running the above, I get output like:
and valgrind has lots of entries for `glue_drop15`:
If I assign the result of `get_compile_unit_metadata` to a local var, valgrind still complains, but the malloc error doesn't seem to show up any more.