Auto merge of rust-lang#95295 - CAD97:layout-isize, r=scottmcm
Enforce that layout size fits in isize in Layout

As it turns out, enforcing this _in the APIs that already guard against `usize` overflow_ is fairly trivial. `Layout::from_size_align_unchecked` continues to "allow" sizes which (when rounded up to the alignment) would overflow `isize`, but these are now declared as library UB for `Layout`, meaning that consumers of `Layout` no longer have to check this themselves before making an allocation.

(Note that this is "immediate library UB"; in other words, it is valid for a future release to promote this to immediate "language UB," and there is an extant patch to do so, to allow Miri to catch this misuse.)
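For intuition, here is a minimal sketch of the invariant the checked `Layout` constructors now enforce — the size, once rounded up to the alignment, must not exceed `isize::MAX`. The function name is illustrative; this is not the standard library's actual code.

```rust
/// Sketch of the new invariant: `size`, rounded up to a multiple of `align`,
/// must not exceed `isize::MAX`. Illustrative only, not std's implementation.
fn size_fits_in_isize(size: usize, align: usize) -> bool {
    assert!(align.is_power_of_two());
    // Subtracting `align - 1` from `isize::MAX` leaves room for the worst-case
    // padding added by rounding up; it cannot underflow because a power-of-two
    // `align` is at most `isize::MAX + 1`.
    size <= (isize::MAX as usize) - (align - 1)
}

fn main() {
    assert!(size_fits_in_isize(4096, 8));
    assert!(!size_fits_in_isize(isize::MAX as usize, 4096));
}
```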

See also rust-lang#95252, [Zulip discussion](https://rust-lang.zulipchat.com/#narrow/stream/219381-t-libs/topic/Layout.20Isn't.20Enforcing.20The.20isize.3A.3AMAX.20Rule).
Fixes rust-lang#95334

Some relevant quotes:

`@eddyb`, rust-lang#95252 (comment)

> [B]ecause of the non-trivial presence of both of these among code published on e.g. crates.io:
>
>   1. **`Layout` "producers" / `GlobalAlloc` "users"**: smart pointers (including `alloc::rc` copies with small tweaks), collections, etc.
>   2. **`Layout` "consumers" / `GlobalAlloc` "providers"**: perhaps fewer of these, but anything built on top of OS APIs like `mmap` will expose `> isize::MAX` allocations (on 32-bit hosts) if they lack extra checks
>
> IMO the only responsible option is to enforce the `isize::MAX` limit in `Layout`, which:
>
>   * makes `Layout` _sound_ in terms of only ever allowing allocations where `(alloc_base_ptr: *mut u8).offset(size)` is never UB
>   * frees both "producers" and "consumers" of `Layout` from manually reimplementing the checks
>     * manual checks can be risky, e.g. if the final size passed to the allocator isn't the one being checked
>     * this applies retroactively, fixing the overall soundness of existing code with zero transition period or _any_ changes required from users (as long as going through `Layout` is mandatory, making a "choke point")
>
>
> Feel free to quote this comment onto any relevant issue, I might not be able to keep track of developments.
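To make the first bullet of the quote above concrete, here is a hypothetical consumer-side sketch (not code from this PR): once `Layout` guarantees `size() <= isize::MAX`, computing the one-past-the-end pointer of an allocation needs no separate size check.

```rust
use std::alloc::{alloc, dealloc, Layout};

// Hypothetical helper illustrating the quoted point: the allocation is
// `layout.size()` bytes, and that size is guaranteed to be at most
// `isize::MAX`, so the `add` below stays in bounds and is not UB.
fn alloc_with_end(layout: Layout) -> Option<(*mut u8, *mut u8)> {
    assert!(layout.size() > 0, "zero-size allocations need a different path");
    // SAFETY: `layout` has non-zero size (asserted above).
    let base = unsafe { alloc(layout) };
    if base.is_null() {
        return None;
    }
    // SAFETY: `base` points to an allocation of `layout.size()` bytes, and
    // `layout.size() <= isize::MAX`, so the offset stays in bounds.
    let end = unsafe { base.add(layout.size()) };
    Some((base, end))
}

fn main() {
    let layout = Layout::array::<u32>(16).unwrap();
    if let Some((base, end)) = alloc_with_end(layout) {
        assert_eq!(end as usize - base as usize, layout.size());
        // SAFETY: `base` was allocated above with this exact `layout`.
        unsafe { dealloc(base, layout) };
    }
}
```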

`@Gankra`, rust-lang#95252 (comment)

> As someone who spent way too much time optimizing libcollections checks for this stuff and tried to splatter docs about it everywhere on the belief that it was a reasonable thing for people to manually take care of: I concede the point, it is not reasonable. I am wholly spiritually defeated by the fact that _liballoc_ of all places is getting this stuff wrong. This isn't throwing shade at the folks who implemented these Rc features, but rather a statement of how impractical it is to expect anyone out in the wider ecosystem to enforce them if _some of the most audited rust code in the library that defines the very notion of allocating memory_ can't even reliably do it.
>
> We need the nuclear option of Layout enforcing this rule. Code that breaks this rule is _deeply_ broken, and any "regressions" from changing Layout's contract are _correctness_ fixes. Anyone who disagrees and is sufficiently motivated can go around our backs, but the standard library should 100% refuse to enable them.

cc also `@RalfJung` and `@rust-lang/wg-allocators`. Even though this technically supersedes rust-lang#95252, those potential failure points should almost certainly still get nicer panic messages than just "unwrap failed" (which is what they would get from this PR).

It might additionally be worth recommending that users of the `Layout` API complete the entire layout calculation with `.and_then`/`?` and then `panic!` from a single location at the end of the `Layout` manipulation. This reduces the overhead of the checks and of the optimizer having to preserve the exact location of each `panic` for what is conceptually a single failure: the allocation is too big.
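For example, a sketch of that style (the function names are illustrative, not an API proposal): the whole calculation is threaded through `Result`, and there is a single panic site at the end.

```rust
use std::alloc::{Layout, LayoutError};

// Illustrative only: compute the layout of `n` elements of `T` followed by a
// trailing `U`, propagating every failure with `?`.
fn array_then_tail<T, U>(n: usize) -> Result<Layout, LayoutError> {
    let (layout, _tail_offset) = Layout::array::<T>(n)?.extend(Layout::new::<U>())?;
    Ok(layout.pad_to_align())
}

fn layout_or_panic<T, U>(n: usize) -> Layout {
    // A single failure point for what is conceptually one error:
    // "allocation too big".
    array_then_tail::<T, U>(n).expect("allocation layout overflow")
}

fn main() {
    let layout = layout_or_panic::<u64, u8>(10);
    assert_eq!(layout.align(), std::mem::align_of::<u64>());
}
```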

Probably deserves a T-lang and/or T-libs-api FCP (this technically further solidifies the [objects must be no larger than `isize::MAX`](https://rust-lang.github.io/unsafe-code-guidelines/layout/scalars.html#isize-and-usize) rule, which the UCG document notes has not gone through an RFC) and a crater run. Ideally, no existing code will start failing with this addition; if any does, it was _likely_ (but not certainly) causing UB.

Changes the raw_vec allocation path, thus deserves a perf run as well.

I suggest hiding whitespace-only changes in the diff view.
bors committed Jul 10, 2022
2 parents 95e7764 + 344b99b commit 4ec97d9
Showing 4 changed files with 156 additions and 331 deletions.
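The user-visible effect that the updated tests below pin down can be sketched with stable APIs only; this snippet is illustrative and not part of the change.

```rust
fn main() {
    let mut v: Vec<u8> = Vec::new();
    // With this change, asking for more than isize::MAX bytes reports a
    // capacity overflow on every target, instead of relying on the allocator
    // to fail the request.
    let err = v.try_reserve(isize::MAX as usize + 1).unwrap_err();
    println!("{err}");
}
```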
126 changes: 38 additions & 88 deletions library/alloc/tests/string.rs
@@ -693,12 +693,6 @@ fn test_try_reserve() {
     const MAX_CAP: usize = isize::MAX as usize;
     const MAX_USIZE: usize = usize::MAX;

-    // On 16/32-bit, we check that allocations don't exceed isize::MAX,
-    // on 64-bit, we assume the OS will give an OOM for such a ridiculous size.
-    // Any platform that succeeds for these requests is technically broken with
-    // ptr::offset because LLVM is the worst.
-    let guards_against_isize = usize::BITS < 64;
-
     {
         // Note: basic stuff is checked by test_reserve
         let mut empty_string: String = String::new();
@@ -712,35 +706,19 @@ fn test_try_reserve() {
             panic!("isize::MAX shouldn't trigger an overflow!");
         }

-        if guards_against_isize {
-            // Check isize::MAX + 1 does count as overflow
-            assert_matches!(
-                empty_string.try_reserve(MAX_CAP + 1).map_err(|e| e.kind()),
-                Err(CapacityOverflow),
-                "isize::MAX + 1 should trigger an overflow!"
-            );
-
-            // Check usize::MAX does count as overflow
-            assert_matches!(
-                empty_string.try_reserve(MAX_USIZE).map_err(|e| e.kind()),
-                Err(CapacityOverflow),
-                "usize::MAX should trigger an overflow!"
-            );
-        } else {
-            // Check isize::MAX + 1 is an OOM
-            assert_matches!(
-                empty_string.try_reserve(MAX_CAP + 1).map_err(|e| e.kind()),
-                Err(AllocError { .. }),
-                "isize::MAX + 1 should trigger an OOM!"
-            );
-
-            // Check usize::MAX is an OOM
-            assert_matches!(
-                empty_string.try_reserve(MAX_USIZE).map_err(|e| e.kind()),
-                Err(AllocError { .. }),
-                "usize::MAX should trigger an OOM!"
-            );
-        }
+        // Check isize::MAX + 1 does count as overflow
+        assert_matches!(
+            empty_string.try_reserve(MAX_CAP + 1).map_err(|e| e.kind()),
+            Err(CapacityOverflow),
+            "isize::MAX + 1 should trigger an overflow!"
+        );
+
+        // Check usize::MAX does count as overflow
+        assert_matches!(
+            empty_string.try_reserve(MAX_USIZE).map_err(|e| e.kind()),
+            Err(CapacityOverflow),
+            "usize::MAX should trigger an overflow!"
+        );
     }

     {
@@ -753,19 +731,13 @@ fn test_try_reserve() {
         if let Err(CapacityOverflow) = ten_bytes.try_reserve(MAX_CAP - 10).map_err(|e| e.kind()) {
             panic!("isize::MAX shouldn't trigger an overflow!");
         }
-        if guards_against_isize {
-            assert_matches!(
-                ten_bytes.try_reserve(MAX_CAP - 9).map_err(|e| e.kind()),
-                Err(CapacityOverflow),
-                "isize::MAX + 1 should trigger an overflow!"
-            );
-        } else {
-            assert_matches!(
-                ten_bytes.try_reserve(MAX_CAP - 9).map_err(|e| e.kind()),
-                Err(AllocError { .. }),
-                "isize::MAX + 1 should trigger an OOM!"
-            );
-        }
+
+        assert_matches!(
+            ten_bytes.try_reserve(MAX_CAP - 9).map_err(|e| e.kind()),
+            Err(CapacityOverflow),
+            "isize::MAX + 1 should trigger an overflow!"
+        );
+
         // Should always overflow in the add-to-len
         assert_matches!(
             ten_bytes.try_reserve(MAX_USIZE).map_err(|e| e.kind()),
@@ -785,8 +757,6 @@ fn test_try_reserve_exact() {
     const MAX_CAP: usize = isize::MAX as usize;
     const MAX_USIZE: usize = usize::MAX;

-    let guards_against_isize = usize::BITS < 64;
-
     {
         let mut empty_string: String = String::new();

@@ -799,31 +769,17 @@ fn test_try_reserve_exact() {
             panic!("isize::MAX shouldn't trigger an overflow!");
         }

-        if guards_against_isize {
-            assert_matches!(
-                empty_string.try_reserve_exact(MAX_CAP + 1).map_err(|e| e.kind()),
-                Err(CapacityOverflow),
-                "isize::MAX + 1 should trigger an overflow!"
-            );
-
-            assert_matches!(
-                empty_string.try_reserve_exact(MAX_USIZE).map_err(|e| e.kind()),
-                Err(CapacityOverflow),
-                "usize::MAX should trigger an overflow!"
-            );
-        } else {
-            assert_matches!(
-                empty_string.try_reserve_exact(MAX_CAP + 1).map_err(|e| e.kind()),
-                Err(AllocError { .. }),
-                "isize::MAX + 1 should trigger an OOM!"
-            );
-
-            assert_matches!(
-                empty_string.try_reserve_exact(MAX_USIZE).map_err(|e| e.kind()),
-                Err(AllocError { .. }),
-                "usize::MAX should trigger an OOM!"
-            );
-        }
+        assert_matches!(
+            empty_string.try_reserve_exact(MAX_CAP + 1).map_err(|e| e.kind()),
+            Err(CapacityOverflow),
+            "isize::MAX + 1 should trigger an overflow!"
+        );
+
+        assert_matches!(
+            empty_string.try_reserve_exact(MAX_USIZE).map_err(|e| e.kind()),
+            Err(CapacityOverflow),
+            "usize::MAX should trigger an overflow!"
+        );
     }

     {
@@ -839,19 +795,13 @@ fn test_try_reserve_exact() {
         {
             panic!("isize::MAX shouldn't trigger an overflow!");
         }
-        if guards_against_isize {
-            assert_matches!(
-                ten_bytes.try_reserve_exact(MAX_CAP - 9).map_err(|e| e.kind()),
-                Err(CapacityOverflow),
-                "isize::MAX + 1 should trigger an overflow!"
-            );
-        } else {
-            assert_matches!(
-                ten_bytes.try_reserve_exact(MAX_CAP - 9).map_err(|e| e.kind()),
-                Err(AllocError { .. }),
-                "isize::MAX + 1 should trigger an OOM!"
-            );
-        }
+
+        assert_matches!(
+            ten_bytes.try_reserve_exact(MAX_CAP - 9).map_err(|e| e.kind()),
+            Err(CapacityOverflow),
+            "isize::MAX + 1 should trigger an overflow!"
+        );
+
         assert_matches!(
             ten_bytes.try_reserve_exact(MAX_USIZE).map_err(|e| e.kind()),
             Err(CapacityOverflow),
166 changes: 52 additions & 114 deletions library/alloc/tests/vec.rs
@@ -1517,12 +1517,6 @@ fn test_try_reserve() {
     const MAX_CAP: usize = isize::MAX as usize;
     const MAX_USIZE: usize = usize::MAX;

-    // On 16/32-bit, we check that allocations don't exceed isize::MAX,
-    // on 64-bit, we assume the OS will give an OOM for such a ridiculous size.
-    // Any platform that succeeds for these requests is technically broken with
-    // ptr::offset because LLVM is the worst.
-    let guards_against_isize = usize::BITS < 64;
-
     {
         // Note: basic stuff is checked by test_reserve
         let mut empty_bytes: Vec<u8> = Vec::new();
@@ -1536,35 +1530,19 @@ fn test_try_reserve() {
             panic!("isize::MAX shouldn't trigger an overflow!");
         }

-        if guards_against_isize {
-            // Check isize::MAX + 1 does count as overflow
-            assert_matches!(
-                empty_bytes.try_reserve(MAX_CAP + 1).map_err(|e| e.kind()),
-                Err(CapacityOverflow),
-                "isize::MAX + 1 should trigger an overflow!"
-            );
-
-            // Check usize::MAX does count as overflow
-            assert_matches!(
-                empty_bytes.try_reserve(MAX_USIZE).map_err(|e| e.kind()),
-                Err(CapacityOverflow),
-                "usize::MAX should trigger an overflow!"
-            );
-        } else {
-            // Check isize::MAX + 1 is an OOM
-            assert_matches!(
-                empty_bytes.try_reserve(MAX_CAP + 1).map_err(|e| e.kind()),
-                Err(AllocError { .. }),
-                "isize::MAX + 1 should trigger an OOM!"
-            );
-
-            // Check usize::MAX is an OOM
-            assert_matches!(
-                empty_bytes.try_reserve(MAX_USIZE).map_err(|e| e.kind()),
-                Err(AllocError { .. }),
-                "usize::MAX should trigger an OOM!"
-            );
-        }
+        // Check isize::MAX + 1 does count as overflow
+        assert_matches!(
+            empty_bytes.try_reserve(MAX_CAP + 1).map_err(|e| e.kind()),
+            Err(CapacityOverflow),
+            "isize::MAX + 1 should trigger an overflow!"
+        );
+
+        // Check usize::MAX does count as overflow
+        assert_matches!(
+            empty_bytes.try_reserve(MAX_USIZE).map_err(|e| e.kind()),
+            Err(CapacityOverflow),
+            "usize::MAX should trigger an overflow!"
+        );
     }

     {
@@ -1577,19 +1555,13 @@ fn test_try_reserve() {
         if let Err(CapacityOverflow) = ten_bytes.try_reserve(MAX_CAP - 10).map_err(|e| e.kind()) {
             panic!("isize::MAX shouldn't trigger an overflow!");
         }
-        if guards_against_isize {
-            assert_matches!(
-                ten_bytes.try_reserve(MAX_CAP - 9).map_err(|e| e.kind()),
-                Err(CapacityOverflow),
-                "isize::MAX + 1 should trigger an overflow!"
-            );
-        } else {
-            assert_matches!(
-                ten_bytes.try_reserve(MAX_CAP - 9).map_err(|e| e.kind()),
-                Err(AllocError { .. }),
-                "isize::MAX + 1 should trigger an OOM!"
-            );
-        }
+
+        assert_matches!(
+            ten_bytes.try_reserve(MAX_CAP - 9).map_err(|e| e.kind()),
+            Err(CapacityOverflow),
+            "isize::MAX + 1 should trigger an overflow!"
+        );
+
         // Should always overflow in the add-to-len
         assert_matches!(
             ten_bytes.try_reserve(MAX_USIZE).map_err(|e| e.kind()),
@@ -1610,19 +1582,13 @@ fn test_try_reserve() {
         {
             panic!("isize::MAX shouldn't trigger an overflow!");
         }
-        if guards_against_isize {
-            assert_matches!(
-                ten_u32s.try_reserve(MAX_CAP / 4 - 9).map_err(|e| e.kind()),
-                Err(CapacityOverflow),
-                "isize::MAX + 1 should trigger an overflow!"
-            );
-        } else {
-            assert_matches!(
-                ten_u32s.try_reserve(MAX_CAP / 4 - 9).map_err(|e| e.kind()),
-                Err(AllocError { .. }),
-                "isize::MAX + 1 should trigger an OOM!"
-            );
-        }
+
+        assert_matches!(
+            ten_u32s.try_reserve(MAX_CAP / 4 - 9).map_err(|e| e.kind()),
+            Err(CapacityOverflow),
+            "isize::MAX + 1 should trigger an overflow!"
+        );
+
         // Should fail in the mul-by-size
         assert_matches!(
             ten_u32s.try_reserve(MAX_USIZE - 20).map_err(|e| e.kind()),
@@ -1642,8 +1608,6 @@ fn test_try_reserve_exact() {
     const MAX_CAP: usize = isize::MAX as usize;
     const MAX_USIZE: usize = usize::MAX;

-    let guards_against_isize = size_of::<usize>() < 8;
-
     {
         let mut empty_bytes: Vec<u8> = Vec::new();

@@ -1656,31 +1620,17 @@ fn test_try_reserve_exact() {
             panic!("isize::MAX shouldn't trigger an overflow!");
         }

-        if guards_against_isize {
-            assert_matches!(
-                empty_bytes.try_reserve_exact(MAX_CAP + 1).map_err(|e| e.kind()),
-                Err(CapacityOverflow),
-                "isize::MAX + 1 should trigger an overflow!"
-            );
-
-            assert_matches!(
-                empty_bytes.try_reserve_exact(MAX_USIZE).map_err(|e| e.kind()),
-                Err(CapacityOverflow),
-                "usize::MAX should trigger an overflow!"
-            );
-        } else {
-            assert_matches!(
-                empty_bytes.try_reserve_exact(MAX_CAP + 1).map_err(|e| e.kind()),
-                Err(AllocError { .. }),
-                "isize::MAX + 1 should trigger an OOM!"
-            );
-
-            assert_matches!(
-                empty_bytes.try_reserve_exact(MAX_USIZE).map_err(|e| e.kind()),
-                Err(AllocError { .. }),
-                "usize::MAX should trigger an OOM!"
-            );
-        }
+        assert_matches!(
+            empty_bytes.try_reserve_exact(MAX_CAP + 1).map_err(|e| e.kind()),
+            Err(CapacityOverflow),
+            "isize::MAX + 1 should trigger an overflow!"
+        );
+
+        assert_matches!(
+            empty_bytes.try_reserve_exact(MAX_USIZE).map_err(|e| e.kind()),
+            Err(CapacityOverflow),
+            "usize::MAX should trigger an overflow!"
+        );
     }

     {
@@ -1696,19 +1646,13 @@ fn test_try_reserve_exact() {
         {
             panic!("isize::MAX shouldn't trigger an overflow!");
         }
-        if guards_against_isize {
-            assert_matches!(
-                ten_bytes.try_reserve_exact(MAX_CAP - 9).map_err(|e| e.kind()),
-                Err(CapacityOverflow),
-                "isize::MAX + 1 should trigger an overflow!"
-            );
-        } else {
-            assert_matches!(
-                ten_bytes.try_reserve_exact(MAX_CAP - 9).map_err(|e| e.kind()),
-                Err(AllocError { .. }),
-                "isize::MAX + 1 should trigger an OOM!"
-            );
-        }
+
+        assert_matches!(
+            ten_bytes.try_reserve_exact(MAX_CAP - 9).map_err(|e| e.kind()),
+            Err(CapacityOverflow),
+            "isize::MAX + 1 should trigger an overflow!"
+        );
+
         assert_matches!(
             ten_bytes.try_reserve_exact(MAX_USIZE).map_err(|e| e.kind()),
             Err(CapacityOverflow),
@@ -1729,19 +1673,13 @@ fn test_try_reserve_exact() {
         {
             panic!("isize::MAX shouldn't trigger an overflow!");
         }
-        if guards_against_isize {
-            assert_matches!(
-                ten_u32s.try_reserve_exact(MAX_CAP / 4 - 9).map_err(|e| e.kind()),
-                Err(CapacityOverflow),
-                "isize::MAX + 1 should trigger an overflow!"
-            );
-        } else {
-            assert_matches!(
-                ten_u32s.try_reserve_exact(MAX_CAP / 4 - 9).map_err(|e| e.kind()),
-                Err(AllocError { .. }),
-                "isize::MAX + 1 should trigger an OOM!"
-            );
-        }
+
+        assert_matches!(
+            ten_u32s.try_reserve_exact(MAX_CAP / 4 - 9).map_err(|e| e.kind()),
+            Err(CapacityOverflow),
+            "isize::MAX + 1 should trigger an overflow!"
+        );
+
         assert_matches!(
             ten_u32s.try_reserve_exact(MAX_USIZE - 20).map_err(|e| e.kind()),
             Err(CapacityOverflow),