
[u128; N] array equality test is unproportionally slow #120839

Open
0x7CFE opened this issue Feb 9, 2024 · 7 comments
Labels
C-optimization — Category: An issue highlighting optimization opportunities or PRs implementing such
I-slow — Issue: Problems and improvements with respect to performance of generated code.
T-compiler — Relevant to the compiler team, which will review and decide on the PR/issue.

Comments


0x7CFE commented Feb 9, 2024

Hello, for my ML research I wrote a generic newtype wrapper around primitive types that acts as a fixed-size bit vector.

When profiling my code I noticed a really strange performance difference depending on which underlying type was used.

It turned out that on the same workload BitVector<[u128; 2]> is twice as slow as BitVector<[u64; 4]> (see profiles below). Note that the bit width in both cases is equal: 128 * 2 = 64 * 4 = 256.
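For context, a minimal sketch of what such a wrapper might look like (hypothetical; the real implementation in the project surely differs, e.g. in how `random` and the ordering used by the BTreeSet are defined):

```rust
use std::ops::BitAnd;

// Hypothetical sketch of the newtype described above: a fixed-size
// bit vector backed by an array of primitive integers.
#[derive(Clone, Copy, PartialEq, Eq, PartialOrd, Ord, Debug)]
pub struct BitVector<T>(pub T);

impl<T, const N: usize> BitAnd for BitVector<[T; N]>
where
    T: Copy + BitAnd<Output = T>,
{
    type Output = Self;

    // Element-wise bitwise AND over the backing array.
    fn bitand(self, rhs: Self) -> Self {
        let mut out = self.0;
        for i in 0..N {
            out[i] = self.0[i] & rhs.0[i];
        }
        BitVector(out)
    }
}
```

With this shape, `*item & code == code` in the benchmark below is a subset test: it checks whether every bit set in `code` is also set in `item`.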

I then wrote a minimal reproducible variant of the issue:

#[test]
fn eq_bench() {
    type BitVector = super::BitVector<[u128; 2]>;
    // type BitVector = super::BitVector<[u64; 4]>;

    let set = std::collections::BTreeSet::from_iter((1..100_000).map(|_| BitVector::random(12)));
    let mut yay = 0usize;
    let mut nay = 0usize;

    let mut timestamp = std::time::Instant::now();
    let mut last_index = 0;

    for i in 0.. {
        let code = BitVector::random(12);

        for item in set.iter() {
            if *item & code == code {
                yay += 1;
            } else {
                nay += 1;
            }
        }

        if timestamp.elapsed().as_millis() > 1000 {
            println!("{i}: {yay} to {nay}, rate {rate}", rate = i - last_index);

            timestamp = std::time::Instant::now();
            last_index = i;
        }
    }
}

Depending on which type is used, on my machine it consistently runs around 4481 iterations per second on [u64; 4] and 1750 on [u128; 2] (2.5 times slower!). Interestingly enough, if I remove the & code part, both versions perform more or less the same.

The profiles look even stranger (profile.tar.gz).

The u64 version seems normal:

...whereas on u128 you can quickly see that SpecArrayEq::spec_eq's time skyrockets and even dwarfs the BTreeSet iteration!

It appears that the unsafe call to crate::intrinsics::raw_eq is the culprit:

impl<T: BytewiseEq<U>, U, const N: usize> SpecArrayEq<U, N> for T {
    fn spec_eq(a: &[T; N], b: &[U; N]) -> bool {
        // SAFETY: Arrays are compared element-wise, and don't add any padding
        // between elements, so when the elements are `BytewiseEq`, we can
        // compare the entire array at once.
        unsafe { crate::intrinsics::raw_eq(a, crate::mem::transmute(b)) }
    }
    fn spec_ne(a: &[T; N], b: &[U; N]) -> bool {
        !Self::spec_eq(a, b)
    }
}

Meta

I built everything with just cargo test --release. No LTO, no target-cpu=native.

rustc --version --verbose:

rustc 1.74.0 (79e9716c9 2023-11-13)
binary: rustc
commit-hash: 79e9716c980570bfd1f666e3b16ac583f0168962
commit-date: 2023-11-13
host: x86_64-unknown-linux-gnu
release: 1.74.0
LLVM version: 17.0.4

lscpu:

$ lscpu
Architecture:            x86_64
  CPU op-mode(s):        32-bit, 64-bit
  Address sizes:         39 bits physical, 48 bits virtual
  Byte Order:            Little Endian
CPU(s):                  16
  On-line CPU(s) list:   0-15
Vendor ID:               GenuineIntel
  Model name:            12th Gen Intel(R) Core(TM) i7-1260P
    CPU family:          6
    Model:               154
    Thread(s) per core:  2
    Core(s) per socket:  12
    Socket(s):           1
    Stepping:            3
    CPU max MHz:         4700,0000
    CPU min MHz:         400,0000
    BogoMIPS:            4992.00
    Flags:               fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe sy
                         scall nx pdpe1gb rdtscp lm constant_tsc art arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc cpuid aperfmperf tsc
                         _known_freq pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm sse4_1 sse4_2 x2apic movbe p
                         opcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm 3dnowprefetch cpuid_fault epb ssbd ibrs ibpb stibp ibrs_enh
                         anced tpr_shadow vnmi flexpriority ept vpid ept_ad fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid rdseed adx smap cl
                         flushopt clwb intel_pt sha_ni xsaveopt xsavec xgetbv1 xsaves split_lock_detect avx_vnni dtherm ida arat pln pts hwp hwp_no
                         tify hwp_act_window hwp_epp hwp_pkg_req umip pku ospke waitpkg gfni vaes vpclmulqdq rdpid movdiri movdir64b fsrm md_clear 
                         serialize arch_lbr flush_l1d arch_capabilities
Virtualization features: 
  Virtualization:        VT-x
Caches (sum of all):     
  L1d:                   448 KiB (12 instances)
  L1i:                   640 KiB (12 instances)
  L2:                    9 MiB (6 instances)
  L3:                    18 MiB (1 instance)
NUMA:                    
  NUMA node(s):          1
  NUMA node0 CPU(s):     0-15
Vulnerabilities:         
  Gather data sampling:  Not affected
  Itlb multihit:         Not affected
  L1tf:                  Not affected
  Mds:                   Not affected
  Meltdown:              Not affected
  Mmio stale data:       Not affected
  Retbleed:              Not affected
  Spec rstack overflow:  Not affected
  Spec store bypass:     Mitigation; Speculative Store Bypass disabled via prctl and seccomp
  Spectre v1:            Mitigation; usercopy/swapgs barriers and __user pointer sanitization
  Spectre v2:            Mitigation; Enhanced IBRS, IBPB conditional, RSB filling, PBRSB-eIBRS SW sequence
  Srbds:                 Not affected
  Tsx async abort:       Not affected
@0x7CFE 0x7CFE added the C-bug Category: This is a bug. label Feb 9, 2024
@rustbot rustbot added the needs-triage This issue may need triage. Remove it if it has been sufficiently triaged. label Feb 9, 2024
@Noratrieb
Member

This does seem like particularly bad performance, but in general, u128 is not where you go if you need something fast.

@Noratrieb Noratrieb added C-optimization Category: An issue highlighting optimization opportunities or PRs implementing such T-compiler Relevant to the compiler team, which will review and decide on the PR/issue. I-slow Issue: Problems and improvements with respect to performance of generated code. and removed needs-triage This issue may need triage. Remove it if it has been sufficiently triaged. C-bug Category: This is a bug. labels Feb 9, 2024
@Urgau
Member

Urgau commented Feb 9, 2024

i128/u128 cannot be represented natively on x86_64 machines, since (regular) registers only go up to 64 bits. Therefore, rustc (the Rust compiler) has to emulate them, generally using two u64s (it's more complicated than that, but you get the idea).

So, to me, it makes sense that comparing a u128 can take twice as long, since it's technically doing twice the work.

As for why removing the & makes such a difference, I can only speculate that without it the compiler is able to optimize the comparison better, because there is no more indirection.
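To illustrate the point: conceptually, a u128 equality on a 64-bit target is lowered to two 64-bit comparisons, roughly like the sketch below (the actual lowering is up to LLVM and is not literally this code):

```rust
// Rough illustration of how a u128 comparison can be emulated with
// two u64 halves on a 64-bit machine.
fn u128_eq_emulated(a: u128, b: u128) -> bool {
    let (a_lo, a_hi) = (a as u64, (a >> 64) as u64);
    let (b_lo, b_hi) = (b as u64, (b >> 64) as u64);
    // Both halves must match for the 128-bit values to be equal.
    a_lo == b_lo && a_hi == b_hi
}
```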

@0x7CFE
Author

0x7CFE commented Feb 9, 2024

I still don't get it.

Yes, I do understand that u128 is not a primitive type on x86.

However, in this particular case we're essentially comparing two arrays: [u128; 2] == [u128; 2] and [u64; 4] == [u64; 4]. In both cases the size is known statically, and in both cases it boils down to a single crate::intrinsics::raw_eq intrinsic call.

In both cases these structures have the same size (32) and alignment (8), so I expect them to be virtually identical in terms of comparison speed.
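Those layout numbers are easy to check directly (a small sketch; note that the alignment of u128 is compiler-version dependent, as discussed further down the thread):

```rust
use std::mem::{align_of, size_of};

// Report (size, alignment) of a type. Both array representations
// occupy 32 bytes; the alignment of u128 was 8 on stable 1.74 but
// was later raised to 16.
fn layout<T>() -> (usize, usize) {
    (size_of::<T>(), align_of::<T>())
}
```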

Moreover, the documentation for the intrinsic clearly states:

/// Determines whether the raw bytes of the two values are equal.
///
/// This is particularly handy for arrays, since it allows things like just
/// comparing `i96`s instead of forcing `alloca`s for `[6 x i16]`.
///
/// Above some backend-decided threshold this will emit calls to `memcmp`,
/// like slice equality does, instead of causing massive code size.
///
/// Since this works by comparing the underlying bytes, the actual `T` is
/// not particularly important. It will be used for its size and alignment,
/// but any validity restrictions will be ignored, not enforced.
///
/// # Safety
///
/// It's UB to call this if any of the *bytes* in `*a` or `*b` are uninitialized or carry a
/// pointer value.
/// Note that this is a stricter criterion than just the *values* being
/// fully-initialized: if `T` has padding, it's UB to call this intrinsic.
///
/// (The implementation is allowed to branch on the results of comparisons,
/// which is UB if any of their inputs are `undef`.)
#[rustc_const_unstable(feature = "const_intrinsic_raw_eq", issue = "none")]
#[rustc_nounwind]
pub fn raw_eq<T>(a: &T, b: &T) -> bool;

If I understood correctly, in both cases this should yield the same byte-wise comparison (memcmp), which should take the same amount of time given the same byte representation and alignment. But apparently it doesn't.

@Scripter17
Contributor

Scripter17 commented Feb 9, 2024

Does std::mem::transmute-ing the [u128; 2] into a [u64; 4] make it faster?
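That suggestion could be sketched as follows (a pointer cast rather than a by-value transmute, to avoid moving the arrays; sound because both types have the same size, [u128; 2] is at least as aligned as [u64; 4], and every bit pattern is a valid u64):

```rust
// Sketch: reinterpret the [u128; 2] arrays as [u64; 4] before comparing.
fn eq_as_u64(a: &[u128; 2], b: &[u128; 2]) -> bool {
    // SAFETY: [u128; 2] and [u64; 4] have the same size (32 bytes),
    // [u128; 2]'s alignment is no weaker than [u64; 4]'s, and u64 has
    // no invalid bit patterns.
    let a: &[u64; 4] = unsafe { &*(a as *const [u128; 2] as *const [u64; 4]) };
    let b: &[u64; 4] = unsafe { &*(b as *const [u128; 2] as *const [u64; 4]) };
    a == b
}
```

Whether this actually helps would need to be measured against the benchmark above; it only changes which SpecArrayEq instantiation the comparison goes through.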

@Mark-Simulacrum
Member

In both cases these structures have the same size (32) and alignment (8), so I expect them to be virtually identical in terms of comparison speed.

u128 has 16-byte alignment on nightly (#116672), but on stable a simple example at least compiles to the same code for me: https://rust.godbolt.org/z/zj4rEWoxf

For larger lengths, array comparisons compile to a bcmp call, so I'd not expect any differences at all from a Rust perspective (the underlying implementation of bcmp may have different behavior for different alignments I guess).

Can you provide a minimal, complete example on e.g. godbolt for the slow & fast cases?

@saethlin
Member

saethlin commented Feb 9, 2024

On nightly only, adding a newtype wrapper around [u64; 4] and [u128; 2] causes them to produce different codegen: https://godbolt.org/z/saYdsqjjc

@0x7CFE
Author

0x7CFE commented Feb 10, 2024

On nightly only, adding a newtype wrapper around [u64; 4] and [u128; 2] causes them to produce different codegen: https://godbolt.org/z/saYdsqjjc

IIUC, the difference is precisely because nightly aligns u128 to 16 bytes, so it can use the aligned version of the mov instruction. Previous versions use the unaligned version for both u64 and u128.

My initial guess was that the slowdown was caused by a misaligned load of u128, but according to the documentation that would cause a general protection exception.

Now trying to reproduce that with my setup.


7 participants