CUDA: add FP32 FlashAttention vector kernel #11682

windows-latest-cmake (avx, -DLLAMA_NATIVE=OFF -DLLAMA_BUILD_SERVER=ON -DLLAMA_AVX2=OFF -DBUILD_SH...): succeeded May 11, 2024 in 6m 6s
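
For context, the sketch below illustrates the general idea behind a single-query ("vector") FP32 attention kernel with an online softmax: one thread block attends one query vector over the KV cache, accumulating everything in FP32. This is not the kernel added in this PR; the function name `attn_vec_f32`, the shapes, the naive block reduction, and the single-head launch are assumptions made purely for illustration.

```cuda
// Hypothetical sketch only: single-query FP32 attention with an online softmax.
// Not the kernel added by this PR; names, shapes, and launch are assumptions.
#include <cuda_runtime.h>
#include <math.h>

// One thread block attends a single FP32 query vector over n_kv key/value rows.
//   Q:   [head_dim]          query vector
//   K,V: [n_kv, head_dim]    key/value cache (row-major)
//   out: [head_dim]          attention output
__global__ void attn_vec_f32(const float *Q, const float *K, const float *V,
                             float *out, int n_kv, int head_dim, float scale) {
    extern __shared__ float smem[];
    float *acc = smem;                 // [head_dim]   running (unnormalized) output
    float *red = smem + head_dim;      // [blockDim.x] dot-product reduction scratch
    float *scl = red + blockDim.x;     // [3]          broadcast slots: alpha, p, denom

    const int tid = threadIdx.x;

    for (int i = tid; i < head_dim; i += blockDim.x) acc[i] = 0.0f;
    __syncthreads();

    float m = -INFINITY;  // running maximum of the scaled logits (thread 0 only)
    float d = 0.0f;       // running softmax denominator          (thread 0 only)

    for (int j = 0; j < n_kv; ++j) {
        // Strided partial dot product q . k_j, then a naive block reduction.
        float partial = 0.0f;
        for (int i = tid; i < head_dim; i += blockDim.x) {
            partial += Q[i] * K[(size_t)j * head_dim + i];
        }
        red[tid] = partial;
        __syncthreads();

        if (tid == 0) {
            float s = 0.0f;
            for (int t = 0; t < blockDim.x; ++t) s += red[t];
            s *= scale;

            // Online softmax update: rescale the old state to the new maximum.
            const float m_new = fmaxf(m, s);
            const float alpha = expf(m - m_new);  // expf(-inf) == 0 on the first step
            const float p     = expf(s - m_new);
            d = d * alpha + p;
            m = m_new;
            scl[0] = alpha;
            scl[1] = p;
        }
        __syncthreads();

        // acc <- acc * alpha + p * v_j, distributed over the block.
        const float alpha = scl[0];
        const float p     = scl[1];
        for (int i = tid; i < head_dim; i += blockDim.x) {
            acc[i] = acc[i] * alpha + p * V[(size_t)j * head_dim + i];
        }
        __syncthreads();
    }

    if (tid == 0) scl[2] = d;  // broadcast the final denominator
    __syncthreads();

    for (int i = tid; i < head_dim; i += blockDim.x) {
        out[i] = acc[i] / scl[2];
    }
}

// Example launch (assumed sizes): shared memory holds acc + reduction + 3 scalars.
// const int threads = 128;
// const size_t smem = (head_dim + threads + 3) * sizeof(float);
// attn_vec_f32<<<1, threads, smem>>>(dQ, dK, dV, dOut, n_kv, head_dim,
//                                    1.0f / sqrtf((float)head_dim));
```

Keeping the logits, the running maximum, and the accumulator in FP32 avoids the FP16 intermediate precision that a half-precision FlashAttention path relies on, at the cost of more register and shared-memory traffic; the online softmax keeps the whole pass single-sweep over the KV cache.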