CUDA: add FP32 FlashAttention vector kernel #7188

Merged: 4 commits merged on May 12, 2024

fixup! fixup! fixup! CUDA: add FP32 FlashAttention vector kernel (aa9cbd7)

Annotations

5 warnings
bench-server-baseline (phi-2, q4_0): succeeded on May 11, 2024 in 14m 10s
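For background on the technique the PR title names: FlashAttention-style kernels compute softmax(QKᵀ/√d)·V with an online (streaming) softmax, processing one K/V row at a time so the full score vector never has to be materialized. The sketch below is a minimal NumPy illustration of that streaming pass for a single query, in FP32; it is not the CUDA kernel from this PR, and all function names here are illustrative.

```python
import numpy as np

def attention_reference(q, K, V):
    """Naive reference: materializes the full score vector."""
    s = (K @ q) / np.sqrt(q.shape[0])     # scores for every key, shape (n_kv,)
    p = np.exp(s - s.max())               # numerically stable softmax
    return (p / p.sum()) @ V              # weighted sum of value rows

def attention_online(q, K, V):
    """Streaming pass: one K/V row at a time, constant extra memory.

    Maintains a running max (m), running softmax denominator (l),
    and a running unnormalized output (acc); whenever the max grows,
    previous contributions are rescaled by exp(old_max - new_max).
    """
    scale = 1.0 / np.sqrt(q.shape[0])
    m = -np.inf                                  # running max of scores
    l = 0.0                                      # running denominator
    acc = np.zeros(V.shape[1], dtype=np.float32) # running numerator
    for k_row, v_row in zip(K, V):
        s = float(k_row @ q) * scale
        m_new = max(m, s)
        correction = np.exp(m - m_new)           # exp(-inf) = 0 on first step
        p = np.exp(s - m_new)
        l = l * correction + p
        acc = acc * correction + p * v_row
        m = m_new
    return acc / l

rng = np.random.default_rng(0)
q = rng.standard_normal(64).astype(np.float32)
K = rng.standard_normal((128, 64)).astype(np.float32)
V = rng.standard_normal((128, 64)).astype(np.float32)
assert np.allclose(attention_online(q, K, V), attention_reference(q, K, V), atol=1e-5)
```

Keeping every intermediate in FP32, as this PR's vector kernel does, avoids the precision loss of an FP16 accumulator at the cost of more register pressure per thread.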