fixup! fixup! fixup! CUDA: add FP32 FlashAttention vector kernel
JohannesGaessler committed May 11, 2024
1 parent f3c3eaf commit aa9cbd7
Showing 1 changed file with 1 addition and 1 deletion.
ggml-cuda/fattn-vec-f16.cu (1 addition, 1 deletion)
@@ -342,7 +342,7 @@ void ggml_cuda_flash_attn_ext_vec_f16_no_mma(ggml_backend_cuda_context & ctx, gg
 
     ggml_tensor * KQV = dst;
 
-    const int32_t precision = KQV->op_params[1];
+    const int32_t precision = KQV->op_params[2];
     GGML_ASSERT(precision == GGML_PREC_DEFAULT);
     GGML_ASSERT(Q->ne[0] == 64 || Q->ne[0] == 128 && "FlashAttention without tensor cores only supports head sizes 64 and 128.");
 
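For context, a minimal host-side sketch of where this value comes from, assuming the ggml_flash_attn_ext_set_prec helper exposed by ggml.h in the same tree: the graph builder stores the requested precision in the op_params of the GGML_OP_FLASH_ATTN_EXT node, and the dispatch code in the hunk above reads it back (from index 2 after this change) and asserts GGML_PREC_DEFAULT.

    #include "ggml.h"

    // Sketch only (not part of this commit): request the default precision on a
    // flash-attention node so the assertion in the no-MMA vector kernel path holds.
    static void request_default_fa_precision(struct ggml_tensor * kqv) {
        GGML_ASSERT(kqv->op == GGML_OP_FLASH_ATTN_EXT);
        ggml_flash_attn_ext_set_prec(kqv, GGML_PREC_DEFAULT);
    }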
