
Using abort_callback from ggml to interrupt llama computation #9209

Annotations

1 warning

windows-latest-cmake (avx, -DLLAMA_NATIVE=OFF -DLLAMA_BUILD_SERVER=ON -DLLAMA_AVX2=OFF -DBUILD_SH...
succeeded Mar 2, 2024 in 5m 11s