Actions: Pints-AI/llama.cpp

flake8 Lint

6 workflow run results

flake8 Lint #6: (wip) add Flash decoding
Pull request #5 opened by FSSRepo · March 6, 2024 05:29 · 21s · flash-decoding

flake8 Lint #5: cleanup FA implementation + flash decoding kernel (wip)
Commit 888a724 pushed by FSSRepo · March 6, 2024 05:23 · 21s · flash-decoding

flake8 Lint #4: Merge branch 'master' of https://github.com/Pints-App/llama.cpp
Commit c42f7cb pushed by FSSRepo · March 6, 2024 03:38 · 20s · master

flake8 Lint #2: compare-llama-bench.py : remove mul_mat_q (#5892)
Commit 652ca2b pushed by FSSRepo · March 6, 2024 03:27 · 27s · master

flake8 Lint #1: ggml : fix IQ3_S AVX implementation (#5834)
Commit 494c870 pushed by FSSRepo · March 2, 2024 19:14 · 24s · master
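
For context, a minimal sketch of what a "flake8 Lint" workflow like this one typically looks like. This is an assumption based on the common py-actions/flake8 setup; the repository's actual .github/workflows file is not shown on this page and may differ:

```yaml
# Hypothetical sketch of a flake8 lint workflow (not the repository's
# actual workflow file). Runs on every push and pull request, matching
# the "Commit pushed" and "Pull request opened" triggers listed above.
name: flake8 Lint

on: [push, pull_request]

jobs:
  flake8-lint:
    runs-on: ubuntu-latest
    steps:
      - name: Check out source repository
        uses: actions/checkout@v4
      - name: Set up Python environment
        uses: actions/setup-python@v4
        with:
          python-version: "3.11"
      - name: flake8 Lint
        uses: py-actions/flake8@v2
```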