RepoQA v0.1.1
Notable updates
- Trimming output before post-processing significantly improved results in certain cases @ganler
- Fixed HF backend @zyzzzz-123 @ganler
- HF backend supports `--attn-implementation` to enable flash-attn 2 @ganler
- Optimized the computation of trained context size @JialeTomTian #38
- End-of-string optimization significantly improved inference speed @ganler
- Improved post-processing accuracy using a better regular expression @ganler
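As a rough illustration of the trimming-plus-regex idea behind the two post-processing items above (the function name and the regex here are hypothetical sketches, not RepoQA's actual implementation): stripping stray whitespace and code-fence backticks before matching makes function extraction from raw model output much more reliable.

```python
import re

def extract_function(raw_output):
    # Hypothetical sketch: trim surrounding whitespace and code-fence
    # backticks before matching, then pull out the first Python function
    # definition (a `def` line plus its indented body).
    trimmed = raw_output.strip().strip("`")
    pattern = re.compile(r"(def \w+\(.*?\):(?:\n(?: {4}|\t).*)*)")
    match = pattern.search(trimmed)
    return match.group(1) if match else None
```

Without the trimming step, leading fences or whitespace in the model output can shift or break the match entirely.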
Only completed features and fixes are listed as notable.
Work-in-progress updates will be listed in subsequent releases once they are fully done.
Full changelog: v0.1.0...v0.1.1
Quick examples
```bash
pip install repoqa
repoqa.search_needle_function --model "gpt-4-turbo" --backend openai
repoqa.search_needle_function --model "claude-3-haiku-20240307" --backend anthropic
repoqa.search_needle_function --model "Qwen/CodeQwen1.5-7B-Chat" --backend vllm
repoqa.search_needle_function --model "Qwen/CodeQwen1.5-7B-Chat" --backend hf --trust-remote-code --attn-implementation "flash_attention_2"
repoqa.search_needle_function --model "gemini-1.5-pro-latest" --backend google
```
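For the HF backend example, a flag like `--attn-implementation` plausibly gets forwarded into the keyword arguments of `from_pretrained` (the `attn_implementation` kwarg is real in recent `transformers` releases; the helper below and the exact wiring inside RepoQA's backend are assumptions for illustration).

```python
def hf_load_kwargs(attn_implementation=None, trust_remote_code=False):
    # Hypothetical helper: assemble keyword arguments for
    # transformers' AutoModelForCausalLM.from_pretrained() the way
    # CLI flags might be forwarded to it.
    kwargs = {"trust_remote_code": trust_remote_code, "torch_dtype": "auto"}
    if attn_implementation:
        # e.g. "flash_attention_2" to enable flash-attn 2
        kwargs["attn_implementation"] = attn_implementation
    return kwargs
```

Keeping the flag optional preserves the library's default attention implementation when flash-attn is not installed.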
Resources
- PyPI: https://pypi.org/project/repoqa/0.1.1/
- Homepage: https://evalplus.github.io/repoqa.html
- Dataset release: https://github.com/evalplus/repoqa_release