Quantized BakLLaVA (Mistral + LLaVA 1.5) GGUF + 10 lines of Python #3998
paschembri started this conversation in Show and tell
- Interesting. Maybe BakLLaVA 2 will be better in that regard.
- Congrats to everyone who worked on the LLaVA integration (both in llama.cpp and llama-cpp-python).
  I converted and quantized the BakLLaVA model (weights here: https://huggingface.co/advanced-stack/bakllava-mistral-v1-gguf) and wrote a quick tutorial here: https://advanced-stack.com/resources/multi-modalities-inference-using-mistral-ai-llava-bakllava-and-llama-cpp.html
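For readers who want the gist without leaving the thread, here is a minimal sketch of multimodal inference with llama-cpp-python's LLaVA 1.5 chat handler. The GGUF filenames and the image URL are placeholders (assumptions, not the exact names from the linked repo); download the model and CLIP projector weights from the Hugging Face link above first.

```python
# Sketch: BakLLaVA (LLaVA 1.5 architecture) inference via llama-cpp-python.
# File names below are placeholders; substitute the actual GGUF files you
# downloaded from the repo linked above.

def build_messages(image_uri: str, prompt: str) -> list:
    """Build an OpenAI-style chat payload mixing one image and a text prompt."""
    return [
        {"role": "system", "content": "You are an assistant that describes images."},
        {
            "role": "user",
            "content": [
                {"type": "image_url", "image_url": {"url": image_uri}},
                {"type": "text", "text": prompt},
            ],
        },
    ]


if __name__ == "__main__":
    # Requires `pip install llama-cpp-python` plus the GGUF weights on disk.
    from llama_cpp import Llama
    from llama_cpp.llama_chat_format import Llava15ChatHandler

    # The CLIP vision projector ships as a separate GGUF file.
    chat_handler = Llava15ChatHandler(clip_model_path="mmproj-model-f16.gguf")
    llm = Llama(
        model_path="bakllava-mistral-v1.Q4_K_M.gguf",  # placeholder name
        chat_handler=chat_handler,
        n_ctx=2048,        # leave room for the image embeddings
        logits_all=True,   # the LLaVA handler needs logits for every token
    )
    result = llm.create_chat_completion(
        messages=build_messages("https://example.com/photo.png", "Describe the image.")
    )
    print(result["choices"][0]["message"]["content"])
```

Local images can be passed as `file://` or base64 `data:` URIs instead of an HTTP URL.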