I have tried to use llama.cpp for PandaGPT in `panda_gpt_llama_cpp`.
The script gets poor results. Is there anything wrong with my procedure, or is this just a limitation of the model or of `q4_1` precision? The following are my steps.
The weights were quantized to `q4_1`; the result is `ggml-pandagpt-vicuna-merge`. The model seems to recognize `<Img>...</Img>` labels.
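For context, here is a minimal sketch of a typical llama.cpp convert/quantize flow from around that time. It assumes the PandaGPT delta has already been merged into the Vicuna base as a Hugging Face checkpoint under `./pandagpt-vicuna-merged`; the directory name, output file names, and exact flags are illustrative and depend on the llama.cpp revision.

```bash
# Assumption: ./pandagpt-vicuna-merged holds the merged Vicuna + PandaGPT
# checkpoint in Hugging Face format. Paths and flags are illustrative and
# may differ between llama.cpp revisions.

# Convert the merged HF checkpoint to an f16 ggml file.
python3 convert.py ./pandagpt-vicuna-merged \
    --outtype f16 \
    --outfile ggml-pandagpt-vicuna-merge-f16.bin

# Quantize the f16 model down to q4_1.
./quantize ggml-pandagpt-vicuna-merge-f16.bin \
    ggml-pandagpt-vicuna-merge-q4_1.bin q4_1

# Quick smoke test with the stock CLI. Note that this only exercises the
# text path; PandaGPT's image embeddings are not injected here.
./main -m ggml-pandagpt-vicuna-merge-q4_1.bin \
    -p "Hello, who are you?" -n 128
```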