ggml-alpaca-7b-q4.bin: invalid model file (bad magic [got 0x67676d66 want 0x67676a74]) #2392
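The two magic values in the error are ASCII tags for different ggml container formats: 0x67676d66 spells "ggmf" (the older versioned format) and 0x67676a74 spells "ggjt" (the format introduced later in llama.cpp). A minimal sketch for checking which format a model file uses, reading the leading 4 bytes as a little-endian uint32 (the helper name and label strings are my own; the constants come from the error message and llama.cpp's format history):

```python
import struct

# Known ggml container magics (labels are informal descriptions, not official names)
KNOWN_MAGICS = {
    0x67676D6C: "ggml (original, unversioned format)",
    0x67676D66: "ggmf (versioned format)",
    0x67676A74: "ggjt (later mmap-friendly format)",
}

def read_magic(path: str) -> tuple[int, str]:
    """Return a model file's leading 4 bytes as a little-endian uint32, plus a label."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return magic, KNOWN_MAGICS.get(magic, "unknown magic")
```

If the magic your loader wants differs from the one in the file, the file was produced for a different llama.cpp format revision and needs to be converted or migrated.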
As noted in https://github.com/ggerganov/llama.cpp/blob/master/migrate-ggml-2023-03-30-pr613.py, the `llama.cpp` authors made a breaking change to the file format on 2023-03-30 in ggerganov/llama.cpp#613. Therefore, we additionally need to run `migrate-ggml-2023-03-30-pr613.py` to convert the llama model.
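For reference, the migration script's invocation is roughly as follows. This is an assumption based on the script's name and the style of other llama.cpp converter scripts; the exact arguments and paths may differ, so check the script's docstring in your checkout:

```sh
# Assumed usage: migrate an old-format ggml file to the post-PR-613 format.
# The model paths below are placeholders.
python migrate-ggml-2023-03-30-pr613.py \
    models/7B/ggml-model-q4_0.bin \
    models/7B/ggml-model-q4_0-new.bin
```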
Hi @ShoufaChen, unfortunately this has to do with a recent change to the model format in `llama.cpp`.
`migrate-ggml-2023-03-30-pr613.py` does not seem to exist in this repo: https://github.com/ggerganov/llama.cpp.git
I also can't find it; instead, `convert-lora-to-ggml.py`, `convert-pth-to-ggml.py`, and `convert.py` are present.
To start converting, run:

python convert.py x.bin --outfile x_new.bin
I'm having the same issue. Converting did not help.
I hit the same bug as @AlekzNet. Does anyone have a solution? I ran `convert.py` successfully, then ran `./main -m models/7B/ggml-model-q4_0_new.bin`, but it still reports a bad magic error.
Hi, @ShoufaChen. I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale. Based on my understanding, you reported that loading the converted `ggml-alpaca-7b-q4.bin` model fails with an invalid model file (bad magic) error. Before we proceed, we would like to confirm whether this issue is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or it will be automatically closed in 7 days. Thank you for your understanding and contribution to the LangChain project.
When loading the converted `ggml-alpaca-7b-q4.bin` model, I met the error shown in the title (invalid model file, bad magic).