
ggml-alpaca-7b-q4.bin: invalid model file (bad magic [got 0x67676d66 want 0x67676a74]) #2392

Closed
ShoufaChen opened this issue Apr 4, 2023 · 10 comments

Comments

@ShoufaChen

When loading the converted ggml-alpaca-7b-q4.bin model, I encountered this error:

>>> llm = LlamaCpp(model_path="ggml-alpaca-7b-q4.bin")
llama_model_load: loading model from 'ggml-alpaca-7b-q4.bin' - please wait ...
ggml-alpaca-7b-q4.bin: invalid model file (bad magic [got 0x67676d66 want 0x67676a74])
        you most likely need to regenerate your ggml files
        the benefit is you'll get 10-100x faster load times
        see https://github.com/ggerganov/llama.cpp/issues/91
        use convert-pth-to-ggml.py to regenerate from original pth
        use migrate-ggml-2023-03-30-pr613.py if you deleted originals
llama_init_from_file: failed to load model
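
For reference, the two hex values in the error decode to readable four-character tags. A minimal sketch of how to inspect a file's magic yourself (assuming, as llama.cpp does, that the magic is the file's leading little-endian uint32):

```python
import struct

# Magic values quoted in the error message. Each uint32 spells a
# four-character tag when written out big-endian:
GGMF = 0x67676D66  # b"ggmf" - older versioned format (what the file has)
GGJT = 0x67676A74  # b"ggjt" - post-PR-613 format (what llama.cpp expects)

def read_magic(path: str) -> int:
    """Return the leading uint32 of a ggml file (little-endian,
    matching how llama.cpp writes and reads it)."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return magic

# The constants really do decode to the tags above:
print(GGMF.to_bytes(4, "big"))  # b'ggmf'
print(GGJT.to_bytes(4, "big"))  # b'ggjt'
```

So the file on disk is in the older "ggmf" layout, while the installed llama.cpp build only accepts "ggjt" files, which is why migration (not re-download) is the fix.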
ShoufaChen added a commit to ShoufaChen/langchain-patch that referenced this issue Apr 4, 2023
As noted in https://github.com/ggerganov/llama.cpp/blob/master/migrate-ggml-2023-03-30-pr613.py,

The `llama.cpp` authors made a breaking change to the file format on 2023-03-30 in:
ggerganov/llama.cpp#613

Therefore, we additionally need to run `migrate-ggml-2023-03-30-pr613.py` to convert the llama model.
@abetlen
Contributor

abetlen commented Apr 4, 2023

Hi @ShoufaChen, unfortunately this has to do with a recent change to the model format in llama.cpp. To fix it, you'll just need to migrate the model file as follows:

git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
python3 migrate-ggml-2023-03-30-pr613.py ggml-alpaca-7b-q4.bin ggml-alpaca-7b-q4-new.bin

@ShoufaChen
Author

Hi @abetlen

Thanks for your help. I believe #2393 could help.

hwchase17 pushed a commit that referenced this issue Apr 6, 2023
@akash-ravikumar

migrate-ggml-2023-03-30-pr613.py does not seem to exist anymore in this repo: https://github.com/ggerganov/llama.cpp.git

@jav-ed

jav-ed commented Apr 26, 2023

I can't find it either; instead, convert-lora-to-ggml.py, convert-pth-to-ggml.py, and convert.py are present.

@Diyago

Diyago commented May 1, 2023

It doesn't start converting:

> PS C:\Users\dex\Desktop\gpt4free\voice_chatbot\migrate_models\llama.cpp> python3 convert.py ggml-vic7b-uncensored-q5_1.bin ggml-vic7b-uncensored-q5_1_new.bin

@am0oma

am0oma commented May 27, 2023

Try passing the output path via `--outfile`:

python convert.py x.bin --outfile x_new.bin
or
python3 convert.py x.bin --outfile x_new.bin

@AlekzNet

AlekzNet commented Jun 11, 2023

I'm having the same issue. Converting did not help.

$ python3 ./convert.py --outfile models/7B/ggml-model-q4_0_new.bin models/7B/ggml-model-q4_0.bin
Loading model file models/7B/ggml-model-q4_0.bin
Writing vocab...
. . .
Wrote models/7B/ggml-model-q4_0_new.bin

$ ./main -m models/7B/ggml-model-q4_0_new.bin
main: seed = 1686525413
llama_model_load: loading model from 'models/7B/ggml-model-q4_0_new.bin' - please wait ...
llama_model_load: invalid model file 'models/7B/ggml-model-q4_0_new.bin' (bad magic)
main: failed to load model from 'models/7B/ggml-model-q4_0_new.bin'
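
When conversion appears to succeed but `main` still reports bad magic, it can help to check which magic the new file actually carries before retrying. A hypothetical diagnostic sketch, using only the ggml-era magic values mentioned in this thread plus the unversioned b"ggml" tag (verify the table against your llama.cpp checkout):

```python
import struct

# ggml-era magic values; the two hex constants below are the ones quoted
# in this thread's error message, the first is the unversioned original.
KNOWN = {
    0x67676D6C: "ggml - unversioned, earliest format",
    0x67676D66: "ggmf - versioned, pre PR #613",
    0x67676A74: "ggjt - mmap-able, post PR #613",
}

def describe(path: str) -> str:
    """Name the format of a ggml file by its leading little-endian uint32."""
    with open(path, "rb") as f:
        (magic,) = struct.unpack("<I", f.read(4))
    return KNOWN.get(magic, f"unknown magic 0x{magic:08x}")
```

If the converted file reports "ggjt" but your `./main` binary still rejects it, the binary was likely built from a newer commit that expects yet another format, so rebuilding llama.cpp and the model file from the same checkout is worth trying.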

@vinitran

I hit the same bug as @AlekzNet. Does anyone have a solution? convert.py ran successfully, but running ./main -m models/7B/ggml-model-q4_0_new.bin still shows bad magic.

@dosubot

dosubot bot commented Sep 22, 2023

Hi, @ShoufaChen. I'm Dosu, and I'm helping the LangChain team manage their backlog. I wanted to let you know that we are marking this issue as stale.

Based on my understanding of the issue, you reported that the ggml-alpaca-7b-q4.bin model file is invalid and cannot be loaded. There have been suggestions to regenerate the ggml files using the convert-pth-to-ggml.py script or the migrate-ggml-2023-03-30-pr613.py script. However, there are mixed reactions to their effectiveness.

Before we proceed, we would like to confirm if this issue is still relevant to the latest version of the LangChain repository. If it is, please let us know by commenting on this issue. Otherwise, feel free to close the issue yourself, or the issue will be automatically closed in 7 days.

Thank you for your understanding and contribution to the LangChain project. We appreciate your patience and look forward to hearing from you soon.

@dosubot dosubot bot added the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Sep 22, 2023

@dosubot dosubot bot closed this as not planned Won't fix, can't repro, duplicate, stale Sep 29, 2023
@dosubot dosubot bot removed the stale Issue has not had recent activity or appears to be solved. Stale issues will be automatically closed label Sep 29, 2023