
UnpicklingError: invalid load key, 'v'. #1132

Closed
LayanNCAI opened this issue May 10, 2023 · 3 comments
@LayanNCAI commented May 10, 2023

Hello,

After running this command:
python3 -m fastchat.model.apply_delta --base-model-path ./llama-13b-hf --target-model-path ./vicuna-13b --delta-path ./vicuna-13b-delta-v0/

I get this:

Loading the base model from ./llama-13b-hf
Loading checkpoint shards: 100%|██████████| 41/41 [00:51<00:00, 1.25s/it]
Loading the delta from ./vicuna-13b-delta-v0/
Loading checkpoint shards:   0%|          | 0/3 [00:00<?, ?it/s]

then:

UnpicklingError: invalid load key, 'v'.

During handling of the above exception, another exception occurred:
OSError: You seem to have cloned a repository without having git-lfs installed. Please install git-lfs and run `git lfs install` followed by `git lfs pull` in the folder you cloned.

Any idea why this happens? I deleted and re-downloaded the LLaMA and Vicuna files multiple times, but nothing worked.
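The UnpicklingError usually means the downloaded checkpoint files are git-lfs pointer stubs rather than the actual weights: a pointer is a tiny text file that starts with "version https://git-lfs.github.com/spec/v1", and torch.load fails on its first byte, hence invalid load key, 'v'. A minimal sketch of the fix the OSError suggests, assuming the delta was cloned with plain git from Hugging Face:

cd vicuna-13b-delta-v0
sudo apt-get install git-lfs   # or your platform's equivalent
git lfs install                # register the LFS hooks
git lfs pull                   # replace pointer stubs with the real files
ls -lh *.bin                   # sanity check: shards should be GB-sized, not ~130-byte text files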

@LayanNCAI (Author) commented May 10, 2023

When I ran the same command on my local machine, this is what I got:

Loading the base model from ./llama-13b-hf
Loading checkpoint shards: 100%|██████████| 41/41 [00:17<00:00, 2.30it/s]
Loading the delta from ./vicuna-13b-delta-v0/
Loading checkpoint shards:  33%|███▎      | 1/3 [00:04<00:08, 4.10s/it]
Killed

I fixed this by increasing the swap file size to 16 GB (reference).
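"Killed" with no traceback is typically the Linux OOM killer ending the process when memory runs out while the base and delta weights are merged. A minimal sketch of one common way to add a 16 GB swap file on Linux, assuming /swapfile does not already exist:

sudo fallocate -l 16G /swapfile   # reserve 16 GB on disk
sudo chmod 600 /swapfile          # swapon requires restrictive permissions
sudo mkswap /swapfile             # format it as swap space
sudo swapon /swapfile             # enable it immediately
free -h                           # verify the new swap is visible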

Then I got this error:
ValueError: Tokenizer class LLaMATokenizer does not exist or is not currently imported.
This was fixed by changing the tokenizer class in "llama-13b-hf/tokenizer_config.json" to LlamaTokenizer. For more on this, see huggingface/transformers#22222 (comment).
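For anyone hitting the same ValueError: the file is plain JSON, so a one-line edit is enough; the only change is the capitalization of the tokenizer_class value. A sketch using GNU sed (on macOS, use sed -i '' instead):

# change "tokenizer_class": "LLaMATokenizer" to "LlamaTokenizer"
sed -i 's/LLaMATokenizer/LlamaTokenizer/' llama-13b-hf/tokenizer_config.json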

@luofeng1994

I had the same problem. Did you solve it?
