UnpicklingError: invalid load key, 'v'. #1132
When I ran the same command on my local machine, this is what I got:
For this issue, I fixed it by increasing the swap file size to 16G (see reference). Then I got this error:
I had the same problem. Did you solve it?
Hello,
after running this command:
python3 -m fastchat.model.apply_delta --base-model-path ./llama-13b-hf --target-model-path ./vicuna-13b --delta-path ./vicuna-13b-delta-v0/
I get this:
then:
Any idea why that would happen? I deleted and re-installed the llama and vicuna files multiple times, but nothing worked.
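For what it's worth, `UnpicklingError: invalid load key, 'v'.` means the unpickler hit a byte (`v`) that is not a valid pickle opcode at the start of the file, so the checkpoint on disk is not actually a pickle. One common way this happens (an assumption about this case, not confirmed by the thread) is downloading the model repo without Git LFS, which leaves text pointer stubs beginning with `version ...` in place of the real weight files. A minimal sketch reproducing the error from such a stub:

```python
import io
import pickle

# Hypothetical scenario: the "checkpoint" is really a plain-text
# git-lfs pointer stub that was never replaced by the actual weights.
# The first byte is 'v', which is not a valid pickle opcode.
stub = io.BytesIO(b"version https://git-lfs.github.com/spec/v1\n")

try:
    pickle.load(stub)
except pickle.UnpicklingError as e:
    print(e)  # invalid load key, 'v'.
```

If this is the cause, checking the on-disk size of the `.bin` files (pointer stubs are only a few hundred bytes) and re-downloading with `git lfs pull` would be the usual fix.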