Remove zero padding #2
misutoneko started this conversation in Ideas
Hello
A grateful user of gguf-frankenstein.py reporting in, sir.
I had to customize the script a little bit to fix tensor names, which inspired this post.
If this is a common need, perhaps a command-line switch --remove-padding could be added.
Here's my use case:
ggerganov/llama.cpp#8300
I don't know if there's a better way to easily manipulate the tensor names, but this is how I did it:
I overwrote the names and padded the name fields with zero bytes to match the original field length.
I can imagine anyone with a hex editor would be doing just that (well, maybe not for all 742 of them...)
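In byte terms, that overwrite amounts to something like the following sketch (a hypothetical helper, not part of the actual script):

```python
def pad_name(new_name: bytes, original_len: int) -> bytes:
    """Overwrite a tensor-name field in place, hex-editor style:
    pad the replacement name with zero bytes so it occupies exactly
    as many bytes as the original field did."""
    if len(new_name) > original_len:
        # Can't overwrite in place if the new name is longer.
        raise ValueError("replacement name is longer than the original field")
    return new_name.ljust(original_len, b"\x00")
```

For example, replacing a 21-byte name with a shorter one leaves trailing zero bytes filling out the field.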
While llama.cpp doesn't seem to mind the zero padding as such, it still wouldn't load the model: some of the (original) fields were too long, so the padding has to go.
Here's the code snippet that I used to do it:
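(The actual snippet wasn't captured in this copy of the thread; as a rough sketch, assuming the removal simply strips trailing zero bytes from each name field, it would look something like this. Note that GGUF strings are length-prefixed, so the stored length has to be rewritten to match the stripped name.)

```python
def strip_zero_padding(name: bytes) -> bytes:
    """Hypothetical helper: drop trailing zero bytes from a padded
    tensor-name field. Since GGUF strings carry an explicit length
    prefix rather than a terminator, the length field must also be
    updated to len(strip_zero_padding(name)) when writing back."""
    return name.rstrip(b"\x00")
```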