This repository has been archived by the owner on May 12, 2023. It is now read-only.
Where to find the llama tokenizer? #5
Comments
tokenizer.model from Hugging Face for LLaMA 7B (https://huggingface.co/decapoda-research/llama-7b-hf/tree/main) worked for me.
Thanks a lot.
@vizay08 can you explain to me what to do with the Hugging Face model card? I'm a little confused...
On the model page, open the files list (the second tab); the tokenizer is there. Select tokenizer.model, and on the file's page press download. That's it.
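The same file can be fetched without the browser. A minimal sketch, assuming the repo id from the thread and the Hub's standard `/resolve/<revision>/<filename>` raw-file URL scheme (the `huggingface_hub` package's `hf_hub_download` does the equivalent with caching):

```python
# Build the direct download URL for a file in a Hugging Face Hub repo.
# The repo id comes from this thread; the /resolve/ URL scheme is the
# Hub's raw-file endpoint.
def hub_file_url(repo_id: str, filename: str, revision: str = "main") -> str:
    return f"https://huggingface.co/{repo_id}/resolve/{revision}/{filename}"

url = hub_file_url("decapoda-research/llama-7b-hf", "tokenizer.model")
print(url)
# → https://huggingface.co/decapoda-research/llama-7b-hf/resolve/main/tokenizer.model
```

Pass that URL to `curl -L -o tokenizer.model <url>` (or any HTTP client) to save the file locally.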
According to the documentation, to convert the .bin file to ggml format I need to run:
pyllamacpp-convert-gpt4all path/to/gpt4all_model.bin path/to/llama_tokenizer path/to/gpt4all-converted.bin
I don't know where to find the llama_tokenizer.