
GGUF conversion doesn't respect tokenizer config add_bos/eos_token setting #3966

Closed
KerfuffleV2 opened this issue Nov 6, 2023 · 3 comments

@KerfuffleV2
Collaborator

This causes problems with at least one model (Yi); see the discussion here: 01-ai/Yi#5

The automatic BOS that gets prepended apparently confuses the model.

SpecialVocab in gguf.py already loads tokenizer_config.json (though currently only as a fallback). The main open question is how to store the setting in the GGUF file: what key to use, and so on.
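As a rough illustration of the first half of this, here is a minimal sketch of reading the `add_bos_token` / `add_eos_token` flags out of a model's tokenizer_config.json. The function name and the "return None when unset" convention are assumptions for this sketch, not the actual SpecialVocab implementation:

```python
import json
from pathlib import Path


def read_bos_eos_flags(model_dir):
    """Read add_bos_token / add_eos_token from tokenizer_config.json.

    Returns (add_bos, add_eos). Each value is True/False when the key is
    present, or None when the file or key is missing, so the caller can
    distinguish "unset" from an explicit False.
    """
    cfg_path = Path(model_dir) / "tokenizer_config.json"
    if not cfg_path.is_file():
        return None, None
    with open(cfg_path, encoding="utf-8") as f:
        cfg = json.load(f)
    return cfg.get("add_bos_token"), cfg.get("add_eos_token")
```

The converter would then need to write these flags under some agreed-upon GGUF metadata keys, which is exactly the open question above.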

@shinomakoi

I've been testing Yi-6B over the server API, and it quickly becomes very repetitive and incoherent. Instead of passing the prompt as a string (in Python), I tried passing it as a list, e.g. prompt = [2, "my prompt here"] (2 being the override BOS token ID, I believe). This seems to completely fix the model, with no more repetition or incoherence issues. So yes, something is wrong with the automatic BOS.
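The workaround above can be sketched as a small helper that builds the JSON body for the server's /completion endpoint. The helper name is hypothetical, and the assumption (taken from the comment above) is that the server accepts a prompt given as a mixed list of token IDs and strings, with the explicit leading ID standing in for the automatically prepended BOS:

```python
import json


def build_completion_payload(prompt_text, bos_token_id=None, n_predict=128):
    """Build a JSON payload for llama.cpp's /completion endpoint.

    When bos_token_id is given, the prompt is sent as a mixed list
    [token_id, text], following the workaround described in this thread;
    otherwise it is sent as a plain string.
    """
    if bos_token_id is not None:
        prompt = [bos_token_id, prompt_text]
    else:
        prompt = prompt_text
    return json.dumps({"prompt": prompt, "n_predict": n_predict})
```

For example, `build_completion_payload("my prompt here", bos_token_id=2)` produces a body whose `"prompt"` field is `[2, "my prompt here"]`, which can then be POSTed to the running server.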

@KerfuffleV2
Collaborator Author

These models are looking pretty interesting now that the 200K (!) context versions have been released. If it works as described, a 34B with 200K context is pretty insane.

@KerfuffleV2
Collaborator Author

Should be resolved by #4040
