Information

Model I am using (Bert, XLNet ...): ALBERT (ckiplab/albert-tiny-chinese)

The problem arises when using:

The tasks I am working on are:

To reproduce

Steps to reproduce the behavior:
```python
from transformers import AutoTokenizer, AutoModelForMaskedLM

tokenizer = AutoTokenizer.from_pretrained("ckiplab/albert-tiny-chinese")
model = AutoModelForMaskedLM.from_pretrained("ckiplab/albert-tiny-chinese")
```
```
Downloading: 100%|██████████| 683/683 [00:00<00:00, 1.32MB/s]
Downloading: 100%|██████████| 112/112 [00:00<00:00, 215kB/s]
Downloading: 100%|██████████| 174/174 [00:00<00:00, 334kB/s]
Traceback (most recent call last):
  File "/home/faith/torch_tutorials/torch_chatbot.py", line 30, in <module>
    tokenizer = AutoTokenizer.from_pretrained("ckiplab/albert-tiny-chinese")
  File "/home/faith/miniconda3/envs/torch/lib/python3.7/site-packages/transformers/tokenization_auto.py", line 341, in from_pretrained
    return tokenizer_class_py.from_pretrained(pretrained_model_name_or_path, *inputs, **kwargs)
  File "/home/faith/miniconda3/envs/torch/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1653, in from_pretrained
    resolved_vocab_files, pretrained_model_name_or_path, init_configuration, *init_inputs, **kwargs
  File "/home/faith/miniconda3/envs/torch/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 1725, in _from_pretrained
    tokenizer = cls(*init_inputs, **init_kwargs)
  File "/home/faith/miniconda3/envs/torch/lib/python3.7/site-packages/transformers/tokenization_albert.py", line 149, in __init__
    self.sp_model.Load(vocab_file)
  File "/home/faith/miniconda3/envs/torch/lib/python3.7/site-packages/sentencepiece.py", line 367, in Load
    return self.LoadFromFile(model_file)
  File "/home/faith/miniconda3/envs/torch/lib/python3.7/site-packages/sentencepiece.py", line 177, in LoadFromFile
    return _sentencepiece.SentencePieceProcessor_LoadFromFile(self, arg)
TypeError: not a string
```
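One plausible reading of the traceback (an assumption on my part, not confirmed by the maintainers): the ALBERT tokenizer resolves its `vocab_file` argument to `None` because this repository does not ship a SentencePiece model file, and SentencePiece's loader rejects the non-string argument before doing any I/O. A minimal stdlib-only sketch of that failure mode, where `load_from_file` is a hypothetical stand-in for `SentencePieceProcessor.LoadFromFile`:

```python
# Sketch of the suspected failure mode (assumption: no spiece.model in the
# repo, so the resolved vocab file is None). `load_from_file` is a
# hypothetical stand-in for sentencepiece's SentencePieceProcessor.LoadFromFile.

def load_from_file(model_file):
    # SentencePiece expects a filesystem path as a string; anything else
    # (including None) is rejected with "not a string".
    if not isinstance(model_file, str):
        raise TypeError("not a string")
    return True

resolved_vocab_file = None  # what the tokenizer would end up with here

try:
    load_from_file(resolved_vocab_file)
except TypeError as err:
    print(err)  # prints: not a string
```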
Expected behavior

The tokenizer and model should download and load correctly, without raising this error.
Can you share your versions of `transformers` and `tokenizers`?
I can reproduce this in a Colab notebook after running `pip install transformers`.
Might be solved with v4?
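One way to test that guess before retrying is to check the installed major version; `major_version` below is a hypothetical helper, not part of transformers:

```python
# Hypothetical helper: is an installed version string on the v4 release line?
# Assumes the version string starts with the major number, e.g. "4.0.0" or
# "4.0.0.dev0".
def major_version(version_string):
    return int(version_string.split(".")[0])

print(major_version("3.5.1") >= 4)  # prints: False
print(major_version("4.0.0") >= 4)  # prints: True
```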
I am having the same issue with `AlbertTokenizer.from_pretrained`.
This issue has been automatically marked as stale and closed because it has not had recent activity. Thank you for your contributions.
If you think this still needs to be addressed, please comment on this thread.
I have the same question!