An Error report about pipeline #3227
I have this same issue, but have no problems running `nlp = pipeline("question-answering")`. Note: to install the library, I had to install tokenizers version 0.6.0 separately, `git clone` the transformers repo, and edit the `setup.py` file before installing, as per @dafraile's answer to issue #2831. Update: this error was fixed when I installed tokenizers==0.5.2.
I sadly have this issue too with the newest transformers 2.6.0 version. tokenizers is at version 0.5.2, but the newest version of tokenizers doesn't work either. Any solutions to fix this issue?
I have the same issue here. I first ran with my own tokenizer, but it failed, and then I tried to run the 03-pipelines.ipynb code with the QnA example and I get the following error.

Environment:

Code that I ran:

Error code:

```
HBox(children=(FloatProgress(value=0.0, description='Downloading', max=230.0, style=ProgressStyle(description_…
convert squad examples to features:   0%|          | 0/1 [00:00<?, ?it/s]
RemoteTraceback                           Traceback (most recent call last)

The above exception was the direct cause of the following exception:

KeyError                                  Traceback (most recent call last)
~/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/pipelines.py in __call__(self, *texts, **kwargs)
~/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/pipelines.py in (.0)
~/anaconda3/envs/transformers/lib/python3.7/site-packages/transformers/data/processors/squad.py in squad_convert_examples_to_features(examples, tokenizer, max_seq_length, doc_stride, max_query_length, is_training, return_dataset, threads)
~/anaconda3/envs/transformers/lib/python3.7/site-packages/tqdm/std.py in __iter__(self)
~/anaconda3/envs/transformers/lib/python3.7/multiprocessing/pool.py in (.0)
~/anaconda3/envs/transformers/lib/python3.7/multiprocessing/pool.py in next(self, timeout)

KeyError: 'token_type_ids'
```
Any help would be greatly appreciated!
use:
Thank you @paras55, your solution worked for me!
Installing 2.7.0 fails with the same error (at least with tokenizers==0.5.2).
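To summarize the workaround reported in the comments above: pinning tokenizers to 0.5.2 alongside transformers 2.6.0 is what worked for several commenters. These version pins are taken from this thread, not from official compatibility documentation, so treat the exact pair as an assumption:

```shell
# Workaround reported in this thread: reinstall with a pinned tokenizers
# version matching the transformers release of the time. Version numbers
# come from the comments above, not from an official compatibility matrix.
pip uninstall -y tokenizers
pip install "tokenizers==0.5.2" "transformers==2.6.0"
```

If you installed transformers from a cloned repo (as described above), rerun the editable install after pinning so the dependency resolution picks up the older tokenizers.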
Closes #3639 + fixes the spurious warning mentioned in #3227. cc @LysandreJik @thomwolf
🐛 Bug
Information
This may be an easy question, but it has been bothering me all day.
When I run the code:

```python
nlp = pipeline("question-answering")
```
It always tells me:
```
Couldn't reach server at 'https://s3.amazonaws.com/models.huggingface.co/bert/distilbert-base-cased-distilled-squad-modelcard.json' to download model card file.
Creating an empty model card.
```
If I ignore it and continue to run the rest of the code:
```python
nlp({
    'question': 'What is the name of the repository ?',
    'context': 'Pipeline have been included in the huggingface/transformers repository'
})
```
This error appears:

```
KeyError: 'token_type_ids'
```
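For context on why this key can be missing: DistilBERT has no segment embeddings, so the fast tokenizer of that era omitted `token_type_ids` from its output, while the SQuAD feature-conversion code indexed the key unconditionally. Here is a minimal, dependency-free sketch of the failure mode and a defensive fallback; the dict below is a hypothetical stand-in for a tokenizer encoding, not real tokenizer output:

```python
# Illustrative stand-in for a fast-tokenizer encoding of a DistilBERT-style
# model: it has no "token_type_ids" key, because the model has no segment
# embeddings. (Values here are made up for the sketch.)
encoding = {
    "input_ids": [101, 2054, 2003, 1996, 2171, 102],
    "attention_mask": [1, 1, 1, 1, 1, 1],
}

# Indexing the key unconditionally reproduces the reported failure:
try:
    segment_ids = encoding["token_type_ids"]
except KeyError as err:
    print(f"KeyError: {err}")  # KeyError: 'token_type_ids'

# A defensive lookup avoids the crash by falling back to all-zero segment ids:
token_type_ids = encoding.get("token_type_ids", [0] * len(encoding["input_ids"]))
print(token_type_ids)  # [0, 0, 0, 0, 0, 0]
```

Later transformers releases handle the missing key internally; on affected versions, pinning the tokenizers release as described in the comments above sidesteps the mismatch.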