
OSError: Can't load tokenizer for 'google/t5-v1_1-xxl'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'google/t5-v1_1-xxl' is the correct path to a directory containing all relevant files for a T5TokenizerFast tokenizer. #16225

Open
pjki100 opened this issue Jul 18, 2024 · 20 comments
Labels
bug-report Report of a bug, yet to be confirmed

Comments

@pjki100

pjki100 commented Jul 18, 2024

Checklist

  • The issue exists after disabling all extensions
  • The issue exists on a clean installation of webui
  • The issue is caused by an extension, but I believe it is caused by a bug in the webui
  • The issue exists in the current version of the webui
  • The issue has not been reported before recently
  • The issue has been reported before but has not been fixed yet

What happened?

Environment: v1.10.0-RC on Ubuntu 24.04, CUDA 12.1, PyTorch 2.1.2+cu121.
When switching to an SD3 model, the console reports:
OSError: Can't load tokenizer for 'google/t5-v1_1-xxl'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'google/t5-v1_1-xxl' is the correct path to a directory containing all relevant files for a T5TokenizerFast tokenizer.

Steps to reproduce the problem

1. Run webui v1.10.0-RC (Ubuntu 24.04, CUDA 12.1, PyTorch 2.1.2+cu121).
2. Switch to an SD3 model.
3. The console reports the OSError above.

What should have happened?

The SD3 model should load without the tokenizer error.

What browsers do you use to access the UI ?

No response

Sysinfo

Version: v1.10.0-RC on ubuntu24.04 cuda12.1 pytorch2.1.2+cu121

Console logs

OSError: Can't load tokenizer for 'google/t5-v1_1-xxl'. If you were trying to load it from 'https://huggingface.co/models', make sure you don't have a local directory with the same name. Otherwise, make sure 'google/t5-v1_1-xxl' is the correct path to a directory containing all relevant files for a T5TokenizerFast tokenizer.

Additional information

No response

@pjki100 added the bug-report label on Jul 18, 2024
@lijiajun1997

Same issue.

@tanggogogo123

How can this be resolved? I'm running into the same problem.

@Dennis-NT

Same issue, hope it can be resolved soon.

@Dennis-NT

Dennis-NT commented Jul 30, 2024

It seems to be a network issue. Retry at different times; once it successfully downloads several .json files, the web UI works.

@lijiajun1997

> It seems to be a network issue. Retry at different times; once it successfully downloads several .json files, the web UI works.

I tried for a week; it could connect to Hugging Face but still could not load the tokenizer for 'google/t5-v1_1-xxl'. How can we download the model manually, and which folder should the files go in?

@Dennis-NT

> I tried for a week; it could connect to Hugging Face but still could not load the tokenizer for 'google/t5-v1_1-xxl'. How can we download the model manually, and which folder should the files go in?

I found some *.json files that were automatically downloaded from Hugging Face. Tell me your email address and I can send them to you.

@lijiajun1997

lijiajun1997 commented Jul 30, 2024

> I found some *.json files that were automatically downloaded from Hugging Face. Tell me your email address and I can send them to you.

498745918@qq.com

@lijiajun1997

> I found some *.json files that were automatically downloaded from Hugging Face. Tell me your email address and I can send them to you.

But I'm curious: is that string of characters in the file path the same on every computer?

@Dennis-NT

> But I'm curious: is that string of characters in the file path the same on every computer?

I've sent the files to your mailbox, along with a picture showing the path. Try putting them in the same path as mine. Good luck.

@seekerzhouk

Same issue. How to resolve?

@wangwenfeng0

@Dennis-NT please send me the 'google/t5-v1_1-xxl' files, thank you. 1066395480@qq.com

@Dennis-NT

> @Dennis-NT please send me the 'google/t5-v1_1-xxl' files, thank you. 1066395480@qq.com

Done. Does that work?

@Syavick

Syavick commented Aug 5, 2024

I've found a similar thread about the clip-vit-large-patch14 tokenizer which helped resolve the problem with the Google T5 tokenizer: #11507

I created the directory "google/t5-v1_1-xxl" inside the webui directory and put 4 files there from Hugging Face (https://huggingface.co/google/t5-v1_1-xxl/tree/main):

config.json
special_tokens_map.json
spiece.model
tokenizer_config.json

I additionally created the directory "openai/clip-vit-large-patch14" and put 5 files there from the corresponding repository:

config.json
merges.txt
special_tokens_map.json
tokenizer_config.json
vocab.json

Then I launched webui and SD3 finally started working! (It also automatically downloaded a few CLIP model files.)
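If anyone wants to double-check the layout before relaunching, the expected files can be verified with a short script. This is only a sketch: `missing_tokenizer_files` is my own helper name, not part of webui, and the directory names and file lists are taken from the workaround above.

```python
import os

# Tokenizer files the workaround above places under the webui root.
REQUIRED_FILES = {
    "google/t5-v1_1-xxl": [
        "config.json",
        "special_tokens_map.json",
        "spiece.model",
        "tokenizer_config.json",
    ],
    "openai/clip-vit-large-patch14": [
        "config.json",
        "merges.txt",
        "special_tokens_map.json",
        "tokenizer_config.json",
        "vocab.json",
    ],
}

def missing_tokenizer_files(webui_root):
    """Return a dict mapping each tokenizer dir to the files it is missing."""
    missing = {}
    for subdir, filenames in REQUIRED_FILES.items():
        absent = [
            name for name in filenames
            if not os.path.isfile(os.path.join(webui_root, subdir, name))
        ]
        if absent:
            missing[subdir] = absent
    return missing
```

Run it with `missing_tokenizer_files("/path/to/stable-diffusion-webui")`; an empty dict means every required file is in place.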

@nikyyoung

> I found some *.json files that were automatically downloaded from Hugging Face. Tell me your email address and I can send them to you.

@Dennis-NT, hello, please send them to my email: 1394924644@qq.com, thanks!

@Dennis-NT

> I've found a similar thread about the clip-vit-large-patch14 tokenizer which helped resolve the problem with the Google T5 tokenizer: #11507 […]

Good. It's an offline-mode solution.

@seekerzhouk

> I've found a similar thread about the clip-vit-large-patch14 tokenizer which helped resolve the problem with the Google T5 tokenizer: #11507 […]

It works! Thanks!

@tanggogogo123

> I've found a similar thread about the clip-vit-large-patch14 tokenizer which helped resolve the problem with the Google T5 tokenizer: #11507 […]

It works! Thank you very much~~!

@jeaves001

> I've found a similar thread about the clip-vit-large-patch14 tokenizer which helped resolve the problem with the Google T5 tokenizer: #11507 […]

jeaves002@gmail.com, thanks so much.

@JoshonSmith

> I've found a similar thread about the clip-vit-large-patch14 tokenizer which helped resolve the problem with the Google T5 tokenizer: #11507 […]

It works!!!

@Nukami

Nukami commented Aug 23, 2024

> I've found a similar thread about the clip-vit-large-patch14 tokenizer which helped resolve the problem with the Google T5 tokenizer: #11507 […]

This solution works for me, thanks!
