
Error! can not run the hydit_app.py #147

Open
PiPiNam opened this issue Jul 7, 2024 · 1 comment

PiPiNam commented Jul 7, 2024

I just downloaded the Docker image you provided (the cuda11 version) and found that the torch installed in it is the CPU-only build.
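
A quick sanity check inside the container (just a minimal sketch using standard torch attributes, not part of the original report) shows whether the installed wheel is a CPU-only build:

import torch

# Minimal check (sketch): CPU-only wheels typically report a version ending in "+cpu"
# and carry no CUDA runtime.
print(torch.__version__)          # e.g. a "+cpu" suffix on a CPU-only build
print(torch.version.cuda)         # None when torch was built without CUDA
print(torch.cuda.is_available())  # False if this build/container has no usable GPU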

Running "python app/hydit_app.py" inside the container gives:

root@docker-desktop:/workspace/HunyuanDiT# python app/hydit_app.py
2024-07-07 10:43:54.296 | INFO | hydit.inference:init:160 - Got text-to-image model root path: ckpts/t2i
2024-07-07 10:43:54.297 | INFO | hydit.inference:init:169 - Loading CLIP Text Encoder...
2024-07-07 10:43:57.131 | INFO | hydit.inference:init:172 - Loading CLIP Text Encoder finished
2024-07-07 10:43:57.131 | INFO | hydit.inference:init:175 - Loading CLIP Tokenizer...
2024-07-07 10:43:57.284 | INFO | hydit.inference:init:178 - Loading CLIP Tokenizer finished
2024-07-07 10:43:57.284 | INFO | hydit.inference:init:181 - Loading T5 Text Encoder and T5 Tokenizer...
You are using the default legacy behaviour of the <class 'transformers.models.t5.tokenization_t5.T5Tokenizer'>. This is expected, and simply means that the legacy (previous) behavior will be used so nothing changes for you. If you want to use the new behaviour, set legacy=False. This should only be set if you understand what it means, and thoroughly read the reason why this was added as explained in huggingface/transformers#24565
/opt/conda/lib/python3.8/site-packages/transformers/convert_slow_tokenizer.py:550: UserWarning: The sentencepiece tokenizer that you are converting to a fast tokenizer uses the byte fallback option which is not implemented in the fast tokenizers. In practice this means that the fast version of the tokenizer can produce unknown tokens whereas the sentencepiece version would have converted these unknown tokens into a sequence of byte tokens matching the original piece of text.
warnings.warn(
You are using a model of type mt5 to instantiate a model of type t5. This is not supported for all configurations of models and can yield errors.
Killed

I don't know how to resolve this; could you help me deal with the problem? Thanks!
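
One guess (not confirmed from the log): the final "Killed" line is usually the Linux out-of-memory killer ending the process, which would fit a CPU-only torch loading the whole pipeline into system RAM. A rough sketch to check how much memory the container actually has before loading (reads /proc/meminfo, Linux only):

# Rough sketch: report available RAM inside the container. Loading HunyuanDiT on CPU
# needs a lot of system memory, so a low value here would explain the "Killed" line.
def mem_available_gib(path="/proc/meminfo"):
    with open(path) as f:
        for line in f:
            if line.startswith("MemAvailable:"):
                return int(line.split()[1]) / (1024 ** 2)  # value is reported in kB
    return None

avail = mem_available_gib()
print(f"MemAvailable: {avail:.1f} GiB" if avail is not None else "MemAvailable not found")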

PiPiNam changed the title from "Why the torch version in docker cu11 you provided is the cpu version?" to "Error! Why the torch version in docker cu11 you provided is the cpu version?" on Jul 7, 2024
PiPiNam changed the title from "Error! Why the torch version in docker cu11 you provided is the cpu version?" to "Error! can not run the hydit_app.py" on Jul 7, 2024
zobinimm commented

Has this problem been resolved?
