gradio app #1

Open · AK391 opened this issue Jan 28, 2025 · 9 comments
Labels: enhancement (New feature or request)

@AK391 commented Jan 28, 2025

Would be great to set up a Gradio app for this.
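For illustration, a minimal sketch of what such a Gradio front-end might look like; the `generate` body here is a hypothetical placeholder, and a real app would call YuE's two-stage inference (e.g. `inference/infer.py`) instead:

```python
# Minimal Gradio front-end sketch. generate() is a stub; a real app would
# invoke YuE's stage-1/stage-2 inference and return the generated audio.
import numpy as np
import gradio as gr

def generate(genre: str, lyrics: str):
    sr = 44100
    return sr, np.zeros(sr, dtype=np.float32)  # 1 s of silence as a stub

demo = gr.Interface(
    fn=generate,
    inputs=[gr.Textbox(label="Genre tags"),
            gr.Textbox(label="Lyrics", lines=10)],
    outputs=gr.Audio(label="Generated song"),
)

if __name__ == "__main__":
    demo.launch()
```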

@oliverban

+1 for this! :) Seems like it would be a great addition!

@kostyk348

Also, maybe a Hugging Face demo.

@alisson-anjos

Hello all, I made a fork and implemented a simple Gradio interface that can be used through Docker, and I also created a RunPod template for anyone who wants to use it there. The fork repository has more details on how to use the Docker image as a template on RunPod.

https://github.com/alisson-anjos/YuE-Interface

@a43992899 (Collaborator) commented Jan 29, 2025

@hf-lin could you set this up? Maybe try https://github.com/alisson-anjos/YuE-Interface

See #14.

@motowntalent

> Hello all, I made a fork and implemented a simple Gradio interface […]

Hi, there is no Issues tab on your repo. The RunPod template fails for me:

```
Inference started. Outputs will be saved in /workspace/outputs...
The cache for model files in Transformers v4.22.0 has been updated. Migrating your old cache. This is a one-time only operation. You can interrupt this and resume the migration later on by calling `transformers.utils.move_cache()`.

0it [00:00, ?it/s]
0it [00:00, ?it/s]
Traceback (most recent call last):
  File "/opt/conda/envs/pyenv/lib/python3.12/site-packages/transformers/utils/hub.py", line 403, in cached_file
    resolved_file = hf_hub_download(
                    ^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/pyenv/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 106, in _inner_fn
    validate_repo_id(arg_value)
  File "/opt/conda/envs/pyenv/lib/python3.12/site-packages/huggingface_hub/utils/_validators.py", line 154, in validate_repo_id
    raise HFValidationError(
huggingface_hub.errors.HFValidationError: Repo id must be in the form 'repo_name' or 'namespace/repo_name': '/workspace/models/YuE-s1-7B-anneal-en-cot'. Use `repo_type` argument if needed.

The above exception was the direct cause of the following exception:

Traceback (most recent call last):
  File "/workspace/YuE-Interface/inference/infer.py", line 112, in <module>
    model = load_model(stage1_model, quantization_stage1)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/workspace/YuE-Interface/inference/infer.py", line 82, in load_model
    model = AutoModelForCausalLM.from_pretrained(
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/conda/envs/pyenv/lib/python3.12/site-packages/transformers/models/auto/auto_factory.py", line 487, in from_pretrained
    resolved_config_file = cached_file(
                           ^^^^^^^^^^^^
  File "/opt/conda/envs/pyenv/lib/python3.12/site-packages/transformers/utils/hub.py", line 469, in cached_file
    raise EnvironmentError(
OSError: Incorrect path_or_model_id: '/workspace/models/YuE-s1-7B-anneal-en-cot'. Please provide either the path to a local folder or the repo_id of a model on the Hub.
```
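The error above happens because `from_pretrained()` only treats its argument as a local path when that directory actually exists; otherwise it validates the string as a Hub repo id, and an absolute path like `/workspace/models/...` fails that validation. A minimal guard sketch, assuming this folder mirrors the upstream `m-a-p/YuE-s1-7B-anneal-en-cot` weights:

```python
# If the local model folder is missing (e.g. the download has not finished),
# fetch it from the Hub before calling from_pretrained() on the path.
import os
from huggingface_hub import snapshot_download

model_path = "/workspace/models/YuE-s1-7B-anneal-en-cot"
if not os.path.isdir(model_path):
    snapshot_download("m-a-p/YuE-s1-7B-anneal-en-cot", local_dir=model_path)
```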

@alisson-anjos

> Hi, there is no Issues tab on your repo. The RunPod template fails for me: […]

When did you get the Docker image? If it was yesterday, you will need to update it with `docker pull alissonpereiraanjos/yue-interface:latest`, because the image has been updated many times since yesterday.

@alisson-anjos commented Jan 29, 2025

Ah, OK, you ran it through RunPod. I'm running it right now through RunPod and I didn't have this problem. Could it be some network block that prevented the models from being downloaded to the /workspace/models folder? The model download can take a while, so if you manage to access the interface and run audio generation before the models finish downloading, there is a chance you will get this type of error. You have to monitor the logs to see whether the models have finished downloading.
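One rough way to avoid generating before the downloads finish is to wait until the models folder stops growing. A sketch (the path is the RunPod default from the logs above; the 30-second settle window is an arbitrary choice):

```python
# Poll /workspace/models until its total size stops changing, as a crude
# signal that the model downloads have finished.
import os
import time

MODELS_DIR = "/workspace/models"

def dir_size(path: str) -> int:
    return sum(
        os.path.getsize(os.path.join(root, f))
        for root, _, files in os.walk(path)
        for f in files
    )

last, unchanged = -1, 0
while unchanged < 30:  # size stable for 30 s => assume downloads are done
    size = dir_size(MODELS_DIR)
    unchanged = unchanged + 5 if size == last else 0
    last = size
    time.sleep(5)
print(f"Downloads look finished: {last} bytes in {MODELS_DIR}")
```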

@philpilkington

@alisson-anjos Thank you very much for the Gradio interface and Docker image in your fork, as I am new to all this. It is working for me on a 4070 Super with 12 GB VRAM, but it is very slow with INT8. I will have to try NF4 and stay tuned for updates. It would be great for others if this were merged into the main repo, I think.

@alisson-anjos

> Thank you very much for the Gradio interface and Docker image in your fork […]

Yes, it would be interesting. I can try to open a PR with the changes I have in my fork. I did some things like disabling warnings, the seed (it has already been implemented), and quantized models (INT8, INT4, NF4; see the sketch below), and I will try to optimize the code later when I have time. Thank you very much for the feedback :D

I've also added the dual-track support they implemented; you can pick it up by doing a docker pull on the image.
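For reference, this is roughly how the NF4 quantized loading mentioned above looks with transformers and bitsandbytes; the fork's actual flags and defaults may differ:

```python
# Sketch of 4-bit NF4 loading via transformers + bitsandbytes. Swap in
# BitsAndBytesConfig(load_in_8bit=True) for INT8. The model path is the
# RunPod one from this thread; any local folder or Hub repo id works.
import torch
from transformers import AutoModelForCausalLM, BitsAndBytesConfig

bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",              # NF4 rather than plain INT4
    bnb_4bit_compute_dtype=torch.float16,
)
model = AutoModelForCausalLM.from_pretrained(
    "/workspace/models/YuE-s1-7B-anneal-en-cot",
    quantization_config=bnb_config,
    device_map="auto",
)
```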
