
Question: Fine tuning LibriTTS with StyleTTS #61

Closed
Yahya-khodr opened this issue Oct 16, 2023 · 5 comments

Yahya-khodr commented Oct 16, 2023

I'm trying to start from the LibriTTS model and train it on an additional dataset.
My dataset is small.
I set the pretrained model in the config file to the LibriTTS pretrained model epoch_2nd_0050.
Is that the right track?
Also, do I need a bigger dataset for the model to perform well?
Thank you

yl4579 commented Oct 19, 2023

You will need to finetune both the first and second stages; see #10 (comment). If your new data is drastically different from the base model's, you will need more data to perform well, and vice versa.

yl4579 closed this as completed Oct 19, 2023
Yahya-khodr (Author) commented:

@yl4579
I already set up the data for training; my dataset is English and Portuguese, but with new speakers.
It is around 20 mins per speaker, so ~40 mins in total for both speakers.
If I want to start from the LibriTTS model in the first stage, how should my config file look?
This is my config.yaml:

pretrained_model: "Models/LibriTTS/epoch_2nd_0050.pth"
second_stage_load_pretrained: false 
load_only_params: false 

And for the 2nd stage it will start from the first_stage.pth file anyway,
so I don't need to set a pretrained_model parameter as the starting point.

Thank you


yl4579 commented Oct 19, 2023

You should set load_only_params: true for the first stage, because you probably don't want to load the optimizer state. You are correct about the second stage.
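Putting the advice in this thread together, a first-stage finetuning config might look like the sketch below. This is only an assumption based on the key names quoted earlier in the thread, not an official StyleTTS config:

```yaml
# First-stage finetuning sketch (key names as quoted above in this thread)
pretrained_model: "Models/LibriTTS/epoch_2nd_0050.pth"
second_stage_load_pretrained: false
load_only_params: true   # load model weights only, skip optimizer state
```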

Yahya-khodr (Author) commented:

@yl4579
How can I track the training?
How do I know when it is done and has been trained enough?
Thank you


yl4579 commented Oct 28, 2023

You can track the training on TensorBoard. The stopping condition is the same as for non-finetuning: look at each loss term and see whether it has stopped decreasing; that is probably the time to stop.
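The rule of thumb above ("stop when each loss term has stopped decreasing") can be sketched as a small plateau check. This is an illustrative helper, not part of StyleTTS; the function name, window, and tolerance are assumptions:

```python
# Illustrative plateau check on per-epoch loss values (not part of StyleTTS).
def has_plateaued(losses, window=5, tolerance=1e-3):
    """Return True if the best loss over the last `window` epochs improved
    on the earlier best by no more than `tolerance`."""
    if len(losses) < 2 * window:
        return False  # not enough history to judge yet
    recent_best = min(losses[-window:])
    earlier_best = min(losses[:-window])
    return earlier_best - recent_best <= tolerance

# A loss that is still falling vs. one that has flattened out:
falling = [1.0, 0.8, 0.6, 0.5, 0.4, 0.35, 0.3, 0.25, 0.2, 0.15]
flat = [1.0, 0.5, 0.31, 0.301, 0.3005, 0.3004, 0.3003, 0.3002, 0.3001, 0.3000]
print(has_plateaued(falling))  # False (keep training)
print(has_plateaued(flat))     # True (probably time to stop)
```

In practice you would read these values off the TensorBoard curves rather than compute them, but the criterion is the same.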
