Update ParallelWaveGAN config + Tacotron2 masked loss #1545
Conversation
@@ -135,6 +134,8 @@ class TacotronConfig(BaseTTSConfig):
    ga_alpha (float):
        Weight for the guided attention loss. If set less than or equal to zero, it disables the corresponding loss
        function. Defaults to 5.
    stopnet_alpha (float):
        Weight for the guided attention loss. Defaults to 100.0
This docstring entry incorrectly describes the parameter; the description appears to have been copied from `ga_alpha`.
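For context, a minimal sketch (not the repository's actual implementation; function and argument names are assumed) of how weights such as `ga_alpha` and `stopnet_alpha` typically scale the individual terms of a Tacotron-style training loss:

```python
import torch
import torch.nn.functional as F

def total_tacotron_loss(decoder_out, postnet_out, mel_target,
                        stop_logits, stop_target, ga_loss,
                        ga_alpha=5.0, stopnet_alpha=100.0):
    # Spectrogram reconstruction terms from the decoder and postnet.
    decoder_loss = F.mse_loss(decoder_out, mel_target)
    postnet_loss = F.mse_loss(postnet_out, mel_target)
    # Stop-token prediction, scaled by stopnet_alpha.
    stop_loss = F.binary_cross_entropy_with_logits(stop_logits, stop_target)
    total = decoder_loss + postnet_loss + stopnet_alpha * stop_loss
    # Guided attention term, scaled by ga_alpha and disabled when <= 0.
    if ga_alpha > 0:
        total = total + ga_alpha * ga_loss
    return total
```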
    optimizer (torch.optim.Optimizer):
        Optimizer used for the training. Defaults to `AdamW`.
    optimizer_params (dict):
        Optimizer kwargs. Defaults to `{"betas": [0.8, 0.99], "weight_decay": 0.0}`
    lr_scheduler_gen (torch.optim.Scheduler):
        Learning rate scheduler for the generator. Defaults to `ExponentialLR`.
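Purely for illustration, a sketch of how these documented fields might appear in a GAN-vocoder training configuration (the dict name and surrounding structure are assumed, not taken from this PR):

```python
# Keys mirror the docstring entries above; values are the documented defaults.
vocoder_training_config = {
    "optimizer": "AdamW",
    "optimizer_params": {"betas": [0.8, 0.99], "weight_decay": 0.0},
    "lr_scheduler_gen": "ExponentialLR",
}
```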
What is the reason for these config changes?
Thanks for the PR. As you said, it'd be better to have separate PRs.
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You might also look at our discussion channels.
Referencing #1187 and #1192
I did not realize that a PR includes all commits on its branch; I should have used separate branches for the ParallelWaveGAN config change and the Tacotron2 masked loss change.
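As background on the masked-loss part of this PR, here is a minimal sketch of the general idea behind masking a spectrogram loss, assuming padded batches; the function and argument names are illustrative, not the repository's API:

```python
import torch

def masked_mse(pred, target, lengths):
    # Build a (batch, max_len) mask that is True only for real (non-padded) frames.
    max_len = target.size(1)
    mask = torch.arange(max_len, device=target.device)[None, :] < lengths[:, None]
    # Broadcast the mask over the feature dimension, e.g. (batch, max_len, n_mels).
    mask = mask.unsqueeze(-1).expand_as(target)
    # Average the squared error over valid frames only, so padding does not dilute the loss.
    diff = (pred - target) ** 2
    return diff[mask].mean()
```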