[TTS] [Hackathon] Add JETS (PaddlePaddle#3109)
Showing 25 changed files with 4,481 additions and 1 deletion.
@@ -0,0 +1,98 @@
# JETS with CSMSC
This example contains code used to train a [JETS](https://arxiv.org/abs/2203.16852v1) model with the [Chinese Standard Mandarin Speech Corpus](https://www.data-baker.com/open_source.html).

## Dataset
### Download and Extract
Download CSMSC from its [Official Website](https://test.data-baker.com/data/index/source).

### Get MFA Result and Extract
We use [MFA](https://github.com/MontrealCorpusTools/Montreal-Forced-Aligner) to get phonemes and durations for JETS.
You can download [baker_alignment_tone.tar.gz](https://paddlespeech.bj.bcebos.com/MFA/BZNSYP/with_tone/baker_alignment_tone.tar.gz), or train your own MFA model by following the [mfa example](https://github.com/PaddlePaddle/PaddleSpeech/tree/develop/examples/other/mfa) in our repo.

## Get Started
Assume the path to the dataset is `~/datasets/BZNSYP`.
Assume the path to the MFA result of CSMSC is `./baker_alignment_tone`.
Run the command below to
1. **source path**.
2. preprocess the dataset.
3. train the model.
4. synthesize wavs.
    - synthesize waveform from `metadata.jsonl`.
    - synthesize waveform from a text file.

```bash
./run.sh
```
You can choose a range of stages you want to run, or set `stage` equal to `stop-stage` to run a single stage. For example, the following command only preprocesses the dataset.
```bash
./run.sh --stage 0 --stop-stage 0
```

### Data Preprocessing
```bash
./local/preprocess.sh ${conf_path}
```
When it is done, a `dump` folder is created in the current directory. The structure of the dump folder is listed below.

```text
dump
├── dev
│   ├── norm
│   └── raw
├── phone_id_map.txt
├── speaker_id_map.txt
├── test
│   ├── norm
│   └── raw
└── train
    ├── feats_stats.npy
    ├── norm
    └── raw
```
The dataset is split into 3 parts, namely `train`, `dev`, and `test`, each of which contains a `norm` and `raw` subfolder. The `raw` folder contains wave, mel spectrogram, speech, pitch, and energy features of each utterance, while the `norm` folder contains the normalized ones. The statistics used to normalize features are computed from the training set and stored in `dump/train/feats_stats.npy`.

Also, there is a `metadata.jsonl` in each subfolder. It is a table-like file that contains phones, text_lengths, the path of feats, feats_lengths, the path of pitch features, the path of energy features, the path of raw waves, speaker, and the id of each utterance. You can inspect it as shown below.

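Each line of `metadata.jsonl` is one JSON object, so you can peek at the first record with standard tools (the path assumes the default `dump` layout above):
```bash
# Pretty-print the metadata of the first utterance in the normalized train set.
head -n 1 dump/train/norm/metadata.jsonl | python3 -m json.tool
```
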
### Model Training
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/train.sh ${conf_path} ${train_output_path}
```
`./local/train.sh` calls `${BIN_DIR}/train.py`.
Here's the complete help message.
```text
usage: train.py [-h] [--config CONFIG] [--train-metadata TRAIN_METADATA]
                [--dev-metadata DEV_METADATA] [--output-dir OUTPUT_DIR]
                [--ngpu NGPU] [--phones-dict PHONES_DICT]

Train a JETS model.

optional arguments:
  -h, --help            show this help message and exit
  --config CONFIG       config file to overwrite default config.
  --train-metadata TRAIN_METADATA
                        training data.
  --dev-metadata DEV_METADATA
                        dev data.
  --output-dir OUTPUT_DIR
                        output dir.
  --ngpu NGPU           if ngpu == 0, use cpu.
  --phones-dict PHONES_DICT
                        phone vocabulary file.
```
1. `--config` is a config file in yaml format to overwrite the default config, which can be found at `conf/default.yaml`.
2. `--train-metadata` and `--dev-metadata` should be the metadata files in the normalized subfolders of `train` and `dev` in the `dump` folder.
3. `--output-dir` is the directory to save the results of the experiment. Checkpoints are saved in `checkpoints/` inside this directory.
4. `--ngpu` is the number of GPUs to use; if ngpu == 0, the CPU is used.
5. `--phones-dict` is the path of the phone vocabulary file.

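If you want to bypass `run.sh`, a direct call to `train.py` looks like the sketch below (after sourcing `path.sh` so that `${BIN_DIR}` is set; the output directory name `exp/default` is illustrative):
```bash
python3 ${BIN_DIR}/train.py \
    --config=conf/default.yaml \
    --train-metadata=dump/train/norm/metadata.jsonl \
    --dev-metadata=dump/dev/norm/metadata.jsonl \
    --output-dir=exp/default \
    --ngpu=1 \
    --phones-dict=dump/phone_id_map.txt
```
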
### Synthesizing

`./local/synthesize.sh` calls `${BIN_DIR}/synthesize.py`, which can synthesize waveforms from `metadata.jsonl`.

```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize.sh ${conf_path} ${train_output_path} ${ckpt_name}
```

`./local/synthesize_e2e.sh` calls `${BIN_DIR}/synthesize_e2e.py`, which can synthesize waveforms from a text file.
```bash
CUDA_VISIBLE_DEVICES=${gpus} ./local/synthesize_e2e.sh ${conf_path} ${train_output_path} ${ckpt_name}
```
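
With concrete values filled in, this might look like the line below. The checkpoint name depends on your run; `snapshot_iter_*.pdz` follows the usual PaddleSpeech naming and is used here as an assumption.
```bash
CUDA_VISIBLE_DEVICES=0 ./local/synthesize_e2e.sh conf/default.yaml exp/default snapshot_iter_350000.pdz
```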
@@ -0,0 +1,224 @@
# This configuration was tested on 4 GPUs (V100) with 32GB GPU
# memory. It takes around 2 weeks to finish the training,
# but a 100k-iteration model should already generate reasonable results.
###########################################################
#                FEATURE EXTRACTION SETTING                #
###########################################################

n_mels: 80
fs: 22050          # sr
n_fft: 1024        # FFT size (samples).
n_shift: 256       # Hop size (samples), ~11.6 ms at 22050 Hz.
win_length: null   # Window length (samples), ~46.4 ms when null.
                   # If set to null, it will be the same as fft_size.
window: "hann"     # Window function.
fmin: 0            # minimum frequency for Mel basis
fmax: null         # maximum frequency for Mel basis
f0min: 80          # Minimum f0 for pitch extraction.
f0max: 400         # Maximum f0 for pitch extraction.

###########################################################
#                     TTS MODEL SETTING                    #
###########################################################
model:
    # generator related
    generator_type: jets_generator
    generator_params:
        adim: 256                                    # attention dimension
        aheads: 2                                    # number of attention heads
        elayers: 4                                   # number of encoder layers
        eunits: 1024                                 # number of encoder ff units
        dlayers: 4                                   # number of decoder layers
        dunits: 1024                                 # number of decoder ff units
        positionwise_layer_type: conv1d              # type of position-wise layer
        positionwise_conv_kernel_size: 3             # kernel size of position wise conv layer
        duration_predictor_layers: 2                 # number of layers of duration predictor
        duration_predictor_chans: 256                # number of channels of duration predictor
        duration_predictor_kernel_size: 3            # filter size of duration predictor
        use_masking: True                            # whether to apply masking for padded part in loss calculation
        encoder_normalize_before: True               # whether to perform layer normalization before the input
        decoder_normalize_before: True               # whether to perform layer normalization before the input
        encoder_type: transformer                    # encoder type
        decoder_type: transformer                    # decoder type
        conformer_rel_pos_type: latest               # relative positional encoding type
        conformer_pos_enc_layer_type: rel_pos        # conformer positional encoding type
        conformer_self_attn_layer_type: rel_selfattn # conformer self-attention type
        conformer_activation_type: swish             # conformer activation type
        use_macaron_style_in_conformer: true         # whether to use macaron style in conformer
        use_cnn_in_conformer: true                   # whether to use CNN in conformer
        conformer_enc_kernel_size: 7                 # kernel size in CNN module of conformer-based encoder
        conformer_dec_kernel_size: 31                # kernel size in CNN module of conformer-based decoder
        init_type: xavier_uniform                    # initialization type
        init_enc_alpha: 1.0                          # initial value of alpha for encoder
        init_dec_alpha: 1.0                          # initial value of alpha for decoder
        transformer_enc_dropout_rate: 0.2            # dropout rate for transformer encoder layer
        transformer_enc_positional_dropout_rate: 0.2 # dropout rate for transformer encoder positional encoding
        transformer_enc_attn_dropout_rate: 0.2       # dropout rate for transformer encoder attention layer
        transformer_dec_dropout_rate: 0.2            # dropout rate for transformer decoder layer
        transformer_dec_positional_dropout_rate: 0.2 # dropout rate for transformer decoder positional encoding
        transformer_dec_attn_dropout_rate: 0.2       # dropout rate for transformer decoder attention layer
        pitch_predictor_layers: 5                    # number of conv layers in pitch predictor
        pitch_predictor_chans: 256                   # number of channels of conv layers in pitch predictor
        pitch_predictor_kernel_size: 5               # kernel size of conv layers in pitch predictor
        pitch_predictor_dropout: 0.5                 # dropout rate in pitch predictor
        pitch_embed_kernel_size: 1                   # kernel size of conv embedding layer for pitch
        pitch_embed_dropout: 0.0                     # dropout rate after conv embedding layer for pitch
        stop_gradient_from_pitch_predictor: true     # whether to stop the gradient from pitch predictor to encoder
        energy_predictor_layers: 2                   # number of conv layers in energy predictor
        energy_predictor_chans: 256                  # number of channels of conv layers in energy predictor
        energy_predictor_kernel_size: 3              # kernel size of conv layers in energy predictor
        energy_predictor_dropout: 0.5                # dropout rate in energy predictor
        energy_embed_kernel_size: 1                  # kernel size of conv embedding layer for energy
        energy_embed_dropout: 0.0                    # dropout rate after conv embedding layer for energy
        stop_gradient_from_energy_predictor: false   # whether to stop the gradient from energy predictor to encoder
        generator_out_channels: 1
        generator_channels: 512
        generator_global_channels: -1
        generator_kernel_size: 7
        generator_upsample_scales: [8, 8, 2, 2]
        generator_upsample_kernel_sizes: [16, 16, 4, 4]
        generator_resblock_kernel_sizes: [3, 7, 11]
        generator_resblock_dilations: [[1, 3, 5], [1, 3, 5], [1, 3, 5]]
        generator_use_additional_convs: true
        generator_bias: true
        generator_nonlinear_activation: "leakyrelu"
        generator_nonlinear_activation_params:
            negative_slope: 0.1
        generator_use_weight_norm: true
        segment_size: 64                             # segment size for random windowed discriminator
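
    # Note: the product of generator_upsample_scales (8 * 8 * 2 * 2 = 256) matches
    # n_shift above, so the HiFiGAN-style generator upsamples frame-rate features
    # back to the waveform sample rate. segment_size counts frames (following the
    # VITS/JETS convention), so each random window fed to the discriminator covers
    # 64 * 256 = 16384 samples, about 0.74 s at 22050 Hz.
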
    # discriminator related
    discriminator_type: hifigan_multi_scale_multi_period_discriminator
    discriminator_params:
        scales: 1
        scale_downsample_pooling: "AvgPool1D"
        scale_downsample_pooling_params:
            kernel_size: 4
            stride: 2
            padding: 2
        scale_discriminator_params:
            in_channels: 1
            out_channels: 1
            kernel_sizes: [15, 41, 5, 3]
            channels: 128
            max_downsample_channels: 1024
            max_groups: 16
            bias: True
            downsample_scales: [2, 2, 4, 4, 1]
            nonlinear_activation: "leakyrelu"
            nonlinear_activation_params:
                negative_slope: 0.1
            use_weight_norm: True
            use_spectral_norm: False
        follow_official_norm: False
        periods: [2, 3, 5, 7, 11]
        period_discriminator_params:
            in_channels: 1
            out_channels: 1
            kernel_sizes: [5, 3]
            channels: 32
            downsample_scales: [3, 3, 3, 3, 1]
            max_downsample_channels: 1024
            bias: True
            nonlinear_activation: "leakyrelu"
            nonlinear_activation_params:
                negative_slope: 0.1
            use_weight_norm: True
            use_spectral_norm: False
    # others
    sampling_rate: 22050          # needed in the inference for saving wav
    cache_generator_outputs: True # whether to cache generator outputs in the training
    use_alignment_module: False   # whether to use alignment module

###########################################################
#                       LOSS SETTING                       #
###########################################################
# loss function related
generator_adv_loss_params:
    average_by_discriminators: False # whether to average loss value by #discriminators
    loss_type: mse                   # loss type, "mse" or "hinge"
discriminator_adv_loss_params:
    average_by_discriminators: False # whether to average loss value by #discriminators
    loss_type: mse                   # loss type, "mse" or "hinge"
feat_match_loss_params:
    average_by_discriminators: False # whether to average loss value by #discriminators
    average_by_layers: False         # whether to average loss value by #layers of each discriminator
    include_final_outputs: True      # whether to include final outputs for loss calculation
mel_loss_params:
    fs: 22050        # must be the same as the training data
    fft_size: 1024   # fft points
    hop_size: 256    # hop size
    win_length: null # window length
    window: hann     # window type
    num_mels: 80     # number of Mel basis
    fmin: 0          # minimum frequency for Mel basis
    fmax: null       # maximum frequency for Mel basis
    log_base: null   # null represents natural log

###########################################################
#                 ADVERSARIAL LOSS SETTING                 #
###########################################################
lambda_adv: 1.0        # loss scaling coefficient for adversarial loss
lambda_mel: 45.0       # loss scaling coefficient for Mel loss
lambda_feat_match: 2.0 # loss scaling coefficient for feat match loss
lambda_var: 1.0        # loss scaling coefficient for variance predictor (duration/pitch/energy) loss
lambda_align: 2.0      # loss scaling coefficient for alignment loss
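# Schematically, the generator objective these coefficients weight is:
#   loss_G = lambda_adv * adv_loss + lambda_mel * mel_loss
#          + lambda_feat_match * feat_match_loss
#          + lambda_var * var_loss + lambda_align * align_loss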
# others
sampling_rate: 22050          # needed in the inference for saving wav
cache_generator_outputs: True # whether to cache generator outputs in the training

# extra module for additional inputs
pitch_extract: dio           # pitch extractor type
pitch_extract_conf:
    reduction_factor: 1
    use_token_averaged_f0: false
pitch_normalize: global_mvn  # normalizer for the pitch feature
energy_extract: energy       # energy extractor type
energy_extract_conf:
    reduction_factor: 1
    use_token_averaged_energy: false
energy_normalize: global_mvn # normalizer for the energy feature

###########################################################
#                    DATA LOADER SETTING                   #
###########################################################
batch_size: 32  # Batch size.
num_workers: 4  # Number of workers in DataLoader.

###########################################################
#              OPTIMIZER & SCHEDULER SETTING               #
###########################################################
# optimizer setting for generator
generator_optimizer_params:
    beta1: 0.8
    beta2: 0.99
    epsilon: 1.0e-9
    weight_decay: 0.0
generator_scheduler: exponential_decay
generator_scheduler_params:
    learning_rate: 2.0e-4
    gamma: 0.999875

# optimizer setting for discriminator
discriminator_optimizer_params:
    beta1: 0.8
    beta2: 0.99
    epsilon: 1.0e-9
    weight_decay: 0.0
discriminator_scheduler: exponential_decay
discriminator_scheduler_params:
    learning_rate: 2.0e-4
    gamma: 0.999875
generator_first: True # whether to start updating generator first

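# Assuming the scheduler steps once per update, exponential decay gives
# lr(n) = learning_rate * gamma^n, so 2.0e-4 halves roughly every
# ln(2) / (1 - gamma) ≈ 5.5k updates.
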
###########################################################
#                   OTHER TRAINING SETTING                 #
###########################################################
num_snapshots: 10         # max number of snapshots to keep while training
train_max_steps: 350000   # Number of training steps. == total_iters / ngpus, total_iters = 1000000
save_interval_steps: 1000 # Interval steps to save checkpoint.
eval_interval_steps: 250  # Interval steps to evaluate the network.
seed: 777                 # random seed number
@@ -0,0 +1,15 @@
#!/bin/bash

# Usage: ./local/inference.sh <train_output_path>
train_output_path=$1

stage=0
stop_stage=0

if [ ${stage} -le 0 ] && [ ${stop_stage} -ge 0 ]; then
    # Synthesize the sentences in sentences.txt with the exported jets_csmsc
    # model, writing wavs to ${train_output_path}/pd_infer_out.
    python3 ${BIN_DIR}/inference.py \
        --inference_dir=${train_output_path}/inference \
        --am=jets_csmsc \
        --text=${BIN_DIR}/../sentences.txt \
        --output_dir=${train_output_path}/pd_infer_out \
        --phones_dict=dump/phone_id_map.txt
fi