Merge r1.9.0 main #4331

Merged 46 commits from merge_r1.9.0_main into main on Jun 7, 2022.

Commits (46)
680b099  update branch (ericharper, May 2, 2022)
d25fa43  update package info (ericharper, May 2, 2022)
be2b7c9  cleaned up TN/ ITN doc (#4119) (yzhang123, May 6, 2022)
42f6a38  Draft: Fix restoring from checkpoint for case when `model.common_data… (PeganovAnton, May 9, 2022)
47c138c  fix doc (#4146) (yzhang123, May 11, 2022)
b4699ac  Tacotron2 retrain (#4103) (treacker, May 11, 2022)
0e95b67  Multiprocess improvements (#4127) (nithinraok, May 11, 2022)
afca46a  notebooks' link, typo and import fix (#4158) (fayejf, May 12, 2022)
f55c02b  update speaker docs (#4164) (nithinraok, May 13, 2022)
d10f605  small fix (#4180) (fayejf, May 16, 2022)
7872fc1  fix the server key value problem (#4196) (yidong72, May 18, 2022)
ffc0744  Fix/punctuation/trainer required for setting test data (#4199) (PeganovAnton, May 19, 2022)
5a33417  Update ContextNet version (#4207) (titu1994, May 19, 2022)
c9a5c1f  fix bugs for dialogue tutorial (#4211) (Zhilin123, May 19, 2022)
a5214a3  Dialogue tutorial fix (#4214) (Zhilin123, May 20, 2022)
871ad0c  Add docs for Thutmose Tagger (#4173) (bene-ges, May 20, 2022)
4e05ce5  Dialogue tutorial fix (#4218) (Zhilin123, May 21, 2022)
ea411b9  Dialogue tutorial fix (#4221) (Zhilin123, May 21, 2022)
778ca15  fix syntax error in ipynb-file (#4228) (bene-ges, May 23, 2022)
76111d9  fix json serialize (#4235) (yidong72, May 23, 2022)
524d584  Prompt Learning Typo Fixes (#4238) (vadam5, May 24, 2022)
8c26234  fixing bug 3642622 (#4250) (pasandi20, May 24, 2022)
33c4202  fix broken link in the tutorial (#4257) (bene-ges, May 25, 2022)
274a396  Typo fix, branch change, better download messagae (#4262) (vadam5, May 25, 2022)
56dd272  Raise error if bicleaner is not installed in NMT Data preprocesing no… (MaximumEntropy, May 25, 2022)
b02748e  Fix missing validation dataset, whitelist certain keywords for datase… (titu1994, May 26, 2022)
59f1ddf  Update asr configs with num_workers and pin_memory (#4270) (titu1994, May 26, 2022)
112fbf4  Fix epoch end (#4265) (MaximumEntropy, May 26, 2022)
c84a3df  Set Save on train end to false (#4274) (vadam5, May 26, 2022)
65a72be  Update YAML (#4261) (MaximumEntropy, May 26, 2022)
dfdb10c  Updated config to fix CI test OOM error (#4279) (vadam5, May 28, 2022)
ea798df  verbose k2 install, skip if failed (#4289) (GNroy, May 30, 2022)
7005667  Changed total virtual prompt tokens (#4295) (vadam5, Jun 1, 2022)
7e14b13  upper bound lightning (ericharper, Jun 1, 2022)
304856f  update branch (ericharper, Jun 3, 2022)
be63728  update config (ericharper, Jun 3, 2022)
a280fca  remove duplicate test (ericharper, Jun 3, 2022)
4d0b79b  Merge branch 'main' into merge_r1.9.0_main (vadam5, Jun 5, 2022)
68eaac2  Merge branch 'main' into merge_r1.9.0_main (ericharper, Jun 6, 2022)
dc4bc08  fix tn test cases (ericharper, Jun 7, 2022)
ace5750  Merge branch 'merge_r1.9.0_main' of github.com:NVIDIA/NeMo into merge… (ericharper, Jun 7, 2022)
6d719a9  Merge branch 'main' into merge_r1.9.0_main (ericharper, Jun 7, 2022)
3d84c14  Merge branch 'main' into merge_r1.9.0_main (ericharper, Jun 7, 2022)
e3c9559  add another safe.directory (ericharper, Jun 7, 2022)
d55f765  typo (ericharper, Jun 7, 2022)
fd070b1  Merge branch 'main' into merge_r1.9.0_main (ericharper, Jun 7, 2022)
Files changed
Dockerfile (2 changes: 1 addition, 1 deletion)

@@ -55,7 +55,7 @@ RUN for f in $(ls requirements*.txt); do pip install --disable-pip-version-check

 # install k2, skip if installation fails
 COPY scripts /tmp/nemo/scripts/
-RUN /bin/bash /tmp/nemo/scripts/speech_recognition/k2/setup.sh; exit 0
+RUN /bin/bash /tmp/nemo/scripts/speech_recognition/k2/setup.sh || exit 0

 # copy nemo source into a scratch image
 FROM scratch as nemo-src
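The single Dockerfile change swaps `; exit 0` for `|| exit 0` after the k2 setup script. Both forms leave the RUN layer with exit status 0 even when the install fails, matching the "skip if installation fails" comment, but they differ in when `exit 0` actually fires. A minimal sketch of the two idioms, using a hypothetical stand-in `setup.sh`:

    #!/bin/bash
    # Sketch only; 'setup.sh' is a stand-in for the k2 install script.

    ( bash setup.sh; exit 0 )    # 'exit 0' runs unconditionally: the shell
                                 # stops here with status 0 whether setup.sh
                                 # succeeded or failed, so any commands after
                                 # it in the same shell would never execute.

    ( bash setup.sh || exit 0 )  # 'exit 0' runs only when setup.sh fails;
                                 # on success the shell continues, so later
                                 # commands would still run.

With a single command in the RUN step the two are equivalent in effect; `||` states the skip-on-failure intent explicitly and stays correct if more commands are ever appended to the step.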
Jenkinsfile (54 changes: 26 additions, 28 deletions)

@@ -15,6 +15,7 @@ pipeline {
 stage('Add git safe directory'){
 steps{
 sh 'git config --global --add safe.directory /var/lib/jenkins/workspace/NeMo_$GIT_BRANCH'
+sh 'git config --global --add safe.directory /raid/JenkinsWorkDir/workspace/NeMo_$GIT_BRANCH'
 }
 }

@@ -1590,22 +1591,20 @@ pipeline {
 }
 failFast true
 stages {
-stage('Punctuation & Capitalization, Using model.common_dataset_parameters.label_vocab_dir') {
+stage('Punctuation & Capitalization, Using model.common_datasest_parameters.label_vocab_dir') {
 steps {
 sh 'cd examples/nlp/token_classification && \
-output_dir="$(mktemp -d -p "$(pwd)")" && \
-data_dir="$(mktemp -d -p "$(pwd)")" && \
-cp /home/TestData/nlp/token_classification_punctuation/*.txt "${data_dir}"/ && \
-label_vocab_dir="$(mktemp -d -p "$(pwd)")" && \
+label_vocab_dir=label_vocab_dir && \
+mkdir -p ${label_vocab_dir} && \
 punct_label_vocab="${label_vocab_dir}/punct_label_vocab.csv" && \
 capit_label_vocab="${label_vocab_dir}/capit_label_vocab.csv" && \
 printf "O\n,\n.\n?\n" > "${punct_label_vocab}" && \
 printf "O\nU\n" > "${capit_label_vocab}" && \
-python punctuation_capitalization_train_evaluate.py \
+CUDA_LAUNCH_BLOCKING=1 python punctuation_capitalization_train_evaluate.py \
 model.train_ds.use_tarred_dataset=false \
-model.train_ds.ds_item="${data_dir}" \
-model.validation_ds.ds_item="${data_dir}" \
-model.test_ds.ds_item="${data_dir}" \
+model.train_ds.ds_item=/home/TestData/nlp/token_classification_punctuation \
+model.validation_ds.ds_item=/home/TestData/nlp/token_classification_punctuation \
+model.test_ds.ds_item=/home/TestData/nlp/token_classification_punctuation \
 model.language_model.pretrained_model_name=distilbert-base-uncased \
 model.common_dataset_parameters.label_vocab_dir="${label_vocab_dir}" \
 model.class_labels.punct_labels_file="$(basename "${punct_label_vocab}")" \
@@ -1616,69 +1615,68 @@
 trainer.devices=[0,1] \
 trainer.strategy=ddp \
 trainer.max_epochs=1 \
-+exp_manager.explicit_log_dir="${output_dir}" \
++exp_manager.explicit_log_dir=/home/TestData/nlp/token_classification_punctuation/output \
 +do_testing=false && \
-python punctuation_capitalization_train_evaluate.py \
+CUDA_LAUNCH_BLOCKING=1 python punctuation_capitalization_train_evaluate.py \
 +do_training=false \
 +do_testing=true \
 ~model.train_ds \
 ~model.validation_ds \
-model.test_ds.ds_item="${data_dir}" \
-pretrained_model="${output_dir}/checkpoints/Punctuation_and_Capitalization.nemo" \
+model.test_ds.ds_item=/home/TestData/nlp/token_classification_punctuation \
+pretrained_model=/home/TestData/nlp/token_classification_punctuation/output/checkpoints/Punctuation_and_Capitalization.nemo \
 +model.train_ds.use_cache=false \
 +model.validation_ds.use_cache=false \
 +model.test_ds.use_cache=false \
 trainer.devices=[0,1] \
 trainer.strategy=ddp \
 trainer.max_epochs=1 \
 exp_manager=null && \
-rm -rf "${label_vocab_dir}" "${data_dir}" "${output_dir}"'
+rm -r "${label_vocab_dir}" && \
+rm -rf /home/TestData/nlp/token_classification_punctuation/output/*'
 }
 }
-stage('Punctuation & Capitalization, Using model.common_dataset_parameters.{punct,capit}_label_ids') {
+stage('Punctuation & Capitalization, Using model.common_datasest_parameters.{punct,capit}_label_ids') {
 steps {
 sh 'cd examples/nlp/token_classification && \
-output_dir="$(mktemp -d -p "$(pwd)")" && \
-data_dir="$(mktemp -d -p "$(pwd)")" && \
-cp /home/TestData/nlp/token_classification_punctuation/*.txt "${data_dir}"/ && \
-conf_path="$(mktemp -d -p "$(pwd)")" && \
+conf_path=/home/TestData/nlp/token_classification_punctuation && \
 conf_name=punctuation_capitalization_config_with_ids && \
 cp conf/punctuation_capitalization_config.yaml "${conf_path}/${conf_name}.yaml" && \
 sed -i $\'s/punct_label_ids: null/punct_label_ids: {O: 0, \\\',\\\': 1, .: 2, \\\'?\\\': 3}/\' \
 "${conf_path}/${conf_name}.yaml" && \
 sed -i $\'s/capit_label_ids: null/capit_label_ids: {O: 0, U: 1}/\' \
 "${conf_path}/${conf_name}.yaml" && \
-python punctuation_capitalization_train_evaluate.py \
+CUDA_LAUNCH_BLOCKING=1 python punctuation_capitalization_train_evaluate.py \
 --config-path "${conf_path}" \
 --config-name "${conf_name}" \
 model.train_ds.use_tarred_dataset=false \
-model.train_ds.ds_item="${data_dir}" \
-model.validation_ds.ds_item="${data_dir}" \
-model.test_ds.ds_item="${data_dir}" \
+model.train_ds.ds_item=/home/TestData/nlp/token_classification_punctuation \
+model.validation_ds.ds_item=/home/TestData/nlp/token_classification_punctuation \
+model.test_ds.ds_item=/home/TestData/nlp/token_classification_punctuation \
 model.language_model.pretrained_model_name=distilbert-base-uncased \
 +model.train_ds.use_cache=false \
 +model.validation_ds.use_cache=false \
 +model.test_ds.use_cache=false \
 trainer.devices=[0,1] \
 trainer.strategy=ddp \
 trainer.max_epochs=1 \
-+exp_manager.explicit_log_dir="${output_dir}" \
++exp_manager.explicit_log_dir=/home/TestData/nlp/token_classification_punctuation/output \
 +do_testing=false && \
-python punctuation_capitalization_train_evaluate.py \
+CUDA_LAUNCH_BLOCKING=1 python punctuation_capitalization_train_evaluate.py \
 +do_training=false \
 +do_testing=true \
 ~model.train_ds \
 ~model.validation_ds \
-model.test_ds.ds_item="${data_dir}" \
-pretrained_model="${output_dir}/checkpoints/Punctuation_and_Capitalization.nemo" \
+model.test_ds.ds_item=/home/TestData/nlp/token_classification_punctuation \
+pretrained_model=/home/TestData/nlp/token_classification_punctuation/output/checkpoints/Punctuation_and_Capitalization.nemo \
 +model.train_ds.use_cache=false \
 +model.validation_ds.use_cache=false \
 +model.test_ds.use_cache=false \
 trainer.devices=[0,1] \
 trainer.strategy=ddp \
 trainer.max_epochs=1 \
 exp_manager=null && \
-rm -rf "${output_dir}" "${data_dir}" "${conf_path}"'
+rm -rf /home/TestData/nlp/token_classification_punctuation/output/* && \
+rm "${conf_path}/${conf_name}.yaml"'
 }
 }
 }
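A note on the override syntax appearing throughout these stages: the NeMo example scripts are Hydra-based, so the commands mix plain `key=value` overrides with `+` (add a key absent from the base config) and `~` (delete a key). A minimal sketch of the grammar, with an illustrative data path:

    # Hydra override grammar as used in the stages above (illustrative path):
    #   key=value    change an existing config value
    #   +key=value   add a key not present in the base config
    #   ~key         delete a key (here: drop the train/validation sections)
    #   group=null   disable a whole config group (here: exp_manager)
    python punctuation_capitalization_train_evaluate.py \
        model.test_ds.ds_item=/path/to/test/data \
        +model.test_ds.use_cache=false \
        ~model.train_ds \
        ~model.validation_ds \
        exp_manager=null

The added `safe.directory` line mirrors the existing one for a second workspace root: recent git releases refuse to operate on a repository owned by a different user unless its path is explicitly allow-listed, so each checkout location the CI runners use needs its own entry.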
docs/source/nlp/prompt_learning.rst (31 changes: 21 additions, 10 deletions)

@@ -10,6 +10,8 @@ Instead of selecting discrete text prompts in a manual or automated fashion, pro

 Our continuous learning capability for combined p-tuning and prompt tuning with GPT style models is a NeMo specific extension of the author's original work.

+Please also checkout our `prompt learning tutorial notebook. <https://github.com/NVIDIA/NeMo/blob/main/tutorials/nlp/Multitask_Prompt_and_PTuning.ipynb>`_
+

 Terminology
 ^^^^^^^^^^

@@ -89,14 +91,17 @@ the input will be translated into ``VVV Hypothesis: And he said, Mama, I'm home.

 "prompt_template": "<|VIRTUAL_PROMPT_0|> {sentence} sentiment: {label}",
 "total_virtual_tokens": 10,
 "virtual_token_splits": [10],
-"truncate_field": "sentence"
+"truncate_field": "sentence",
+"answer_only_loss": False,
 },
 {
 "taskname": "intent_and_slot",
 "prompt_template": "<|VIRTUAL_PROMPT_0|> Predict intent and slot <|VIRTUAL_PROMPT_1|> :\n{utterance}{label}",
 "total_virtual_tokens": 10,
 "virtual_token_splits": [7, 3],
-"truncate_field": None
+"truncate_field": None,
+"answer_only_loss": True,
+"answer_field": "label"
 }
 ]

@@ -198,9 +203,9 @@ Setting New Tasks

 After you p-tune or prompt-tune your model, you can always go back and p-tune or prompt-tune your model on more tasks without over writing the virtual prompts who've trained already. You can also use a different number of ``total_virtual_tokens`` between each training session as long as tasks ptuned or prompt tuned at the same time have the same number of ``total_virtual_tokens``. For this reason, when you ptune on a new task, you need to tell your model which of your tasks are new and which ones already exist (and thus you don't want to tune them). You do this by setting the ``new_tasks`` and ``existing_tasks`` values in the config file.

-Example Multi-Task Prompt Tuning Command
+Example Multi-Task Prompt Tuning Config and Command
 ^^^^^^^^^^
-First define a config called ``multitask-prompt-learning.yaml`` that looks like:
+First define a config called ``multitask-prompt-learning.yaml`` demonstrated below. **In the** ``exp_manager`` **portion of the config,** ``save_on_train_end`` **should be set to** ``False`` **to avoid unnecessarily saving the incorrect model weights.** This is already done in the example `megatron_gpt_prompt_learning_config.yaml config <https://github.com/NVIDIA/NeMo/blob/main/examples/nlp/language_modeling/conf/megatron_gpt_prompt_learning_config.yaml>`_ that you should use as your starting point. The correct prompt learning model will be saved at the ``model.nemo_path`` you set.

 .. code::

@@ -229,12 +234,15 @@ First define a config called ``multitask-prompt-learning.yaml`` that looks like:

 total_virtual_tokens: 100
 virtual_token_splits: [100]
 truncate_field: null
+answer_only_loss: False

 - taskname: "intent_and_slot"
 prompt_template: "<|VIRTUAL_PROMPT_0|> Predict intent and slot <|VIRTUAL_PROMPT_1|> :\n{utterance}{label}"
 total_virtual_tokens: 100
 virtual_token_splits: [80, 20]
 truncate_field: null
+answer_only_loss: True
+answer_field: "label"

 prompt_tuning:
 new_prompt_init_methods: ["text", "text"]

@@ -259,7 +267,7 @@ Then run the command

 python megatron_gpt_prompt_learning.py --config-name=multitask-prompt-learning.yaml


-Example Multi-Task P-Tuning Command After Prompt-Tuning
+Example Multi-Task P-Tuning Config and Command After Prompt-Tuning
 ^^^^^^^^^^
 Update ``multitask-prompt-learning.yaml`` from the example above with p-tuning parameters for the new task. Be sure to update ``model.existing_tasks`` with the tasknames from previous prompt learning runs and to use the ``.nemo`` file saved at the end of your last prompt learning session. Values different from the config above have stars commented next to them.

@@ -284,28 +292,31 @@ In this example, the SQuAD task includes the question context as part of the pro

 restore_path: multitask_prompt_tuning.nemo # ***
 language_model_path: models/megatron_125M_gpt.nemo
 existing_tasks: ["sentiment", "intent_and_slot"] # ***
-new_tasks: ["sentiment", "intent_and_slot"]
+new_tasks: ["squad"]

 task_templates:
 - taskname: "sentiment"
 prompt_template: "<|VIRTUAL_PROMPT_0|> {sentence} sentiment: {label}"
 total_virtual_tokens: 100
 virtual_token_splits: [100]
 truncate_field: null
+answer_only_loss: False

 - taskname: "intent_and_slot"
 prompt_template: "<|VIRTUAL_PROMPT_0|> Predict intent and slot <|VIRTUAL_PROMPT_1|> :\n{utterance}{label}"
 total_virtual_tokens: 100
 virtual_token_splits: [80, 20]
 truncate_field: null
+answer_only_loss: True
+answer_field: "label"

 - taskname: "squad" # ***
-prompt_template: "<|VIRTUAL_PROMPT_0|> Answer the question from the context <|VIRTUAL_PROMPT_1|> {question} <|VIRTUAL_PROMPT_2|> {context} <|VIRTUAL_PROMPT_3|> Answer: {answer}" # ***
-total_virtual_tokens: 16 # ***
-virtual_token_splits: [4, 4, 4, 4] # ***
+prompt_template: "<|VIRTUAL_PROMPT_0|> Answer the question from the context {question} {context} Answer: {answer}" # ***
+total_virtual_tokens: 9 # ***
+virtual_token_splits: [9] # ***
 truncate_field: context # ***
 answer_only_loss: True # ***
-answer_field: 'answer # ***
+answer_field: "answer" # ***

 p_tuning: # ***
 dropout: 0.0 # ***
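The prompt_learning.rst changes above tighten the task templates: every template now sets `answer_only_loss`, templates that enable it also name an `answer_field`, and the SQuAD example drops the extra `<|VIRTUAL_PROMPT_n|>` markers so its single split matches its marker count. Two of the documented invariants are easy to check mechanically; the following is a sketch of such a check (not NeMo's actual validation code, and the `answer_field` rule is inferred from the examples above):

    import re

    def check_task_template(t: dict) -> None:
        # Sketch only: virtual_token_splits must sum to total_virtual_tokens,
        # and the template needs one <|VIRTUAL_PROMPT_n|> marker per split.
        splits = t["virtual_token_splits"]
        markers = re.findall(r"<\|VIRTUAL_PROMPT_\d+\|>", t["prompt_template"])
        assert sum(splits) == t["total_virtual_tokens"], "splits must sum to total_virtual_tokens"
        assert len(markers) == len(splits), "one <|VIRTUAL_PROMPT_n|> marker per split"
        if t.get("answer_only_loss"):
            assert t.get("answer_field"), "answer_only_loss: True needs an answer_field"

    # The corrected squad template from the diff passes all three checks:
    check_task_template({
        "taskname": "squad",
        "prompt_template": "<|VIRTUAL_PROMPT_0|> Answer the question from the context {question} {context} Answer: {answer}",
        "total_virtual_tokens": 9,
        "virtual_token_splits": [9],
        "answer_only_loss": True,
        "answer_field": "answer",
    })

This also shows why the old squad template had to change in lockstep: with `virtual_token_splits: [4, 4, 4, 4]` it needed all four markers, while the new single split `[9]` allows only `<|VIRTUAL_PROMPT_0|>`.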
docs/source/nlp/text_normalization/intro.rst (7 changes: 4 additions, 3 deletions)

@@ -1,6 +1,8 @@
 (Inverse) Text Normalization
 ============================

+NeMo supports Text Normalization (TN) and Inverse Text Normalization (ITN) tasks via rule-based `nemo_text_processing` python package and Neural-based TN/ITN models.
+
 Rule-based (WFST) TN/ITN:

@@ -9,11 +11,10 @@ Rule-based (WFST) TN/ITN:

    wfst/intro


-Neural TN/ITN:
+Neural-based TN/ITN:

 .. toctree::
    :maxdepth: 1

-   nn_text_normalization
-
+   neural_models

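The new intro sentence distinguishes the rule-based `nemo_text_processing` package from the neural models documented in neural_models.rst. For orientation, TN rewrites written forms into spoken forms (e.g. "10:30" into "ten thirty") and ITN does the reverse. A sketch of the rule-based API follows; the import paths and argument names are assumptions based on the nemo_text_processing layout around this release, not taken from this PR:

    # Sketch only; import paths and signatures are assumed.
    from nemo_text_processing.text_normalization.normalize import Normalizer
    from nemo_text_processing.inverse_text_normalization.inverse_normalize import InverseNormalizer

    tn = Normalizer(input_case="cased", lang="en")
    itn = InverseNormalizer(lang="en")

    # TN: written -> spoken, e.g. for TTS front ends
    print(tn.normalize("The meeting is at 10:30."))
    # ITN: spoken -> written, e.g. for ASR post-processing
    print(itn.inverse_normalize("it costs one hundred twenty three dollars", verbose=False))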
docs/source/nlp/text_normalization/neural_models.rst (new file, 23 additions)

@@ -0,0 +1,23 @@
+.. _neural_models:
+
+Neural Models for (Inverse) Text Normalization
+==============================================
+
+NeMo provides two types of neural models:
+
+
+Duplex T5-based TN/ITN:
+
+.. toctree::
+   :maxdepth: 1
+
+   nn_text_normalization
+
+
+Single-pass Tagger-based ITN:
+
+.. toctree::
+   :maxdepth: 1
+
+   text_normalization_as_tagging
+