AdaLabel

Code/data for ACL'21 paper "Diversifying Dialog Generation via Adaptive Label Smoothing".

We implemented an Adaptive Label Smoothing (AdaLabel) approach that adaptively estimates a target label distribution at each time step for different contexts. Our method extends the traditional MLE loss. The current implementation targets dialogue generation, but the approach can be readily extended to other text generation tasks such as summarization. Please refer to our paper for more details.
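As a point of reference, traditional label smoothing mixes the one-hot target with a uniform distribution using one fixed weight. The sketch below contrasts that with a toy adaptive variant whose smoothing weight depends on the model's per-step confidence. This is only an illustration of the idea of a per-step adaptive target distribution, not the paper's actual estimator; the function and the `0.2 * confidence` schedule are made up for the example.

```python
import torch
import torch.nn.functional as F

def smoothed_nll(logits, target, epsilon):
    """Loss against a soft target that puts weight (1 - epsilon) on the
    gold token and spreads epsilon uniformly over the whole vocabulary.
    epsilon may be a scalar or a per-time-step tensor."""
    log_probs = F.log_softmax(logits, dim=-1)
    nll = -log_probs.gather(-1, target.unsqueeze(-1)).squeeze(-1)
    uniform = -log_probs.mean(dim=-1)
    return (1.0 - epsilon) * nll + epsilon * uniform

logits = torch.randn(4, 10)          # (time steps, vocab size)
target = torch.tensor([1, 3, 5, 7])  # gold token id at each step

# Traditional label smoothing: one fixed epsilon for every step.
fixed_loss = smoothed_nll(logits, target, 0.1).mean()

# Toy adaptive variant: smooth harder at steps where the model is
# already confident, less where it is uncertain (illustrative schedule).
with torch.no_grad():
    confidence = F.softmax(logits, dim=-1).max(dim=-1).values
adaptive_loss = smoothed_nll(logits, target, 0.2 * confidence).mean()
```

With `epsilon = 0` the loss reduces to the ordinary MLE cross-entropy, which is the sense in which the method extends the MLE loss.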

Our implementation is based on the OpenNMT-py project, so most behaviors of our code follow the default settings of OpenNMT-py. Specifically, we forked from this commit of OpenNMT-py and implemented our code on top of it. This repo preserves all previous commits of OpenNMT-py and ignores all follow-up commits. Our changes can be viewed by comparing the commits.

Our code is tested on Ubuntu 16.04 using python 3.7.4 and PyTorch 1.7.1.

How to use

Step 1: Setup

Install dependencies:

conda create -n adalabel python=3.7.4
conda activate adalabel
conda install pytorch==1.7.1 cudatoolkit=10.1 -c pytorch -n adalabel
pip install -r requirement.txt

Make folders to store training and testing files:

mkdir checkpoint  # Model checkpoints will be saved here
mkdir log_dir     # The training log will be placed here
mkdir result      # The inferred results will be saved here

Step 2: Preprocess the data

The data can be downloaded from this link. After downloading and unzipping, the DailyDialog and OpenSubtitle datasets used in our paper can be found in the data_daily and data_ost folders, respectively. We provide a script scripts/preprocess.sh to preprocess the data.

bash scripts/preprocess.sh

Note:

  • Before running scripts/preprocess.sh, remember to modify its first line (i.e., the value of DATA_DIR) to specify the correct data folder.
  • The default tokenizer is bert-base-uncased.
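For example, if the data was unzipped into the repository root, the first line of scripts/preprocess.sh could look as follows. The variable name DATA_DIR comes from the note above; the exact path depends on where you unzipped the data.

```shell
# First line of scripts/preprocess.sh: point DATA_DIR at the dataset folder.
DATA_DIR=data_daily   # use data_ost for the OpenSubtitle dataset
```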

Step 3: Train the model

The training of our model can be performed using the following script:

bash scripts/train_daily.sh   # Train models on the DailyDialog dataset

or

bash scripts/train_ost.sh     # Train models on the OpenSubtitle dataset

Note:

  • The resulting checkpoints will be written to the checkpoint folder.
  • By default, our script uses the first available GPU.
  • Once training completes, the training script logs the best-performing model on the validation set.
  • Experiments in our paper were performed on a TITAN Xp GPU with 12GB of memory.

Step 4: Inference

The inference of our model can be performed using the following script:

bash scripts/inference_daily.sh {which GPU to use} {path to your model checkpoint}   # Run inference on the DailyDialog dataset

or

bash scripts/inference_ost.sh {which GPU to use} {path to your model checkpoint}     # Run inference on the OpenSubtitle dataset

Note:

  • Inferred outputs will be saved to the result folder.

Step 5: Evaluation

The following script can be used to evaluate our model based on the inferred outputs obtained in Step 4:

python scripts/eval.py {path to the data folder} {path to the inferred output file}
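The exact metrics are defined in scripts/eval.py itself. As a reference point, distinct-n is a common diversity metric for dialog generation: the ratio of unique n-grams to total n-grams across the generated responses. A minimal sketch (the function name and example responses are illustrative, not taken from eval.py):

```python
from collections import Counter

def distinct_n(sentences, n):
    """Distinct-n: unique n-grams divided by total n-grams over all outputs."""
    counts = Counter()
    for sent in sentences:
        tokens = sent.split()
        counts.update(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))
    total = sum(counts.values())
    return len(counts) / total if total else 0.0

outputs = ["i do not know", "i do not think so", "that sounds great"]
print(round(distinct_n(outputs, 1), 3), round(distinct_n(outputs, 2), 3))  # → 0.75 0.778
```

Higher values indicate more diverse outputs; generic responses such as "i do not know" repeated across contexts drive the score down.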

Citation

Please cite our paper if you find this repo useful :)

@inproceedings{wang2021adalabel,
  title={Diversifying Dialog Generation via Adaptive Label Smoothing},
  author={Wang, Yida and Zheng, Yinhe and Jiang, Yong and Huang, Minlie},
  booktitle={Proceedings of the 59th Annual Meeting of the Association for Computational Linguistics},
  year={2021}
}

Issues and pull requests are welcome.
