MAD: Makeup All-in-One with Cross-Domain Diffusion Model

A unified cross-domain diffusion model for various makeup tasks

License: Apache-2.0

Pipeline Image

Bo-Kai Ruan, Hong-Han Shuai

🚀 A. Installation

Step 1: Create Environment

  • Ubuntu 22.04 with Python ≥ 3.10 (tested with GPU using CUDA 11.8)
conda create --name mad python=3.10 -y
conda activate mad

Step 2: Install Dependencies

conda install pytorch==2.0.1 torchvision==0.15.2 torchaudio==2.0.2 pytorch-cuda=11.8 -c pytorch -c nvidia -y
conda install xformers -c xformers -y
pip install -r requirements.txt

# Weights for landmarks
wget http://dlib.net/files/shape_predictor_68_face_landmarks.dat.bz2
bzip2 -d shape_predictor_68_face_landmarks.dat.bz2 && mkdir weights && mv shape_predictor_68_face_landmarks.dat weights
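
Optionally, you can verify that the landmark weights are readable with a short Python check; the sketch below assumes the weights/ folder created by the commands above.

# Optional sanity check: load the dlib 68-point landmark predictor.
# Assumes the weights/ folder created in the step above.
import dlib

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("weights/shape_predictor_68_face_landmarks.dat")
print("dlib landmark predictor loaded:", predictor is not None)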

Step 3: Prepare the Dataset

The following table provides download links for the datasets:

Dataset             | Download Link
MT Dataset          | all
BeautyFace Dataset  | images, parsing map

We recommend unzipping and placing the datasets in the same folder with the following structure:

📦 data
┣ 📂 mtdataset
┃ ┣ 📂 images
┃ ┃ ┣ 📂 makeup
┃ ┃ ┗ 📂 non-makeup
┃ ┣ 📂 parsing
┃ ┃ ┣ 📂 makeup
┃ ┃ ┗ 📂 non-makeup
┣ 📂 beautyface
┃ ┣ 📂 images
┃ ┗ 📂 parsing
┗ ...

Run misc/convert_beauty_face.py to convert the parsing maps for the BeautyFace dataset:

python misc/convert_beauty_face.py --original data/beautyface/parsing --output data/beautyface/parsing
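
Before training or inference, it may help to confirm that the layout matches the tree above. A minimal check, assuming the data/ folder sits in the project root:

# Minimal layout check for the recommended dataset structure above.
# Paths are assumptions; adjust them if the data is placed elsewhere.
from pathlib import Path

expected = [
    "data/mtdataset/images/makeup",
    "data/mtdataset/images/non-makeup",
    "data/mtdataset/parsing/makeup",
    "data/mtdataset/parsing/non-makeup",
    "data/beautyface/images",
    "data/beautyface/parsing",
]
for folder in expected:
    status = "ok" if Path(folder).is_dir() else "MISSING"
    print(f"{folder}: {status}")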

We also provide the labeling text dataset here.

📦 B. Usage

The pretrained weights are uploaded to Hugging Face 🤗.
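
If you prefer to fetch the weights ahead of time instead of letting the scripts resolve them, a sketch with huggingface_hub is shown below; the repo id Justin900/MAD is taken from the commands later in this section, and the exact files inside the snapshot may differ.

# Optional: download the pretrained weights locally via huggingface_hub.
# `Justin900/MAD` is the repo id used in the commands below; the exact
# contents of the snapshot are not documented here.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(repo_id="Justin900/MAD")
print("Pretrained weights cached at:", local_dir)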

B.1 Training a Model

  • With our model
# Single GPU
python main.py --config configs/model_256_256.yaml

# Multi-GPU
accelerate launch --multi_gpu --num_processes={NUM_OF_GPU} main.py --config configs/model_256_256.yaml
  • With Stable Diffusion
./script/train_text_to_image.sh

B.2 Beauty Filter or Makeup Removal

To use the beauty filter or perform makeup removal, create a .txt file listing the images. Here's an example:

makeup/xxxx1.jpg
makeup/xxxx2.jpg

Use the --source-label and --target-label arguments to choose between beauty filtering and makeup removal: 0 denotes makeup images and 1 denotes non-makeup images.
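
Such a list can also be generated programmatically. The sketch below assumes the MT dataset layout from Step 3, and the output filename is only an example:

# Build an image list for generate_translation.py.
# The output path assets/my_makeup_list.txt is just an example name.
from pathlib import Path

root = Path("data/mtdataset/images")
with open("assets/my_makeup_list.txt", "w") as f:
    for img in sorted((root / "makeup").glob("*.jpg")):
        # Entries are written relative to --source-root.
        f.write(f"makeup/{img.name}\n")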

For makeup removal:

python generate_translation.py \
    --config configs/model_256_256.yaml \
    --save-folder removal_results \
    --source-root data/mtdataset/images \
    --source-list assets/mt_makeup.txt \
    --source-label 0 \
    --target-label 1 \
    --num-process {NUM_PROCESS} \
    --opts MODEL.PRETRAINED Justin900/MAD

B.3 Makeup Transfer

For makeup transfer, prepare two .txt files: one for source images and one for target images. Example:

# File 1           |   # File 2
makeup/xxxx1.jpg   |   non-makeup/xxxx1.jpg
makeup/xxxx2.jpg   |   non-makeup/xxxx2.jpg
...                |   ...
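
Assuming the two lists are paired line by line as the example suggests, a quick length check can catch mismatches before running the script:

# Sanity check: the source and target lists should have the same number
# of lines (assuming line-by-line pairing as in the example above).
src = open("assets/nomakeup.txt").read().splitlines()
tgt = open("assets/beauty_makeup.txt").read().splitlines()
assert len(src) == len(tgt), f"{len(src)} source vs {len(tgt)} target entries"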

To apply makeup transfer:

python generate_transfer.py \
    --config configs/model_256_256.yaml \
    --save-folder transfer_result \
    --source-root data/mtdataset/images \
    --target-root data/beautyface/images \
    --source-list assets/nomakeup.txt \
    --target-list assets/beauty_makeup.txt \
    --source-label 1 \
    --target-label 0 \
    --num-process {NUM_PROCESS} \
    --inpainting \
    --cam \
    --opts MODEL.PRETRAINED Justin900/MAD

B.4 Text Modification

For text modification, prepare a JSON file:

[
  {"image": "xxx.jpg", "style": "makeup with xxx"}
  ...
]
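
A file in this format can be written with a few lines of Python; the image name and style string below are placeholders:

# Write a text-editing spec in the format shown above.
# The image name and style string are placeholders.
import json

entries = [
    {"image": "xxx.jpg", "style": "makeup with xxx"},
]
with open("assets/text_editing.json", "w") as f:
    json.dump(entries, f, indent=2)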
Then run:

python generate_text_editing.py \
    --save-folder text_editing_results \
    --source-root data/mtdataset/images \
    --source-list assets/text_editing.json \
    --num-process {NUM_PROCESS} \
    --model-path Justin900/MAD

🎨 C. Web UI (Beta)

Users can start the web UI and access our Gradio interface at localhost:7860:

python app.py

Note: Please place the weights as follows:

📦 {PROJECT_ROOT}
┣ 📂 makeup_checkpoint.pth  # For our model
┣ 📂 text_checkpoint.pth    # For SD model
┗ ...

(Gradio web UI screenshot)

Citation

@article{ruan2025mad,
  title={MAD: Makeup All-in-One with Cross-Domain Diffusion Model},
  author={Ruan, Bo-Kai and Shuai, Hong-Han},
  journal={arXiv preprint arXiv:2504.02545},
  year={2025}
}
