Unofficial simplified version of TMF-Matting with minimal dependencies.
Given an image and a trimap, compute its alpha matte.
| Input image | Input trimap | Output alpha matte |
|---|---|---|
| ![]() | ![]() | ![]() |
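A trimap partitions the input image into known background, known foreground, and an unknown region; the network only has to estimate alpha values in the unknown region. The following is a minimal sketch of that convention using synthetic data (the common encoding 0 = background, 255 = foreground, 128 = unknown is assumed here):

```python
import numpy as np

# Toy 4x4 trimap: 0 = background, 255 = foreground, 128 = unknown.
trimap = np.array([
    [0,   0,   128, 255],
    [0,   128, 128, 255],
    [0,   128, 255, 255],
    [0,   0,   128, 255],
], dtype=np.uint8)

alpha = np.full(trimap.shape, 0.5)  # stand-in for the network's prediction
alpha[trimap == 0] = 0.0            # known background is fully transparent
alpha[trimap == 255] = 1.0          # known foreground is fully opaque

# Only these pixels actually need to be estimated by the matting network.
unknown = (trimap != 0) & (trimap != 255)
print(unknown.sum())  # -> 5
```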
The test image is from https://alphamatting.com/datasets.php.
- Install PyTorch.
- Install Pillow:

```bash
pip install pillow
```

- Clone this repository:

```bash
git clone https://github.com/99991/Simple-TMF-Matting.git
cd Simple-TMF-Matting
```

- Download the pretrained `comp1k.pth` model from the original authors' repository and place it in this directory.
- Run the demo, which downloads the test images and computes the alpha matte:

```bash
python test_single_image.py
```
If you find TMFNet useful in your research, please consider citing the original authors:

```bibtex
@article{jiang2023trimap,
  title={Trimap-guided feature mining and fusion network for natural image matting},
  author={Jiang, Weihao and Yu, Dongdong and Xie, Zhaozhi and Li, Yaoyi and Yuan, Zehuan and Lu, Hongtao},
  journal={Computer Vision and Image Understanding},
  volume={230},
  pages={103645},
  year={2023},
  publisher={Elsevier}
}
```
- Download the pretrained model as above.
- Ask Brian Price to send you `Adobe_Deep_Matting_Dataset.zip` and place it in this directory. Do not unzip it.
- Download and extract the images of the Pascal VOC2012 dataset to the directory `PascalVOC2012`. If you already have them somewhere else, you can instead link them:

```bash
ln -s YOUR_PASCAL_DIR/VOCdevkit/VOC2012/JPEGImages/ PascalVOC2012
```

- Run the evaluation script:

```bash
python test_composition_1k_dataset.py
```
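The Composition-1k test set is built by compositing each foreground onto Pascal VOC backgrounds with the standard compositing equation I = αF + (1 − α)B. A minimal sketch with synthetic data (shapes and value ranges are assumptions, not the repository's actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(0)
fg = rng.random((32, 32, 3))   # synthetic foreground colors in [0, 1]
bg = rng.random((32, 32, 3))   # synthetic background image in [0, 1]
alpha = rng.random((32, 32))   # synthetic ground-truth matte in [0, 1]

# Compositing equation: I = alpha * F + (1 - alpha) * B, applied per channel.
composite = alpha[..., None] * fg + (1.0 - alpha[..., None]) * bg
print(composite.shape)  # -> (32, 32, 3)
```

Because the backgrounds must be resized to match each foreground, details like the interpolation method used for that resize change the composites slightly from one implementation to another.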
| MSE × 1000 | SAD / 1000 |
|---|---|
| 4.547 | 22.410 |
MSE is slightly worse and SAD is slightly better than the original implementation. Minor details such as the background interpolation method can cause differences of this magnitude, so the results are probably acceptable.
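For reference, these metrics are conventionally computed over the unknown trimap region, with MSE scaled up by 1000 and SAD (sum of absolute differences) scaled down by 1000. A minimal sketch of that convention, assuming alpha mattes with values in [0, 1] (this is not the repository's exact evaluation code):

```python
import numpy as np

def matting_metrics(pred, true, trimap):
    """MSE (x 1000) and SAD (/ 1000) over the unknown trimap region."""
    unknown = (trimap != 0) & (trimap != 255)
    diff = pred[unknown] - true[unknown]
    mse = 1000.0 * np.mean(diff ** 2)
    sad = np.sum(np.abs(diff)) / 1000.0
    return mse, sad

# Toy example with a synthetic ground-truth matte and a noisy prediction.
rng = np.random.default_rng(0)
true = rng.random((64, 64))
pred = np.clip(true + rng.normal(0.0, 0.05, true.shape), 0.0, 1.0)
trimap = np.full(true.shape, 128, dtype=np.uint8)  # everything "unknown"

mse, sad = matting_metrics(pred, true, trimap)
print(f"MSE x 1000: {mse:.3f}, SAD / 1000: {sad:.3f}")
```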