Semantic analysis of visible (RGB) and infrared (IR) images has attracted increasing attention because it remains accurate and robust under low-illumination and complex weather conditions. Due to the lack of foundation models pre-trained on large-scale infrared image datasets, existing methods tend to design task-specific frameworks and directly fine-tune them from RGB pre-trained foundation models on their own RGB-IR datasets, which results in poor scalability and limited generalization. In this work, we propose a general and efficient framework called UniRGB-IR to unify RGB-IR semantic tasks, in which a novel adapter is developed to efficiently introduce richer RGB-IR features into the pre-trained RGB-based foundation model. Specifically, our framework consists of an RGB-based foundation model, a Multi-modal Feature Pool (MFP) module, and a Supplementary Feature Injector (SFI) module. The MFP and SFI modules cooperate as an adapter to effectively complement the RGB-based features with rich RGB-IR features. During training, we freeze the entire foundation model to inherit its prior knowledge and only optimize the proposed adapter. Furthermore, to verify the effectiveness of our framework, we adopt the vanilla vision transformer (ViT-Base) as the pre-trained foundation model and perform extensive experiments. Experimental results on various RGB-IR downstream tasks demonstrate that our method achieves state-of-the-art performance.
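As a rough illustration of the adapter-tuning recipe above, the sketch below freezes a pre-trained backbone and hands only the adapter parameters to the optimizer. The names `foundation_model` and `adapter` are hypothetical stand-ins for the ViT-Base backbone and the MFP/SFI modules; the actual classes live in this repository's code.

```python
import torch
import torch.nn as nn

def setup_adapter_tuning(foundation_model: nn.Module, adapter: nn.Module, lr: float = 1e-4):
    """Freeze the pre-trained RGB foundation model; optimize only the adapter (MFP + SFI)."""
    # Freeze every backbone parameter so the pre-trained prior knowledge stays intact.
    for p in foundation_model.parameters():
        p.requires_grad = False

    # Only the adapter parameters are handed to the optimizer.
    trainable = [p for p in adapter.parameters() if p.requires_grad]
    return torch.optim.AdamW(trainable, lr=lr)

# Usage sketch (both modules are placeholders):
# optimizer = setup_adapter_tuning(vit_base_backbone, mfp_sfi_adapter)
```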
- Create and activate the conda environment:
conda env create -f environment.yml
conda activate <env_name>  # the environment name is defined in environment.yml
- Install detection package:
cd detection/
pip install -v -e .
- Install segmentation package:
cd segmentation/
pip install -v -e .
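After installation, a quick import check along the lines below can confirm the environment is usable. It assumes the detection and segmentation packages build on the OpenMMLab stack (as the mmsegmentation data format and OpenMMLab weight format mentioned later suggest); the exact package versions come from `environment.yml`.

```python
# Sanity-check the environment after installation (the package set is an assumption).
import torch

print("torch:", torch.__version__, "| CUDA available:", torch.cuda.is_available())

for name in ("mmcv", "mmdet", "mmseg"):
    try:
        module = __import__(name)
        print(name, getattr(module, "__version__", "unknown version"))
    except ImportError:
        print(name, "is not installed - re-check the steps above")
```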
Object Detection: FLIR, KAIST, and LLVIP all need to be reformatted. Taking FLIR as an example, the directory structure should be:
FLIR_align/
├── train/
├── test/
├── Annotation_train.json
├── Annotation_test.json
Then, replace `/path/to/Datasets/` in the configuration file with the parent path of your local `FLIR_align/` directory.
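For reference, a dataset entry in an OpenMMLab-style detection config typically looks like the hypothetical excerpt below; only the `/path/to/Datasets/` part needs to change, and the exact keys in this repository's configs may differ.

```python
# Hypothetical excerpt of an mmdetection-style dataset config; key names are assumptions.
data_root = '/home/user/Datasets/'  # replace /path/to/Datasets/ with the parent of FLIR_align/

train_dataloader = dict(
    dataset=dict(
        data_root=data_root,
        ann_file='FLIR_align/Annotation_train.json',
        data_prefix=dict(img='FLIR_align/train/'),
    ))
```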
Semantic Segmentation: Both MFNet and PST900 need to be formatted according to the mmsegmentation format. Taking MFNet as an example, the directory structure should be:
mfnet_mmseg/
├── annotations/
│ ├── train/
│ └── val/
├── images/
│ ├── train/
│ └── val/
Then, replace `/path/to/Datasets/` in the configuration file with the parent path of your local `mfnet_mmseg/` directory.
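A small sanity check like the following can confirm the mmsegmentation-style layout is consistent, i.e. every image has a matching annotation; matching by file stem is an assumption about the reformatted dataset, not a guarantee.

```python
# Check that every image in mfnet_mmseg/ has a matching annotation (hedged sketch).
from pathlib import Path

root = Path('/path/to/Datasets/mfnet_mmseg')  # replace with your local path

for split in ('train', 'val'):
    image_stems = {p.stem for p in (root / 'images' / split).iterdir()}
    mask_stems = {p.stem for p in (root / 'annotations' / split).iterdir()}
    missing = sorted(image_stems - mask_stems)
    print(f'{split}: {len(image_stems)} images, {len(missing)} missing annotations')
```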
Salient Object Detection: All datasets in the VT series need to be reformatted. Using VT5000 as an example, the directory structure should be:
VT5000/
├── Train/
│ ├── RGB/
│ ├── T/
│ └── GT/
├── Test/
│ ├── RGB/
│ ├── T/
│ └── GT/
Then, replace `/path/to/Datasets/` in the configuration file with the parent path of your local `VT5000/` directory.
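The reformatted VT layout pairs an RGB image, a thermal image, and a ground-truth mask per sample; a hedged sketch of walking those triplets is shown below (matching by file stem is an assumption, so adjust it if the RGB/T/GT files use different names or extensions).

```python
# Iterate RGB / thermal / GT triplets in the reformatted VT layout (hedged sketch).
from pathlib import Path

def iter_triplets(root, split='Train'):
    split_dir = Path(root) / split
    thermal = {p.stem: p for p in (split_dir / 'T').iterdir()}
    gt = {p.stem: p for p in (split_dir / 'GT').iterdir()}
    for rgb_path in sorted((split_dir / 'RGB').iterdir()):
        if rgb_path.stem in thermal and rgb_path.stem in gt:
            yield rgb_path, thermal[rgb_path.stem], gt[rgb_path.stem]

for rgb, ir, mask in iter_triplets('/path/to/Datasets/VT5000'):
    print(rgb.name, ir.name, mask.name)  # one aligned sample: RGB image, thermal image, GT mask
    break
```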
Visit the ViTDet page in the Detectron2 repository, download the ViT-Base model weights, and convert them to the OpenMMLab weight format. Then, replace `/path/to/vitb_coco_IN1k_mae_coco_cascade-mask-rcnn_224x224_withClsToken_noRel.pth` in the configuration file with the local path of the converted weights.
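Detectron2 checkpoints are pickled dictionaries of NumPy arrays, so the conversion can be sketched roughly as below. The downloaded file name and the key renaming are illustrative assumptions only; the exact mapping expected by this repository's configs is not reproduced here, so prefer any converter provided with the code if one exists.

```python
# Hedged sketch: convert a Detectron2 ViTDet checkpoint (.pkl) into a torch
# state_dict that OpenMMLab tools can load. The key mapping is illustrative only.
import pickle
import numpy as np
import torch

with open('vitdet_base_cascade_mask_rcnn.pkl', 'rb') as f:  # hypothetical downloaded file name
    ckpt = pickle.load(f, encoding='latin1')

state_dict = {}
for key, value in ckpt['model'].items():
    tensor = torch.from_numpy(value) if isinstance(value, np.ndarray) else torch.as_tensor(value)
    # Example rename; the prefix expected by the target config may differ.
    state_dict[key.replace('backbone.net.', 'backbone.')] = tensor

torch.save({'state_dict': state_dict},
           'vitb_coco_IN1k_mae_coco_cascade-mask-rcnn_224x224_withClsToken_noRel.pth')
```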
- Object detection:
cd detection/
sh scripts/train_od.sh
- Salient object detection:
cd segmentation/
sh scripts/train_sod.sh
- Semantic segmentation:
cd segmentation/
sh scripts/train_seg.sh
- Release the core code.
- Release pre-trained weights (on-going).
If you find this code useful for your research, please consider citing:
@article{yuan2024unirgb,
title={UniRGB-IR: A Unified Framework for Visible-Infrared Downstream Tasks via Adapter Tuning},
  author={Yuan, Maoxun and Cui, Bo and Zhao, Tianyi and Wei, Xingxing},
journal={arXiv preprint arXiv:2404.17360},
year={2024}
}