Semantic analysis on visible (RGB) and infrared (IR) images has gained attention because fusing the two modalities yields more accurate and robust results under low-illumination and complex weather conditions. Due to the lack of foundation models pre-trained on large-scale infrared image datasets, existing methods tend to design task-specific frameworks and directly fine-tune them from RGB pre-trained foundation models on their RGB-IR semantic relevance datasets, which results in poor scalability and limited generalization. In this work, we propose a general and efficient framework called UniRGB-IR to unify RGB-IR semantic tasks, in which a novel adapter efficiently introduces richer RGB-IR features into a pre-trained RGB-based foundation model. Specifically, our framework consists of an RGB-based foundation model, a Multi-modal Feature Pool (MFP) module, and a Supplementary Feature Injector (SFI) module. The MFP and SFI modules cooperate as an adapter to complement the RGB-based features with rich RGB-IR features. During training, we freeze the entire foundation model to inherit its prior knowledge and optimize only the proposed adapter. To verify the effectiveness of our framework, we use the vanilla vision transformer (ViT-Base) as the pre-trained foundation model in extensive experiments. Results on various RGB-IR downstream tasks demonstrate that our method achieves state-of-the-art performance.
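The adapter-tuning recipe described above (freeze the pre-trained foundation model, optimize only the adapter) can be sketched in PyTorch. The class and module names below (`AdapterTunedModel`, the toy `backbone` and `adapter`) are illustrative assumptions, not the repository's actual API; the real MFP/SFI adapter is considerably more elaborate:

```python
import torch
import torch.nn as nn

class AdapterTunedModel(nn.Module):
    """Minimal sketch of adapter tuning: a frozen backbone plus a small
    trainable adapter that supplements the backbone's RGB features with
    RGB-IR features. Names are hypothetical, not UniRGB-IR's real API."""

    def __init__(self, backbone: nn.Module, adapter: nn.Module):
        super().__init__()
        self.backbone = backbone
        self.adapter = adapter
        # Freeze the pre-trained foundation model to keep its prior knowledge.
        for p in self.backbone.parameters():
            p.requires_grad = False

    def forward(self, rgb: torch.Tensor, ir: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(rgb)                         # frozen RGB features
        supp = self.adapter(torch.cat([rgb, ir], dim=1))   # RGB-IR supplement
        return feats + supp                                # inject supplement

# Toy stand-ins so the sketch runs end to end (real models are ViT-based).
backbone = nn.Sequential(nn.Conv2d(3, 8, 3, padding=1))
adapter = nn.Sequential(nn.Conv2d(4, 8, 3, padding=1))  # RGB(3) + IR(1) channels
model = AdapterTunedModel(backbone, adapter)

# Only the adapter's parameters are handed to the optimizer.
trainable = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.AdamW(trainable, lr=1e-4)
```

Because the backbone's parameters have `requires_grad=False`, gradients flow through it to the inputs but its weights never update, so the prior knowledge of the pre-trained model is preserved while the lightweight adapter learns the RGB-IR supplement.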
- Create and activate the conda environment:
```shell
conda env create -f environment.yml
conda activate <env-name>  # env name is defined in environment.yml
```
- Install the detection package:
```shell
cd detection/
pip install -v -e .
```
- Install the segmentation package:
```shell
cd segmentation/
pip install -v -e .
```
- Train RGB-IR object detection:
```shell
cd detection/
sh scripts/train_od.sh
```
- Train RGB-IR salient object detection:
```shell
cd segmentation/
sh scripts/train_sod.sh
```
- Train RGB-IR semantic segmentation:
```shell
cd segmentation/
sh scripts/train_seg.sh
```
- Release the core code.
- Release pre-trained weights.
If you find this code useful for your research, please consider citing:
```
@article{yuan2024unirgb,
  title={UniRGB-IR: A Unified Framework for Visible-Infrared Downstream Tasks via Adapter Tuning},
  author={Yuan, Maoxun and Cui, Bo and Zhao, Tianyi and Wei, Xingxing},
  journal={arXiv preprint arXiv:2404.17360},
  year={2024}
}
```