LLaVA-VSD: Large Language-and-Vision Assistant for Visual Spatial Description [arXiv] [huggingface]
Yizhang Jin<sup>1,2</sup>, Jian Li<sup>1</sup>, Jiangning Zhang<sup>1</sup>, Jianlong Hu<sup>1</sup>, Zhenye Gan<sup>1</sup>, Xin Tan<sup>3</sup>, Yong Liu<sup>1</sup>, Yabiao Wang<sup>1</sup>, Chengjie Wang<sup>1</sup>, Lizhuang Ma<sup>2</sup>
<sup>1</sup>Tencent YouTu Lab, <sup>2</sup>SJTU, <sup>3</sup>ECNU
⚡ If you have any questions, please contact swordli@tencent.com. We welcome collaboration on academic research and paper writing.
@article{jin2024llava,
  title={LLaVA-VSD: Large Language-and-Vision Assistant for Visual Spatial Description},
  author={Jin, Yizhang and Li, Jian and Zhang, Jiangning and Hu, Jianlong and Gan, Zhenye and Tan, Xin and Liu, Yong and Wang, Yabiao and Wang, Chengjie and Ma, Lizhuang},
  journal={arXiv preprint arXiv:2408.04957},
  year={2024}
}
Visual Spatial Description (VSD) aims to generate text that describes the spatial relationships between objects within images. Traditional visual spatial relationship classification (VSRC) methods typically output only the spatial relation between two objects in an image, often neglecting world knowledge and lacking general language capabilities. In this paper, we propose LLaVA-VSD, a Large Language-and-Vision Assistant for Visual Spatial Description, designed for the classification, description, and open-ended description of visual spatial relationships. Specifically, we first construct a VSD instruction-following dataset from the given figure-caption pairs, covering the three tasks. We then employ LoRA to fine-tune a Large Language and Vision Assistant for VSD, which has 13 billion parameters and supports high-resolution images. Finally, a large language model is used to refine the generated sentences, enhancing their diversity and accuracy. LLaVA-VSD demonstrates excellent multimodal conversational capabilities and can follow open-ended instructions to assist with inquiries about object relationships in images.
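For readers unfamiliar with instruction tuning, the sketch below illustrates what LLaVA-style training records for the three VSD tasks might look like, together with a minimal LoRA configuration via Hugging Face `peft`. The field names, prompt wording, image file name, and LoRA hyperparameters are illustrative assumptions, not the released dataset schema or the exact training setup used in the paper.

```python
# Illustrative only: the actual LLaVA-VSD-120K schema and training
# hyperparameters may differ from this sketch.
from peft import LoraConfig  # Hugging Face PEFT library

# One hypothetical instruction-following record per task, in the
# conversation format commonly used by LLaVA-style models.
example_records = [
    {   # visual spatial relationship classification (VSRC)
        "image": "000000123456.jpg",
        "conversations": [
            {"from": "human",
             "value": "<image>\nWhat is the spatial relation between the dog and the sofa?"},
            {"from": "gpt", "value": "on"},
        ],
    },
    {   # visual spatial description (VSD)
        "image": "000000123456.jpg",
        "conversations": [
            {"from": "human",
             "value": "<image>\nDescribe the spatial relationship between the dog and the sofa in one sentence."},
            {"from": "gpt", "value": "A small dog is lying on the sofa."},
        ],
    },
    {   # open-ended spatial description
        "image": "000000123456.jpg",
        "conversations": [
            {"from": "human",
             "value": "<image>\nDescribe where the objects in this image are located relative to each other."},
            {"from": "gpt",
             "value": "A small dog lies on the sofa, next to a cushion near the armrest."},
        ],
    },
]

# A minimal LoRA configuration for parameter-efficient fine-tuning of the
# 13B language backbone; r, alpha, and target_modules are placeholder values.
lora_config = LoraConfig(
    r=64,
    lora_alpha=128,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
```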
We release our instruction-tuning dataset LLaVA-VSD-120K; please visit Hugging Face.
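A minimal sketch of loading the dataset with the `datasets` library is shown below; the repository id is a placeholder, so please check the Hugging Face page linked above for the actual identifier.

```python
from datasets import load_dataset

# Placeholder repo id: replace with the actual LLaVA-VSD-120K path
# listed on the project's Hugging Face page.
ds = load_dataset("ORG_NAME/LLaVA-VSD-120K", split="train")
print(ds[0])  # inspect one instruction-following record
```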