# aero-vloc

Framework for aerial imagery localization using different VPR systems.

This is the official repository for the paper "Visual place recognition for aerial imagery: A survey".

## Summary

This paper introduces a methodology tailored to evaluating VPR techniques in the domain of aerial imagery, providing a comprehensive assessment of various methods and their performance. Beyond comparing VPR methods, we also demonstrate the importance of selecting appropriate zoom and overlap levels when constructing map tiles in order to get the best performance from VPR algorithms on aerial imagery.
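
To illustrate what the overlap level means for map-tile construction, here is a generic sketch (not aero-vloc's implementation; the function name and parameters are ours) that computes tile origins along one map axis for a given tile size and overlap fraction:

```python
def tile_origins(extent, tile, overlap):
    """Origins of overlapping tiles along one axis.

    extent: map size in pixels; tile: tile size in pixels;
    overlap: fraction of each tile shared with its neighbor, in [0, 1).
    """
    step = int(tile * (1 - overlap))
    origins = list(range(0, max(extent - tile, 0) + 1, step))
    # Ensure the final tile reaches the map edge even when the
    # extent is not a multiple of the step.
    if origins[-1] + tile < extent:
        origins.append(extent - tile)
    return origins
```

Higher overlap produces more tiles (and a denser map index) at the cost of storage and retrieval time, which is the trade-off studied in the paper.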

Our benchmark tool supports the AnyLoc, CosPlace, EigenPlaces, MixVPR, NetVLAD, SALAD, and SelaVPR VPR systems, as well as the LightGlue, SelaVPR, and SuperGlue re-ranking techniques.

## Getting started

The tool has been tested on Python 3.10 with the library versions pinned in requirements.txt. We recommend creating a virtual environment with the same configuration.
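
A typical setup, assuming you run it from the repository root (standard `venv` workflow, not a script shipped with the repo):

```shell
# Create and activate an isolated environment
# (the authors tested with Python 3.10)
python3 -m venv .venv
. .venv/bin/activate
# Install the pinned dependency versions
# (the guard makes this a no-op outside the repository checkout)
if [ -f requirements.txt ]; then pip install -r requirements.txt; fi
```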

Please check example.ipynb for an example of downloading a satellite map, localizing aerial imagery, and computing the Recall metric. Weights for MixVPR, NetVLAD, SuperGlue, and SelaVPR, as well as cluster centers for AnyLoc, can be downloaded here. To use SelaVPR, you will also have to download the pre-trained DINOv2 model here. All other files required by CosPlace, EigenPlaces, LightGlue, and SALAD will be downloaded automatically via TorchHub.

## Datasets

We used the VPAir dataset (from the AnyLoc repo) as well as ALTO and MARS-LVIG for our experiments.

However, you can use any dataset as a query sequence; please check aero-vloc/primitives/uav_seq.py for the test data format.

## Citation

If this repository aids your research, please consider starring it ⭐️ and citing the paper:

```bibtex
@misc{moskalenko2024visual,
      title={Visual place recognition for aerial imagery: A survey},
      author={Ivan Moskalenko and Anastasiia Kornilova and Gonzalo Ferrer},
      year={2024},
      eprint={2406.00885},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```