
Monocular-Visual-Odometry

This repo contains an implementation of conventional Monocular Visual Odometry (MVO) and an enhanced variant referred to as Fast MVO (FMVO). MVO is the process of incrementally estimating the position and orientation of a single camera moving through 3D space.

How to use the code

To reproduce the exact results presented below, run the function plotBothResults.m. To run the algorithms separately, refer to their respective sections; each of the following two sections ends with instructions for running the corresponding algorithm.

Conventional MVO

There are several approaches to MVO, among which I have focused on "2D-2D motion estimation". In this approach, the following steps are performed for each pair of consecutive images:

  1. A number of features are extracted in the first image.
  2. These features are tracked in the second image.
  3. The motion is estimated using the Essential or Fundamental matrix.
  4. A local optimization is performed to minimize the reprojection error.
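Step 3 relies on the epipolar constraint: for normalized image coordinates x1 and x2 of the same 3D point in the two views, x2ᵀ E x1 = 0, where the essential matrix is E = [t]ₓ R. The repo itself is MATLAB; the sketch below is a minimal numpy illustration (the rotation, translation, and 3D point are made-up values, not from the dataset) verifying that the constraint holds for a known relative motion.

```python
import numpy as np

def skew(v):
    """Skew-symmetric matrix so that skew(v) @ u == np.cross(v, u)."""
    return np.array([[0.0, -v[2], v[1]],
                     [v[2], 0.0, -v[0]],
                     [-v[1], v[0], 0.0]])

# Hypothetical relative motion between two consecutive frames:
# a small rotation about the y-axis plus a translation.
theta = 0.1
R = np.array([[np.cos(theta), 0.0, np.sin(theta)],
              [0.0, 1.0, 0.0],
              [-np.sin(theta), 0.0, np.cos(theta)]])
t = np.array([1.0, 0.0, 0.2])

# Essential matrix relating normalized coordinates of the two views.
E = skew(t) @ R

# A 3D point in camera-1 coordinates; camera 2 sees X2 = R @ X1 + t.
X1 = np.array([0.5, -0.3, 4.0])
X2 = R @ X1 + t

# Normalized image coordinates (perspective division by depth).
x1 = X1 / X1[2]
x2 = X2 / X2[2]

# Epipolar constraint: should vanish up to floating-point error.
residual = x2 @ E @ x1
```

In practice E is estimated the other way around, from many such point correspondences (e.g. the five-point algorithm inside RANSAC), and R, t are then recovered by decomposing E.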

To run this algorithm, first download the MH-01 sequence of the EuRoC MAV dataset. Place the downloaded folder "mav0" next to the "MVO" and "FMVO" folders, then run the runMe.m script in the "MVO" folder. The results will be stored, and plotBothResults.m can be used to generate the plots shown below.

Fast MVO

In Fast MVO, instead of extracting features in every image pair, features are extracted only in selected images, referred to as keyframes. The criterion for choosing these keyframes differs from prevailing methods in the literature: an image becomes a keyframe when the number of features still being tracked falls below a threshold. In essence, when the first image is received, a number of features are extracted from it, and steps 2 and 3 of the conventional MVO are performed to estimate the camera motion. As the camera moves, some features leave its field of view, so the number of tracked features decreases. When that number drops below a certain threshold, all remaining features are used to perform a local optimization, and new features are extracted in the last image (i.e., the new keyframe). Hence, the FMVO approach can be summarized in the following steps.

  1. A number of features are extracted from the keyframe.
  2. These features are tracked in future images.
  3. The motion is estimated using the Essential or Fundamental matrix.
  4. If the number of tracked features is less than a threshold, a local optimization is performed to minimize the reprojection error.
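The keyframe rule above can be sketched as a small simulation. This is not the repo's MATLAB code; the constants (features extracted per keyframe, features lost per frame, threshold) are made-up illustrative values showing when new keyframes get declared.

```python
MIN_TRACKED = 50      # threshold below which a new keyframe is declared (illustrative)
N_EXTRACTED = 200     # features extracted at each keyframe (illustrative)
LOSS_PER_FRAME = 30   # features leaving the field of view per frame (simulated)

def run_fmvo_schedule(n_frames):
    """Simulate which frame indices become keyframes under the FMVO rule."""
    keyframes = [0]            # step 1: features are extracted in the first image
    tracked = N_EXTRACTED
    for i in range(1, n_frames):
        tracked -= LOSS_PER_FRAME   # steps 2-3: tracking loses some features
        if tracked < MIN_TRACKED:   # step 4: too few survivors
            # a local optimization would run here with the remaining tracks,
            # then fresh features are extracted in this frame
            keyframes.append(i)
            tracked = N_EXTRACTED
    return keyframes
```

With these numbers a new keyframe is declared every sixth frame, so `run_fmvo_schedule(15)` returns `[0, 6, 12]`; the conventional MVO, by contrast, effectively treats every frame as a keyframe.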

To run this algorithm, first download the MH-01 sequence of the EuRoC MAV dataset. Place the downloaded folder "mav0" next to the "MVO" and "FMVO" folders, then run the runMe.m script in the "FMVO" folder. The results will be stored, and plotBothResults.m can be used to generate the plots shown below.

Results

The conventional MVO and the Fast MVO are tested on the MH-01 sequence of the EuRoC MAV dataset.

Position and orientation

The MVO and FMVO estimates of the x-y and x-z trajectories and of the orientation are shown below.

RMSE of position and orientation

The RMSEs of the position and orientation estimates are plotted below.
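For reference, the kind of RMSE computed against the ground-truth trajectory can be sketched in a few lines of numpy. The trajectories below are tiny made-up examples, not EuRoC data, and the function name is illustrative.

```python
import numpy as np

def rmse(estimate, ground_truth):
    """Root-mean-square error of per-sample 3D position errors."""
    err = np.asarray(estimate) - np.asarray(ground_truth)
    # squared Euclidean error per sample, then mean over samples, then root
    return np.sqrt(np.mean(np.sum(err**2, axis=-1)))

# Hypothetical estimated vs. ground-truth x-y-z positions (3 samples).
gt  = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0], [2.0, 0.1, 0.0]])
est = np.array([[0.0, 0.0, 0.0], [1.1, 0.0, 0.0], [2.0, 0.0, 0.1]])
print(rmse(est, gt))  # 0.1
```

The same formula applies to orientation once the attitude error is expressed as a vector (e.g. Euler-angle differences).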

Run time

The time taken by each iteration of MVO and FMVO is shown below, along with the cumulative run time.

Comparison

A comparison of the performance of the two algorithms is provided in the table below. The first six rows are the medians of the RMSE results; the last row is the median per-iteration run time of each algorithm.

Citation

If you use this code in your research, please cite the following paper, in which the idea behind FMVO was developed.

@article{abdollahi2022improved,
  title={An Improved Multi-State Constraint Kalman Filter for Visual-Inertial Odometry},
  author={Abdollahi, MR and Pourtakdoust, Seid H and Nooshabadi, MH and Pishkenari, Hossein Nejat},
  journal={arXiv preprint arXiv:2210.08117},
  year={2022}
}
