diff --git a/README.md b/README.md
index 6931134..8f007b5 100644
--- a/README.md
+++ b/README.md
@@ -1,6 +1,6 @@
 # illumiGrad
-Automatically calibrate RGBD cameras online with PyTorch. The main idea is that the camera matrices are wrapped in ```torch.nn.Parameter``` so they can be updated with Adam optimizer based on the reprojection error after differentiably projecting the depth camera into the color camera, which is essentially bundle adjustment without the pose graph. In practice, illumiGrad can improve your computer vision accuracy in the wild without needing precise calibration since the calibration is continuously updated for you, a decent initial calibration is necessary though. No calibration targets, such as checkerboards, are required. I tested on semi-rectified color and Kinect V1 depth cameras from the [NYU Depth V2 dataset](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html). Related work using reprojection error as a loss signal: [Direct Visual Odometry](https://openaccess.thecvf.com/content_iccv_2013/papers/Engel_Semi-dense_Visual_Odometry_2013_ICCV_paper.pdf), [RGBD Direct Visual Odometry](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.402.5544&rep=rep1&type=pdf), [CVPR 2017](https://arxiv.org/pdf/1704.07813.pdf), [ICCV 2019](https://arxiv.org/pdf/1806.01260.pdf).
+Automatically calibrate RGBD cameras online with PyTorch. The main idea is that the camera matrices are wrapped in ```torch.nn.Parameter``` so they can be updated with the Adam optimizer based on the reprojection error after differentiably projecting the depth camera into the color camera with ```torch.nn.functional.grid_sample```, which is essentially bundle adjustment without the pose graph. In practice, illumiGrad can improve your computer vision accuracy in the wild without precise calibration, since the calibration is continuously updated for you; a decent initial calibration is still necessary, though. No calibration targets, such as checkerboards, are required. I tested on semi-rectified color and Kinect V1 depth cameras from the [NYU Depth V2 dataset](https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html). Related work using reprojection error as a loss signal: [Direct Visual Odometry](https://openaccess.thecvf.com/content_iccv_2013/papers/Engel_Semi-dense_Visual_Odometry_2013_ICCV_paper.pdf), [RGBD Direct Visual Odometry](http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.402.5544&rep=rep1&type=pdf), [CVPR 2017](https://arxiv.org/pdf/1704.07813.pdf), [ICCV 2019](https://arxiv.org/pdf/1806.01260.pdf).
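+
+A minimal sketch of the core idea (a simplified illustration, not this repo's exact API; the intrinsics values, the unconstrained 3x4 pose parameterization, and the placeholder inputs are assumptions):
+
+```python
+import torch
+import torch.nn.functional as F
+
+H, W = 480, 640
+
+# Hypothetical initial calibration. Wrapping the camera matrices in
+# torch.nn.Parameter lets Adam refine them from the reprojection error.
+color_K = torch.nn.Parameter(torch.tensor([[525.0, 0.0, 320.0],
+                                           [0.0, 525.0, 240.0],
+                                           [0.0, 0.0, 1.0]]))
+depth_K = torch.nn.Parameter(torch.tensor([[570.0, 0.0, 320.0],
+                                           [0.0, 570.0, 240.0],
+                                           [0.0, 0.0, 1.0]]))
+# Depth-to-color extrinsics as an unconstrained 3x4 [R | t] matrix for brevity
+# (a real implementation would keep the rotation on SO(3), e.g. via axis-angle).
+T = torch.nn.Parameter(torch.eye(3, 4))
+
+optimizer = torch.optim.Adam([color_K, depth_K, T], lr=1e-3)
+
+# Placeholder inputs: a depth map in metres, a color image, and a target image
+# to compare the warp against (stand-ins for real, loaded data).
+depth = torch.rand(1, 1, H, W) * 3.0 + 0.5
+color = torch.rand(1, 3, H, W)
+target = torch.rand(1, 3, H, W)
+
+# Homogeneous pixel grid of the depth camera, shape (3, H*W).
+v, u = torch.meshgrid(torch.arange(H, dtype=torch.float32),
+                      torch.arange(W, dtype=torch.float32), indexing="ij")
+pix = torch.stack([u.flatten(), v.flatten(), torch.ones(H * W)], dim=0)
+
+for step in range(100):
+    optimizer.zero_grad()
+
+    # Back-project depth pixels to 3D points in the depth camera frame.
+    points = (torch.linalg.inv(depth_K) @ pix) * depth.view(1, -1)   # (3, H*W)
+
+    # Transform into the color camera frame and project with its intrinsics.
+    points_h = torch.cat([points, torch.ones(1, H * W)], dim=0)      # (4, H*W)
+    cam = color_K @ (T @ points_h)                                   # (3, H*W)
+    uv = cam[:2] / cam[2:3].clamp(min=1e-6)
+
+    # Normalize pixel coordinates to [-1, 1] and sample the color image
+    # differentiably with grid_sample.
+    grid = torch.stack([2.0 * uv[0] / (W - 1) - 1.0,
+                        2.0 * uv[1] / (H - 1) - 1.0], dim=-1).view(1, H, W, 2)
+    warped = F.grid_sample(color, grid, padding_mode="border", align_corners=True)
+
+    # Photometric reprojection error drives the calibration update.
+    loss = (warped - target).abs().mean()
+    loss.backward()
+    optimizer.step()
+```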