
Question about the pretrained weights #8

Open
Tiam2Y opened this issue Apr 25, 2022 · 4 comments


Tiam2Y commented Apr 25, 2022

Hello! Thanks for the great work! @AlessioTonioni
But I have some questions about pretrained weights.

The pretrained weights you provide are exactly the same as the weights in this repository (Real-time-self-adaptive-deep-stereo).
So how do I get the weights pretrained on CARLA or Synthia (with meta-learning)?

@AlessioTonioni
Member

Hi, unfortunately I don't have these weights anymore.
The code provided covers all the training and pretraining phases, so it should be possible to retrain the network on your own if needed.


Tiam2Y commented May 1, 2022

Well, thanks for your answer and the code! @AlessioTonioni
To train the L2A+Wad weights, I still have a few questions:

  1. Is the Synthia dataset you used before from this link? http://synthia-dataset.net/downloads/

  2. This dataset provides ground-truth depth. To obtain ground-truth disparity, should the decoded depth values be converted using the calibration information below (like converting the depth values of KITTI raw data)?
    [screenshot: calibration file in KITTI format; P0, P1, P2, P3 correspond to the same intrinsic camera matrix]

To state question 2 more clearly, take the calibration file calib_cam_to_cam.txt from the KITTI raw data 2011_09_30_calib as an example: assuming the depth is known, is the disparity computed by the following formula?
[screenshot: depth-to-disparity formula]
[screenshot: calibration file]

Sorry for the troublesome questions, but I'd appreciate your answers!
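For reference, extracting the focal length and baseline from a KITTI-style calib file can be sketched roughly as below. This is only a sketch under the assumption that projection lines look like `P_rect_02: <12 values>`; the sample numbers are illustrative, not taken from a real calibration file:

```python
import numpy as np

def parse_projection(calib_text, key):
    """Find a line like 'P_rect_02: v0 v1 ... v11' and return it as a 3x4 matrix."""
    for line in calib_text.splitlines():
        if line.startswith(key + ":"):
            vals = [float(v) for v in line.split(":", 1)[1].split()]
            return np.array(vals).reshape(3, 4)
    raise KeyError(key)

# Illustrative values only (not a real KITTI file).
calib_text = """\
P_rect_02: 720.0 0.0 600.0 43.2 0.0 720.0 180.0 0.0 0.0 0.0 1.0 0.0
P_rect_03: 720.0 0.0 600.0 -345.6 0.0 720.0 180.0 0.0 0.0 0.0 1.0 0.0
"""

P2 = parse_projection(calib_text, "P_rect_02")
P3 = parse_projection(calib_text, "P_rect_03")

focal = P2[0, 0]                      # focal length in pixels
# P[0, 3] = -focal * t_x, so the cam2->cam3 baseline (in meters) is:
baseline = (P2[0, 3] - P3[0, 3]) / focal
```

With these made-up numbers the recovered baseline is 0.54 m, roughly the magnitude of the real KITTI stereo rig.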

@AlessioTonioni
Member

Hello,

  1. We used the Synthia Video Sequences images from the link you provided above.
  2. disparity = (focal*baseline)/depth; you can get the baseline and focal length of the camera system from the camera calibration files in every dataset. Remember to express baseline and depth in the same unit of measure (e.g., both in meters).
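As a minimal sketch of that formula (assuming a depth map in meters and a focal length in pixels; the function name and the invalid-depth guard are my own additions, not from the repo):

```python
import numpy as np

def depth_to_disparity(depth_m, focal_px, baseline_m, eps=1e-6):
    """disparity [px] = focal [px] * baseline [m] / depth [m]."""
    depth_m = np.asarray(depth_m, dtype=np.float64)
    disparity = np.zeros_like(depth_m)
    valid = depth_m > eps          # skip zero/invalid depth values
    disparity[valid] = focal_px * baseline_m / depth_m[valid]
    return disparity

# A point 10 m away seen by a 720 px focal, 0.54 m baseline rig:
disp = depth_to_disparity([[10.0]], focal_px=720.0, baseline_m=0.54)
```

Masking out non-positive depth matters in practice, since synthetic depth maps often encode sky or invalid pixels as zero.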

@Tiam2Y

Tiam2Y commented May 2, 2022

All right, I got it. Thanks a lot!
