
Not able to replicate results of paper; cannot find equations from paper in code #8

Open
aj96 opened this issue May 7, 2019 · 1 comment


aj96 commented May 7, 2019

I have spent a lot of time studying this paper and this code. However, similar to this thread: #4, I cannot replicate the results of the paper. I am training on the KITTI dataset. The pixel loss fluctuates between 2 and 4; the edge loss quickly converged to 2.6e-3. Is this to be expected? The paper mentions that the lambda terms are chosen so that the losses are of similar scale, yet the pixel loss is on the order of 1e0 while all the other losses are on the order of 1e-3.

When I try to visualize the results using the provided visualization code, I get the pictures attached below, which do not look anything like the examples shown in the paper. In fact, I get similar results without any training.

(attached image: initial_depth_results_after_training)

Finally, the paper states that equations 6 and 7 are used to compute the depth and normal smoothness losses. However, after reading the code, it seems that only equation 2 is used. Most of the functions added relative to the original SFMLearner seem to be different ways of computing the smoothness loss, and only one of them, compute_smooth_loss_wedge(), is actually used; it implements equation 2 from the paper. Am I misreading the code?
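For anyone following along, an edge-aware first-order smoothness loss of the kind equation 2 describes (depth gradients penalized, but down-weighted where the predicted edge map fires) can be sketched roughly as below. This is an illustrative NumPy sketch, not the repository's actual compute_smooth_loss_wedge() implementation; the function name and weighting scheme here are assumptions:

```python
import numpy as np

def edge_aware_smooth_loss(depth, edge):
    """Illustrative edge-aware smoothness loss.

    depth: (H, W) array of predicted depths.
    edge:  (H, W) array of predicted edge probabilities in [0, 1].
    Depth gradients are penalized, weighted by (1 - edge) so that
    discontinuities at predicted edges are not punished.
    """
    dx = np.abs(depth[:, 1:] - depth[:, :-1])   # horizontal depth gradient
    dy = np.abs(depth[1:, :] - depth[:-1, :])   # vertical depth gradient
    wx = 1.0 - edge[:, 1:]                      # weight ~0 near an edge
    wy = 1.0 - edge[1:, :]
    return (wx * dx).mean() + (wy * dy).mean()
```

Note the degenerate solution this invites: if the edge map predicts edges everywhere, the smoothness loss vanishes, which is why an extra term constraining the edge map (such as the L2 edge mask loss discussed here) is needed.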

Can @zhenheny please comment on this? I think that the ideas presented in this paper are fascinating and would like to learn more.

Thank you.

@834810269
I am also studying this paper and this code. Similarly, I cannot find the implementation of equations 6 and 7 in the code. I don't think a valid edge map can be trained using only the smoothness loss and the L2 edge mask loss, without equations 6 and 7. In addition, training at multiple scales causes the smoothness losses to converge too fast, which yields an ineffective depth output. There are also some problems in the implementations of the depth2normal_layer and normal2depth_layer.
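For context on what a depth-to-normal layer computes: with known camera intrinsics, each pixel's depth is backprojected to a 3-D point, and a surface normal is estimated from the cross product of finite-difference tangent vectors. The sketch below is a minimal NumPy version for illustration only; it is not the repository's depth2normal_layer, and the function name and the pinhole-intrinsics parameters are assumptions:

```python
import numpy as np

def depth_to_normals(depth, fx, fy, cx, cy):
    """Estimate per-pixel surface normals from a depth map.

    depth: (H, W) depth array; fx, fy, cx, cy: pinhole intrinsics.
    Returns an (H-1, W-1, 3) array of unit normals (one row/column is
    lost to finite differencing). Sign convention: normals point toward
    +z for a fronto-parallel plane under this x-right, y-down frame.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    # Backproject pixels to camera-frame 3-D points.
    x = (u - cx) / fx * depth
    y = (v - cy) / fy * depth
    pts = np.stack([x, y, depth], axis=-1)      # (H, W, 3)
    du = pts[:, 1:, :] - pts[:, :-1, :]         # tangent along image u
    dv = pts[1:, :, :] - pts[:-1, :, :]         # tangent along image v
    n = np.cross(du[:-1], dv[:, :-1])           # (H-1, W-1, 3)
    n /= np.linalg.norm(n, axis=-1, keepdims=True) + 1e-8
    return n
```

A quick sanity check for any such layer: a constant depth map (a fronto-parallel plane) should produce the same normal, (0, 0, 1) up to sign, at every pixel.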

Are you still studying this project, and have you solved the problems above?

Thank you.
