Hi, I was wondering whether you applied the same alignment to all of the different depth predictions during your evaluations. I tried applying the same alignment (lad2) to other prediction results, such as DepthCrafter's, but it produces outliers (extremely large values at some pixels) that lead to strange results in metrics such as video abs rel. Have you run into this in your experiments?
Additionally, given that the metrics seem to differ across alignment methods, do you have any insights on how to pick the best alignment method for evaluation?
Thank you so much for your help! Much Appreciated!
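To illustrate the failure mode described above, here is a minimal numpy sketch of a two-parameter disparity-space alignment, assuming "lad2" denotes a scale + shift fit on disparity (the exact definition lives in the repository's evaluation code; plain least squares is used here in place of a least-absolute-deviation solver, and all array names and values are hypothetical). If the fitted affine map drives some aligned disparities toward zero, inverting back to depth yields the extremely large values and inflated abs rel mentioned above.

```python
import numpy as np

def align_disparity_scale_shift(pred_disp, gt_disp, valid):
    # Two-parameter fit gt_disp ~ s * pred_disp + t over valid pixels.
    # Ordinary least squares here; an LAD variant would minimize absolute
    # residuals instead (e.g. via IRLS or a linear program).
    A = np.stack([pred_disp[valid], np.ones(valid.sum())], axis=1)
    s, t = np.linalg.lstsq(A, gt_disp[valid], rcond=None)[0]
    return s * pred_disp + t

# Hypothetical synthetic data standing in for real predictions / ground truth.
rng = np.random.default_rng(0)
gt_depth = rng.uniform(1.0, 80.0, size=(64, 64))
gt_disp = 1.0 / gt_depth
pred_disp = 0.7 * gt_disp + 0.001 + rng.normal(0, 0.002, gt_disp.shape)
valid = gt_depth > 0

aligned_disp = align_disparity_scale_shift(pred_disp, gt_disp, valid)
# Aligned disparities near (or below) zero invert to huge depth values,
# which is one plausible source of the outliers and inflated abs rel.
print("min aligned disparity:", aligned_disp.min())
aligned_depth = 1.0 / np.clip(aligned_disp, 1e-6, None)
abs_rel = np.mean(np.abs(aligned_depth[valid] - gt_depth[valid]) / gt_depth[valid])
print("abs rel:", abs_rel)
```

A robust fit (true LAD or RANSAC) or clamping the aligned disparity away from zero before inverting is a common way to keep such outliers from dominating the metric.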
We applied the optimal depth alignment approach for each method, which is why we use lad2 for our method and lstsq for DepthCrafter. As for finding a good alignment approach, we first try a few traditional ones and then use the one that gives the best overall results across the different metrics.
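For concreteness, a minimal sketch of the kind of least-squares (lstsq) scale-and-shift alignment and abs rel evaluation referred to above, assuming per-frame depth-space alignment and hypothetical (T, H, W) arrays; the repository's actual evaluation code (and whether alignment is fit per frame or per video) may differ.

```python
import numpy as np

def align_lstsq(pred_depth, gt_depth, valid):
    # Two-parameter fit gt ~ s * pred + t, solved with ordinary least squares
    # over the valid pixels only.
    A = np.stack([pred_depth[valid], np.ones(valid.sum())], axis=1)
    s, t = np.linalg.lstsq(A, gt_depth[valid], rcond=None)[0]
    return s * pred_depth + t

def abs_rel(pred_depth, gt_depth, valid):
    # Mean absolute relative error over valid pixels.
    return float(np.mean(np.abs(pred_depth[valid] - gt_depth[valid]) / gt_depth[valid]))

def evaluate_video(pred_video, gt_video, align_fn):
    # pred_video, gt_video: (T, H, W) depth arrays. Alignment is done per frame
    # here; a video-level variant would fit a single (s, t) over all frames.
    scores = []
    for pred, gt in zip(pred_video, gt_video):
        valid = gt > 0
        scores.append(abs_rel(align_fn(pred, gt, valid), gt, valid))
    return float(np.mean(scores))

# Usage with hypothetical arrays:
# score_lstsq = evaluate_video(pred_video, gt_video, align_lstsq)
```

Running each candidate alignment (lstsq, an LAD fit, median scaling, etc.) through the same evaluation loop and keeping the one with the better overall numbers mirrors the selection procedure described in the reply.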