find point from image coord in point cloud and mesh #91

Merged (9 commits, May 18, 2023)

Conversation

marinagmoreira (Member)

No description provided.

marinagmoreira marked this pull request as ready for review May 17, 2023 00:02
marinagmoreira requested a review from trey0 May 17, 2023 00:02
marinagmoreira (Member, Author)

@trey0 I have on my todo list another PR where I clean up the documentation a bit more in the pano folder.

trey0 (Contributor) left a comment

This is great! Aside from a bunch of cosmetic comments, and suggestions that may turn into adding a comment about a fancier way we could do it in the future, the one area where I think it needs work is incorporating the camera lens distortion.

namespace pano_view {

double intersect(const Eigen::Vector3d origin, const Eigen::Vector3d dir,
                 const pcl::PointCloud<pcl::PointXYZ> point_cloud, Eigen::Vector3d& intersection) {
trey0 (Contributor):

I'll make some general points here that may not change how we do this implementation, but maybe we could add a comment to show we at least understand the trade-offs:

  • If you were using the depth image rather than the point cloud, it would be trivial to interpret the image as a mesh where the triangles are formed from adjacent pixels of the image. Then you could apply the same mesh logic used with the OBJ file. Not clear if the (probably small) accuracy improvement vs. using the nearest neighbor point would be worth making the code more complicated and increasing runtime.
  • If intersecting the ray with a mesh, you could also tell if it doesn't intersect anything. However, in some of those no-intersection cases, you might prefer to report the nearest neighbor point rather than just giving up. It's not obvious what the best approach is overall. It would be logical to add an argument that limits the maximum distance of the nearest neighbor (see the sketch after this list).
  • Another advantage of using the depth image is that it would make it possible to restrict the intersection search to the epipolar line within the depth image that corresponds to the query ray. That would probably be much faster than your current approach. But I guess there would be a learning curve to figuring out how to calculate the epipolar line, probably taking advantage of some built-in support in OpenCV.
  • Again, it would be good to document the approach this function is taking (nearest neighbor point in point cloud), since there are other logical ways it could work.
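
To make the nearest-neighbor-with-cutoff idea from the second bullet concrete, here is a minimal sketch. It is not the PR's implementation; the function name and the max_distance parameter are illustrative only.

// Illustrative sketch: nearest cloud point to a ray, rejecting matches
// farther than max_distance from the ray.
#include <limits>

#include <Eigen/Core>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

namespace pano_view {

double IntersectNearestNeighbor(const Eigen::Vector3d& origin, const Eigen::Vector3d& dir,
                                const pcl::PointCloud<pcl::PointXYZ>& point_cloud,
                                double max_distance, Eigen::Vector3d* intersection) {
  const Eigen::Vector3d d = dir.normalized();
  double best = std::numeric_limits<double>::max();
  for (const auto& p : point_cloud.points) {
    const Eigen::Vector3d q(p.x, p.y, p.z);
    const Eigen::Vector3d v = q - origin;
    const double t = v.dot(d);
    if (t < 0.0) continue;                   // skip points behind the ray origin
    const double dist = (v - t * d).norm();  // perpendicular distance to the ray
    if (dist < best) {
      best = dist;
      *intersection = q;
    }
  }
  // A negative return signals "no point close enough to the ray".
  return (best <= max_distance) ? best : -1.0;
}

}  // namespace pano_view

The brute-force scan mirrors the current nearest-neighbor approach; a KD-tree or the epipolar-line restriction from the third bullet would be the natural optimization if runtime becomes an issue.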

bool SetW(const double W);

bool BuildViewMatrix(const geometry_msgs::Pose robot_pose, Eigen::Matrix4d &V);
bool GetVectorFromCamXY(const geometry_msgs::Pose robot_pose, const int x, const int y,
trey0 (Contributor):
It seems like this function is not handling camera distortion at all, unless I missed it. Was that intentional? Note: The targets are currently picked in the user interface using the raw (distorted) SciCam images, so the pixel coordinates of targets in the output JSON will also be distorted.

marinagmoreira (Member, Author):

The distortion matrix is part of the class, in the variable P_, and is used in the calculation.

trey0 (Contributor), May 17, 2023:

I think P_ is the projection matrix. I'm talking about the lens distortion. For example, the parameters here https://github.com/nasa/astrobee/blob/ea41de975fd3ed5320cc474c061ad6305ae6b646/astrobee/config/robots/bumble.config#L89
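
For reference, a minimal sketch of one way to handle a distorted pixel before casting a ray: undistort the picked (x, y) with OpenCV. This is not code from the repo; the camera matrix and coefficient layout are assumptions, and the actual SciCam distortion model may differ.

#include <vector>

#include <opencv2/calib3d.hpp>
#include <opencv2/core.hpp>

// Undistort a picked (x, y) pixel; with no P argument, undistortPoints returns
// normalized camera coordinates, which can be lifted to a unit ray direction.
cv::Point2d UndistortPixel(double x, double y, const cv::Mat& camera_matrix,
                           const cv::Mat& dist_coeffs) {
  std::vector<cv::Point2d> in{cv::Point2d(x, y)};
  std::vector<cv::Point2d> out;
  cv::undistortPoints(in, out, camera_matrix, dist_coeffs);
  return out[0];  // (X/Z, Y/Z) in the camera frame
}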

marinagmoreira (Member, Author):

Yeah, I'm tempted to delay this and merge the solution for this with #71.
My overall plan is to make CameraView inherit from CameraModel and use all the common functionality. The current issue is that CameraModel does not consider some parameters in the projection matrix that CameraView does (the s, cx, and cy), only the focal vector (and any change to CameraModel needs to be thoroughly tested to evaluate the impact across the fsw). Moreover, IsInFov() has a comment which is worth looking at:

// TODO(oalexan1): This looks buggy. Because ImageCoordinates()
// returns an undistorted pixel, it must compare to GetUndistortedHalfSize().
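
As an aside, a small sketch in standard pinhole notation (not code from either class) of the parameter difference being described: the full intrinsic matrix carries skew and principal point in addition to the focal lengths.

#include <Eigen/Core>

// Full pinhole intrinsic matrix with focal lengths (fx, fy), skew (s), and
// principal point (cx, cy); a focal-only model implicitly sets s = cx = cy = 0.
Eigen::Matrix3d Intrinsics(double fx, double fy, double s, double cx, double cy) {
  Eigen::Matrix3d K;
  K << fx,  s,  cx,
       0.0, fy, cy,
       0.0, 0.0, 1.0;
  return K;
}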

trey0 (Contributor):
That seems reasonable, but in that case we should add a TODO comment that mentions issue #71. I also added a comment there about a tweak to the approach that should address Oleg's concern.
