
Feature/visualization++ #802

Merged
merged 28 commits into from
Jan 26, 2021

Conversation

S-Dafarra
Contributor

It does the following:

  • It adds a camera animator to move the camera via the mouse
  • It fixes the aspect ratio to avoid strange effects when resizing
  • It adds an interface to add custom frames to the visualization
  • It fixes the floor grid colors
  • It adds an interface to have additional textures (helpful if you need to render at a different size, with a different environment, and to get the pixels of the corresponding image)
  • It fixes the visualization of STL meshes.

Comment on lines +97 to +110
for (size_t width = 0; width < textureDim.Width; ++width)
{
    for (size_t height = 0; height < textureDim.Height; ++height)
    {
        pixelIrrlicht = irr::video::SColor(*(unsigned int*)(buffer + (height * pitch) + (width * bytes)));
        pixels[i].width = width;
        pixels[i].height = height;
        pixels[i].r = pixelIrrlicht.getRed();
        pixels[i].g = pixelIrrlicht.getGreen();
        pixels[i].b = pixelIrrlicht.getBlue();
        pixels[i].a = pixelIrrlicht.getAlpha();
        ++i;
    }
}
Contributor Author

FYI, this is pretty inefficient. I would have liked to output the raw buffer directly, but that would have required implementing the lock/unlock mechanism that irrlicht expects. In addition, it was not clear how to convey how that buffer should be interpreted. For RGB images, having the raw buffer may be handy with YARP images, thus avoiding the double for loop; in fact, irrlicht and YARP seem to store RGB images in the same way. This is not the case for RGBA, since irrlicht uses ARGB instead.
Bottom line: I did not find a better way to output the raw image. In any case, this has an effect only when the corresponding texture is large, or when getting the pixels of many textures.

CMakeLists.txt Outdated
@@ -8,7 +8,7 @@

cmake_minimum_required(VERSION 3.16)

project(iDynTree VERSION 2.0.2
project(iDynTree VERSION 2.0.3
Member

Feel free to bump more (like 2.0.100), as 2.0.3 could in the future be a patch release on the master branch.

Contributor Author

Done in b10c4b8

@@ -466,6 +576,41 @@ class IModelVisualization
virtual Transform getWorldLinkTransform(const LinkIndex& linkIndex) = 0;
};

class ITexture
Member

Can you add here, or in some other related interface, what you mean by "Texture"? In 3D graphics, "texture" typically refers to a 2D image that is then mapped onto the surface of a 3D model to describe its appearance (see https://en.wikipedia.org/wiki/Texture_mapping). In this context instead, if I understood correctly, the texture is a sort of "render target" (see the second meaning in https://www.khronos.org/opengl/wiki/Texture). As this could be confusing for users, could you clarify this? Thanks!

Contributor Author

Yes, the correct interpretation is that the texture is a different target for the renderer. Do you think adding some documentation would be sufficient, or would you prefer a different name?

Member

As you prefer, I think that even just documentation is ok.

Contributor Author

I have updated the documentation in b7267fc

@@ -73,7 +66,25 @@ inline iDynTree::Rotation irr2idyntree_rot(const irr::core::vector3df & rotIrr)

inline const irr::core::vector3df idyntree2irr_rot(const iDynTree::Rotation & rot)
{
return idyntree2irr_rpy(rot.asRPY());
irr::core::matrix4 irrTransform(irr::core::matrix4::EM4CONST_IDENTITY);
//Transposing the matrix since irrlicht uses left handed frames
Member

Can you elaborate a bit more on this? I thought the rendering problem of STL meshes was solved by changing the scale of the X variable; is this also necessary? Did you test this with the iCub 2.5 DAE models after these changes?

Contributor Author
@S-Dafarra Jan 22, 2021

I reverted the changes in 0a6f821.
Regarding the visualization, I ran idyntree-model-view with:

  • the `iCubGenova09` urdf
    image
  • the `iCubGenova04` urdf
    image
  • a custom `iCubGenova04` urdf with DAE meshes
    image

Member

Ok thanks! However, if I recall correctly, the problem with the STL meshes on iCub3 (the one we imagine was fixed by the STL load fix) only appeared when the legs or arms were moved from the zero position. After you reverted the change to the rotation, did you check that the visualization still works as expected?

Contributor Author

Here the result when reading the joints from the control boards:

  • the `iCubGenova09` urdf
    image
  • the `iCubGenova04` urdf
    image
  • the `iCubGenova04` urdf with custom DAE meshes
    image

Member

Great!

* Remember to call draw() first.
* @param pixels The output pixels
* @return True in case of success, false otherwise
*/
Member

Can you clarify here:

  • the size of the std::vector
  • whether the pixels are stored in row-major or column-major order?

Also, this doc now seems to be duplicated in the concrete Texture implementation, so either update that one as well, or remove it.

Contributor Author

I have updated the docs in b7267fc and removed the duplicated documentation in d337204.

Member
@traversaro left a comment

Some comments, but overall it seems great, thanks!

@traversaro
Member

By the way, these new features (especially the mouse control and the frames) could be used to enhance idyntree-model-view ( https://github.com/robotology/idyntree/blob/master/src/tools/idyntree-model-view.cpp ) and make it a quick-and-dirty tool for easily inspecting the frame placement of URDF models. fyi @Nicogene

@S-Dafarra
Contributor Author

> By the way, these new features (especially the mouse control and the frames) could be used to enhance idyntree-model-view ( https://github.com/robotology/idyntree/blob/master/src/tools/idyntree-model-view.cpp ) and make it a quick-and-dirty tool for easily inspecting the frame placement of URDF models. fyi @Nicogene

I enabled the mouse control for idyntree-model-view in 2e71c6f.

* @brief Add a frame in the visualization
* @return The frame index.
*/
virtual size_t addFrame(const Transform& transformation, double arrowLength = 1.0) final;
Contributor

Maybe it would also be convenient to have a method that removes a frame?

Contributor Author

I have tried adding a method to remove a frame, but the resulting logic was not the best. Since I am returning an index as an identifier, I do not like what happens when you remove a frame: should that index be reused or not? Also, with the getNumberOfFrames method you can currently check whether an index is valid, but if you remove a frame this no longer holds, as the total number of frames becomes less than or equal to the maximum index. Hence, for the time being, I preferred to allow controlling the visibility of a frame with 98e804a. The net result is the same and the logic remains straightforward.

Contributor

Great!

@traversaro
Member

Ok for me, waiting for @prashanthr05 approval as well.

@S-Dafarra is it ok for you if I Squash and Merge, or do you want to revise the commits?

@S-Dafarra
Contributor Author

> Ok for me, waiting for @prashanthr05 approval as well.
>
> @S-Dafarra is it ok for you if I Squash and Merge, or do you want to revise the commits?

It is ok to squash for me!

Contributor
@prashanthr05 left a comment

In general, ok for me. I just had one clarification request about the CameraAnimator class.

Comment on lines +173 to +183
initialTransformation(0,0) = xAxis.X;
initialTransformation(0,1) = xAxis.Y;
initialTransformation(0,2) = xAxis.Z;

initialTransformation(1,0) = yAxis.X;
initialTransformation(1,1) = yAxis.Y;
initialTransformation(1,2) = yAxis.Z;

initialTransformation(2,0) = zAxis.X;
initialTransformation(2,1) = zAxis.Y;
initialTransformation(2,2) = zAxis.Z;
Contributor

Could you document this row-major stacking of the axis vectors, why we use this ordering, and how it differs from column-major ordering (i.e., a rotation vs. its inverse)?

Contributor Author

I have added the relevant comment in e20efda

// Getting the rotation directly from the camera results in undesired rotations when getting close to the singularity,
// due to the fact that irrlicht stores rotations using RPY. As a consequence, we reconstruct the camera frame by
// having the z-axis parallel to the line connecting the camera position to the target position. The x-axis is
// reconstructed from the view frustum, the region of space that appears in the camera: basically a truncated
// rectangular pyramid delimited by 8 points. We use the bottom-left and top-left points of the "far" plane to
// define the up direction, i.e. the x-axis. The y-axis is obtained by the cross product of the other two axes,
// obtaining a right-handed frame. Since irrlicht uses left-handed frames, the obtained frame is converted into a
// left-handed one by taking its transpose (a left-handed rotation corresponds to the inverse of a right-handed
// rotation, since the axis is the same but the angle goes in the opposite direction).

Contributor

Thank you. Very clear!

Contributor Author

Indeed, that transpose caused me a lot of thinking in the first place. Initially it was not transposed, but the visualization was clearly wrong, so I thought I was making some mistake in the interpretation; I tried the transpose and everything worked fine. Later on, while debugging a problem with the STL meshes being in the wrong position, I discovered that irrlicht uses left-handed frames, and then everything made sense 😅
