Feature/visualization++ #802
Conversation
…t ratio. This bug was also affecting the camera animator, with the controls along x being mirrored.
for (size_t width = 0; width < textureDim.Width; ++width)
{
    for (size_t height = 0; height < textureDim.Height; ++height)
    {
        pixelIrrlicht = irr::video::SColor(*(unsigned int*)(buffer + (height * pitch) + (width * bytes)));
        pixels[i].width = width;
        pixels[i].height = height;
        pixels[i].r = pixelIrrlicht.getRed();
        pixels[i].g = pixelIrrlicht.getGreen();
        pixels[i].b = pixelIrrlicht.getBlue();
        pixels[i].a = pixelIrrlicht.getAlpha();
        ++i;
    }
}
FYI, this is pretty inefficient. I would have liked to output buffer directly, but that would have also required implementing the lock/unlock mechanism required by irrlicht. In addition, it was not clear how to pass the information on how to interpret that buffer. For RGB images, having the raw buffer may be handy with YARP images, thus avoiding the two nested for loops; in fact, irrlicht and YARP seem to store RGB images in the same way. That is not the case for RGBA, since irrlicht uses ARGB instead.
Bottom line: I did not find a better way to output the raw image. In any case, this has an effect only when the corresponding texture is large, or when getting the pixels of many textures.
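(For reference, a minimal sketch of what reading the raw buffer through irrlicht's lock/unlock mechanism could look like. This is only an illustration under the assumption of an ECF_A8R8G8B8 texture; the exact lock() signature depends on the irrlicht version, and this is not the implementation used in this PR.)

#include <irrlicht.h>
#include <cstdint>
#include <vector>

struct PixelRGBA { std::uint8_t r, g, b, a; };

// Sketch: copy an ARGB irrlicht texture into a row-major RGBA buffer by locking it.
// Assumes the color format is ECF_A8R8G8B8 (packed 32-bit ARGB); lock()/unlock()
// arguments may differ slightly between irrlicht versions.
std::vector<PixelRGBA> readTexturePixels(irr::video::ITexture* texture)
{
    std::vector<PixelRGBA> out;
    if (!texture || texture->getColorFormat() != irr::video::ECF_A8R8G8B8)
        return out;

    const irr::core::dimension2d<irr::u32> dim = texture->getSize();
    const irr::u32 pitch = texture->getPitch();              // bytes per row of the texture
    irr::u8* data = static_cast<irr::u8*>(texture->lock());  // read access to the raw buffer
    if (!data)
        return out;

    out.resize(static_cast<std::size_t>(dim.Width) * dim.Height);
    for (irr::u32 row = 0; row < dim.Height; ++row)
    {
        for (irr::u32 col = 0; col < dim.Width; ++col)
        {
            // Each pixel is a packed 32-bit ARGB value.
            const irr::u32 argb = *reinterpret_cast<const irr::u32*>(data + row * pitch + col * 4);
            const irr::video::SColor color(argb);
            out[row * dim.Width + col] = { static_cast<std::uint8_t>(color.getRed()),
                                           static_cast<std::uint8_t>(color.getGreen()),
                                           static_cast<std::uint8_t>(color.getBlue()),
                                           static_cast<std::uint8_t>(color.getAlpha()) };
        }
    }
    texture->unlock();
    return out;
}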
CMakeLists.txt (Outdated)
@@ -8,7 +8,7 @@
 cmake_minimum_required(VERSION 3.16)

-project(iDynTree VERSION 2.0.2
+project(iDynTree VERSION 2.0.3
Feel free to bump more (like 2.0.100), as 2.0.3 in the future could be a patch release on the master branch.
Done in b10c4b8
@@ -466,6 +576,41 @@ class IModelVisualization
    virtual Transform getWorldLinkTransform(const LinkIndex& linkIndex) = 0;
};

class ITexture
Can you add here, or in some other related interface, what you mean by "Texture"? In 3D graphics, "texture" is typically used to refer to a 2D image that is then mapped to the surface of a 3D model to describe its appearance (see https://en.wikipedia.org/wiki/Texture_mapping). In this context instead, if I understood correctly, the texture is a sort of "render target" (see the second meaning in https://www.khronos.org/opengl/wiki/Texture). As this could be confusing for users, could you clarify this? Thanks!
Yes, the correct interpretation would be that the texture is a different target for the renderer. Do you think that adding some documentation may be sufficient, or are you thinking of a better name?
As you prefer, I think that even just documentation is ok.
I have updated the documentation in b7267fc
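(To illustrate the "render target" reading: the texture is an additional offscreen buffer the scene is rendered into, which can then be read back. The sketch below is only a hypothetical usage example; the names Visualizer, textures().add(), draw(), PixelViz and getPixels() are assumptions for illustration and may not match the actual interface.)

#include <iDynTree/Visualizer.h>  // hypothetical header, for illustration only
#include <vector>

int main()
{
    // Hypothetical usage: names below are assumptions and may not match the final API.
    iDynTree::Visualizer viz;
    viz.init();

    // The texture acts as an extra render target, independent of the main window.
    iDynTree::ITexture* offscreen = viz.textures().add("offscreenCamera");

    viz.draw(); // render the scene into the window and into the additional textures

    std::vector<iDynTree::PixelViz> pixels;
    offscreen->getPixels(pixels); // read back what was rendered into the texture
    return 0;
}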
@@ -73,7 +66,25 @@ inline iDynTree::Rotation irr2idyntree_rot(const irr::core::vector3df & rotIrr)

 inline const irr::core::vector3df idyntree2irr_rot(const iDynTree::Rotation & rot)
 {
-    return idyntree2irr_rpy(rot.asRPY());
+    irr::core::matrix4 irrTransform(irr::core::matrix4::EM4CONST_IDENTITY);
+    //Transposing the matrix since irrlicht uses left handed frames
Can you elaborate a bit more on this? I thought that the rendering problem of STL meshes was solved by changing the scale of the X variable; is this also necessary? Did you test this with the iCub 2.5 DAE models after these changes?
I reverted the changes in 0a6f821.
Regarding the visualization, I ran idyntree-model-view with:
Ok thanks! However, if I recall correctly, the problem with the STL meshes on iCub3 (the one that we imagine was fixed by the STL load fix) only appeared when the legs or arms were moved from the zero position. So, after you reverted the change to the rotation, did you check if the visualization is still working as expected?
Great!
 * Remember to call draw() first.
 * @param pixels The output pixels
 * @return True in case of success, false otherwise
 */
Can you clarify here:
- the size of the std::vector
- whether the pixels are stored in row-major or column-major order?

Note that this doc seems to be duplicated in the concrete Texture implementation, so either update that one as well, or remove it.
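(For reference, judging from the copy loop quoted at the top of this conversation, with the outer loop over width and the inner loop over height, the layout appears to be the following; this is only an inference from that snippet, to be confirmed by the documentation.)

#include <cstddef>

// Inferred from the copy loop quoted earlier: the output vector has Width * Height
// entries, filled column by column (outer loop over width, inner over height), so
// the pixel at a given (width, height) coordinate sits at:
std::size_t pixelIndex(std::size_t width, std::size_t height, std::size_t textureHeight)
{
    return width * textureHeight + height;
}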
Some comments, but overall it seems great, thanks!
By the way, these new features (especially the mouse control and the frames) could be used to enhance idyntree-model-view ( https://github.com/robotology/idyntree/blob/master/src/tools/idyntree-model-view.cpp ) and make it a quick and dirty tool for easy inspection of URDF model frame placement. fyi @Nicogene
I enabled the mouse control for idyntree-model-view.
 * @brief Add a frame in the visualization
 * @return The frame index.
 */
virtual size_t addFrame(const Transform& transformation, double arrowLength = 1.0) final;
Maybe it would also be convenient to have a method that removes a frame?
I have tried adding a method to remove a frame, but the logic behind it was not the best. Since I am returning an index as an identifier, I do not like what happens when you remove a frame: should that index be reused or not? Also, with the getNumberOfFrames method you can understand whether an index is valid or not, but if you remove one frame this does not hold anymore, as the total number of frames will be lower than or equal to the maximum index. Hence, for the time being, I preferred to allow controlling the visibility of a frame with 98e804a. The net result is the same and the logic behind it remains straightforward.
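(The resulting usage pattern might look like the sketch below. addFrame and getNumberOfFrames are the methods mentioned in this thread; the IFrameVisualization handler type and the name of the visibility setter are assumptions for illustration.)

#include <cassert>
#include <cstddef>
#include <iDynTree/Visualizer.h>  // hypothetical header, for illustration only

void frameUsageSketch(iDynTree::IFrameVisualization& frames)
{
    // Add a frame and keep its index as the identifier.
    std::size_t frameIndex = frames.addFrame(iDynTree::Transform::Identity(), 0.5);

    // Indices stay valid because frames are never removed.
    assert(frameIndex < frames.getNumberOfFrames());

    // Instead of removing the frame, just hide it (method name is an assumption).
    frames.setVisible(frameIndex, false);
}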
Great!
Ok for me, waiting for @prashanthr05's approval as well. @S-Dafarra is it ok for me to Squash and Merge, or do you want to revise the commits?
It is ok to squash for me!
In general, ok for me. I just had one clarification request in the CameraAnimator class.
initialTransformation(0,0) = xAxis.X;
initialTransformation(0,1) = xAxis.Y;
initialTransformation(0,2) = xAxis.Z;

initialTransformation(1,0) = yAxis.X;
initialTransformation(1,1) = yAxis.Y;
initialTransformation(1,2) = yAxis.Z;

initialTransformation(2,0) = zAxis.X;
initialTransformation(2,1) = zAxis.Y;
initialTransformation(2,2) = zAxis.Z;
Could you just document this row-wise stacking of the axis vectors: why we use this ordering, and how it differs from stacking them column-wise (the rotation vs. its inverse)?
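(For context, the two orderings differ by a transpose, which for a rotation matrix coincides with its inverse. Assuming x, y, z denote the camera axis vectors expressed in the world frame:)

R_{\mathrm{cam}\to\mathrm{world}} = \begin{bmatrix} x & y & z \end{bmatrix}, \qquad
R_{\mathrm{world}\to\mathrm{cam}} = \begin{bmatrix} x^\top \\ y^\top \\ z^\top \end{bmatrix}
= R_{\mathrm{cam}\to\mathrm{world}}^\top = R_{\mathrm{cam}\to\mathrm{world}}^{-1}

i.e. stacking the axes as columns gives the camera-to-world rotation, while stacking them as rows gives its transpose, which is the inverse rotation.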
I have added the relevant comment in e20efda
idyntree/src/visualization/src/CameraAnimator.cpp
Lines 172 to 180 in e20efda
// Getting the rotation directly from camera results in undesired rotations when getting close to the singularity
// due to the fact that irrlicht stores rotations using RPY. As a consequence, we reconstruct the camera frame by
// having the z-axis parallel to the line connecting the camera position to the target position. The x-axis is
// reconstructed from the view frustrum. The view frustrum is the region of space that appears in the camera. It
// is basically a truncated rectangular pyramid. It is delimited by 8 points. We use the bottom left and top
// left points in the "far" plane to define the up direction, the x-axis. The y-axis is obtained by cross-product
// between the other two axes, obtaining a right-handed frame. Since irrlicht uses left-handed frames, the obtained
// frame is converted into a left-handed frame by taking its transpose (a left hand rotation corresponds to the
// inverse of a right hand rotation since the axis is the same, but the angle is on the opposite direction).
Thank you. Very clear!
Indeed, that transpose caused me a lot of thinking in the first place. Initially, it was not transposed, but the visualization was clearly wrong. So I thought I was making some mistake in the interpretation; I tried the transpose and everything went fine. Later on, when I was debugging a problem with the STL meshes being in the wrong position, I discovered this thing about irrlicht using left-handed frames, and then everything made sense 😅
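(As a minimal, self-contained illustration of the point above: a rotation about a given axis by angle θ in a right-handed convention corresponds to a rotation about the same axis by −θ in a left-handed one, and since R(axis, −θ) = R(axis, θ)⁻¹ = R(axis, θ)ᵀ, switching convention amounts to a transpose.)

#include <array>

using Matrix3 = std::array<std::array<double, 3>, 3>;

// Converting a rotation between right-handed and left-handed conventions:
// same axis, opposite angle, which for a rotation matrix is simply the transpose
// (the transpose of a rotation matrix equals its inverse).
Matrix3 toOppositeHandedness(const Matrix3& rotation)
{
    Matrix3 converted{};
    for (int row = 0; row < 3; ++row)
    {
        for (int col = 0; col < 3; ++col)
        {
            converted[row][col] = rotation[col][row];
        }
    }
    return converted;
}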
It does the following:
STL meshes.