@marcizhu, in general I think your idea is quite good, and I am open to having backend-specific, shader-accelerated support for our plotting functions so long as we maintain a pure ImGui fallback. Similar GPU acceleration could be applied to thick-line triangulation as well (#185 is tangentially related). I think the logical approach is to start slow: build a few examples that work in OpenGL, and then maybe explore other backends in the future. I'd love to explore this together, so let's keep the discussion going. I've been meaning to look into custom ImGui draw commands, as I think they are our path forward for inserting platform-specific drawing code. Are you at all familiar with the process?
I've been thinking about this for a while, and I think we could greatly improve the performance of heatmaps if we change the way the rendering works. If I'm not mistaken, right now the CPU computes the color for each cell, stores that data in a 2D array of some sort, and then sends the GPU a batch of quads to render, each one with its own color and coordinates. This approach is quite effective and fast enough for most uses, but it burns CPU time on work that could be done ~90% on the GPU.
This is what I suggest: each colormap (the 1D array of colors defining the range of colors used to render the heatmap) would be stored on the GPU as a `texture1D`. The input data (the raw 2D array of points) would be a `texture2D` of `float`/`int` (as desired, probably `float`), and using a pretty basic fragment shader we render the heatmap onto an OpenGL framebuffer. Finally, instead of rendering all the quads as we do now, we just render a single quad and use that framebuffer as a texture. As far as I know, ImPlot already supports rendering images on plots, so this change shouldn't be too hard to implement.

The proposed solution has quite a few benefits, mainly:
- Lower memory usage: each point needs only a single `float`/`int`, instead of one `uint32_t` RGBA or four `float32` values for colors.

However, I know this change has its own drawbacks. For example:
- Even though the required fragment shader is trivial (basically `pixel = texture(colormap, mix(0.0f, 1.0f, texture(heatmap_data, coords)));`), we still need to handle creation, destruction and binding of shaders and framebuffers. This con is not so bad though, as it would open the door to similar optimizations for other kinds of plots, or even to post-processing effects if the user wants them.
- Axis transformations (`Lin-Log`, `Log-Lin` or `Log-Log`) must be implemented in the shader and selected through a shader uniform (probably a couple of booleans, each one specifying whether the X or Y axis is logarithmic).

As usual, I'm open to any feedback, discussion or proposal. Maybe this is a dumb idea, but maybe it would allow us to draw massive heatmaps with close to zero CPU usage, less memory, and full use of the GPU's rendering capabilities :D
Let me know what you guys think! 😄