Aligned frames returned blank depth #4224
Comments
@filipematosinov Did the issue occur if you only recorded and replayed this one camera on its own? And for the camera that works, did you also align the frames?
Basically I use the same code to align the frames from two cameras' individual .bag files. For one of the cameras (cam2) everything works fine, but for the other (cam1) the depth frame is all zeros after alignment (the RGB frame is fine). Occasionally a single pixel outputs a value, but only one.
@filipematosinov Glad to see the issue resolved. Will close the ticket accordingly. Thanks!
@RealSenseCustomerSupport the issue is not solved. Can you please re-open it, as you didn't give me any feedback?
Is "align"ment different from "synchronization" (#774)? Do we need synchronization instead?
@MartyG-RealSense can you please look into this? redM0nk is my colleague. We are facing an issue with alignment of the frames as well. While trying to pick objects on a moving conveyor, our robot is unable to go to the correct depth because the depth pixels are not aligned with the color pixels. We have a two-camera system. We use the second camera in RGB mode only; only on the first camera are we trying to align depth and color.
@tispratik As you are a Python user, a good starting point would be to check how you are implementing the alignment against Intel's Python tutorial for aligning depth and RGB streams into a single aligned image. https://github.com/IntelRealSense/librealsense/blob/jupyter/notebooks/distance_to_object.ipynb
Below is the Python code that we've been using (imports and a `config` definition added here for completeness):

```python
import pyrealsense2 as rs
import numpy as np

# Align everything to the color stream; note that the align object
# is created *before* the pipeline is started.
align_to = rs.stream.color
align_handler = rs.align(align_to)

pipeline = rs.pipeline()
config = rs.config()  # added: 'config' was used but not defined in the snippet
pipeline.start(config)

frames = pipeline.wait_for_frames()
frames = align_handler.process(frames)
color_frame = frames.get_color_frame()
depth_frame = frames.get_depth_frame()

# np.asanyarray already yields an ndarray; the extra np.array() wrapper
# in the original only forced a redundant copy.
depth_image_np = np.asanyarray(depth_frame.get_data())
color_image_np = np.asanyarray(color_frame.get_data())
```
The SDK's Python wrapper has another depth-to-color alignment example. There, the align operation is set up after the pipeline has started, not before.
@MartyG-RealSense interesting. |
Thanks @MartyG-RealSense. Another question: on the forum, some people advise applying the following change to the RGB sensor:
Do you have any insight into what the above command does, and would you recommend using that setting?
@redM0nk It looks as though that instruction would disable Auto-Exposure Priority by setting it to '0'. Disabling it, whilst having Auto-Exposure enabled, enforces a constant FPS speed instead of having the FPS able to vary depending on current environmental conditions. It is certainly worth using if you need a constant FPS instead of, for example, drifting between 50 and 60 FPS when the stream is set to 60. |
I'm using the camera in the following configuration:
Between frames, I'm doing some computation which takes roughly 30-50 ms. Sometimes I do notice frame drops, but it is mostly because the process is not consuming the frames fast enough. Since the RGB sensor is not set to auto-exposure, maybe I can disable Auto-Exposure Priority. What are your thoughts, @MartyG-RealSense?
@redM0nk Assuming the lighting is indoor and so controlled and stable (not changing much), I would be inclined to enable RGB auto-exposure to let the camera take care of the calculations. Then experiment with turning Auto Exposure Priority on or off to see how it affects FPS. |
@MartyG-RealSense we're using a 32 ms exposure with additional lighting to fight motion blur on moving objects. By default, auto-exposure on the color stream is around 156 ms... I would think the color stream is not slowing things down when we are at 32 ms. It is very important for us, though, to have depth exposure set to auto in order to get good accuracy on the depth data.
@redM0nk If you have strong lighting in the scene then it is possible to reduce exposure to around 1 ms, which also helps to avoid motion artifacts. It sounds as though you have a manual exposure setup that works well for your application though. |
@MartyG-RealSense thanks for the help. Is there anything else I can do to optimize this?
@redM0nk It is possible to change the size of the frame queue to change the balance between performance and latency. The more that it is weighted towards performance though, the higher the risk that there will be skipped frames. https://dev.intelrealsense.com/docs/frame-management#section-frame-drops-vs-latency Bear in mind though that on less powerful hardware, running a logging / profiling program at the same time as your own program may in itself be a cause of lag as the logging consumes processing power. |
Makes sense. Regarding logging, yes, that's true. We have powerful hardware (an i9 CPU), and logging is currently done in a separate background process (to avoid any unnecessary I/O).
If you have a powerful computer with a lot of memory then an SDK feature called Keep() may give you better performance. It stores frames in memory instead of writing them to storage. Once the pipeline is closed, you can perform a batch operation such as post-processing or alignment on the stored frames and then save them all at the same time. Because Keep() holds the frames in memory, though, the recording period should be limited to roughly 30 seconds before closing the pipeline; otherwise the computer would run out of memory. The link below is an informative read.
Issue Description
I am having an issue with alignment. I have recorded a bag file, and when replaying it I get an empty (i.e. all zeros) depth frame after aligning the frames (it is fine before alignment), while the color frame is always okay. Any clue why this is happening?
Something interesting is that I am recording two cameras at the same time and only one has problems. Here is the code I am using to record them: