
Aligned frames returned blank depth #4224

Closed
filipematosinov opened this issue Jun 17, 2019 · 21 comments

Comments

@filipematosinov


Required Info
Camera Model: D400
Firmware Version: latest
Operating System & Version: Linux (Ubuntu)
Kernel Version (Linux Only): 4.18.0-21-generic
Platform: UP Board Squared
SDK Version: 2.23.0
Language: Python
Segment:

Issue Description

I am having an issue with alignment. I recorded a bag file, and when I replay it the depth frame comes back empty (i.e. all zeros) after aligning the frames (before alignment it is fine), while the color frame is always fine. Any clue why this is happening?
Something interesting is that I am recording two cameras at the same time and only one of them has the problem. Here is the code I am using to record them:

import os
import sys
import time

import numpy as np
import pyrealsense2 as rs

# Recording duration in seconds (first command-line argument, default 30)
if len(sys.argv) > 1:
  record_time = int(sys.argv[1])
else:
  record_time = 30

path = 'Data/bag/'

# Second argument selects local mode; exit if it is missing
if len(sys.argv) > 2:
  local = (sys.argv[2] == 'local')
else:
  exit()

# Third argument is the number of cameras to record (default 1)
if len(sys.argv) > 3:
  n_cams = int(sys.argv[3])
else:
  n_cams = 1

# Recording id, incremented on every run and persisted to disk
if os.path.isfile('rec_id.npy'):
  rec_id = np.load('rec_id.npy', allow_pickle=True) + 1
else:
  rec_id = 0
np.save('rec_id.npy', rec_id)

def start_record(cam_id=1):
  # Start a pipeline that records depth and color to a bag file
  pipeline = rs.pipeline()
  config = rs.config()

  path_name = path + str(rec_id) + "_cam_" + str(cam_id) + ".bag"
  config.enable_stream(rs.stream.depth, 848, 480, rs.format.z16, 15)
  config.enable_stream(rs.stream.color, 848, 480, rs.format.rgb8, 15)
  config.enable_record_to_file(path_name)

  pipeline.start(config)

  return pipeline, path_name

pip, path_name = start_record(1)

if n_cams == 2:
  pip2, path_name2 = start_record(2)

print("Start recording")
time0 = time.time()

# Record for the requested duration
while (time.time() - time0) < record_time:
  time.sleep(0.1)

pip.stop()
if n_cams == 2:
  pip2.stop()
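For reference, a minimal sketch of the playback and alignment side (the replay code was not posted in this issue, so the bag file name and the processing steps here are assumptions):

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
# File name is hypothetical; point this at the recorded bag
config.enable_device_from_file("Data/bag/0_cam_1.bag", repeat_playback=False)

pipeline.start(config)
align = rs.align(rs.stream.color)  # map depth onto the color stream

frames = pipeline.wait_for_frames()
aligned = align.process(frames)
depth = np.asanyarray(aligned.get_depth_frame().get_data())
print("non-zero depth pixels after alignment:", np.count_nonzero(depth))

pipeline.stop()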

@RealSenseCustomerSupport
Collaborator


@filipematosinov Does the issue occur if you record and replay only the problem camera? And for the other, good camera, did you also align the frames?

@filipematosinov
Author

filipematosinov commented Jul 19, 2019

Basically I use the same code to align the frames from the two cameras' individual .bag files. While for one of the cameras (cam 2) everything goes fine, for the other (cam 1) I just get an all-zero depth frame after alignment (the RGB is fine). Sometimes there is a single pixel that outputs some value, but only one.

@RealSenseCustomerSupport
Collaborator


@filipematosinov Glad to see the issue resolved. Will close the ticket accordingly. Thanks!

@filipematosinov
Author

@RealSenseCustomerSupport the issue is not solved. Can you please re-open it, as you didn't give me any feedback?

@redM0nk

redM0nk commented Mar 16, 2020

Hello,

I'm noticing a similar issue when I align the depth frame to the color frame.
The camera is mounted at a height of about 700 mm, facing down at an empty conveyor.

Required Info
Camera Model: D415
Firmware Version: 5.12.3.00
Operating System & Version: Linux (Ubuntu 16.04 LTS)
Kernel Version (Linux Only): 4.15.0-88-generic
Platform: PC
SDK Version: 2.33.1
Language: Python
Segment:

RGB frame (facing down at the empty conveyor): [color image]

Corresponding depth frame: [depth image]

After aligning: the depth map above contains 0 for every pixel.
Before aligning: the depth map contained plausible values.

Alignment is needed in my case because the depth map was slightly offset from the RGB color image when alignment was disabled.

Any feedback is appreciated.

@tispratik

tispratik commented Mar 16, 2020

Is "align"ment different than "synchronization" #774 .... do we need synchronization instead?

@tispratik

tispratik commented Mar 17, 2020

@MartyG-RealSense can you please look into this? redM0nk is my colleague. We are facing an issue with alignment of the frames as well. While trying to pick objects on a moving conveyor, our robot is unable to go to the correct depth because the depth pixels are not aligned with the color pixels.

We have a two-camera system. We use the second camera in RGB mode only; only on the first camera are we trying to align depth and color.

@MartyG-RealSense
Collaborator

@tispratik As you are a Python user, a good starting point would be to check how you are implementing the alignment against Intel's Python tutorial for aligning depth and RGB streams into a single aligned image.

https://github.com/IntelRealSense/librealsense/blob/jupyter/notebooks/distance_to_object.ipynb

@tispratik

tispratik commented Mar 17, 2020

Below is the Python code that we've been using:

import numpy as np
import pyrealsense2 as rs

# Stream configuration was omitted from the original snippet
config = rs.config()

align_to = rs.stream.color
align_handler = rs.align(align_to)

pipeline = rs.pipeline()
pipeline.start(config)

frames = pipeline.wait_for_frames()
frames = align_handler.process(frames)
color_frame = frames.get_color_frame()
depth_frame = frames.get_depth_frame()
depth_image_np = np.asanyarray(depth_frame.get_data())
color_image_np = np.asanyarray(color_frame.get_data())

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 17, 2020

The SDK's Python wrapper has another depth-to-color alignment example. In that example, the align operation is set up after the pipeline has started, not before.

https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/align-depth2color.py
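For reference, a minimal sketch of that ordering, following the pattern of the example linked above (the resolution and FPS here are just placeholders): the align object is created only after pipeline.start().

import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, 30)

pipeline.start(config)              # start the pipeline first...
align = rs.align(rs.stream.color)   # ...then create the align object

try:
    while True:
        frames = pipeline.wait_for_frames()
        aligned = align.process(frames)
        depth_frame = aligned.get_depth_frame()
        color_frame = aligned.get_color_frame()
        if not depth_frame or not color_frame:
            continue
        depth_image = np.asanyarray(depth_frame.get_data())
        color_image = np.asanyarray(color_frame.get_data())
finally:
    pipeline.stop()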

@redM0nk

redM0nk commented Mar 17, 2020

@MartyG-RealSense interesting.
Perhaps the order is important; we are initializing the align process before starting the pipeline.
I will change the order and try again.

@redM0nk

redM0nk commented Mar 17, 2020

Thanks @MartyG-RealSense.
After applying the above changes and power-cycling the RealSense device, it is now giving out good depth frames.

Another question: on the forum, some people advise applying the following change to the RGB sensor:

sensor.set_option(rs.option.auto_exposure_priority, 0.0)

Do you have any insight into what the above command does, and would you recommend using this setting?

@MartyG-RealSense
Collaborator

MartyG-RealSense commented Mar 17, 2020

@redM0nk It looks as though that instruction disables Auto-Exposure Priority by setting it to 0. Disabling it while Auto-Exposure is enabled enforces a constant FPS instead of allowing the FPS to vary with current environmental conditions. It is certainly worth using if you need a constant FPS instead of, for example, drifting between 50 and 60 FPS when the stream is set to 60.
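For anyone following along, a minimal sketch of applying that option to the RGB sensor, assuming the color sensor is looked up from the active pipeline profile:

import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

# Look up the RGB sensor on the active device
color_sensor = profile.get_device().first_color_sensor()

if color_sensor.supports(rs.option.auto_exposure_priority):
    color_sensor.set_option(rs.option.enable_auto_exposure, 1)      # keep auto-exposure on
    color_sensor.set_option(rs.option.auto_exposure_priority, 0.0)  # enforce a constant FPS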

@redM0nk

redM0nk commented Mar 17, 2020

I'm using the camera in the following configuration:

RGB: 1280x720 @ 30 FPS with exposure set to 32 ms
Depth: 1280x720 @ 30 FPS with auto-exposure

Between frames I'm doing some computation that takes roughly 30-50 ms. I do sometimes notice frame drops, but that is mostly because the process is not consuming the frames fast enough.

Since the RGB sensor is not set to auto-exposure, maybe I can disable auto-exposure priority. What are your thoughts, @MartyG-RealSense?

@MartyG-RealSense
Collaborator

@redM0nk Assuming the lighting is indoor and so controlled and stable (not changing much), I would be inclined to enable RGB auto-exposure to let the camera take care of the calculations. Then experiment with turning Auto Exposure Priority on or off to see how it affects FPS.

@tispratik

@MartyG-RealSense we're using 32 ms with additional lighting to fight motion blur on moving objects. Normally the default auto-exposure on color is 156 ms... I would think the color stream is not slowing down when we are at 32 ms. It is very important for us, though, to keep the depth exposure set to auto in order to get good accuracy on the depth data.

@MartyG-RealSense
Collaborator

@redM0nk If you have strong lighting in the scene then it is possible to reduce exposure to around 1 ms, which also helps to avoid motion artifacts. It sounds as though you have a manual exposure setup that works well for your application though.

@redM0nk

redM0nk commented Mar 18, 2020

@MartyG-RealSense thanks for the help.
After enabling the align-frame logic I'm noticing some frame drops.
Below is a snapshot from Datadog (where I'm logging the FPS): [FPS chart image]

Anything I can do to optimize this?
I'm also looking into the following approach to help with the FPS drop:
https://github.com/IntelRealSense/librealsense/blob/master/wrappers/python/examples/frame_queue_example.py

@MartyG-RealSense
Collaborator

@redM0nk It is possible to change the size of the frame queue to adjust the balance between performance and latency. The more it is weighted towards performance, though, the higher the risk of skipped frames.

https://dev.intelrealsense.com/docs/frame-management#section-frame-drops-vs-latency

Bear in mind, though, that on less powerful hardware, running a logging / profiling program at the same time as your own program may itself cause lag, as the logging consumes processing power.
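A minimal sketch of feeding the pipeline into a frame queue, in the style of the frame_queue_example.py linked above (the capacity value here is only an illustration):

import pyrealsense2 as rs

# Larger capacity favours throughput (fewer dropped frames) at the cost of latency
queue = rs.frame_queue(50)

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, 30)
pipeline.start(config, queue)   # frames are delivered straight into the queue

align = rs.align(rs.stream.color)
frames = queue.wait_for_frame().as_frameset()
aligned = align.process(frames)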

@redM0nk

redM0nk commented Mar 18, 2020

Makes sense.
@MartyG-RealSense Is it possible to store aligned frames (instead of regular frames) in the frame queue buffer? I looked for an example implementation but couldn't find one.

Regarding logging, yes, that's true. We have powerful hardware (an i9 CPU) and logging is currently done in a separate background process (to avoid any unnecessary I/O).

@MartyG-RealSense
Collaborator

If you have a powerful computer with a lot of memory then an SDK feature called Keep() may give you better performance. It stores frames in memory instead of writing them to storage. When the pipeline is closed, you can then perform a batch operation such as post-processing or alignment on the stored frames and save them all at the same time.

Because Keep() holds the frames in memory, the recording period may be limited to about 30 seconds before closing the pipeline; otherwise the computer would run out of memory.

The link below is an informative read.

#1000
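A minimal sketch of the Keep() approach described above (the frame count is only an illustration; keep the capture window short so memory is not exhausted):

import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

kept = []
for _ in range(450):        # roughly 30 seconds at 15 FPS
    frames = pipeline.wait_for_frames()
    frames.keep()           # stop the SDK from recycling this frameset's memory
    kept.append(frames)

pipeline.stop()

# Batch processing after the pipeline is closed, e.g. align everything at once
align = rs.align(rs.stream.color)
aligned_sets = [align.process(f) for f in kept]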
