All users are welcome to report bugs, ask questions, suggest or request enhancements, and generally feel free to open a new issue, even if they haven't followed any of the suggestions above :)
Required Info
Camera Model
D455
Firmware Version
(Open RealSense Viewer --> Click info)
Operating System & Version
Jetson Xavier NX
Kernel Version (Linux Only)
(e.g. 4.14.13)
Platform
NVIDIA Jetson
SDK Version
2.x
Language
Python
Segment
Others
Issue Description
Displaying the video using only the RGB information is very smooth, but as soon as the depth information is combined in to calculate the actual distance, it lags badly. Part of the code is as follows:
The CPU is almost fully loaded. The RGB video alone reaches 25 FPS, but when the depth information is used to calculate the actual distance, the frame rate drops to only 2-4 FPS. How can this freezing be solved?
Hi @Try-Hello With both alignment and image cropping processes, you could be placing a processing burden on the CPU. As you are using a Jetson, you should be able to reduce the CPU usage by offloading work from the CPU onto the Jetson's NVIDIA GPU if support for CUDA is enabled in the librealsense SDK. CUDA support provides acceleration in the SDK for color conversion, depth-color alignment and point cloud operations.
If you installed librealsense from packages then CUDA support should be included in the packages. If you built the SDK from source code with CMake, then CUDA support can be enabled by including the build flag -DBUILD_WITH_CUDA=true in the CMake build instruction.
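A minimal sketch of such a source build might look like the commands below. The -DBUILD_WITH_CUDA=true flag is the one named above; the other flags (Release build type, Python bindings) and the parallelism level are illustrative assumptions, so check the official librealsense Jetson installation guide for the exact set for your platform.

```shell
# Sketch: build librealsense from source with CUDA acceleration enabled.
# Assumes build prerequisites (CMake, compiler, CUDA toolkit) are installed.
git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense
mkdir build && cd build
cmake .. -DBUILD_WITH_CUDA=true \
         -DCMAKE_BUILD_TYPE=Release \
         -DBUILD_PYTHON_BINDINGS=true
make -j4
sudo make install
```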
When you mention that the FPS of the RGB stream reaches 25, do you mean 15? That is the color FPS defined in your script.
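One way to confirm the effective frame rate is to time the processing loop directly. This is a generic sketch (not librealsense-specific, and not from the original script): a rolling-average counter you would tick once per loop iteration.

```python
import time
from collections import deque

class FpsMeter:
    """Rolling-average FPS over the last `window` frame timestamps."""

    def __init__(self, window=30):
        self.stamps = deque(maxlen=window)

    def tick(self, now=None):
        """Record a frame timestamp; return the current FPS estimate
        (0.0 until at least two frames have been seen)."""
        self.stamps.append(time.perf_counter() if now is None else now)
        if len(self.stamps) < 2:
            return 0.0
        span = self.stamps[-1] - self.stamps[0]
        return (len(self.stamps) - 1) / span if span > 0 else 0.0
```

Calling `meter.tick()` once per `wait_for_frames()` iteration and printing the result occasionally would show whether the loop really runs at the configured 15 FPS or drops to 2-4.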
If your project has Auto-Exposure enabled then you may be able to enforce a constant FPS for color and depth by disabling the Auto-Exposure Priority setting using the Python code in the link below.
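Since the linked code is not reproduced here, the helper below only sketches the general pattern. It deliberately takes a duck-typed sensor object so it can be shown without hardware; with a camera attached, the real call would pass a pyrealsense2 color sensor and `rs.option.auto_exposure_priority` (option name from the librealsense Python API; verify it against your SDK version).

```python
def disable_auto_exposure_priority(sensor, option, value=0.0):
    """Turn Auto-Exposure Priority off so the stream holds a constant FPS.

    `sensor` is expected to behave like a pyrealsense2 sensor, exposing
    supports(option) and set_option(option, value); `option` would
    normally be rs.option.auto_exposure_priority. Returns True if the
    option was supported and set, False otherwise.
    """
    if sensor.supports(option):
        sensor.set_option(option, value)
        return True
    return False
```

With pyrealsense2 this might be wired up as, for example, `disable_auto_exposure_priority(profile.get_device().first_color_sensor(), rs.option.auto_exposure_priority)` - hypothetical wiring, to be adapted to your pipeline setup.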
The code excerpt referred to in the issue, in full:
#############################################################################
# NOTE: convert4cropping(), frame, bbox, class_names, depth_queue and
# color_queue are defined elsewhere in the full script.
import pyrealsense2 as rs

# Configure depth and color streams
W = 848
H = 480  # other options: (1280, 720), (640, 480), (424, 240)
pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, W, H, rs.format.z16, 15)
config.enable_stream(rs.stream.color, W, H, rs.format.bgr8, 15)
print("[INFO] start streaming...")
pipeline.start(config)
aligned_stream = rs.align(rs.stream.color)

# Get RGB and depth information
while True:
    frames = pipeline.wait_for_frames()
    frames = aligned_stream.process(frames)
    depth_frame = frames.get_depth_frame()
    color_frame = frames.get_color_frame()
    if not depth_frame or not color_frame:  # skip incomplete frame sets
        continue
    depth_queue.put(depth_frame)
    color_queue.put(color_frame)
    depth_frame = depth_queue.get()
    color_frame = color_queue.get()
    color_profile = color_frame.get_profile()
    cvsprofile = rs.video_stream_profile(color_profile)
    color_intrin = cvsprofile.get_intrinsics()
    color_intrin_part = [color_intrin.ppx, color_intrin.ppy, color_intrin.fx, color_intrin.fy]
    ppx = color_intrin_part[0]
    ppy = color_intrin_part[1]
    fx = color_intrin_part[2]
    fy = color_intrin_part[3]
    bbox_cropping = convert4cropping(frame, bbox)  # YOLOv4 detection box

    # Get the coordinates of the box
    left = bbox_cropping[0]
    top = bbox_cropping[1]
    right = bbox_cropping[2]
    bottom = bbox_cropping[3]
    width = right - left
    height = bottom - top
    bbox = (int(left), int(top), int(width), int(height))
    p1 = (int(bbox[0]), int(bbox[1]))
    p2 = (int(bbox[0] + bbox[2]), int(bbox[1] + bbox[3]))
    target_xy_pixel = [int(round((p2[0] - p1[0]) / 2) + p1[0]),
                       int(round((p2[1] - p1[1]) / 2) + p1[1])]

    # Actual distance at the box centre
    target_depth = depth_frame.get_distance(target_xy_pixel[0], target_xy_pixel[1])
    print(' class:{} , depth(m):{:.3f}'.format(class_names[0], target_depth))
#############################################################################
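The intrinsics pulled out of the color profile (ppx, ppy, fx, fy) are enough to turn a pixel plus its depth into a 3D point in the camera frame. This is the standard pinhole deprojection (what rs2_deproject_pixel_to_point computes when distortion is ignored), sketched in plain Python as a worked example rather than the script's own code:

```python
def deproject_pixel_to_point(u, v, depth_m, ppx, ppy, fx, fy):
    """Map pixel (u, v) with depth `depth_m` (metres) to a camera-frame
    (X, Y, Z) point using pinhole intrinsics; lens distortion is ignored."""
    x = (u - ppx) * depth_m / fx
    y = (v - ppy) * depth_m / fy
    return (x, y, depth_m)
```

In the script above this could be applied as `deproject_pixel_to_point(target_xy_pixel[0], target_xy_pixel[1], target_depth, ppx, ppy, fx, fy)` to get the box centre's 3D position, not just its range.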