
Hardware synchronization of multiple cameras #3504

Closed
jbohnslav opened this issue Mar 15, 2019 · 6 comments
@jbohnslav

| Required Info | |
|---|---|
| Camera Model | D435 |
| Firmware Version | 05.11.01.100 |
| Operating System & Version | Windows 10, 1803 |
| Platform | PC |
| SDK Version | pyrealsense2 2.18.1.549 |
| Language | Python |
| Segment | Multiple cameras |

Issue Description

I have multiple RealSense devices on one computer, and I would like to precisely synchronize the capture of all cameras with an external synchronization signal. It's very important for my application to know when each camera's exposure occurs, as I'm synchronizing the capture with other, non-RealSense devices.

In traditional camera hardware, the "strobe" (a voltage signal indicating exposure or frame capture) is independent from the "trigger." This means that it's possible both to measure when frame capture occurred and to manually trigger capture at any arbitrary time. Here is an example from Point Grey. You can also look at section 6.4.3 of a Basler camera manual for an example of a hardware trigger. I would like to hardware-trigger my RealSense cameras: acquire an image only when a voltage pulse is detected.

I thought that setting the camera to "slave" as follows would enable hardware triggering:

```python
import pyrealsense2 as rs

SERIAL = '555...'  # device serial number (truncated here)

resolution_width = 480
resolution_height = 270
framerate = 90

config = rs.config()
config.enable_device(SERIAL)
# Stream both IR imagers.
config.enable_stream(rs.stream.infrared, 1, resolution_width, resolution_height, rs.format.y8, framerate)
config.enable_stream(rs.stream.infrared, 2, resolution_width, resolution_height, rs.format.y8, framerate)

pipe = rs.pipeline()
prof = pipe.start(config)
dev = prof.get_device()
ds = dev.query_sensors()[0]  # stereo module
ds.set_option(rs.option.inter_cam_sync_mode, 2)  # 2 = slave
```

However, the camera still acquires data in the absence of a hardware trigger. It appears that in "slave" mode, if the camera does not detect a voltage signal, it acquires according to its own internal clock. This is quite confusing behavior. The following other issues relate to this:
#2295
#2148
#2179

I want to know the exact timing of each camera's acquisition. If it is truly impossible to hardware-trigger the RealSense, I can see two options (a configuration sketch for option 1 follows this list):

  1. Set one camera to master and the other cameras to slave. I can read the master strobe with my digital signal acquisition board.
  • Problem with this: if a slave camera drops a frame, I will still record the sync signal as though the frame was acquired. This means I no longer know when the slave camera actually acquired.
  2. Set all cameras to master, and record their triggers via the DAQ.
  • If a camera drops a frame, is a trigger signal still sent via pin 5?
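For concreteness, here's roughly what I'd try for option 1. This is just a sketch: the serial numbers are placeholders, and I'm assuming inter_cam_sync_mode 1 = master and 2 = slave, per the white paper, with the stereo module at sensor index 0 as in my snippet above.

```python
import pyrealsense2 as rs

# Placeholder serial numbers; substitute your own.
MASTER_SERIAL = '555aaa'
SLAVE_SERIALS = ['555bbb', '555ccc', '555ddd']

def start_camera(serial, sync_mode):
    """Start one camera and set its inter-camera sync mode.
    sync_mode: 0 = default, 1 = master, 2 = slave."""
    config = rs.config()
    config.enable_device(serial)
    config.enable_stream(rs.stream.infrared, 1, 480, 270, rs.format.y8, 90)
    pipe = rs.pipeline()
    profile = pipe.start(config)
    stereo_sensor = profile.get_device().query_sensors()[0]
    stereo_sensor.set_option(rs.option.inter_cam_sync_mode, sync_mode)
    return pipe

pipes = [start_camera(MASTER_SERIAL, 1)]
pipes += [start_camera(serial, 2) for serial in SLAVE_SERIALS]
```

The master's strobe on pin 5 would then be read on the DAQ while the slaves follow the sync signal.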

TL;DR: I would like the camera to acquire only when a voltage signal is sent via pin 5. If this is not possible, does the trigger signal still fire if a frame is dropped?

@agrunnet

We don’t currently have a genlock mode, but we do have HW sync. This means that you have one master and all others are slaves. Whether you use one PC or multiple, once they are connected and activated they will all acquire at exactly the same time. In this case the remaining challenge is “lining them up”. I recommend using the frame counters. They offer many benefits. Once you have lined them up once they should stay aligned (basically offset each counter differently). It also allows you to track missing frames.
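As a rough sketch of what I mean by lining them up: measure each camera's counter offset once at a shared event (an LED flash works), then subtract it from every subsequent frame number. The offset values below are made up for illustration.

```python
# Per-camera frame-counter offsets, measured once at a shared event.
# Hypothetical values for illustration.
offsets = {'555aaa': 0, '555bbb': 3, '555ccc': -1}

def aligned_frame_number(serial, raw_frame_number):
    """Map a camera's raw frame counter onto a common timeline."""
    return raw_frame_number - offsets[serial]

def count_dropped(aligned_numbers):
    """Count missing frames in a sorted list of aligned frame numbers."""
    expected = aligned_numbers[-1] - aligned_numbers[0] + 1
    return expected - len(aligned_numbers)
```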

@jbohnslav

jbohnslav commented Mar 20, 2019

Thanks for your response, @agrunnet!

This means that you have one master and all others are slaves.

I'm running some experiments to see if this strategy works.

Hardware: I have four cameras running at low resolution at 90 Hz. I have a super-bright LED that I turned on for ~2 seconds at 30-second intervals for ~10 minutes. The cameras are hooked up via the cabling described in the white paper. I'm recording the voltage at the same time, so I know the sync signal is actually being sent.
Software: One camera is set to master, and the other three to slave. I'm running on Windows 10, without making the registry edits for metadata or compiling the RealSense SDK. I'm using pyrealsense2 2.18.1.549. I didn't modify the frame queue capacity. I'm using custom software for saving the images and metadata. To get the metadata, I use:

```python
# device.pipeline is our own wrapper around rs.pipeline()
frames = device.pipeline.wait_for_frames()
framecount = frames.get_frame_number()  # per-camera frame counter
timestamp = frames.get_timestamp()      # timestamp in milliseconds
```

Analysis: I take the mean brightness for every frame, and threshold this to turn it into a square wave. The raw and cleaned data look like this:
[screenshot: raw and thresholded brightness traces]
I then detect the onsets and offsets of this square wave. Here are the frame counts when brightness increases were observed for one example camera:
[screenshot: onset frame counts]
Here are the timestamps when brightness increases were observed:
[screenshot: onset timestamps]
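For reference, the analysis is roughly the sketch below. I'm assuming frames is a numpy array of shape (n_frames, height, width); the threshold choice is arbitrary.

```python
import numpy as np

def led_onsets(frames, threshold=None):
    """Threshold per-frame mean brightness into a square wave
    and return the frame indices where the LED turns on."""
    brightness = frames.mean(axis=(1, 2))
    if threshold is None:
        # Midpoint between the dark and bright levels; crude but workable.
        threshold = (brightness.min() + brightness.max()) / 2
    square = brightness > threshold
    # Onset = frame where the signal goes low -> high.
    return np.flatnonzero(np.diff(square.astype(np.int8)) == 1) + 1
```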

I recommend using the frame counters. They offer many benefits. Once you have lined them up once they should stay aligned (basically offset each counter differently). It also allows you to track missing frames.

In my experiment, the frame counter drifted over time. If the cameras were still aligned, the difference between their frame counters at a flash of light would be constant over time; instead, it drifts. I don't see the timestamp drift described in the white paper. If the timestamps are correct, then I can just use them: they are only off by a maximum of ~100 ms, and there's no constant drift over time. If the frame counts are correct, then ~10 minutes into the recording the cameras have drifted by ~140 frames, or ~1.5 seconds, and there's no way to tell when these drifts occurred.
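The drift check itself is just the difference between cameras' onset frame counts at each flash; the numbers below are made-up illustrations of the pattern I'm seeing.

```python
import numpy as np

# Frame counters at each LED onset for two cameras (hypothetical values).
master_onsets = np.array([100, 2800, 5500, 8200])
slave_onsets = np.array([103, 2815, 5560, 8340])

# If the cameras were locked, this difference would be constant.
print(slave_onsets - master_onsets)  # e.g. [  3  15  60 140] -> drift
```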

@agrunnet

You don’t currently see the timestamps drift? Then that means they are not slaved to each other. Do you have the cables attached properly?
It also sounds like you did not quite follow all the steps required to lock the cameras to each other. Any reason why not?

@jbohnslav

You don’t currently see the timestamps drift? Then that means they are not slaved to each other. Do you have the cables attached properly?

I'm successfully reading a voltage signal from an attached DAQ board, but I'll verify the cabling today.

It also sounds like you did not quite follow all the steps required to lock the cameras to each other. Any reason why not?

What steps do you mean?

  • I tried setting ir_sensors.set_option(rs.option.frames_queue_size, 1) (sketched after this list), but any value less than 4 resulted in no frames being detected within the time limit. To set the frame queue capacity, is it required to move away from the pipeline API? This document says that the default capacity is 1 anyway.
  • Is it required to make the registry edits for accurate hardware synchronization on Windows? I tried to avoid this, as it would be nice to require just pyrealsense2 instead of such an involved installation process.
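For reference, this is roughly how I'm setting the queue size. It's a sketch: I'm assuming the first enumerated device, and that sensor index 0 is the stereo module.

```python
import pyrealsense2 as rs

ctx = rs.context()
dev = ctx.query_devices()[0]        # first connected device (assumption)
ir_sensor = dev.query_sensors()[0]  # stereo module on the D435 (assumption)
ir_sensor.set_option(rs.option.frames_queue_size, 1)
```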

@didory123

@jbohnslav I'm stumped by the same issue as you: I'm unable to see a drift in the timestamps. Have you verified your cabling and concluded that it was a faulty circuit causing this?

@RealSenseCustomerSupport
Hi @jbohnslav,
Since this ticket is still open, I just want to add that, for testing purposes, a quick way to confirm HW sync is working would be to use a high-speed LED panel or digital stopwatches, then use the RealSense Viewer to take a snapshot of the IR images.
Refer to section 2-G in the Multi-Camera Configuration white paper:
https://dev.intelrealsense.com/docs/multiple-depth-cameras-configuration
