Slow capture time versus the hardware sync option? #9583
Comments
Hi @sklipnoty My understanding from the L515 multiple camera paper is that the 'up time' of the pulse determines how many frames are captured. There is an initial setup time of 60 ms, and then each captured frame uses 33 ms, so 60 + (33 x 3) = 159 ms, which rounds up to 160 ms.

My interpretation is therefore that, in theory, a longer up-time could result in more than 3 frames being captured, so long as there was time for another 33 ms cycle to complete before the up-time ended, with each additional 33 ms that fits into the total up-time yielding an additional capture. This would be in line with the L515 paper's statement that when applying an external sync pulse, the HW SYNC input pulse width determines how many images will be taken. If, for example, the up-time was 320 ms instead of 160 ms, then you could fit in 7 consecutive frame captures instead of 3: 60 + (33 x 7) = 291 ms, with no room left before the 320 ms up-time ends to complete an eighth 33 ms capture, because the total would be 324 ms.

The above is my personal interpretation of the paper rather than an official documentation statement, as I have not personally used an L515 multicam system. In the L515 multicam case at the link below, a RealSense team member confirms that a full 33 ms is required for each capture.

Could you therefore confirm whether the problem you are experiencing is that it takes 2.712 seconds to capture 3 frames instead of 160 ms, please? If this is not the case, please provide further information about your problem. Thanks!

In regard to your post-processing filters, I note that you are setting both depth_to_disparity and disparity_to_depth to False:

depth_to_disparity = rs.disparity_transform(False)

My understanding is that one of them must be True and the other False - they cannot both be set to the same status, as confirmed in #2399 (comment)
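For reference, a minimal sketch of the corrected filter pairing (the two transforms are real SDK processing blocks; the surrounding pipeline setup is assumed):

```python
import pyrealsense2 as rs

# Correct pairing: depth->disparity takes True, disparity->depth takes False.
depth_to_disparity = rs.disparity_transform(True)
disparity_to_depth = rs.disparity_transform(False)

# Typical usage on a depth frame from an already-started pipeline:
# disparity = depth_to_disparity.process(depth_frame)
# depth     = disparity_to_depth.process(disparity)
```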
Hello @MartyG-RealSense, thanks for the quick response. No, my problem relates to capturing without trigger cables (and thus without hardware sync) versus capturing with hardware sync. I don't have trigger cables yet; I am working on that. I am just very surprised that the difference is that big, and I was wondering if I am doing something wrong?

Practical (which I tested)

Theoretical (using hardware sync - I am still in the process of acquiring trigger cables and setting that up)

The difference between those two is huge, so I am wondering if I am doing anything wrong in my script?
If you are aiming not to use hardware sync, then that explains why your script above does not have an Inter_Cam_Sync_Mode definition to set it as a Slave camera. The eight L515 cameras will therefore be operating unsynced, independently of each other, and be vulnerable to interfering with one another. Without hardware sync to reduce interference, you may have to place the L515 cameras just out of range of each other if they are facing towards one another, so that their fields of view do not overlap, as indicated in the center panel of the image from the L515 multiple camera paper.

Basically, you would not have to get involved with sync cabling or pulse widths at all and could ignore that aspect of L515 multicam systems completely. You do not need hardware sync if the eight cameras are going to capture independently, each capturing at approximately the same time but unaware of each other's existence.

Also, the code of your script looks suited to single-camera use. This will be okay if each camera is on a separate computer with its own copy of the script. If all eight cameras are attached to the same computer and controlled with one script, though, then it would be recommendable to build your script around multicam-supporting code that can detect all attached cameras and automatically build a list of devices. An example of this is the Python example project multiple_realsense_cameras.py in the link below.

https://github.com/ivomarvan/samples_and_experiments/tree/master/Multiple_realsense_cameras
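A minimal sketch of that multicam pattern (device enumeration via rs.context(), one pipeline per serial number; resolutions and error handling omitted):

```python
import pyrealsense2 as rs

# Enumerate all attached RealSense devices and start one pipeline each.
ctx = rs.context()
pipelines = []
for dev in ctx.query_devices():
    serial = dev.get_info(rs.camera_info.serial_number)
    cfg = rs.config()
    cfg.enable_device(serial)          # bind this pipeline to one camera
    pipe = rs.pipeline(ctx)
    pipe.start(cfg)
    pipelines.append((serial, pipe))

# Grab one frameset per camera.
for serial, pipe in pipelines:
    frames = pipe.wait_for_frames()
    print(serial, frames.get_depth_frame().get_timestamp())
```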
I am trying to understand why the capture time claimed in the white paper (~160 ms) is so fast compared to capturing 3 frames in Python? I don't quite get the difference. I am currently using multiple Odroids with the same version of librealsense and Ubuntu (but this is not that relevant to my question ...)
As you are not using hardware sync, it may be best not to follow the advice of the L515 multiple camera paper, except for the part about having the cameras spaced far enough apart for their fields of view not to overlap and cause mutual interference. Otherwise, you will be considering a lot of factors that are not relevant to your particular non-synced project.

If you are not using a sync system, then that also frees you from the need to capture three frames; you can instead capture a single frame. The save_single_frameset() SDK instruction may be helpful for this. A complete standalone pyrealsense2 script that demonstrates the instruction is in the link below. You should of course edit the resolutions to ones that are L515-compatible.

https://github.com/soarwing52/RealsensePython/blob/master/separate%20functions/single_frameset.py

Another example saves depth as an array of scaled matrices in an .npy file. These two example scripts will at least provide a means of benchmarking their write speed against your own script.
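A minimal sketch of save_single_frameset() usage (stream settings here are assumptions and should be replaced with L515-compatible ones):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)

frames = pipeline.wait_for_frames()

# save_single_frameset is a processing block that writes one frameset
# to a .bag file when a frame is passed through it.
saver = rs.save_single_frameset()
saver.process(frames)

pipeline.stop()
```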
Hi @sklipnoty Do you require further assistance with this case, please? Thanks!
Hello @MartyG-RealSense I am still in the process of digesting your last comment. Today my JST SH hardware plugs arrived and I am planning to do some hardware sync testing. However, is there a Python snippet somewhere? The white paper's code is in C++?
The L515 Multi Camera Configurations white paper is labeled as using Python code, though it does look like C++ code to me. A simple example of a Python script that configures Inter Cam Sync Mode can be found at #3504

On the L515, Inter Cam Sync Mode should be set to '1'. This sets the L515 camera as a Slave camera. There are only slaves in an L515 multiple camera hardware sync system, as the master trigger pulse is provided by an external signal generator device.
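A minimal sketch of setting that option in pyrealsense2 (the option name is the real SDK enum; treat the rest as a starting point rather than a definitive implementation):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
profile = pipeline.start()

depth_sensor = profile.get_device().first_depth_sensor()
if depth_sensor.supports(rs.option.inter_cam_sync_mode):
    depth_sensor.set_option(rs.option.inter_cam_sync_mode, 1)  # 1 = slave on L515
```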
I was able to do a first hardware-synced scan, which indeed is a lot faster: < 2 seconds for 6 cameras at this moment. I need to do further testing, will get back to you.
Thanks very much @sklipnoty for the update - I look forward to hearing the results of your further tests. Good luck!
So we managed to hook up all cameras and achieve faster timing than before. Currently, for 6 cameras we are able to achieve 1.8 seconds acquisition time, capturing 4 frames per camera. We can live with that, and it is a huge speed bump compared to capturing over Ethernet (also no interference). So I have another question: what kind of filtering do you recommend for the L515?

It seems to me that disparity is difficult for the L515, since it is not really stereo?
So we should only use the temporal filter? And leave out the disparity (which makes sense ...)
Yes, leave out the disparity.
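For reference, a minimal sketch of depth capture with only the temporal filter applied (default stream settings assumed; the filter works best when fed consecutive frames):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
pipeline.start()

temporal = rs.temporal_filter()   # the only post-processing filter used here

for _ in range(10):               # temporal filtering averages across frames
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    filtered = temporal.process(depth)

pipeline.stop()
```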
I was wondering if there is an example of how to explicitly align the depth to the color frames? Since I am only capturing a fixed number of depth frames, e.g.:

```python
for x in range(6):
    frames = pipeline.wait_for_frames()
```
How would I know they are well-aligned (timestamp-wise)?
In #1548 a RealSense team member advises that wait_for_frames() provides a 'best match' of frame timestamps when syncing between depth and RGB streams.
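A minimal sketch of explicit depth-to-color alignment with rs.align (the pattern used in the SDK's align examples; stream settings left at defaults):

```python
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth)
config.enable_stream(rs.stream.color)
pipeline.start(config)

align = rs.align(rs.stream.color)     # map depth pixels onto the color frame

frames = pipeline.wait_for_frames()   # frameset already best-matched by timestamp
aligned = align.process(frames)
depth = aligned.get_depth_frame()
color = aligned.get_color_frame()

# Comparing timestamps indicates how closely the pair is matched:
print(depth.get_timestamp(), color.get_timestamp())

pipeline.stop()
```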
Hi @sklipnoty Do you require further assistance with this case, please? Thanks!
Currently no. So, perhaps for future reference: we were able to use hardware syncing to reduce our total acquisition time to 2 seconds for 6 cameras. The results are sufficient for now! The connectors are a pain, though; I would highly advise you to place the connectors a lot closer to the outer shell of the devices, so that plugging in and out is easier ... But anyway, we managed to do it, so all's well that ends well. Thanks for the support @MartyG-RealSense
Thanks very much @sklipnoty for the update!
Issue Description
On https://dev.intelrealsense.com/docs/lidar-camera-l515-multi-camera-setup, a setup is described using hardware triggers for the L515. One figure is of particular interest to me, namely Figure 12 (image not reproduced here), with the following explanation:
> Figure 12. Example timing diagram of trigger signals sent to the 8 different L515 cameras used in the body scanning demonstration. Each camera (placed in slave mode) will only capture during signal-high. This diagram shows how pairs of cameras were turned on at a time, for 160 ms captures, for a total capture time of 640 ms.
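(As a sanity check on those numbers, assuming the 60 ms setup + 33 ms per frame figures discussed above:)

```python
# Sanity check on Figure 12, assuming 60 ms setup + 33 ms per captured frame.
SETUP_MS, FRAME_MS = 60, 33

per_window_ms = SETUP_MS + 3 * FRAME_MS  # 159 ms, rounded up to 160 ms
groups = 8 // 2                          # 8 cameras triggered in pairs
total_ms = groups * 160                  # 4 x 160 ms = 640 ms
print(per_window_ms, total_ms)           # -> 159 640
```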
As far as I understand, for each camera a total of 3 depth frames (and color frames) are captured per 160 ms window? Now I am struggling to simply get down to this speed without trigger cables. What I want to do is run a script on some Odroids capturing 3 depth frames, keeping the last one, and sending that one to the cloud for processing. However, I cannot seem to get my timings anywhere close to the claimed 160 ms. I was wondering if anyone can point me in the right direction here.
Currently using this Python code (the original script embed is not preserved here):
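(For readability, a hypothetical minimal reconstruction of the kind of capture loop described - not the author's original script; stream settings are assumptions:)

```python
# Hypothetical sketch (not the original script): capture 3 framesets on
# one camera and time the loop. Stream settings are assumed values.
import time
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 960, 540, rs.format.rgb8, 30)
pipeline.start(config)

start = time.time()
for _ in range(3):                # capture 3 framesets, keep the last one
    frames = pipeline.wait_for_frames()
print("Captured frames in ~{} secs".format(time.time() - start))

pipeline.stop()
```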
This succeeds in getting both depth and color frames, but at a speed of `Captured frames in ~2.7120392322540283 secs`.
So I am probably making a very obvious mistake or misunderstanding something here, but could someone enlighten me as to why this difference is so big? (I am trying to scan people; the shorter the acquisition time, the fewer problems in registration.)