wait_for_frames() seemingly retrieving framesets from the past #13397
Hi @ndaley7 You are correct, Frames 2, 4 and 6 have the same timestamp (10:55.47). That would suggest to me that 'hiccups' are taking place during streaming, where a new frame has a problem and is judged to be 'bad'. In that situation, the RealSense SDK jumps back to the last known good frame and progresses forwards again from that point. So the timestamps of the frames may appear out of order because the SDK is jumping back repeatedly to a previous 'good' frame.
Thank you for the quick response @MartyG-RealSense. Most importantly, what could cause this kind of issue? I also don't see this issue when recording a rosbag with the same settings and then single-stepping through the frames. Speaking of settings, however, I have only changed a few from the default values:
I had the same thought about why 11:04.01 would not be judged to be a good frame. #7837 (comment) offers an alternative explanation for why old frames may be returned when wait_for_frames() is used. FPS has a direct relationship with the exposure value - as described at #1957 (comment) - so setting a very small manual exposure value of 1732 for the depth sensor could have unpredictable consequences (the default depth exposure is 33000).
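For illustration, here is a minimal C++ sketch of setting a manual depth exposure like the one being discussed; the 1732 and 33000 values come from the comments above, while the rest of the setup (default pipeline, no error handling) is an assumption:

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    rs2::pipeline pipe;
    rs2::pipeline_profile profile = pipe.start();

    // Switch the depth sensor to a fixed manual exposure.
    rs2::depth_sensor sensor = profile.get_device().first<rs2::depth_sensor>();
    sensor.set_option(RS2_OPTION_ENABLE_AUTO_EXPOSURE, 0.f); // turn off auto-exposure
    sensor.set_option(RS2_OPTION_EXPOSURE, 1732.f);          // very short exposure; the default is 33000

    return 0;
}
```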
The exposure time I set was the minimum allowable amount according to the RealSense Viewer program, and it also does not cause this kind of issue when recording a rosbag file there. If internally the frames are being queued asynchronously as mentioned here: then why are the same problems not seen?
I note that the following lines appear twice in different parts of your script.
The code might be more reliable if both blocks under the 'for int' 6-frame counting condition were placed under a single one, if it is possible to do so. Something like this:
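(The snippet originally posted here is not preserved in this copy of the thread. A hedged C++ sketch of the suggested structure, with the auto-exposure warm-up skip and the 6-frame collection handled in one place, might look like this; the skip count and stream settings are assumptions:)

```cpp
#include <librealsense2/rs.hpp>
#include <vector>

int main()
{
    rs2::pipeline pipe;
    pipe.start();

    // Skip the first frames so that auto-exposure can settle after the stream starts.
    for (int i = 0; i < 30; ++i)
        pipe.wait_for_frames();

    // Then collect 6 consecutive framesets under a single counting loop.
    std::vector<rs2::frameset> framesets;
    for (int i = 0; i < 6; ++i)
        framesets.push_back(pipe.wait_for_frames());

    // ... process or save the 6 framesets here ...
    return 0;
}
```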
That code was set up only to help debug this problem and nothing more. The actual application where we use wait_for_frames() is below (and where we first started noticing frames coming from the past).
Hello. I also made an alternate function using poll_for_frames() and saw similar results.
The 'for int' instruction directly before the wait_for_frames() line would likely not be collecting 6 frames. It would instead cause the program to skip past the first several frames after the camera pipeline starts. Such a mechanism is usually used to avoid frames with bad exposure in the short time period whilst the auto-exposure is still settling down after the stream is activated. This mechanism is demonstrated in the official C++ / OpenCV example program at the link below.
How would you recommend I go about collecting 6 frames in a row once a signal is sent?
The genlock hardware sync system allows the number of frames sent after a trigger signal is received to be defined. This feature is called a burst count. My understanding is that the value to set is calculated as 4 + the number of frames required. So to produce 6 frames when the trigger signal is received, the camera's Inter Cam Sync Mode hardware sync option would be set to '10' (which is 4 + 6 frames). Once the required number of frames has been generated, the camera will then wait for the next trigger signal before it produces another set of frames. Intel no longer provides technical support for genlock mode and the online documentation for it was removed. However, genlock is still supported within the RealSense SDK if you need to use it. You can access an archived PDF version of the removed documentation at the link below. The FPS of the camera in genlock mode should be set to 2x the frequency of the trigger signal. So if the trigger's frequency is 30 Hz then the camera's FPS should be set to 60.
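For illustration only (the D405 discussed later in this thread has no sync pins, so this would not apply to it), a hedged sketch of requesting a 6-frame burst by setting Inter Cam Sync Mode to 10, assuming a camera and firmware that support genlock:

```cpp
#include <librealsense2/rs.hpp>

int main()
{
    rs2::context ctx;
    rs2::device dev = ctx.query_devices()[0];
    rs2::depth_sensor sensor = dev.first<rs2::depth_sensor>();

    // Burst count = 4 + number of frames wanted, so 10 requests 6 frames per trigger.
    if (sensor.supports(RS2_OPTION_INTER_CAM_SYNC_MODE))
        sensor.set_option(RS2_OPTION_INTER_CAM_SYNC_MODE, 10.f);

    return 0;
}
```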
I'll note this down as a potential alternate option. Does this method also do the valid frame checking that poll_for_frames() and wait_for_frames() conduct? I'd also like to focus back on the original issue: is there no way to use wait_for_frames() or poll_for_frames() and ensure it's not grabbing a previous set of "good" frames? I want to make sure that is the case before I have to change the whole structure of our code.
As far as I am aware, genlock sync does not check whether a frame is valid when generating 'burst' frames. With poll_for_frames, new frames arrive instantly without a wait period and are not subject to checks to confirm if the frame is complete before it becomes available. According to #1686 (comment), a condition under which the SDK can return to the last known good frame is if frame drops occur on the stream, suggesting that the chances of going back to a previous frame are reduced if frame drops are minimized. A way to do this is to increase the frame queue size of a stream so that it can hold a greater number of frames simultaneously.
Oooh I had not heard of this before. Is there a link to any examples of implementing the frame queue, and are there detrimental effects to increasing it?
There is an official C++ example script for setting the queue size at the link below. The default value for the queue is '1', and this script sets it to '10'. The main detrimental side-effect of increasing the frame queue size is that the program consumes more of the computer's available memory due to having to hold a greater number of frames simultaneously. By default, when the capacity is '1', the queue can hold 16 frames of each stream type at a time, with the oldest frames dropping out of the queue as new frames enter, like a continuously moving conveyor belt.
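A hedged C++ sketch of one way a larger frame queue can be used; the capacity of 10 mirrors the example described above, while the producer/consumer thread layout and frame counts are assumptions:

```cpp
#include <librealsense2/rs.hpp>
#include <atomic>
#include <thread>

int main()
{
    // A queue that can hold up to 10 framesets instead of the default 1.
    rs2::frame_queue queue(10);

    rs2::pipeline pipe;
    pipe.start();

    std::atomic<bool> running{ true };

    // Producer: push every frameset into the queue as soon as it arrives.
    std::thread producer([&]() {
        while (running)
            queue.enqueue(pipe.wait_for_frames());
    });

    // Consumer: take framesets out of the queue at its own pace.
    for (int i = 0; i < 100; ++i)
    {
        rs2::frameset fs = queue.wait_for_frame().as<rs2::frameset>();
        // ... process fs ...
    }

    running = false;
    producer.join();
    return 0;
}
```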
Hi @ndaley7 Do you require further assistance with this case, please? Thanks!
I'm still testing things, but I am curious about the syncer... Is it possible to use it with the D405, and if so, is there an example showing where the color sensor is referenced from?
If you mean hardware sync then no, the D405 model does not have the set of physical 'sync pin' connectors that is required for hardware sync. My apologies for overlooking that you had a D405 when suggesting it.
Thanks very much for the clarification. I do not see any reason why you could not use Syncer code with a D405. However, the D405 does not have a separate RGB sensor and instead obtains its RGB image from the depth sensor. So when calling RGB options in a script, you use depth_sensor instead of color_sensor (you do not need to define a color sensor). As depth and RGB are provided by the same sensor, stopping depth will also stop color, so you do not need a color stop command either.
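As a small hedged illustration of that point: on a D405 the depth sensor's own stream profile list should already include the Color stream, so there is no separate color sensor to query (the device indexing and printout are assumptions):

```cpp
#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
    rs2::context ctx;
    rs2::device dev = ctx.query_devices()[0];

    // On the D405 there is no separate RGB sensor; the depth sensor
    // exposes the Depth, Infrared and Color stream profiles itself.
    rs2::depth_sensor depth_sensor = dev.first<rs2::depth_sensor>();
    for (auto&& sp : depth_sensor.get_stream_profiles())
        std::cout << sp.stream_name() << std::endl;

    return 0;
}
```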
I get a frame_ref error when it tries to get the color frame from the captured frameset. For comparison, the frameset has 2 elements when retrieved via the pipeline. I seem to only be getting the depth frame when I try this modification to the sample code (for configuration specification): Configuration:
Frame Retrieval:
@MartyG-RealSense I think we can rule out a settings issue as well. I enabled the pipeline recording functionality and stepped through the result step by step. At least when it comes to writing out the ROSBAG file, all frames show the time on the phone increasing as expected with none showing previous timestamps. (Settings and a few sequential frames seen below): |
I do not have any further suggestions about this out of order frames issue at this time, unfortunately. I see that you reposted this issue on the Intel RealSense Help Center support forum, so a member of the team there may be able to offer advice.
That's understandable. Are there any examples of using the syncer to get color and depth frames on the D405? Even Python will suffice if it exists. Concerning the frame grabbing, I'll delve further into the source code to see if I can figure out what the enable recording flag is doing differently than the wait_for_frames()/poll_for_frames() approach.
D405-specific scripts are rare, with the D405 version of the Python depth-color alignment script at #11329 (which is not a syncer script) being the main example of one made specially for the D405. #774 advises that both syncer and pipeline match frames using the same logic, so there is typically not a performance advantage from using syncer instead of pipeline. In that case it is also recommended to build librealsense from source code with the **-DCMAKE_BUILD_TYPE=release** flag included in the CMake build instruction if it has not been used already.
Other issues are coming up, but I was able to get the color/depth sensors up by adding the configurations at the same time, as mentioned here:
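(The linked snippet is not preserved in this copy of the thread. As a hedged illustration, opening the depth and color stream profiles together on the D405's single sensor and feeding them into a syncer might look roughly like this; the resolution, FPS and format choices are assumptions:)

```cpp
#include <librealsense2/rs.hpp>
#include <vector>

int main()
{
    rs2::context ctx;
    rs2::device dev = ctx.query_devices()[0];
    rs2::sensor sensor = dev.query_sensors()[0]; // D405: one sensor serves both depth and color

    // Pick matching depth and color profiles and open them in the same call.
    std::vector<rs2::stream_profile> profiles;
    for (auto&& sp : sensor.get_stream_profiles())
    {
        auto vp = sp.as<rs2::video_stream_profile>();
        if (vp && vp.fps() == 30 && vp.width() == 640 && vp.height() == 480 &&
            (sp.stream_type() == RS2_STREAM_DEPTH ||
             (sp.stream_type() == RS2_STREAM_COLOR && sp.format() == RS2_FORMAT_BGR8)))
            profiles.push_back(sp);
    }

    rs2::syncer sync;
    sensor.open(profiles);   // both configurations added at the same time
    sensor.start(sync);

    rs2::frameset fs = sync.wait_for_frames();
    rs2::depth_frame depth = fs.get_depth_frame();
    rs2::video_frame color = fs.get_color_frame();

    sensor.stop();
    sensor.close();
    return 0;
}
```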
Thanks so much @ndaley7 for sharing your solution to the configuration issue!
Hi @ndaley7 Do you require further assistance with this case, please? Thanks!
Case closed due to no further comments received.
Including Two Sets of System Info as this behavior is occurring on both.
Issue Description
We have a system based on the Jetson Orin Nano that acquires aligned images upon receiving a trigger signal.
Upon noticing that some of the images that were being saved out reflected a state prior to when the trigger signal was received, I wrote the following code to wait for 10 framesets from the RealSense camera, add them to an OpenCV array and then write them out:
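(That code block is not preserved in this copy of the issue. A hedged C++ sketch of a debug loop of that kind, with the stream settings, alignment choice and file names all being assumptions, might look like this:)

```cpp
#include <librealsense2/rs.hpp>
#include <opencv2/opencv.hpp>
#include <string>
#include <vector>

int main()
{
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
    cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);
    pipe.start(cfg);

    rs2::align align_to_color(RS2_STREAM_COLOR);

    // Wait for 10 framesets, copy the color images into OpenCV Mats, then write them out.
    std::vector<cv::Mat> images;
    for (int i = 0; i < 10; ++i)
    {
        rs2::frameset fs = align_to_color.process(pipe.wait_for_frames());
        rs2::video_frame color = fs.get_color_frame();
        cv::Mat img(cv::Size(color.get_width(), color.get_height()),
                    CV_8UC3, (void*)color.get_data(), cv::Mat::AUTO_STEP);
        images.push_back(img.clone()); // clone so the pixel data outlives the frame
    }

    for (size_t i = 0; i < images.size(); ++i)
        cv::imwrite("frame_" + std::to_string(i) + ".png", images[i]);

    return 0;
}
```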
Image Output:
The results showed that some of the frames are out of order (see the timestamp on the phone). I was under the impression that wait_for_frames() calls block the thread until a set of frames is successfully received after the function is called. Is that not the case?
If so, what would be the correct way to implement the above code snippet so that 6 frames are saved out in the correct order, starting from the time the trigger signal is received?
As I try more things to fix this issue I'll update the topic, but any ideas or solutions would be greatly appreciated!