wait_for_frames() seemingly retrieving framesets from the past #13397

Closed
ndaley7 opened this issue Oct 3, 2024 · 28 comments

@ndaley7 commented Oct 3, 2024


Including two sets of system info, as this behavior is occurring on both.

| Required Info | |
|---|---|
| Camera Model | D405 |
| Firmware Version | 5.15.0.2 |
| Operating System & Version | Ubuntu 20.04, JetPack 5.1.2-b104 |
| Kernel Version (Linux Only) | 5.10.120-tegra |
| Platform | NVIDIA Jetson Orin Nano |
| SDK Version | 2.54.1 |
| Language | C++ |
| Segment | Robot |

| Required Info | |
|---|---|
| Camera Model | D405 |
| Firmware Version | 5.12.14.100 |
| Operating System & Version | Ubuntu 20.04, JetPack 5.1.2-b104 |
| Kernel Version (Linux Only) | 5.10.120-tegra |
| Platform | NVIDIA Jetson Orin Nano |
| SDK Version | 2.54.2 |
| Language | C++ |
| Segment | Robot |

Issue Description

We have a system based on the Jetson Orin Nano that acquires aligned images upon receiving a trigger signal.
Upon noticing that some of the images being saved out reflected a state prior to when the trigger signal was received, I wrote the following code to wait for 10 framesets from the RealSense camera, add them to an OpenCV array, and then write them out:

//Debug Terminal statement
        std::cout << "DEBUG MODE. Set debugMode parameter in openCVimagesave function to false if not intentional" << std::endl;
        // Debug Variable Init
        Mat image_holder,imageWithText;

        // Initialize an array (vector) of cv::Mat images
        std::vector<cv::Mat> debugImages(imageQuantity);

        // For loop  based on imageQuantity to add images to debugimage array
        for (int i = 0; i < imageQuantity; ++i)
        {
            rs2::frameset data = g_pipe.wait_for_frames();               // Wait for next set of frames from the camera
            rs2::frameset aligned_frames = align_to_color.process(data); // Align the depth frame to color frame

            //color_img = aligned_frames.get_color_frame(); // Get the aligned color frame
            color_img = aligned_frames.get_color_frame(); // Get color frame
            depth = aligned_frames.get_depth_frame();     // Get the aligned depth frame

            // Query frame size (width and height)
            w = depth.as<rs2::video_frame>().get_width();
            h = depth.as<rs2::video_frame>().get_height();

            wc = color_img.as<rs2::video_frame>().get_width();
            hc = color_img.as<rs2::video_frame>().get_height();

            image_holder = frame_to_mat(color_img);

debugImages[i] = image_holder;
            
        }

// For loop based on imageQuantity to click through and view images in debugimage array
        for (int i = 0; i < 6; ++i)
        {
            // Copy the image to avoid modifying the original
            imageWithText = debugImages[i].clone();

            // Define the text to display (image number)
            std::string text = "Image " + std::to_string(i + 1);

            // Define the position of the text
            cv::Point textPosition(10, 30); // (x, y) coordinates for the upper-left corner

            // Define font properties
            int fontFace = cv::FONT_HERSHEY_SIMPLEX;
            double fontScale = 1.0;
            cv::Scalar fontColor(255, 255, 255); // White color for the text
            int thickness = 2;

            // Add the text to the image
            cv::putText(imageWithText, text, textPosition, fontFace, fontScale, fontColor, thickness);

            // Display the image with the text
            cv::imshow("Debug Images", imageWithText);

            // Save the image, then wait for a key press before displaying the next one
            imwrite(g_globalPath + "Image " + std::to_string(i + 1) + ".jpg", imageWithText); // Color image write
            cv::waitKey(0);
        }

Image Output: [six saved screenshots, Image 1 through Image 6]

The results showed that some of the frames are out of order (see timestamp on phone). I was under the impression that wait_for_frames() calls block the thread until a set of frames is successfully received after the function is called... Is that not the case?

If so, what would be the correct way to implement the above code snippet so that 6 frames are saved out in the correct order, starting from the time the trigger signal is received?

As I try more things to fix this issue I'll update the topic but any ideas or solutions would be greatly appreciated!

@MartyG-RealSense (Collaborator)

Hi @ndaley7 You are correct, wait_for_frames() blocks until a complete frame is available.

Frames 2, 4 and 6 have the same timestamp (10:55.47). That would suggest to me that 'hiccups' are taking place during streaming, where a new frame has a problem and is judged to be 'bad'. In that situation, the RealSense SDK jumps back to the last known good frame and progresses forwards again from that point. So the timestamps of the frames may appear out of order because the SDK is jumping back repeatedly to a previous 'good' frame.

@ndaley7 (Author) commented Oct 4, 2024

Thank you for the quick response @MartyG-RealSense...

- Most importantly, what could cause these kinds of issues?
- I was under the impression that wait_for_frames() did that check for "good" frames, but is there a way to do that manually?
- Why would it not jump back to the frame showing 11:04:11 if that was judged as a good frame at the time?

I also don't see this issue when recording a rosbag with the same settings and then single-stepping through the frames.

Speaking of settings however, I have only changed a few from the default values:

        pipe_config.enable_stream(RS2_STREAM_COLOR, 848, 480, RS2_FORMAT_BGR8, 30);
        pipe_config.enable_stream(RS2_STREAM_DEPTH, 848, 480, RS2_FORMAT_Z16, 30);
        rs2::pipeline_profile profile = g_pipe.start(pipe_config);

        // Set exposure 
        auto depthSensor = profile.get_device().query_sensors()[0];
        depthSensor.set_option(RS2_OPTION_ENABLE_AUTO_EXPOSURE, 0);
        depthSensor.set_option(RS2_OPTION_EXPOSURE, 1732);
        depthSensor.set_option(RS2_OPTION_GAIN, 23);
        depthSensor.set_option(RS2_OPTION_WHITE_BALANCE, 4120);

@MartyG-RealSense (Collaborator)

I had the same thought about why 11:04.01 would not be judged to be a good frame.

#7837 (comment) offers an alternative explanation for why old frames may be returned when wait_for_frames() is used.

FPS has a direct relationship with the exposure value - as described at #1957 (comment) - so setting a very small manual exposure value of 1732 for the depth sensor could have unpredictable consequences (the default depth exposure is 33000).

@ndaley7 (Author) commented Oct 4, 2024

> I had the same thought about why 11:04.01 would not be judged to be a good frame.
>
> #7837 (comment) offers an alternative explanation for why old frames may be returned when wait_for_frames() is used.
>
> FPS has a direct relationship with the exposure value - as described at #1957 (comment) - so setting a very small manual exposure value of 1732 for the depth sensor could have unpredictable consequences (the default depth exposure is 33000).

The exposure time I set was the minimum allowable amount according to the RealSense Viewer program, and it also does not cause these kinds of issues when recording a ROSBAG file there.

If internally the frames are being queued asynchronously as mentioned here:

#7837 (comment)

Then why are the same problems not seen?

@MartyG-RealSense (Collaborator)

I note that your script uses the following lines twice in different parts of the script.

// For loop  based on imageQuantity to add images to debugimage array
for (int i = 0; i < imageQuantity; ++i)
{

The code might be more reliable if both blocks were placed under a single 'for int' 6-frame counting condition, if that is possible to do. Something like this:

 // For loop  based on imageQuantity to add images to debugimage array
        for (int i = 0; i < imageQuantity; ++i)
        {
            rs2::frameset data = g_pipe.wait_for_frames();               // Wait for next set of frames from the camera
            rs2::frameset aligned_frames = align_to_color.process(data); // Align the depth frame to color frame

            //color_img = aligned_frames.get_color_frame(); // Get the aligned color frame
            color_img = aligned_frames.get_color_frame(); // Get color frame
            depth = aligned_frames.get_depth_frame();     // Get the aligned depth frame

            // Query frame size (width and height)
            w = depth.as<rs2::video_frame>().get_width();
            h = depth.as<rs2::video_frame>().get_height();

            wc = color_img.as<rs2::video_frame>().get_width();
            hc = color_img.as<rs2::video_frame>().get_height();

            image_holder = frame_to_mat(color_img);

debugImages[i] = image_holder;
         
// SECOND BLOCK OF CODE INSERTED WITHIN THE FIRST

// Copy the image to avoid modifying the original
            imageWithText = debugImages[i].clone();

            // Define the text to display (image number)
        std::string text = "Image " + std::to_string(i + 1);

        // Define the position of the text
        cv::Point textPosition(10, 30); // (x, y) coordinates for the upper-left corner

         // Define font properties
        int fontFace = cv::FONT_HERSHEY_SIMPLEX;
        double fontScale = 1.0;
        cv::Scalar fontColor(255, 255, 255); // White color for the text
        int thickness = 2;

        // Add the text to the image
        cv::putText(imageWithText, text, textPosition, fontFace, fontScale, fontColor, thickness);

        // Display the image with the text
        cv::imshow("Debug Images", imageWithText);

        // Wait for a key press before displaying the next image
        imwrite(g_globalPath +"Image " + std::to_string(i + 1)+".jpg", imageWithText); // Color image write
        cv::waitKey(0);

   // END OF THE FOR INT CONDITION
        
}

@ndaley7 (Author) commented Oct 5, 2024

> I note that your script uses the following lines twice in different parts of the script.

That code was set up only to help debug this problem and nothing more.
I needed code that would first collect 6 frames and then display the 6 that were collected.

The actual application where we use wait_for_frames() is below (and where we first started noticing frames coming from the past).
It runs when a signal is received (up to ~twice per second). All it does is align the depth frame to the color frame and save both images out via OpenCV:

else if (!debugMode) // STANDARD- Captures and saves out Color Image and Aligned Depth Image
    {
        rs2::frameset data = g_pipe.wait_for_frames(); // Wait for next set of frames from the camera

        // Check for Color Alignment Boolean and make appropriate changes
        if (g_colorAlignedDepth)
        {
            // Change Global
            g_output_color = "_Aligned_Color.png";
            g_output_depth = "_Aligned_Depth.png";

            rs2::frameset aligned_frames = align_to_color.process(data); // Align the depth frame to color frame

            color_img = aligned_frames.get_color_frame(); // Get the aligned color frame
            depth = aligned_frames.get_depth_frame();     // Get the aligned depth frame
        }
        // Query frame size (width and height)
        w = depth.as<rs2::video_frame>().get_width();
        h = depth.as<rs2::video_frame>().get_height();

        wc = color_img.as<rs2::video_frame>().get_width();
        hc = color_img.as<rs2::video_frame>().get_height();

        // Create OpenCV matrix of size (w,h) from the colorized depth data
// converting the depth frame to meters to get the output scaling factor
        // Mat depth_imageM=depth_frame_to_meters(depth);

        Mat depth_image = frame_to_mat(depth);

        Mat color_image = frame_to_mat(color_img);

        // Path Output


        imwrite(appendTimecode(g_output_depth, g_globalPath), depth_image); // Depth image write
        imwrite(appendTimecode(g_output_color, g_globalPath), color_image); // Color image write

}

@ndaley7 (Author) commented Oct 5, 2024

Hello. I also made an alternate function using poll_for_frames() and saw similar results.

@MartyG-RealSense (Collaborator)

> I needed code that would first collect 6 frames and then display the 6 that were collected.

The 'for int' instruction directly before the wait_for_frames() line would likely not be collecting 6 frames. It would instead cause the program to skip past the first several frames after the camera pipeline starts. Such a mechanism is usually used to avoid frames having bad exposure in the short time period whilst the auto-exposure is still settling down after the stream is activated.

This mechanism is demonstrated at the C++ / OpenCV official example program at the link below.

https://github.com/IntelRealSense/librealsense/blob/master/doc/stepbystep/getting_started_with_openCV.md

  // Camera warmup - dropping several first frames to let auto-exposure stabilize
    rs2::frameset frames;
    for(int i = 0; i < 30; i++)
    {
        //Wait for all configured streams to produce a frame
        frames = pipe.wait_for_frames();
    }

@ndaley7 (Author) commented Oct 6, 2024

> > I needed code that would first collect 6 frames and then display the 6 that were collected.
>
> The 'for int' instruction directly before the wait_for_frames() line would likely not be collecting 6 frames. It would instead cause the program to skip past the first several frames after the camera pipeline starts. Such a mechanism is usually used to avoid frames having bad exposure in the short time period whilst the auto-exposure is still settling down after the stream is activated.
>
> This mechanism is demonstrated at the C++ / OpenCV official example program at the link below.
>
> https://github.com/IntelRealSense/librealsense/blob/master/doc/stepbystep/getting_started_with_openCV.md
>
>     // Camera warmup - dropping several first frames to let auto-exposure stabilize
>     rs2::frameset frames;
>     for(int i = 0; i < 30; i++)
>     {
>         //Wait for all configured streams to produce a frame
>         frames = pipe.wait_for_frames();
>     }

How would you recommend I go about collecting 6 frames in a row once a signal is sent?

@MartyG-RealSense (Collaborator) commented Oct 7, 2024

The genlock hardware sync system allows the number of frames sent after a trigger signal is received to be defined. This feature is called a burst count.

My understanding is that the number of frames to produce is calculated as 4 + the number of frames required. So to produce 6 frames when the trigger signal is received, the camera's Inter Cam Sync Mode hardware sync option would be set to '10' (which is 4 + 6 frames).

Once the required number of frames have been generated, the camera will then wait for the next trigger signal before it produces another set of frames.

Intel no longer provide technical support for genlock mode and the online documentation for it was removed. However, genlock is still supported within the RealSense SDK if you need to use it. You can access an archived PDF document version of the removed documentation at the link below.

External Synchronization.pdf

The FPS of the camera in genlock mode should be set to 2x the frequency of the trigger signal. So if the trigger's frequency is 30 Hz then the camera's FPS should be set to 60.

@ndaley7 (Author) commented Oct 7, 2024

I'll note this down as a potential alternate option. Does this method also do the valid frame checking that poll_for_frames() and wait_for_frames() conduct?

I'd also like to focus back on the original issue... Is there no way to use wait_for_frames() or poll_for_frames() and ensure it's not grabbing a previous set of "good" frames?

I want to make sure that is the case before I have to change the whole structure of our code.

@MartyG-RealSense (Collaborator)

As far as I am aware, genlock sync does not check whether a frame is valid when generating 'burst' frames.

With poll_for_frames, new frames arrive instantly without a wait period and are not subject to checks to confirm if the frame is complete before it becomes available.

According to #1686 (comment) a condition under which the SDK can return to the last known good frame is if frame drops on the stream occur, suggesting that the chances of going back to a previous frame are reduced if frame drops are minimized. A way to do this is to increase the frame queue size of a stream so that it can hold a greater number of frames simultaneously.

@ndaley7 (Author) commented Oct 8, 2024

> As far as I am aware, genlock sync does not check whether a frame is valid when generating 'burst' frames.
>
> With poll_for_frames, new frames arrive instantly without a wait period and are not subject to checks to confirm if the frame is complete before it becomes available.
>
> According to #1686 (comment) a condition under which the SDK can return to the last known good frame is if frame drops on the stream occur, suggesting that the chances of going back to a previous frame are reduced if frame drops are minimized. A way to do this is to increase the frame queue size of a stream so that it can hold a greater number of frames simultaneously.

Oooh I had not heard of this before. Is there a link to any examples of implementing the frame queue, and are there detrimental effects to increasing it?

@MartyG-RealSense (Collaborator)

There is an official C++ example script for setting the queue size at the link below. The default value for the queue is '1', and this script sets it to '10'.

https://github.com/IntelRealSense/librealsense/wiki/Frame-Buffering-Management-in-RealSense-SDK-2.0#frame-buffering-examples-in-the-sdk

The main detrimental side-effect of increasing the frame queue size is that the program consumes a higher amount of the computer's available memory capacity due to having to hold a greater number of frames in memory simultaneously.

By default, when the capacity is '1', the queue can hold 16 frames of each stream type at a time, with the oldest frames dropping out of the queue as new frames enter, like a continuously moving conveyor belt.

@MartyG-RealSense (Collaborator)

Hi @ndaley7 Do you require further assistance with this case, please? Thanks!

@ndaley7 (Author) commented Oct 15, 2024

> Hi @ndaley7 Do you require further assistance with this case, please? Thanks!

I'm still testing things, but I am curious about the syncer... Is it possible to use it with the D405, and if so, is there an example showing where the color sensor is referenced from?

@MartyG-RealSense (Collaborator) commented Oct 16, 2024

If you mean hardware sync then no, the D405 model does not have the set of physical 'sync pin' connectors that is required for hardware sync. My apologies for overlooking that you had a D405 when suggesting it.

@ndaley7 (Author) commented Oct 16, 2024

> If you mean hardware sync then no, the D405 model does not have the set of physical 'sync pin' connectors that is required for hardware sync. My apologies for overlooking that you had a D405 when suggesting it.

I'm referring to this sync acquisition method:

[image]

@MartyG-RealSense (Collaborator)

Thanks very much for the clarification.

I do not see any reason why you could not use Syncer code with a D405. However, the D405 does not have a separate RGB sensor and instead obtains its RGB image from the depth sensor. So when calling RGB options in a script, you use depth_sensor instead of color_sensor (you do not need to define a color sensor).

As depth and RGB are provided by the same sensor, stopping depth will also stop color, so you do not need a color stop command either.

@ndaley7 (Author) commented Oct 16, 2024

I get a frame_ref error when it tries to get the color frame from the captured frameset.

[image]

For comparison, the frameset has 2 elements when retrieved via pipeline:

[image]

I seem to only be getting the depth frame when I try this modification to the sample code (for configuration specification):

Configuration:

std::string globalPath = "IMG/";
// Create a context object
    rs2::context ctx;
    auto list = ctx.query_devices();
    if (list.size() == 0) {
        std::cerr << "No RealSense device connected." << std::endl;
        return EXIT_FAILURE;
    }

    // Get the first device
    rs2::device dev = list.front();

    // Access the sensors
    rs2::sensor depth_sensor;
    rs2::sensor color_sensor;

    for (auto& sensor : dev.query_sensors()) {
        if (sensor.get_info(RS2_CAMERA_INFO_NAME) == std::string("RGB Camera")) {
            color_sensor = sensor;
        } else if (sensor.get_info(RS2_CAMERA_INFO_NAME) == std::string("Stereo Module")) {
            depth_sensor = sensor;
        }
    }

    // Desired settings
    int width = 848;   // Set desired width
    int height = 480;   // Set desired height
    int fps = 30;       // Set desired frame rate

    // Open depth stream with specific resolution and frame rate
    bool depth_opened = false;
    for (auto& profile : depth_sensor.get_stream_profiles()) {
        if (auto vid_profile = profile.as<rs2::video_stream_profile>()) {
            if (vid_profile.width() == width && vid_profile.height() == height && 
                vid_profile.fps() == fps && vid_profile.format() == RS2_FORMAT_Z16) {
                depth_sensor.open(vid_profile);
                depth_opened = true;
                break;
            }
        }
    }


    if (!depth_opened ) {
        std::cerr << "Failed to open streams with specified resolution, format, or frame rate." << std::endl;
        return EXIT_FAILURE;
    }

// Create a new syncer and set its capacity; by default the capacity is 1
rs2::syncer sync(10);

//In order to begin getting data from the sensor, we need to register a class to handle frames, 
//in our case we provide the syncer at the sensor initialization.
depth_sensor.start(sync);
//color_sensor.start(sync);

Frame Retrieval:

rs2::frame depth, color_img;
    int w, h, wc, hc;

        // Debug Variable Init
        Mat image_holder,imageWithText;

        int imageQuantity=10;

        // Initialize an array (vector) of cv::Mat images
        std::vector<cv::Mat> debugImages(imageQuantity);

        // For loop  based on imageQuantity to add images to debugimage array
        for (int i = 0; i < 10; ++i)
        {
            rs2::frameset data = sync.wait_for_frames(); // Wait for next set of frames from the camera
            //rs2::frameset aligned_frames = align_to_color.process(data); // Align the depth frame to color frame

            //color_img = aligned_frames.get_color_frame(); // Get the aligned color frame
            depth = data.first_or_default(RS2_STREAM_DEPTH);     // Get the depth frame
            color_img = data.first_or_default(RS2_STREAM_COLOR); // Get the color frame


            // Query frame size (width and height)
            w = depth.as<rs2::video_frame>().get_width();
            h = depth.as<rs2::video_frame>().get_height();

            wc = color_img.as<rs2::video_frame>().get_width();
            hc = color_img.as<rs2::video_frame>().get_height();

            image_holder = frame_to_mat(color_img);

            debugImages[i]=image_holder;
            
        }

         // For loop  based on imageQuantity to click through and view images in debugimage array
        for (int i = 0; i < imageQuantity; ++i)
        {
            // Copy the image to avoid modifying the original
            imageWithText = debugImages[i].clone();

            // Define the text to display (image number)
        std::string text = "Image " + std::to_string(i + 1);

        // Define the position of the text
        cv::Point textPosition(10, 30); // (x, y) coordinates for the upper-left corner

         // Define font properties
        int fontFace = cv::FONT_HERSHEY_SIMPLEX;
        double fontScale = 1.0;
        cv::Scalar fontColor(255, 255, 255); // White color for the text
        int thickness = 2;

        // Add the text to the image
        cv::putText(imageWithText, text, textPosition, fontFace, fontScale, fontColor, thickness);

        // Display the image with the text
        cv::imshow("Debug Images", imageWithText);

        // Wait for a key press before displaying the next image
        imwrite(globalPath +"Image " + std::to_string(i + 1)+".jpg", imageWithText); // Color image write
        cv::waitKey(0);
        

        }
 // Close all windows
    cv::destroyAllWindows();

// To stop streaming, we simply need to call the sensor's stop method
// After returning from the call to stop(), no frames will arrive from this sensor
depth_sensor.stop();
//color_sensor.stop();

// To complete the stop operation, and release access of the device, we need to call close()
//per sensor
depth_sensor.close();

@ndaley7 (Author) commented Oct 20, 2024

@MartyG-RealSense I think we can rule out a settings issue as well. I enabled the pipeline recording functionality and stepped through the result frame by frame. At least when it comes to writing out the ROSBAG file, all frames show the time on the phone increasing as expected, with none showing previous timestamps. (Settings and a few sequential frames seen below.)

[images: recording settings and sequential frames]

@MartyG-RealSense (Collaborator)

I do not have any further suggestions about this out of order frames issue at this time, unfortunately. I see that you reposted this issue on the Intel RealSense Help Center support forum, so a member of the team there may be able to offer advice.

@ndaley7 (Author) commented Oct 20, 2024

> I do not have any further suggestions about this out of order frames issue at this time, unfortunately. I see that you reposted this issue on the Intel RealSense Help Center support forum, so a member of the team there may be able to offer advice.

That's understandable. Are there any examples of using the syncer to get color and depth frames on the D405? Even Python will suffice if it exists.

Concerning the frame grabbing, I'll delve further into the source code to see if I can figure out what the enable recording flag is doing differently from the wait_for_frames()/poll_for_frames() approach.

@MartyG-RealSense (Collaborator)

D405-specific scripts are rare, with the D405 version of the Python depth-color alignment script at #11329 (which is not a syncer script) being the main example of one made specially for D405.

#774 advises that both syncer and pipeline match frames using the same logic, so there is typically not a performance advantage from using syncer instead of pipeline. That case also recommends building librealsense from source code with the -DCMAKE_BUILD_TYPE=Release flag included in the CMake build instruction, if it has not been used already.

@ndaley7 (Author) commented Oct 28, 2024

> D405-specific scripts are rare, with the D405 version of the Python depth-color alignment script at #11329 (which is not a syncer script) being the main example of one made specially for D405.
>
> #774 advises that both syncer and pipeline match frames using the same logic, so there is typically not a performance advantage from using syncer instead of pipeline. That case also recommends building librealsense from source code with the -DCMAKE_BUILD_TYPE=Release flag included in the CMake build instruction, if it has not been used already.

Other issues are coming up, but I was able to get the color/depth sensors up by adding the configurations at the same time, as mentioned here:

#7010 (comment)

 // Open depth stream with specific resolution and frame rate
    bool depth_opened = false;
    for (auto& profile : g_depth_sensor.get_stream_profiles()) 
    {
        if (auto vid_profile = profile.as<rs2::video_stream_profile>()) {
            if (vid_profile.width() == width && vid_profile.height() == height && 
                vid_profile.fps() == fps && vid_profile.format() == RS2_FORMAT_Z16) {
                // Depth Stream Profile
                vid_profile_depth=vid_profile;
                
                break;
            }
        }
    }

  


    std::cout << "Opening Color Stream with Specified Settings" << std::endl;
    // Open color stream with specific resolution and frame rate
    // No color sensor in D405, comes from depth
    bool color_opened = false;
    for (auto& profile : g_depth_sensor.get_stream_profiles()) 
    {
        if (auto vid_profile = profile.as<rs2::video_stream_profile>()) {
            if (vid_profile.width() == width && vid_profile.height() == height && 
                vid_profile.fps() == fps && vid_profile.format() == RS2_FORMAT_BGR8&& vid_profile.stream_type() == RS2_STREAM_COLOR) {
                    //This code is failing on the open.
                    //It seems like you can only open the sensor for color OR depth.
                    //This needs to be resolved before the frame queue test can be implemented as well.
                //Color Stream Profile
                vid_profile_color=vid_profile;
                break;
            }
        }
    }

    // Construct the const vector
    // Create Compound Color Depth Profile Holder
    const std::vector<rs2::stream_profile>& profilesColorDepth = {vid_profile_depth, vid_profile_color};

    // Open Depth Sensor with Combination of Depth and Color Profiles
    g_depth_sensor.open(profilesColorDepth);
    depth_opened = true;
    color_opened = true;
    
      //Set Specific Depth Sensor Settings
    g_depth_sensor.set_option(RS2_OPTION_ENABLE_AUTO_EXPOSURE, false);
    g_depth_sensor.set_option(RS2_OPTION_EXPOSURE, 1732);
    g_depth_sensor.set_option(RS2_OPTION_GAIN, 23);
    g_depth_sensor.set_option(RS2_OPTION_WHITE_BALANCE, 4120);
    

@MartyG-RealSense (Collaborator)

Thanks so much @ndaley7 for sharing your solution to the configuration issue!

@MartyG-RealSense (Collaborator)

Hi @ndaley7 Do you require further assistance with this case, please? Thanks!

@MartyG-RealSense (Collaborator)

Case closed due to no further comments received.
