D435i Motion Module Hardware Timestamps using LIBUVC #5212

Closed

JeremyBYU opened this issue Nov 8, 2019 · 10 comments

@JeremyBYU

Required Info
  • Camera Model: D435i
  • Firmware Version: 05.11.15.00
  • Operating System & Version: Ubuntu 18.04
  • Kernel Version (Linux Only): Linux pop-os 5.3.0-20-generic
  • Platform: PC
  • SDK Version: 2.30
  • Language: C++
  • Segment: Robots

I have installed the latest RealSense SDK 2.30 on my machine. Here are the CMake commands for completeness:

cmake ../ -DFORCE_RSUSB_BACKEND=true -DCMAKE_BUILD_TYPE=release -DBUILD_EXAMPLES=true -DBUILD_GRAPHICAL_EXAMPLES=true -DBUILD_PYTHON_BINDINGS=true

My kernel just upgraded to 5.3, so I now need to use the libuvc backend. It installs and seems to work fine. I have already calibrated the IMU using Intel's procedures. Hardware timestamps seem to be working for RGB and depth. However, I am not sure if they are working for the motion module. There are no errors about meta information in the terminal when running realsense-viewer. There is this warning, but I am not sure if it is relevant; there are only a few of them over the program's lifespan, mostly at the beginning.

08/11 10:51:57,716 WARNING [139733166438144] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: No data available, number: 61
 08/11 10:51:57,776 WARNING [139733166438144] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: No data available, number: 61
 08/11 10:51:57,957 WARNING [139733166438144] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: No data available, number: 61
 08/11 10:51:58,138 WARNING [139733166438144] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: No data available, number: 61
 08/11 10:51:58,319 WARNING [139733166438144] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: No data available, number: 61
 08/11 10:51:58,379 WARNING [139733166438144] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: No data available, number: 61
 08/11 10:52:16,873 WARNING [139733166438144] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: No data available, number: 61
 08/11 11:04:04,052 WARNING [139733166438144] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: No data available, number: 61
 08/11 11:08:09,032 WARNING [139733166438144] (messenger-libusb.cpp:42) control_transfer returned error, index: 768, error: Resource temporarily unavailable, number: 11

However, when I click the metadata overlay for the motion stream, I get the following picture.

Screenshot from 2019-11-08 10-55-54

As you can see, the clock domain is showing Global Time, which should mean that hardware timestamps are working. However, the backend timestamp is showing 0, making me think that it is not working. The main questions I have:

  1. How do I know if the hardware timestamps are working?
  2. And if they are not working what must be done to get libuvc to work with them?

A final question I have is a little general. I have searched for it but have not found the answer:

  1. What are the advantages and disadvantages of using libuvc vs patched kernel modules?

Obviously, ease of installation favors libuvc. But are there any tradeoffs, like higher latency, more bugs, or unsupported workflows? Any info on that would be great.

Thanks again for the great product and the support Intel provides.

@Fred3D-tech

Fred3D-tech commented Nov 8, 2019

Same problem here... with a D415, but the same config and a new install of SDK 2.30.

@ev-mp
Collaborator

ev-mp commented Nov 10, 2019

@JeremyBYU hello, please see answers inlined:

My kernel just upgraded to 5.3 so I now need to use the libuvc backend

With all kernels v4.16+ you get the standard HW timestamp for free (w/o kernel patches). You'd still need to apply the patches if you need the extended frame metadata - exposure, gain, laser power, etc. But if you're only interested in getting HW timestamps, then using an unmodified kernel should suffice.

As you can see the clock domain is showing Global Time, which should mean that hardware timestamps are working. However the backend timestamp is showing 0, making me think that its not working.

The backend_timestamp is a host-generated attribute. The global timestamp relies on the HW timestamp and is not affected by the backend timestamp. There is a minor issue with not all metadata attributes being displayed in the viewer that we'll need to check.

How do I know if the hardware timestamps are working?

By checking the timestamp domain attribute of the frame's timestamp. When HW timestamps are available, the SDK will automatically prefer them over the host-generated timestamps, and you are expected to receive the global or HW timestamp domain (documentation). Also see related #4525.
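
For illustration, a minimal sketch of that check (assuming a connected camera and the default pipeline configuration; the loop length is arbitrary) that prints each frame's timestamp domain and, when available, the backend-timestamp metadata attribute:

#include <librealsense2/rs.hpp>
#include <cstdlib>
#include <iostream>

int main() try
{
    rs2::pipeline pipe;
    pipe.start(); // default configuration

    for (int i = 0; i < 30; ++i)
    {
        rs2::frameset frames = pipe.wait_for_frames();
        for (rs2::frame f : frames)
        {
            // HARDWARE_CLOCK or GLOBAL_TIME means the HW timestamp path is in use;
            // SYSTEM_TIME means the SDK fell back to host-generated timestamps.
            rs2_timestamp_domain domain = f.get_frame_timestamp_domain();
            std::cout << f.get_profile().stream_name() << ": "
                      << rs2_timestamp_domain_to_string(domain);
            if (f.supports_frame_metadata(RS2_FRAME_METADATA_BACKEND_TIMESTAMP))
                std::cout << " (backend ts: "
                          << f.get_frame_metadata(RS2_FRAME_METADATA_BACKEND_TIMESTAMP) << ")";
            std::cout << std::endl;
        }
    }
    return EXIT_SUCCESS;
}
catch (const rs2::error &e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}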

And if they are not working what must be done to get libuvc to work with them?

The HW timestamps should be available with rsusb (libuvc is deprecated with v2.30.0) when that backend is configured in CMake. No additional parameters are required.

What are the advantages and disadvantages of using libuvc vs patched kernel modules?

This question is worth a topic of its own :). The following points, in my estimation, are relevant to the discussion:
Libuvc advantages:

  • Cross-platform (at least for Linux & MacOS).
  • User-space UVC implementation :
    • No kernel patches required
    • Easy to deploy (gcc + libusb dependencies only) - thus it is the preferred choice of librealsense for ARM/Jetson/Mac platforms
    • Easier to debug

Disadvantages/ Fundamental issues:

  • No official support/maintainers.
  • Most of the implementation is built around UVC 1.1, while most contemporary cameras (including RealSense) are UVC 1.5 devices
  • User-space UVC implementation:
    • Power management - in case an application crashes or gets stuck, the underlying device is not released and continues to run in orphan mode. This may require manual re-plugging to recover (reliability).
  • Single consumer - most kernel drivers (Linux/Windows) allow connecting to and communicating with a device from multiple processes (except for streaming). With libuvc only one application can get a device handle. This is one of the limitations for multicam on MacOS, for instance. There are attempts by community enthusiasts to address this, so it may change one day.

Kernel patches:

  • The kernels are fast and (mostly) stable. Adding patched modules into the kernel tree in a controlled and peer-reviewed manner gets you most of the benefits along with the additional features.
  • All kernel patches should eventually get upstreamed into Linux or abandoned.
    From the user's perspective, the main obstacle to accepting the kernel-patch model is the deployment scheme, which requires a certain level of engineering skill.
    We address this by wrapping and redistributing the patches as a DKMS Debian package, at least for Ubuntu LTS kernels.

@JeremyBYU
Author

Thank you for this thorough response. I will continue to use the libuvc/rsusb driver for now, since timestamps are mostly all I care about. The only thing you mentioned that I am a little concerned with is this quote:

The kernels are fast and (mostly) stable.

Just wondering if this implies that there might be an issue with speed when using libuvc. However that is something that I can determine myself through my own tests.

@JeremyBYU JeremyBYU reopened this Nov 11, 2019
@JeremyBYU
Author

I think I just ran into an issue with something you just said:

Single consumer - most kernel drivers (Linux/Windows) allow connecting to and communicating with a device from multiple processes (except for streaming). With libuvc only one application can get a device handle. This is one of the limitations for multicam on MacOS, for instance. There are attempts by community enthusiasts to address this, so it may change one day.

I have a program that is modified from rs-motion. Two rs2::pipelines are created:

  1. One pipeline that monitors motion data through a callback, e.g. pipeline.start(config, callback). This puts it on a separate thread.
  2. Another pipeline that gets aligned RGB and depth camera data. This is on the main thread using the blocking wait_for_frames: rs2::frameset data = pipe_camera.wait_for_frames();

This setup works just fine on my Windows 10 computer (which I assume is using a good device driver that supports this setup). However, on my Linux computer, now using libuvc/rsusb, only the last pipeline started gets access to data (camera data). Code is attached at the bottom.

If you comment out the auto profile_camera = pipe_camera.start(cfg_camera); line and any references to it a few lines below, then motion data is now recorded.

I just want to make sure that this is not a bug, and that this "issue" is what you were referring to in the above quote. If it is expected behaviour, do you have any solution for me to capture high frequency motion data on one thread, and separately capture a frameset of depth/RGB on another using rsusb? I think I have an idea of how to do that and will try it out: capture all data in one pipeline, where the callback delivers each sensor stream independently (gyro, accel, depth, RGB) - see the sketch after this paragraph. But then I think I lose the ability to nicely use frameset alignment and such.
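
Roughly what I have in mind (a minimal sketch along the lines of the SDK's rs-callback pattern; the stream settings are illustrative and I have not yet verified it solves the rsusb limitation):

#include <librealsense2/rs.hpp>
#include <chrono>
#include <cstdlib>
#include <iostream>
#include <thread>

int main() try
{
    rs2::pipeline pipe;
    rs2::config cfg;
    // IMU streams (delivered as individual frames at their own rates)
    cfg.enable_stream(RS2_STREAM_ACCEL, RS2_FORMAT_MOTION_XYZ32F);
    cfg.enable_stream(RS2_STREAM_GYRO, RS2_FORMAT_MOTION_XYZ32F);
    // Video streams (delivered together as a synchronized frameset)
    cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
    cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);

    // A single callback receives everything; dispatch on the frame kind
    auto callback = [](const rs2::frame &frame)
    {
        if (auto fs = frame.as<rs2::frameset>())
        {
            // Depth + color arrive here as one frameset (alignment etc. can be applied)
            std::cout << "frameset with " << fs.size() << " frames" << std::endl;
        }
        else if (auto motion = frame.as<rs2::motion_frame>())
        {
            // Gyro/accel frames arrive here individually at high frequency
            std::cout << motion.get_profile().stream_name() << " @ "
                      << motion.get_timestamp() << std::endl;
        }
    };

    pipe.start(cfg, callback);
    std::this_thread::sleep_for(std::chrono::seconds(10)); // stream for a while
    pipe.stop();
    return EXIT_SUCCESS;
}
catch (const rs2::error &e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}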

Thanks for your help @ev-mp.

// License: Apache 2.0. See LICENSE file in root directory.
// Copyright(c) 2019 Intel Corporation. All Rights Reserved.

#include <librealsense2/rs.hpp>
#include <mutex>
#include "example.hpp" // Include short list of convenience functions for rendering
#include <cstring>
#include <chrono>
#include <thread>
#include <iostream> // for std::cout / std::cerr
#include <iomanip>  // for std::setprecision / std::fixed
#include <cmath>    // for atan2 / sqrt

struct short3
{
    uint16_t x, y, z;
};

#include "d435.h"

void draw_axes()
{
    glLineWidth(2);
    glBegin(GL_LINES);
    // Draw x, y, z axes
    glColor3f(1, 0, 0);
    glVertex3f(0, 0, 0);
    glVertex3f(-1, 0, 0);
    glColor3f(0, 1, 0);
    glVertex3f(0, 0, 0);
    glVertex3f(0, -1, 0);
    glColor3f(0, 0, 1);
    glVertex3f(0, 0, 0);
    glVertex3f(0, 0, 1);
    glEnd();
    glLineWidth(1);
}

void draw_floor()
{
    glBegin(GL_LINES);
    glColor4f(0.4f, 0.4f, 0.4f, 1.f);
    // Render "floor" grid
    for (int i = 0; i <= 8; i++)
    {
        glVertex3i(i - 4, 1, 0);
        glVertex3i(i - 4, 1, 8);
        glVertex3i(-4, 1, i);
        glVertex3i(4, 1, i);
    }
    glEnd();
}

void render_scene(glfw_state app_state)
{
    glClearColor(0.0, 0.0, 0.0, 1.0);
    glColor3f(1.0, 1.0, 1.0);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    gluPerspective(60.0, 4.0 / 3.0, 1, 40);
    glClear(GL_COLOR_BUFFER_BIT);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(1, 0, 5, 1, 0, 0, 0, -1, 0);
    glTranslatef(0, 0, +0.5f + app_state.offset_y * 0.05f);
    glRotated(app_state.pitch, -1, 0, 0);
    glRotated(app_state.yaw, 0, 1, 0);
    draw_floor();
}

class camera_renderer
{
    std::vector<float3> positions, normals;
    std::vector<short3> indexes;

public:
    // Initialize renderer with data needed to draw the camera
    camera_renderer()
    {
        uncompress_d435_obj(positions, normals, indexes);
    }

    // Takes the calculated angle as input and rotates the 3D camera model accordingly
    void render_camera(float3 theta)
    {
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);
        glPushMatrix();
        // Set the rotation, converting theta to degrees
        glRotatef(theta.x * 180 / PI, 0, 0, -1);
        glRotatef(theta.y * 180 / PI, 0, -1, 0);
        glRotatef((theta.z - PI / 2) * 180 / PI, -1, 0, 0);
        draw_axes();
        // Scale camera drawing
        glScalef(0.035, 0.035, 0.035);
        glBegin(GL_TRIANGLES);
        // Draw the camera
        for (auto &i : indexes)
        {
            glVertex3fv(&positions[i.x].x);
            glVertex3fv(&positions[i.y].x);
            glVertex3fv(&positions[i.z].x);
            glColor4f(0.05f, 0.05f, 0.05f, 0.3f);
        }
        glEnd();
        glPopMatrix();
        glDisable(GL_BLEND);
        glFlush();
    }
};

class rotation_estimator
{
    // theta is the angle of camera rotation in x, y and z components
    float3 theta;
    std::mutex theta_mtx;
    /* alpha indicates the part that gyro and accelerometer take in computation of theta; higher alpha gives more weight to gyro, but too high
    values cause drift; lower alpha gives more weight to accelerometer, which is more sensitive to disturbances */
    float alpha = 0.98;
    bool first = true;
    // Keeps the arrival time of previous gyro frame
    double last_ts_gyro = 0;

public:
    // Function to calculate the change in angle of motion based on data from gyro
    void process_gyro(rs2_vector gyro_data, double ts)
    {
        if (first) // On the first iteration, use only data from accelerometer to set the camera's initial position
        {
            last_ts_gyro = ts;
            return;
        }
        // Holds the change in angle, as calculated from gyro
        float3 gyro_angle;
        // Initialize gyro_angle with data from gyro
        gyro_angle.x = gyro_data.x; // Pitch
        gyro_angle.y = gyro_data.y; // Yaw
        gyro_angle.z = gyro_data.z; // Roll
        // Compute the difference between arrival times of previous and current gyro frames
        double dt_gyro = (ts - last_ts_gyro) / 1000.0;
        last_ts_gyro = ts;
        // Change in angle equals gyro measures * time passed since last measurement
        gyro_angle = gyro_angle * dt_gyro;
        // Apply the calculated change of angle to the current angle (theta)
        std::lock_guard<std::mutex> lock(theta_mtx);
        theta.add(-gyro_angle.z, -gyro_angle.y, gyro_angle.x);
    }

    void process_accel(rs2_vector accel_data)
    {
        // Holds the angle as calculated from accelerometer data
        float3 accel_angle;
        // Calculate rotation angle from accelerometer data
        accel_angle.z = atan2(accel_data.y, accel_data.z);
        accel_angle.x = atan2(accel_data.x, sqrt(accel_data.y * accel_data.y + accel_data.z * accel_data.z));
        // If it is the first iteration, set initial pose of camera according to accelerometer data (note the different handling for Y axis)
        std::lock_guard<std::mutex> lock(theta_mtx);
        if (first)
        {
            first = false;
            theta = accel_angle;
            // Since we can't infer the angle around Y axis using accelerometer data, we'll use PI as a convention for the initial pose
            theta.y = PI;
        }
        else
        {
            /*
            Apply Complementary Filter:
                - high-pass filter = theta * alpha:  allows short-duration signals to pass through while filtering out signals
                  that are steady over time, is used to cancel out drift.
                - low-pass filter = accel * (1- alpha): lets through long term changes, filtering out short term fluctuations
            */
            theta.x = theta.x * alpha + accel_angle.x * (1 - alpha);
            theta.z = theta.z * alpha + accel_angle.z * (1 - alpha);
        }
    }

    // Returns the current rotation angle
    float3 get_theta()
    {
        std::lock_guard<std::mutex> lock(theta_mtx);
        return theta;
    }
};

bool check_imu_is_supported()
{
    bool found_gyro = false;
    bool found_accel = false;
    rs2::context ctx;
    for (auto dev : ctx.query_devices())
    {
        // The same device should support gyro and accel
        found_gyro = false;
        found_accel = false;
        for (auto sensor : dev.query_sensors())
        {
            for (auto profile : sensor.get_stream_profiles())
            {
                if (profile.stream_type() == RS2_STREAM_GYRO)
                    found_gyro = true;
                if (profile.stream_type() == RS2_STREAM_ACCEL)
                    found_accel = true;
            }
        }
        if (found_gyro && found_accel)
            break;
    }
    return found_gyro && found_accel;
}

int main(int argc, char *argv[]) try
{
    using namespace std::chrono_literals;

    // Before running the example, check that a device supporting IMU is connected
    if (!check_imu_is_supported())
    {
        std::cerr << "Device supporting IMU (D435i) not found";
        return EXIT_FAILURE;
    }

    // Initialize window for rendering
    // window app(1280, 720, "RealSense Motion Example");
    // // Construct an object to manage view state
    // glfw_state app_state(0.0, 0.0);
    // // Register callbacks to allow manipulation of the view state
    // register_glfw_callbacks(app, app_state);

    size_t gryro_iter = 0;
    size_t accel_iter = 0;
    size_t depth_iter = 0;
    size_t color_iter = 0;
    double ts_gyro = 0.0;
    double ts_accel = 0.0;
    double ts_depth = 0.0;
    double ts_color = 0.0;

    // Create pipeline for gyro and accelerometer. Run in separate thread.
    // Declare RealSense pipeline, encapsulating the actual device and sensors
    rs2::pipeline pipe_motion;
    // Create a configuration for configuring the pipeline with a non default profile
    rs2::config cfg_motion;
    // Add streams of gyro and accelerometer to configuration
    cfg_motion.enable_stream(RS2_STREAM_ACCEL, RS2_FORMAT_MOTION_XYZ32F);
    cfg_motion.enable_stream(RS2_STREAM_GYRO, RS2_FORMAT_MOTION_XYZ32F);

    // Declare object for rendering camera motion
    camera_renderer camera;
    // Declare object that handles camera pose calculations
    rotation_estimator algo;

    double my_clock = std::chrono::duration<double, std::milli>(std::chrono::system_clock::now().time_since_epoch()).count();

    // Start streaming with the given configuration;
    // Note that since we only allow IMU streams, only single frames are produced
    auto profile = pipe_motion.start(cfg_motion, [&](rs2::frame frame)
                                     {
                                         // Cast the frame that arrived to motion frame
                                         auto motion = frame.as<rs2::motion_frame>();
                                         // If casting succeeded and the arrived frame is from gyro stream
                                         if (motion && motion.get_profile().stream_type() == RS2_STREAM_GYRO && motion.get_profile().format() == RS2_FORMAT_MOTION_XYZ32F)
                                         {
                                             // Get the timestamp of the current frame
                                             double ts = motion.get_timestamp();
                                             auto domain = motion.get_frame_timestamp_domain();
                                             // Get gyro measures
                                             rs2_vector gyro_data = motion.get_motion_data();
                                             // Call function that computes the angle of motion based on the retrieved measures
                                             algo.process_gyro(gyro_data, ts);
                                             gryro_iter++;
                                             ts_gyro = ts;
                                         }
                                         // If casting succeeded and the arrived frame is from accelerometer stream
                                         if (motion && motion.get_profile().stream_type() == RS2_STREAM_ACCEL && motion.get_profile().format() == RS2_FORMAT_MOTION_XYZ32F)
                                         {
                                             // Get accelerometer measures
                                             double ts = motion.get_timestamp();
                                             auto domain = motion.get_frame_timestamp_domain();
                                             rs2_vector accel_data = motion.get_motion_data();
                                             // Call function that computes the angle of motion based on the retrieved measures
                                             algo.process_accel(accel_data);
                                             accel_iter++;
                                             ts_accel = ts;
                                         }
                                     });

    // Create pipeline for depth and color. Run in this main thread.
    // Declare RealSense pipeline, encapsulating the actual device and sensors
    rs2::pipeline pipe_camera;
    // Create a configuration for configuring the pipeline with a non default profile
    rs2::config cfg_camera;
    // Add depth and color streams to configuration
    cfg_camera.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);
    cfg_camera.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
    auto profile_camera = pipe_camera.start(cfg_camera);

    // Main loop
    // while (app)
    while (true)
    {
        rs2::frameset data = pipe_camera.wait_for_frames();
        rs2::frame depth = data.get_depth_frame();
        rs2::frame color = data.get_color_frame();
        depth_iter++;
        color_iter++;
        ts_depth = depth.get_timestamp();
        ts_color = color.get_timestamp();

        double new_clock = std::chrono::duration<double, std::milli>(std::chrono::system_clock::now().time_since_epoch()).count();
        if (new_clock - my_clock >= 1000)
        {
            my_clock = new_clock;
            std::cout << std::setprecision(0) << std::fixed << "FPS --- "
                      << "Gryo: " << gryro_iter << "; Accel: " << accel_iter
                      << "; Depth: " << depth_iter << "; RGB: " << color_iter << std::endl;
            std::cout << std::setprecision(0) << std::fixed << "Timing --- Now: " << my_clock << "; Gryo: " << ts_gyro << "; Accel: " << ts_accel
                      << "; Depth: " << ts_depth << "; RGB: " << ts_color << std::endl;
            std::cout << std::endl;
            gryro_iter = 0;
            accel_iter = 0;
            depth_iter = 0;
            color_iter = 0;
        }

        // auto domain = depth.get_frame_timestamp_domain();
        // Configure scene, draw floor, handle manipulation by the user etc.
        // render_scene(app_state);
        // Draw the camera according to the computed theta
        // camera.render_camera(algo.get_theta());
        std::this_thread::sleep_for(10ms);
    }

    // Stop the pipeline
    pipe_motion.stop();
    return EXIT_SUCCESS;
}
catch (const rs2::error &e)
{
    std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;
    return EXIT_FAILURE;
}
catch (const std::exception &e)
{
    std::cerr << e.what() << std::endl;
    return EXIT_FAILURE;
}

@JeremyBYU
Author

To clarify why I am actually still using rsusb: my final deployment will be on a Raspberry Pi 4, and this seems to be the recommended way to install librealsense there. I have been switching between Linux x86_64 (personal PC) and RPi 4 Buster x64 (experimental new 64-bit kernel) and testing code between the two. It seems to be working okay except for this new issue.

@JeremyBYU
Author

All my investigations have determined that it will not be possible to use libuvc/rsusb to get high frequency motion data (200 FPS) at the same time as low frequency video data (30 FPS). I must use the kernel drivers.

It was previously said:

With all kernels v4.16+ you get the standard HW timestamp for free (w/o kernel patches). You'd still need to apply the patches if you need the extended frame metadata - exposure, gain, laser power, etc. But if you're only interested in getting HW timestamps, then using an unmodified kernel should suffice.

My investigations are showing this not to be true. I will open up a separate issue for that. Closing this now.

@ev-mp
Collaborator

ev-mp commented Nov 27, 2019

The following comment is imho relevant to the discussion (#5315)

@JeremyBYU
Author

I believe this comment indicates that the libuvc/rsusb driver did not work for the customer. They highlight several issues and say they used the V4L2 (kernel) driver instead and the issues went away. This is what I am trying to do as well now. Unfortunately, I have run into these issues with the kernel driver: #4505

@JeremyBYU
Author

I have found a way to get high frequency motion data at the same time as low frequency video using the RSUSB (LIBUVC) drivers. Below is code that demonstrates this technique while reading a bag file as well as live streaming. The main technique that makes this possible is to NOT USE rs2::pipeline, but to directly access each sensor and configure the streams manually (as well as the associated callbacks).

// License: Apache 2.0. See LICENSE file in root directory.
// Copyright(c) 2019 Intel Corporation. All Rights Reserved.
// After many, many rounds of trial and error I have finally arrived at this technique to get multiple sensor streams from
// an Intel RealSense which have DIFFERENT FRAME RATES

// This technique works with the RSUSB driver (LIBUVC) but should work with the kernel drivers as well
// This example is meant for the D435i, to get high frequency motion data at the same time as depth/rgb images
// Two examples are shown, one that reads a bag file and another that does live streaming from the sensor. Activate bag reading with the cmd line flag --bag=<file>
// The main technique that makes this possible is to NOT USE rs2::pipeline, but to directly access each sensor and
// configure the streams manually (as well as the associated callbacks).

#include <cstring>
#include <chrono>
#include <thread>
#include <mutex>
#include <map>
#include <string>
#include <sstream>
#include <iostream>
#include <iomanip>

#include <librealsense2/rs.hpp>
#include <gflags/gflags.h>

using namespace std::chrono_literals;

// Command line flags
DEFINE_string(bag, "",
				"Path to bag file");

bool check_imu_is_supported()
{

	bool found_gyro = false;
	bool found_accel = false;
	rs2::context ctx;
	for (auto dev : ctx.query_devices())
	{
		// The same device should support gyro and accel
		found_gyro = false;
		found_accel = false;
		for (auto sensor : dev.query_sensors())
		{
			for (auto profile : sensor.get_stream_profiles())
			{
				if (profile.stream_type() == RS2_STREAM_GYRO)
					found_gyro = true;
				if (profile.stream_type() == RS2_STREAM_ACCEL)
					found_accel = true;
			}
		}
		if (found_gyro && found_accel)
			break;
	}
	return found_gyro && found_accel;
}


int bag_counter(std::string file_name)
{

	size_t gryro_iter = 1;
	size_t accel_iter = 0;
	size_t depth_iter = 0;
	size_t color_iter = 0;
	double ts_gyro = 0.0;
	double ts_accel = 0.0;
	double ts_depth = 0.0;
	double ts_color = 0.0;
	rs2_timestamp_domain gyro_domain;
	rs2_timestamp_domain accel_domain;
	rs2_timestamp_domain depth_domain;
	rs2_timestamp_domain color_domain;

	rs2::config config;
	rs2::device device;
	rs2::pipeline pipe;

	// enable file playback with playback repeat disabled
	config.enable_device_from_file(file_name, false);

	auto profile = config.resolve(pipe);
    device = profile.get_device();
    auto playback = device.as<rs2::playback>();
    playback.set_real_time(false);
	// DONT START THE PIPELINE, you will always get dropped frames from high frequency data if paired with low frequency images
	// pipeline_profile = pipe.start( config );

	auto sensors = playback.query_sensors();

	for (auto &sensor : sensors)
	{
		auto profiles = sensor.get_stream_profiles();
		for(auto &profile: profiles)
		{
			sensor.open(profile);
			std::cout << "Profile: " << profile.stream_name() <<std::endl;
		}
		// Sensor Callback
		sensor.start([&](rs2::frame frame){
			if (frame.get_profile().stream_type() == RS2_STREAM_COLOR)
			{
				color_iter +=1;
				ts_color = frame.get_timestamp();
				color_domain = frame.get_frame_timestamp_domain();
			}
			else if(frame.get_profile().stream_type() == RS2_STREAM_GYRO)
			{
				gryro_iter +=1;
				ts_gyro = frame.get_timestamp();
				gyro_domain = frame.get_frame_timestamp_domain();
			}
			else if(frame.get_profile().stream_type() == RS2_STREAM_ACCEL)
			{
				accel_iter +=1;
				ts_accel = frame.get_timestamp();
				accel_domain = frame.get_frame_timestamp_domain();
			}
			else if(frame.get_profile().stream_type() == RS2_STREAM_DEPTH)
			{
				depth_iter +=1;
				ts_depth = frame.get_timestamp();
				depth_domain = frame.get_frame_timestamp_domain();
			}
			std::this_thread::sleep_for( 1ms );

		});
	}

	while (true)
	{
		
		double my_clock = std::chrono::duration<double, std::milli>(std::chrono::system_clock::now().time_since_epoch()).count();
		std::cout << std::setprecision(0) << std::fixed << "FPS --- "
			<< "Gryo: " << gryro_iter << "; Accel: " << accel_iter

			<< "; Depth: " << depth_iter << "; RGB: " << color_iter << std::endl;

		std::cout << std::setprecision(0) << std::fixed << "Timing --- Now: " << my_clock << "; Gryo: " << ts_gyro << "; Accel: " << ts_accel
			<< "; Depth: " << ts_depth << "; RGB: " << ts_color << std::endl;

		std::cout << std::setprecision(0) << std::fixed << "Time Domain --- Now: " << my_clock << "; Gryo: " << gyro_domain << "; Accel: " << accel_domain
				<< "; Depth: " << depth_domain << "; RGB: " << color_domain << std::endl;

		std::cout << std::setprecision(0) << std::fixed << "Latency --- GyroToColor: " << ts_gyro - ts_color << std::endl;
		std::cout <<std::endl;
		if (gryro_iter == 0)
			break;
		gryro_iter = 0;
		accel_iter = 0;
		depth_iter = 0;
		color_iter = 0;

		std::this_thread::sleep_for( 1000ms );
	}

	return EXIT_SUCCESS;
}


void print_profiles(std::vector<rs2::stream_profile> streams)
{
	for (auto &stream: streams)
	{
		std::cout << "Stream Name: " << stream.stream_name() << "; Format: " << stream.format() << "; Index: " << stream.stream_index() << "FPS: " << stream.fps() <<std::endl;
	}
}

// This example demonstrates live streaming motion data as well as video data
int live_counter()
{
	// Before running the example, check that a device supporting IMU is connected
	if (!check_imu_is_supported())
	{
		std::cerr << "Device supporting IMU (D435i) not found";
		return EXIT_FAILURE;
	}

	size_t gryro_iter = 0;
	size_t accel_iter = 0;
	size_t depth_iter = 0;
	size_t color_iter = 0;
	double ts_gyro = 0.0;
	double ts_accel = 0.0;
	double ts_depth = 0.0;
	double ts_color = 0.0;
	rs2_timestamp_domain gyro_domain;
	rs2_timestamp_domain accel_domain;
	rs2_timestamp_domain depth_domain;
	rs2_timestamp_domain color_domain;
	
	// Declare RealSense pipeline, encapsulating the actual device and sensors
	rs2::pipeline pipe;
	// Create a configuration for configuring the pipeline with a non default profile
	rs2::config config;
	// Add streams of gyro and accelerometer to configuration
	config.enable_stream(RS2_STREAM_ACCEL, RS2_FORMAT_MOTION_XYZ32F);
	config.enable_stream(RS2_STREAM_GYRO, RS2_FORMAT_MOTION_XYZ32F);
	// Add depth and color streams
	config.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);
	config.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);

	auto profile = config.resolve(pipe); // allows us to get device
    auto device = profile.get_device();

	auto streams =  profile.get_streams(); // 0=Depth, 1=RGB, 2=Gryo, 3=Accel
	std::cout << "Profiles that will be activated: "<< std::endl; 
	print_profiles(streams);

	// Create a mapping between sensor name and the desired profiles
	std::map<std::string, std::vector<rs2::stream_profile>> sensor_to_streams; 
	sensor_to_streams.insert(std::pair<std::string, std::vector<rs2::stream_profile>>(std::string("Stereo Module"), {streams[0]}));
	sensor_to_streams.insert(std::pair<std::string, std::vector<rs2::stream_profile>>(std::string("RGB Camera"), {streams[1]}));
	sensor_to_streams.insert(std::pair<std::string, std::vector<rs2::stream_profile>>(std::string("Motion Module"), {streams[2], streams[3]}));

	auto sensors = device.query_sensors();
	for (auto &sensor : sensors)
	{
		auto sensor_name = std::string(sensor.get_info(RS2_CAMERA_INFO_NAME));
		std::cout << "Sensor Name: " << sensor_name << std::endl;
		bool make_callback = false;
		try
		{
			// get sensor streams that are mapped to this sensor name
			auto sensor_streams = sensor_to_streams.at(sensor_name);
			std::cout << "Opening stream for " << sensor_name << std::endl;
			sensor.open(sensor_streams);
			make_callback = true;

		}
		catch(const std::exception& e)
		{
			std::cout << "Sensor " << sensor_name << " has not configured streams, skipping..." << std::endl; 
		}
		if (make_callback)
		{
			std::cout << "Creating callback for " << sensor_name << std::endl;
			sensor.start([&](rs2::frame frame){
				if (frame.get_profile().stream_type() == RS2_STREAM_COLOR)
				{
					color_iter +=1;
					ts_color = frame.get_timestamp();
					color_domain = frame.get_frame_timestamp_domain();
				}
				else if(frame.get_profile().stream_type() == RS2_STREAM_GYRO)
				{
					gryro_iter +=1;
					ts_gyro = frame.get_timestamp();
					gyro_domain = frame.get_frame_timestamp_domain();
				}
				else if(frame.get_profile().stream_type() == RS2_STREAM_ACCEL)
				{
					accel_iter +=1;
					ts_accel = frame.get_timestamp();
					accel_domain = frame.get_frame_timestamp_domain();
				}
				else if(frame.get_profile().stream_type() == RS2_STREAM_DEPTH)
				{
					depth_iter +=1;
					ts_depth = frame.get_timestamp();
					depth_domain = frame.get_frame_timestamp_domain();
				}
			});

		}
	}

	while (true)
	{
		double my_clock = std::chrono::duration<double, std::milli>(std::chrono::system_clock::now().time_since_epoch()).count();
		std::cout << std::setprecision(0) << std::fixed << "FPS --- "
			<< "Gryo: " << gryro_iter << "; Accel: " << accel_iter

			<< "; Depth: " << depth_iter << "; RGB: " << color_iter << std::endl;

		std::cout << std::setprecision(0) << std::fixed << "Timing --- Now: " << my_clock << "; Gryo: " << ts_gyro << "; Accel: " << ts_accel
			<< "; Depth: " << ts_depth << "; RGB: " << ts_color << std::endl;

		std::cout << std::setprecision(0) << std::fixed << "Time Domain --- Now: " << my_clock << "; Gryo: " << gyro_domain << "; Accel: " << accel_domain
				<< "; Depth: " << depth_domain << "; RGB: " << color_domain << std::endl;

		std::cout << std::setprecision(0) << std::fixed << "Latency --- GyroToColor: " << ts_gyro - ts_color << std::endl;
		std::cout <<std::endl;

		gryro_iter = 0;
		accel_iter = 0;
		depth_iter = 0;
		color_iter = 0;

		std::this_thread::sleep_for( 1000ms );
	}

	return EXIT_SUCCESS;
}

int main(int argc, char* argv[]) try
{
	gflags::ParseCommandLineFlags(&argc, &argv, true);
	if (FLAGS_bag == "")
	{
		live_counter();
	}
	else
	{
		bag_counter(FLAGS_bag);
	}
	
	
}
catch (const rs2::error & e)

{

	std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n    " << e.what() << std::endl;

	return EXIT_FAILURE;
}

catch (const std::exception & e)

{

	std::cerr << e.what() << std::endl;

	return EXIT_FAILURE;
}

@sunlong169

You'd still need to apply the patches if you need the extended frame metadata - exposure, gain, laser power, etc.

Which patch is about exposure, gain, and laser power? Could you please tell me the name? I carefully checked the files whose suffix is .patch, but I did not find it in librealsense.
