D435i Motion Module Hardware Timestamps using LIBUVC #5212
Comments
Same problem here... with a D415, but the same config and a new install of SDK 2.30.
@JeremyBYU hello, please see answers inlined:
With all kernels v4.16+ you get the standard HW timestamp for free (w/o kernel patches). You'd still need to apply the patches if you need the extended frame metadata - exposure, gain, laser power, etc. But if you're only interested in getting HW timestamps, then using an unmodified kernel should suffice (a quick way to verify which timestamp domain you're actually getting is sketched at the end of this comment).
The
By checking the
The HW timestamps should be available with
This question is worth a topic of its own :). The following points are, in my estimation, relevant to the discussion:
Disadvantages/ Fundamental issues:
Kernel patches:
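To make that concrete, here is a minimal check (a sketch of my own using the standard librealsense2 frame API, with whatever default stream configuration the device provides) that prints each frame's timestamp domain. HARDWARE_CLOCK or GLOBAL_TIME means the device's timestamps are in use; SYSTEM_TIME means the host-clock fallback is in effect.

// Minimal sketch: print the timestamp domain of each incoming frame
#include <librealsense2/rs.hpp>
#include <iostream>

int main()
{
    rs2::pipeline pipe;
    pipe.start(); // default stream configuration
    for (int i = 0; i < 30; ++i)
    {
        rs2::frameset fs = pipe.wait_for_frames();
        for (const rs2::frame &f : fs)
        {
            // HARDWARE_CLOCK / GLOBAL_TIME -> device (HW) timestamps; SYSTEM_TIME -> host-clock fallback
            std::cout << f.get_profile().stream_name() << ": "
                      << rs2_timestamp_domain_to_string(f.get_frame_timestamp_domain()) << std::endl;
        }
    }
    pipe.stop();
    return 0;
}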
Thank you for this thorough response. I will continue to use the libuvc/rsusb driver for now, since timestamps are mostly all I care about. The only thing you mentioned that I am a little concerned about is this quote:
Just wondering if this implies that there might be an issue with speed when using libuvc. However, that is something I can determine myself through my own tests.
I think I just ran into an issue with something you just said:
I have a program that is modified from rs-motion. Two rs::pipelines are created:
This setup works just fine on my Windows 10 computer (which I assume is using a good device driver that supports this setup). However, on my Linux computer, now using libuvc/rsusb, only the last pipeline started gets access to data (camera data). Code attached at the bottom. If you comment out the
I just want to make sure that this is not a bug, and that this "issue" is what you were referring to in the above quote. If it is expected behaviour, do you have any solution for me to capture high-frequency motion data on one thread, and separately capture a frameset of depth/RGB on another using rsusb? I think I have an idea on how to do that and will try it out: capture all data in one pipeline, with the callback handing back each sensor stream independently (gyro, accel, depth, RGB) (a rough sketch of that single-pipeline idea follows after the attached code). But then I think I lose the ability to nicely use frameset alignment and such. Thanks for your help @ev-mp.
// License: Apache 2.0. See LICENSE file in root directory.
// Copyright(c) 2019 Intel Corporation. All Rights Reserved.
#include <librealsense2/rs.hpp>
#include <mutex>
#include "example.hpp" // Include short list of convenience functions for rendering
#include <cstring>
#include <cmath>
#include <chrono>
#include <iomanip>
#include <iostream>
#include <thread>
struct short3
{
uint16_t x, y, z;
};
#include "d435.h"
void draw_axes()
{
glLineWidth(2);
glBegin(GL_LINES);
// Draw x, y, z axes
glColor3f(1, 0, 0);
glVertex3f(0, 0, 0);
glVertex3f(-1, 0, 0);
glColor3f(0, 1, 0);
glVertex3f(0, 0, 0);
glVertex3f(0, -1, 0);
glColor3f(0, 0, 1);
glVertex3f(0, 0, 0);
glVertex3f(0, 0, 1);
glEnd();
glLineWidth(1);
}
void draw_floor()
{
glBegin(GL_LINES);
glColor4f(0.4f, 0.4f, 0.4f, 1.f);
// Render "floor" grid
for (int i = 0; i <= 8; i++)
{
glVertex3i(i - 4, 1, 0);
glVertex3i(i - 4, 1, 8);
glVertex3i(-4, 1, i);
glVertex3i(4, 1, i);
}
glEnd();
}
void render_scene(glfw_state app_state)
{
glClearColor(0.0, 0.0, 0.0, 1.0);
glColor3f(1.0, 1.0, 1.0);
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluPerspective(60.0, 4.0 / 3.0, 1, 40);
glClear(GL_COLOR_BUFFER_BIT);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();
gluLookAt(1, 0, 5, 1, 0, 0, 0, -1, 0);
glTranslatef(0, 0, +0.5f + app_state.offset_y * 0.05f);
glRotated(app_state.pitch, -1, 0, 0);
glRotated(app_state.yaw, 0, 1, 0);
draw_floor();
}
class camera_renderer
{
std::vector<float3> positions, normals;
std::vector<short3> indexes;
public:
// Initialize renderer with data needed to draw the camera
camera_renderer()
{
uncompress_d435_obj(positions, normals, indexes);
}
// Takes the calculated angle as input and rotates the 3D camera model accordingly
void render_camera(float3 theta)
{
glEnable(GL_BLEND);
glBlendFunc(GL_ONE, GL_ONE);
glPushMatrix();
// Set the rotation, converting theta to degrees
glRotatef(theta.x * 180 / PI, 0, 0, -1);
glRotatef(theta.y * 180 / PI, 0, -1, 0);
glRotatef((theta.z - PI / 2) * 180 / PI, -1, 0, 0);
draw_axes();
// Scale camera drawing
glScalef(0.035, 0.035, 0.035);
glBegin(GL_TRIANGLES);
// Draw the camera
for (auto &i : indexes)
{
glVertex3fv(&positions[i.x].x);
glVertex3fv(&positions[i.y].x);
glVertex3fv(&positions[i.z].x);
glColor4f(0.05f, 0.05f, 0.05f, 0.3f);
}
glEnd();
glPopMatrix();
glDisable(GL_BLEND);
glFlush();
}
};
class rotation_estimator
{
// theta is the angle of camera rotation in x, y and z components
float3 theta;
std::mutex theta_mtx;
/* alpha indicates the part that gyro and accelerometer take in computation of theta; higher alpha gives more weight to gyro, but too high
values cause drift; lower alpha gives more weight to accelerometer, which is more sensitive to disturbances */
float alpha = 0.98;
bool first = true;
// Keeps the arrival time of previous gyro frame
double last_ts_gyro = 0;
public:
// Function to calculate the change in angle of motion based on data from gyro
void process_gyro(rs2_vector gyro_data, double ts)
{
if (first) // On the first iteration, use only data from accelerometer to set the camera's initial position
{
last_ts_gyro = ts;
return;
}
// Holds the change in angle, as calculated from gyro
float3 gyro_angle;
// Initialize gyro_angle with data from gyro
gyro_angle.x = gyro_data.x; // Pitch
gyro_angle.y = gyro_data.y; // Yaw
gyro_angle.z = gyro_data.z; // Roll
// Compute the difference between arrival times of previous and current gyro frames
double dt_gyro = (ts - last_ts_gyro) / 1000.0;
last_ts_gyro = ts;
// Change in angle equals gyro measures * time passed since last measurement
gyro_angle = gyro_angle * dt_gyro;
// Apply the calculated change of angle to the current angle (theta)
std::lock_guard<std::mutex> lock(theta_mtx);
theta.add(-gyro_angle.z, -gyro_angle.y, gyro_angle.x);
}
void process_accel(rs2_vector accel_data)
{
// Holds the angle as calculated from accelerometer data
float3 accel_angle;
// Calculate rotation angle from accelerometer data
accel_angle.z = atan2(accel_data.y, accel_data.z);
accel_angle.x = atan2(accel_data.x, sqrt(accel_data.y * accel_data.y + accel_data.z * accel_data.z));
// If it is the first iteration, set initial pose of camera according to accelerometer data (note the different handling for Y axis)
std::lock_guard<std::mutex> lock(theta_mtx);
if (first)
{
first = false;
theta = accel_angle;
// Since we can't infer the angle around the Y axis using accelerometer data, we'll use PI as a convention for the initial pose
theta.y = PI;
}
else
{
/*
Apply Complementary Filter:
- high-pass filter = theta * alpha: allows short-duration signals to pass through while filtering out signals
that are steady over time, is used to cancel out drift.
- low-pass filter = accel * (1- alpha): lets through long term changes, filtering out short term fluctuations
*/
theta.x = theta.x * alpha + accel_angle.x * (1 - alpha);
theta.z = theta.z * alpha + accel_angle.z * (1 - alpha);
}
}
// Returns the current rotation angle
float3 get_theta()
{
std::lock_guard<std::mutex> lock(theta_mtx);
return theta;
}
};
bool check_imu_is_supported()
{
bool found_gyro = false;
bool found_accel = false;
rs2::context ctx;
for (auto dev : ctx.query_devices())
{
// The same device should support gyro and accel
found_gyro = false;
found_accel = false;
for (auto sensor : dev.query_sensors())
{
for (auto profile : sensor.get_stream_profiles())
{
if (profile.stream_type() == RS2_STREAM_GYRO)
found_gyro = true;
if (profile.stream_type() == RS2_STREAM_ACCEL)
found_accel = true;
}
}
if (found_gyro && found_accel)
break;
}
return found_gyro && found_accel;
}
int main(int argc, char *argv[]) try
{
using namespace std::chrono_literals;
// Before running the example, check that a device supporting IMU is connected
if (!check_imu_is_supported())
{
std::cerr << "Device supporting IMU (D435i) not found";
return EXIT_FAILURE;
}
// Initialize window for rendering
// window app(1280, 720, "RealSense Motion Example");
// // Construct an object to manage view state
// glfw_state app_state(0.0, 0.0);
// // Register callbacks to allow manipulation of the view state
// register_glfw_callbacks(app, app_state);
size_t gyro_iter = 0;
size_t accel_iter = 0;
size_t depth_iter = 0;
size_t color_iter = 0;
double ts_gyro = 0.0;
double ts_accel = 0.0;
double ts_depth = 0.0;
double ts_color = 0.0;
// Create pipeline for gyro and accelerometer. Runs in a separate thread.
// Declare RealSense pipeline, encapsulating the actual device and sensors
rs2::pipeline pipe_motion;
// Create a configuration for configuring the pipeline with a non default profile
rs2::config cfg_motion;
// Add streams of gyro and accelerometer to configuration
cfg_motion.enable_stream(RS2_STREAM_ACCEL, RS2_FORMAT_MOTION_XYZ32F);
cfg_motion.enable_stream(RS2_STREAM_GYRO, RS2_FORMAT_MOTION_XYZ32F);
// Declare object for rendering camera motion
camera_renderer camera;
// Declare object that handles camera pose calculations
rotation_estimator algo;
double my_clock = std::chrono::duration<double, std::milli>(std::chrono::system_clock::now().time_since_epoch()).count();
// Start streaming with the given configuration;
// Note that since we only allow IMU streams, only single frames are produced
auto profile = pipe_motion.start(cfg_motion, [&](rs2::frame frame)
{
// Cast the frame that arrived to motion frame
auto motion = frame.as<rs2::motion_frame>();
// If casting succeeded and the arrived frame is from gyro stream
if (motion && motion.get_profile().stream_type() == RS2_STREAM_GYRO && motion.get_profile().format() == RS2_FORMAT_MOTION_XYZ32F)
{
// Get the timestamp of the current frame
double ts = motion.get_timestamp();
auto domain = motion.get_frame_timestamp_domain();
// Get gyro measures
rs2_vector gyro_data = motion.get_motion_data();
// Call function that computes the angle of motion based on the retrieved measures
algo.process_gyro(gyro_data, ts);
gyro_iter++;
ts_gyro = ts;
}
// If casting succeeded and the arrived frame is from accelerometer stream
if (motion && motion.get_profile().stream_type() == RS2_STREAM_ACCEL && motion.get_profile().format() == RS2_FORMAT_MOTION_XYZ32F)
{
// Get accelerometer measures
double ts = motion.get_timestamp();
auto domain = motion.get_frame_timestamp_domain();
rs2_vector accel_data = motion.get_motion_data();
// Call function that computes the angle of motion based on the retrieved measures
algo.process_accel(accel_data);
accel_iter++;
ts_accel = ts;
}
});
// Create pipeline for depth and color. Run in this main thread.
// Declare RealSense pipeline, encapsulating the actual device and sensors
rs2::pipeline pipe_camera;
// Create a configuration for configuring the pipeline with a non default profile
rs2::config cfg_camera;
// Add color and depth streams to the configuration
cfg_camera.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);
cfg_camera.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
auto profile_camera = pipe_camera.start(cfg_camera);
// Main loop
// while (app)
while (true)
{
rs2::frameset data = pipe_camera.wait_for_frames();
rs2::frame depth = data.get_depth_frame();
rs2::frame color = data.get_color_frame();
depth_iter++;
color_iter++;
ts_depth = depth.get_timestamp();
ts_color = color.get_timestamp();
double new_clock = std::chrono::duration<double, std::milli>(std::chrono::system_clock::now().time_since_epoch()).count();
if (new_clock - my_clock >= 1000)
{
my_clock = new_clock;
std::cout << std::setprecision(0) << std::fixed << "FPS --- "
<< "Gryo: " << gryro_iter << "; Accel: " << accel_iter
<< "; Depth: " << depth_iter << "; RGB: " << color_iter << std::endl;
std::cout << std::setprecision(0) << std::fixed << "Timing --- Now: " << my_clock << "; Gryo: " << ts_gyro << "; Accel: " << ts_accel
<< "; Depth: " << ts_depth << "; RGB: " << ts_color << std::endl;
std::cout << std::endl;
gryro_iter = 0;
accel_iter = 0;
depth_iter = 0;
color_iter = 0;
}
// auto domain = depth.get_frame_timestamp_domain();
// Configure scene, draw floor, handle manipulation by the user, etc.
// render_scene(app_state);
// Draw the camera according to the computed theta
// camera.render_camera(algo.get_theta());
std::this_thread::sleep_for(10ms);
}
// Stop the pipeline
pipe_motion.stop();
return EXIT_SUCCESS;
}
catch (const rs2::error &e)
{
std::cerr << "RealSense error calling " << e.get_failed_function() << "(" << e.get_failed_args() << "):\n " << e.what() << std::endl;
return EXIT_FAILURE;
}
catch (const std::exception &e)
{
std::cerr << e.what() << std::endl;
return EXIT_FAILURE;
}
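A rough sketch of that single-pipeline idea, modeled on the SDK's rs-callback example (the stream settings are illustrative, and I have not verified whether it keeps frameset alignment intact):

// Sketch: one pipeline, one callback. Depth/color arrive bundled in a frameset at 30 fps,
// while gyro/accel arrive as individual motion frames at their own (higher) rate.
#include <librealsense2/rs.hpp>
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_ACCEL, RS2_FORMAT_MOTION_XYZ32F);
    cfg.enable_stream(RS2_STREAM_GYRO, RS2_FORMAT_MOTION_XYZ32F);
    cfg.enable_stream(RS2_STREAM_DEPTH, 640, 480, RS2_FORMAT_Z16, 30);
    cfg.enable_stream(RS2_STREAM_COLOR, 640, 480, RS2_FORMAT_BGR8, 30);

    pipe.start(cfg, [](const rs2::frame &frame)
    {
        if (rs2::frameset fs = frame.as<rs2::frameset>())
        {
            // Synchronized depth + color bundle
            std::cout << "frameset of " << fs.size() << " frames, ts=" << fs.get_timestamp() << std::endl;
        }
        else if (rs2::motion_frame mf = frame.as<rs2::motion_frame>())
        {
            // Individual gyro/accel samples
            std::cout << mf.get_profile().stream_name() << " ts=" << mf.get_timestamp() << std::endl;
        }
    });

    std::this_thread::sleep_for(std::chrono::seconds(10));
    pipe.stop();
    return 0;
}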
To clarify why I am actually still using rsusb: my final deployment will be on a Raspberry Pi 4, and this seems to be the recommended way to install librealsense there. I have been switching between Linux x86_64 (personal PC) and RPi 4 Buster x64 (the experimental new 64-bit kernel) and testing code on both. It seems to be working okay except for this new issue.
All my investigations indicate that it is not possible to use libuvc/rsusb to get high-frequency motion data (200 fps) at the same time as low-frequency video data (30 fps); I must use the kernel drivers. It was previously said:
My investigations show this not to be true. I will open a separate issue for that. Closing this now.
The following comment is, IMHO, relevant to the discussion (#5315).
I believe this comment indicates that the libuvc/rsusb driver did not work for the customer. They highlight several issues and say that they used the v4l2 (kernel) driver instead and the issues went away. This is what I am trying to do now as well. Unfortunately, I have run into these issues with the kernel driver: #4505
I have found a way to get high-frequency motion data at the same time as low-frequency video using the RSUSB (LIBUVC) drivers. Below is code that demonstrates this technique while reading a bag file as well as live streaming. The key is to NOT use rs2::pipeline, but to access each sensor directly and configure its streams (and the associated callbacks) manually.
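A minimal live-streaming sketch of this direct-sensor-access approach (my own simplification, not the full bag-file-plus-live code referred to above: each sensor is opened with its default profiles, and a real program would select specific formats/rates and add error handling):

// Sketch of direct sensor access (no rs2::pipeline): each sensor gets its own callback,
// so the ~200 Hz IMU stream is not throttled by the 30 fps depth/color streams.
#include <librealsense2/rs.hpp>
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

int main()
{
    rs2::context ctx;
    auto devices = ctx.query_devices();
    if (devices.size() == 0)
    {
        std::cerr << "No RealSense device found" << std::endl;
        return 1;
    }
    rs2::device dev = devices.front();

    std::vector<rs2::sensor> started;
    for (rs2::sensor sensor : dev.query_sensors())
    {
        // Open each sensor with its default profiles (gyro + accel on the motion module,
        // depth on the stereo module, color on the RGB sensor).
        std::vector<rs2::stream_profile> profiles;
        for (auto &p : sensor.get_stream_profiles())
            if (p.is_default())
                profiles.push_back(p);
        if (profiles.empty())
            continue;
        sensor.open(profiles);
        sensor.start([](rs2::frame f)
        {
            // Callbacks fire independently per sensor, each at its own rate
            std::cout << f.get_profile().stream_name() << " ts=" << f.get_timestamp() << std::endl;
        });
        started.push_back(sensor);
    }

    std::this_thread::sleep_for(std::chrono::seconds(5));

    for (auto &s : started)
    {
        s.stop();
        s.close();
    }
    return 0;
}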
Which patch is about |
I have installed the latest RealSense SDK 2.30 on my machine. Here are the CMake commands, for completeness:
My kernel was just upgraded to 5.3, so I now need to use the libuvc backend. It installs and seems to work fine. I have already calibrated the IMU using Intel's procedures. Hardware timestamps seem to be working for RGB and depth; however, I am not sure whether they are working for the motion module. There are no errors about meta information in the terminal when running realsense-viewer. There is this error, but I'm not sure if it's relevant; there are only a few of them over the lifespan, mostly in the beginning.
However, when I click the meta-information overlay for the motion stream, I get the following picture.
As you can see, the clock domain shows Global Time, which should mean that hardware timestamps are working. However, the backend timestamp shows 0, making me think that it's not working (a snippet showing how I'm probing this is at the end of this post). The main question I have:
A final question I have is a little more general. I have searched but have not found the answer:
Obviously, libuvc wins on ease of installation. But are there any tradeoffs, like higher latency, more bugs, or unsupported workflows? Any info on that would be great.
Thanks again for the great product and the support Intel provides.
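For completeness, this is roughly how I am probing the motion-module timestamps (a sketch; whether the FRAME_TIMESTAMP metadata is reported depends on the backend and the kernel patches discussed above):

// Sketch: inspect motion-frame timestamp domains and metadata on an IMU-equipped device (e.g. D435i)
#include <librealsense2/rs.hpp>
#include <chrono>
#include <iostream>
#include <thread>

int main()
{
    rs2::pipeline pipe;
    rs2::config cfg;
    cfg.enable_stream(RS2_STREAM_GYRO, RS2_FORMAT_MOTION_XYZ32F);
    cfg.enable_stream(RS2_STREAM_ACCEL, RS2_FORMAT_MOTION_XYZ32F);

    pipe.start(cfg, [](rs2::frame f)
    {
        std::cout << f.get_profile().stream_name()
                  << " domain=" << rs2_timestamp_domain_to_string(f.get_frame_timestamp_domain())
                  << " ts=" << f.get_timestamp();
        // The sensor's own per-frame timestamp is exposed as metadata when supported
        if (f.supports_frame_metadata(RS2_FRAME_METADATA_FRAME_TIMESTAMP))
            std::cout << " meta_ts=" << f.get_frame_metadata(RS2_FRAME_METADATA_FRAME_TIMESTAMP);
        std::cout << std::endl;
    });

    std::this_thread::sleep_for(std::chrono::seconds(2));
    pipe.stop();
    return 0;
}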