problems in rs_convert #1919

Closed
marcovs opened this issue Jun 20, 2018 · 16 comments
@marcovs

marcovs commented Jun 20, 2018

Required Info
Camera Model: D400
Firmware Version: 05.09.09.02
Operating System & Version: MacOS
Kernel Version (Linux Only): N/A
Platform: Mac
SDK Version: 2.11.0
Language: C++

I tested the recently released version of the rs_convert tool. However, I found the following problems:

  1. Even with small bag files (~150MB), it drops frames. I was unable to extract all frames from a bag with 427 color and 430 depth frames. The number of frames dropped varies in each execution instance.

  2. Sometimes it crashes before completion. I suspect it is also because of timing issues. When debugging, it points to the following line:

void Writer::initializeLogger(const std::string& loggerId, bool lookup, bool needLock) {
  if (lookup) {
    m_logger = ELPP->registeredLoggers()->get(loggerId, ELPP->hasFlag(LoggingFlag::CreateLoggerAutomatically));
  }
  3. Execution is not smooth. Sometimes it gets stuck at a certain completion percentage and then continues in a burst of speed.

I was hoping to use the tool as a base for non-realtime, frame-by-frame processing of rosbags. However, every time I tried with playback.resume() and playback.pause(), even before the rs_convert tool was released, I ran into issues 1 or 2. Are there any suggestions for my use case?

@ev-mp ev-mp added the software label Jun 21, 2018
@baptiste-mnh
Contributor

Hi!
I got the exact same issue.

@ghost

ghost commented Jun 22, 2018

I have this frame dropping issue too.

I recorded about 30 seconds @ 30 fps but instead of the expected 900 depth images I got only 390-400 (exact number varies each time I execute the tool)

@TheMikeyR

TheMikeyR commented Jun 25, 2018

If you need something in the meantime, you can use this Python script; it requires OpenCV and the RealSense Python bindings. You could probably avoid the OpenCV dependency by using another image library such as Pillow (PIL); see the sketch after the script.

import argparse

import pyrealsense2 as rs
import numpy as np
import cv2
import os

LOAD_BAG = True

def main():
    if not os.path.exists(args.directory):
        os.mkdir(args.directory)
    try:
        config = rs.config()
        if LOAD_BAG:
            # the third argument disables looping playback when the file ends
            rs.config.enable_device_from_file(config, args.input, False)
        pipeline = rs.pipeline()
        config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
        pipeline.start(config)
        i = 0
        while True:
            print("Saving frame:", i)
            frames = pipeline.wait_for_frames()
            depth_frame = frames.get_depth_frame()
            depth_image = np.asanyarray(depth_frame.get_data())
            cv2.imwrite(args.directory + "/" + str(i).zfill(6) + ".png", depth_image)
            i += 1
    except RuntimeError:
        print("No more frames arrived, reached end of BAG file!")

    finally:
        pass


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-d", "--directory", type=str, help="Path to save the images")
    parser.add_argument("-i", "--input", type=str, help="Bag file to read")
    args = parser.parse_args()

    main()
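
As a side note on the Pillow suggestion above: recent Pillow versions can write a 2-D uint16 array directly as a 16-bit PNG, so the OpenCV dependency could be dropped. This is only a sketch; the helper name save_depth_png is made up for illustration and it has not been tested against the exact versions discussed here.

from PIL import Image
import numpy as np

def save_depth_png(depth_image: np.ndarray, path: str) -> None:
    # depth_image is the uint16 array obtained from np.asanyarray(depth_frame.get_data());
    # Pillow maps a 2-D uint16 array to its 16-bit grayscale mode ("I;16") and saves it as a 16-bit PNG
    Image.fromarray(depth_image).save(path)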

@baptiste-mnh
Contributor

Thanks, it looks way more stable with the Python wrapper!
I'll keep the issue open as long as rs-convert still has this bug.

@baptiste-mnh
Contributor

baptiste-mnh commented Jun 25, 2018

Objective

Get the depth image and the color image in two separate folders.

Code

import argparse

import pyrealsense2 as rs
import numpy as np
import cv2
import os


LOAD_BAG = True
SAVE_DEPTH = True
SAVE_RGB = True


def main():
    if not os.path.exists(args.directory):
        os.mkdir(args.directory)
    # create the depth/ and images/ sub-folders too, otherwise cv2.imwrite fails silently
    if SAVE_DEPTH and not os.path.exists(args.directory + "/depth"):
        os.mkdir(args.directory + "/depth")
    if SAVE_RGB and not os.path.exists(args.directory + "/images"):
        os.mkdir(args.directory + "/images")
    try:
        config = rs.config()
        if LOAD_BAG:
            rs.config.enable_device_from_file(config, args.input, False)
        pipeline = rs.pipeline()
        if SAVE_DEPTH:
            config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
        if SAVE_RGB:
            config.enable_stream(rs.stream.color, 1280, 720, rs.format.rgb8, 30)
        pipeline.start(config)
        i = 0
        while True:
            print("Saving frame:", i)
            frames = pipeline.wait_for_frames()
            if SAVE_DEPTH:
                depth_frame = frames.get_depth_frame()
                depth_image = np.asanyarray(depth_frame.get_data())
                # zero out everything beyond 3000 depth units (~3 m with the default 1 mm depth scale)
                depth_image[np.where((depth_image > 3000))] = 0
                cv2.imwrite(args.directory + "/depth/" + str(i).zfill(6) + ".png", depth_image)
            if SAVE_RGB:
                color_frame = frames.get_color_frame()
                color_image = np.asanyarray(color_frame.get_data())
                # the stream is RGB8 while cv2.imwrite expects BGR, so swap the channels
                color_image = cv2.cvtColor(color_image, cv2.COLOR_RGB2BGR)
                cv2.imwrite(args.directory + "/images/" + str(i).zfill(6) + ".png", color_image)
            i += 1
    except RuntimeError:
        print("No more frames arrived, reached end of BAG file!")

    finally:
        pass


if __name__ == "__main__":
    parser = argparse.ArgumentParser()
    parser.add_argument("-d", "--directory", type=str, help="Path to save the images")
    parser.add_argument("-i", "--input", type=str, help="Bag file to read")
    args = parser.parse_args()

    main()

Issues with the Python example

When I only save depth OR color frames, the number of exported images varies, but I get ~1400 frames for my rosbag.
So I run it twice (once for depth, once for color) and obtain ~2800 frames.

When I try to save both streams at the same time, the count also varies, but I get fewer frames (~700 with the same rosbag), so ~1400 frames for both sensors...

So even with the Python wrapper, the issue persists.
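
One thing neither script above does is disable real-time playback. A hedged guess, not verified against these exact SDK versions: in real-time mode the playback device keeps advancing while imwrite is busy, so frames get skipped, whereas non-real-time playback should deliver every frame. The bag file name below is just a placeholder, and the saving code would be the same as in the scripts above.

import pyrealsense2 as rs

config = rs.config()
# read from the bag instead of a live camera; do not loop when the file ends
rs.config.enable_device_from_file(config, "recording.bag", False)

pipeline = rs.pipeline()
profile = pipeline.start(config)

# fetch the playback device behind the pipeline and switch it to
# non-real-time mode so wait_for_frames() does not skip frames while
# the loop below is busy writing images to disk
playback = profile.get_device().as_playback()
playback.set_real_time(False)

i = 0
try:
    while True:
        frames = pipeline.wait_for_frames()
        # ... save depth/color frames exactly as in the scripts above ...
        i += 1
except RuntimeError:
    print("No more frames arrived, reached end of BAG file after", i, "frames")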

@marcovs
Author

marcovs commented Jun 25, 2018

I agree, it looks like a timing issue. I have my own C++ code that can save depth and RGB images in two separate passes, and no frames are dropped. However, if I try to save them at the same time, or if I introduce any delay in the loop (e.g. to do some processing), frames are dropped.

It looks like pausing or resuming the playback does not always work, and the real-time flag has no effect.

@baptiste-mnh
Contributor

Does your code save the images from a rosbag?
Could you share it please? :)

@marcovs
Author

marcovs commented Jun 25, 2018

There you go; it's ugly and uses a fat global variable, but it works. The function to call is extractImagesFromBagFile. You will need OpenCV. Let me know how you get on. It also outputs timestamps; feel free to remove that if you don't need it.

#include "realsenseUtils.hpp"

using namespace rs2;

//global variable to check the status of transcribed video - have to modify
int realsenseProcessedFrames = 0;

void deviceInfo(const device &device) {
    //print playback device configuration
    VLOG(1) << "Device information: ";
    for (int i = 0; i < static_cast<int>(RS2_CAMERA_INFO_COUNT); i++)
    {
        rs2_camera_info info_type = static_cast<rs2_camera_info>(i);
        //SDK enum types can be streamed to get a string that represents them
        VLOG(1) << "  " << std::left << std::setw(20) << info_type;

        //A device might not support all types of RS2_CAMERA_INFO.
        //To prevent throwing exceptions from the "get_info" method we first check if the device supports this type of info
        if (device.supports(info_type))
            VLOG(1) << device.get_info(info_type);
        else
            VLOG(1) << "N/A";
    }

    //get sensors
    std::vector<sensor> sensors = device.query_sensors();
    VLOG(1) << "Device consists of " << sensors.size() << " sensors:\n";
    int index = 0;
    // We can now iterate the sensors and print their names
    for (sensor sensor : sensors)
    {
        std::string sensorName;
        if (sensor.supports(RS2_CAMERA_INFO_NAME))
            sensorName = sensor.get_info(RS2_CAMERA_INFO_NAME);
        else
            sensorName = "Unknown Sensor";

        VLOG(1) << "  " << index++ << " : " << sensorName;
    }

}

void outputStreamInfo(const sensor &s) {
    // A Sensor is an object that is capable of streaming one or more types of data.
    // For example:
    //    * A stereo sensor with Left and Right Infrared streams that
    //        creates a stream of depth images
    //    * A motion sensor with an Accelerometer and Gyroscope that
    //        provides a stream of motion information

    // Using the sensor we can get all of its streaming profiles
    std::vector<rs2::stream_profile> stream_profiles = s.get_stream_profiles();

    // Usually a sensor provides one or more streams which are identifiable by their stream_type and stream_index
    // Each of these streams can have several profiles (e.g FHD/HHD/VGA/QVGA resolution, or 90/60/30 fps, etc..)
    //The following code shows how to go over a sensor's stream profiles, and group the profiles by streams.
    std::map<std::pair<rs2_stream, int>, int> unique_streams;
    for (auto&& sp : stream_profiles)
    {
        unique_streams[std::make_pair(sp.stream_type(), sp.stream_index())]++;
    }
    VLOG(1) << "Sensor consists of " << unique_streams.size() << " streams: ";
    for (size_t i = 0; i < unique_streams.size(); i++)
    {
        auto it = unique_streams.begin();
        std::advance(it, i);
        VLOG(1) << "  - " << it->first.first << " #" << it->first.second;
    }

    //Next, we go over all the stream profiles and print the details of each one
    VLOG(1) << "Sensor provides the following stream profiles:";
    int profile_num = 0;
    for (rs2::stream_profile stream_profile : stream_profiles)
    {
        // A Stream is an abstraction for a sequence of data items of a
        //  single data type, which are ordered according to their time
        //  of creation or arrival.
        // The stream's data types are represented using the rs2_stream
        //  enumeration
        rs2_stream stream_data_type = stream_profile.stream_type();

        // The rs2_stream provides only types of data which are
        //  supported by the RealSense SDK
        // For example:
        //    * rs2_stream::RS2_STREAM_DEPTH describes a stream of depth images
        //    * rs2_stream::RS2_STREAM_COLOR describes a stream of color images
        //    * rs2_stream::RS2_STREAM_INFRARED describes a stream of infrared images

        // As mentioned, a sensor can have multiple streams.
        // In order to distinguish between streams with the same
        //  stream type we can use the following methods:

        // 1) Each stream type can have multiple occurrences.
        //    All streams, of the same type, provided from a single
        //     device have distinct indices:
        int stream_index = stream_profile.stream_index();

        // 2) Each stream has a user-friendly name.
        //    The stream's name is not promised to be unique,
        //     rather a human readable description of the stream
        std::string stream_name = stream_profile.stream_name();

        // 3) Each stream in the system, which derives from the same
        //     rs2::context, has a unique identifier
        //    This identifier is unique across all streams, regardless of the stream type.
        int unique_stream_id = stream_profile.unique_id(); // The unique identifier can be used for comparing two streams
        VLOG(1) << std::setw(3) << profile_num << ": " << stream_data_type << " #" << stream_index;

        // As noted, a stream is an abstraction.
        // In order to get additional data for the specific type of a
        //  stream, a mechanism of "Is" and "As" is provided:
        if (stream_profile.is<rs2::video_stream_profile>()) //"Is" will test if the type tested is of the type given
        {
            // "As" will try to convert the instance to the given type
            rs2::video_stream_profile video_stream_profile = stream_profile.as<rs2::video_stream_profile>();

            // After using the "as" method we can use the new data type
            //  for additional operations:
            VLOG(1) << " (Video Stream: " << video_stream_profile.format() << " " <<
            video_stream_profile.width() << "x" << video_stream_profile.height() << "@ " << video_stream_profile.fps() << "Hz)";
        }
        
        profile_num++;
    }
}

cv::Mat frame_to_mat(const rs2::frame& f)
{
    using namespace cv;
    using namespace rs2;
    
    auto vf = f.as<video_frame>();
    const int w = vf.get_width();
    const int h = vf.get_height();
    
    if (f.get_profile().format() == RS2_FORMAT_BGR8)
    {
        return Mat(Size(w, h), CV_8UC3, (void*)f.get_data(), Mat::AUTO_STEP);
    }
    else if (f.get_profile().format() == RS2_FORMAT_RGB8)
    {
        auto r = Mat(Size(w, h), CV_8UC3, (void*)f.get_data(), Mat::AUTO_STEP);
        cv::cvtColor(r, r, COLOR_BGR2RGB);
        return r;
    }
    else if (f.get_profile().format() == RS2_FORMAT_Z16)
    {
        return Mat(Size(w, h), CV_16UC1, (void*)f.get_data(), Mat::AUTO_STEP);
    }
    else if (f.get_profile().format() == RS2_FORMAT_Y8)
    {
        return Mat(Size(w, h), CV_8UC1, (void*)f.get_data(), Mat::AUTO_STEP);
    }
    
    throw std::runtime_error("Frame format is not supported yet!");
}


void displayRGBFrame(frame f, ofstream &fout, const std::string &baseFolder) {
    cv::Mat rgbMat = frame_to_mat(f);
    
    std::string frameNoString = "Frame Counter";
    std::string timestampString = "Time Of Arrival";
    rs2_metadata_type frameNo = 0;
    rs2_metadata_type timestamp = 0;
    std::string outputFilename;
    for (size_t i = 0; i < RS2_FRAME_METADATA_COUNT; i++)
    {
        if (f.supports_frame_metadata((rs2_frame_metadata_value)i))
        {
            std::string property = rs2_frame_metadata_to_string((rs2_frame_metadata_value)i);
            rs2_metadata_type value = f.get_frame_metadata((rs2_frame_metadata_value)i);
            if (frameNoString.compare(property) == 0) {
                frameNo = value;
                std::stringstream ss;
                ss << value;
                ss >> outputFilename;
            }
            if (timestampString.compare(property) == 0) {
                timestamp = value;
            }
            //std::cout << rs2_frame_metadata_to_string((rs2_frame_metadata_value)i) << "," << value << "\n";
        }
    }
    
    //output values and frame
    try {
        imwrite(std::string(baseFolder).append("/colour_").append(outputFilename).append(".png"), rgbMat);
        realsenseProcessedFrames++;
    } catch(cv::Exception &ex) {
        LOG(FATAL) << "Error when writing depth image to png: " << ex.what();
    }
    
    fout << frameNo << " " << timestamp << std::endl;

}

void displayDepthFrame(frame f, ofstream &fout, const colorizer &color_map, const std::string &baseFolder) {
    
    cv::Size depthSize;
    depthSize.width = f.as<video_frame>().get_width();
    depthSize.height = f.as<video_frame>().get_height();
    //frame depthColor = color_map(f);
    //cv::Mat depthMat(depthSize, CV_8UC3, (void*)depthColor.get_data(), cv::Mat::AUTO_STEP);
    cv::Mat depthMat(depthSize, CV_16UC1, (void*)f.get_data());
    
    
    std::string frameNoString = "Frame Counter";
    std::string timestampString = "Time Of Arrival";
    rs2_metadata_type frameNo = 0;
    rs2_metadata_type timestamp = 0;
    std::string outputFilename;
    for (size_t i = 0; i < RS2_FRAME_METADATA_COUNT; i++)
    {
        if (f.supports_frame_metadata((rs2_frame_metadata_value)i))
        {
            std::string property = rs2_frame_metadata_to_string((rs2_frame_metadata_value)i);
            rs2_metadata_type value = f.get_frame_metadata((rs2_frame_metadata_value)i);
            if (frameNoString.compare(property) == 0) {
                frameNo = value;
                std::stringstream ss;
                ss << value;
                ss >> outputFilename;
            }
            if (timestampString.compare(property) == 0) {
                timestamp = value;
            }
            //std::cout << rs2_frame_metadata_to_string((rs2_frame_metadata_value)i) << "," << value << "\n";
        }
    }
    
    //output values and frame
    try {
        imwrite(std::string(baseFolder).append("/depth_").append(outputFilename).append(".png"), depthMat);
        fout << frameNo << " " << timestamp << std::endl;
        realsenseProcessedFrames++;
    } catch(cv::Exception &ex) {
        LOG(FATAL) << "Error when writing depth image to png: " << ex.what();
    }

}

void extractImagesFromBagFile(const string &filename, const string &outFolder) {
    // Declare depth colorizer for pretty visualization of depth data
    colorizer color_map;
    color_map.set_option(RS2_OPTION_COLOR_SCHEME, 1);
    color_map.set_option(RS2_OPTION_HISTOGRAM_EQUALIZATION_ENABLED,true);
    rs2::align align_to(RS2_STREAM_COLOR);
    
    //Declare bag file in device configuration
    context ctx;
    playback dev = ctx.load_device(filename);
    dev.set_real_time(false);
    deviceInfo(dev);
    
    //set sensors
    vector<sensor> sensors = dev.query_sensors();
    //std::cout << "Select RGB Module Index" << std::endl;
    //uint32_t usrInput;
    //std::cin >> usrInput;
    sensor rgbSensor = sensors[1];
    VLOG(1) << "RGB sensor: " << rgbSensor.get_info(RS2_CAMERA_INFO_NAME);
    outputStreamInfo(rgbSensor);
    stream_profile rgbStreamProfile = rgbSensor.get_stream_profiles()[0];
    LOG(INFO) << "Video stream: " << rgbStreamProfile.stream_type() << ", " << rgbStreamProfile.stream_name() << " " << rgbStreamProfile.format() << " " << rgbStreamProfile.as<video_stream_profile>().width() << "x" << rgbStreamProfile.as<video_stream_profile>().height() << "@" << rgbStreamProfile.fps();
    
    //std::cout << "Select Depth Module Index" << std::endl;
    //std::cin >> usrInput;
    sensor depthSensor = sensors[0];
    VLOG(1) << "Depth stream: " << depthSensor.get_info(RS2_CAMERA_INFO_NAME);
    outputStreamInfo(depthSensor);
    stream_profile depthStreamProfile = depthSensor.get_stream_profiles()[0];
    LOG(INFO) << "Depth stream: " << depthStreamProfile.stream_type() << ", " << depthStreamProfile.stream_name() << " " << depthStreamProfile.format() << " " << depthStreamProfile.as<video_stream_profile>().width() << "x" << depthStreamProfile.as<video_stream_profile>().height() << "@" << depthStreamProfile.fps();
    
    //open profiles and start streaming
    fout.open(string(outFolder).append("/depthTimestamp.txt"), std::ios_base::out);
    depthSensor.open(depthStreamProfile);
    depthSensor.start([&](frame f) {displayDepthFrame(f, fout, color_map, outFolder);});
    while (true) {
        int prevProcessedFrames = realsenseProcessedFrames;
        std::this_thread::sleep_for(std::chrono::milliseconds(1000));
        if (prevProcessedFrames == realsenseProcessedFrames)
            break;
    }
    depthSensor.stop();
    depthSensor.close();
    fout.close();
    
    //process RGB frames
    realsenseProcessedFrames = 0;
    fout.open(string(outFolder).append("/rgbTimestamp.txt"), std::ios_base::out);
    rgbSensor.open(rgbStreamProfile);
    rgbSensor.start([&](frame f) {displayRGBFrame(f, fout, outFolder);});
    while (true) {
        int prevProcessedFrames = realsenseProcessedFrames;
        std::this_thread::sleep_for(std::chrono::milliseconds(1000));
        if (prevProcessedFrames == realsenseProcessedFrames)
            break;
    }
    rgbSensor.stop();
    rgbSensor.close();
    fout.close();
    
    realsenseProcessedFrames = 0;
}

@baptiste-mnh
Contributor

I'll test this as soon as I can and come back to you! Thanks!

@baptiste-mnh
Contributor

Hi @marcovs, I have an issue with your code: I only get a few .png files in the output folder, but realsenseProcessedFrames reports the right number of frames.

I think my declaration of the fout variable is wrong (I think yours is in your realsenseUtils.hpp).

Here is what I did in the extractImagesFromBagFile function:

void extractImagesFromBagFile(const string &filename, const string &outFolder) {
    // Declare depth colorizer for pretty visualization of depth data
    colorizer color_map;
    color_map.set_option(RS2_OPTION_COLOR_SCHEME, 1);
    color_map.set_option(RS2_OPTION_HISTOGRAM_EQUALIZATION_ENABLED,true);
    rs2::align align_to(RS2_STREAM_COLOR);
    
    //Declare bag file in device configuration
    context ctx;
    playback dev = ctx.load_device(filename);
    dev.set_real_time(false);
    deviceInfo(dev);
    
    //set sensors
    vector<sensor> sensors = dev.query_sensors();
    //std::cout << "Select RGB Module Index" << std::endl;
    //uint32_t usrInput;
    //std::cin >> usrInput;
    sensor rgbSensor = sensors[1];
    std::cout << "RGB sensor: " << rgbSensor.get_info(RS2_CAMERA_INFO_NAME) << std::endl;
    outputStreamInfo(rgbSensor);
    stream_profile rgbStreamProfile = rgbSensor.get_stream_profiles()[0];
    cout << "Video stream: " << rgbStreamProfile.stream_type() << ", " << rgbStreamProfile.stream_name() << " " << rgbStreamProfile.format() << " " << rgbStreamProfile.as<video_stream_profile>().width() << "x" << rgbStreamProfile.as<video_stream_profile>().height() << "@" << rgbStreamProfile.fps() <<endl;
    
    //std::cout << "Select Depth Module Index" << std::endl;
    //std::cin >> usrInput;
    sensor depthSensor = sensors[0];
    std::cout << "Depth stream: " << depthSensor.get_info(RS2_CAMERA_INFO_NAME) << std::endl;
    outputStreamInfo(depthSensor);
    stream_profile depthStreamProfile = depthSensor.get_stream_profiles()[0];
    cout << "Depth stream: " << depthStreamProfile.stream_type() << ", " << depthStreamProfile.stream_name() << " " << depthStreamProfile.format() << " " << depthStreamProfile.as<video_stream_profile>().width() << "x" << depthStreamProfile.as<video_stream_profile>().height() << "@" << depthStreamProfile.fps() << endl;
    
    //open profiles and start streaming

   /////////////////////////////////////////////////////////////
   ofstream fout; // FOUT DECLARATION
  /////////////////////////////////////////////////////////////

    fout.open(string(outFolder).append("/depthTimestamp.txt"), std::ios_base::out);
    depthSensor.open(depthStreamProfile);
    depthSensor.start([&](frame f) {displayDepthFrame(f, fout, color_map, outFolder);});
    while (true) {
        int prevProcessedFrames = realsenseProcessedFrames;
        std::this_thread::sleep_for(std::chrono::milliseconds(1000));
        if (prevProcessedFrames == realsenseProcessedFrames)
            break;
    }
    depthSensor.stop();
    depthSensor.close();
    fout.close();
    
    //process RGB frames
    realsenseProcessedFrames = 0;
    fout.open(string(outFolder).append("/rgbTimestamp.txt"), std::ios_base::out);
    rgbSensor.open(rgbStreamProfile);
    rgbSensor.start([&](frame f) {displayRGBFrame(f, fout, outFolder);});
    while (true) {
        int prevProcessedFrames = realsenseProcessedFrames;
        std::this_thread::sleep_for(std::chrono::milliseconds(1000));
        if (prevProcessedFrames == realsenseProcessedFrames)
            break;
    }
    rgbSensor.stop();
    rgbSensor.close();
    fout.close();
    
    realsenseProcessedFrames = 0;
}

@baptiste-mnh
Contributor

I finally managed to run your code, but I still get frame drops.
After each run I get a different number of output frames.

Here is what I did to save all the frames (my issue was that the saved frame name was the same at every iteration):

void displayRGBFrame(frame f, ofstream &fout, const std::string &baseFolder) {
    cv::Mat rgbMat = frame_to_mat(f);
    
    std::string frameNoString = "Frame Counter";
    std::string timestampString = "Time Of Arrival";
    rs2_metadata_type frameNo = 0;
    rs2_metadata_type timestamp = 0;
    std::string outputFilename;
    for (size_t i = 0; i < RS2_FRAME_METADATA_COUNT; i++)
    {
        if (f.supports_frame_metadata((rs2_frame_metadata_value)i))
        {
            std::string property = rs2_frame_metadata_to_string((rs2_frame_metadata_value)i);
            rs2_metadata_type value = f.get_frame_metadata((rs2_frame_metadata_value)i);
            if (frameNoString.compare(property) == 0) {
                frameNo = value;
                std::stringstream ss;
                ss << value;
                ss >> outputFilename;
            }
            if (timestampString.compare(property) == 0) {
                timestamp = value;
            }
            std::cout << rs2_frame_metadata_to_string((rs2_frame_metadata_value)i) << "," << value << "\n";
        }
    }
    
    //output values and frame
    try {
        string path = std::string(baseFolder).append("/colour/colour_"+to_string(realsenseProcessedFrames)).append(outputFilename).append(".png");
        cout << "------------------------------------------------\n"<< path << endl;
        cv::imwrite(path, rgbMat);
        realsenseProcessedFrames++;
    } catch(cv::Exception &ex) {
        cout << "Error when writing depth image to png: " << ex.what() << endl;
    }
    
    fout << frameNo << " " << timestamp << std::endl;

}

void displayDepthFrame(frame f, ofstream &fout, const colorizer &color_map, const std::string &baseFolder) {
    
    cv::Size depthSize;
    depthSize.width = f.as<video_frame>().get_width();
    depthSize.height = f.as<video_frame>().get_height();
    frame depthColor = color_map(f);
    // the colorizer outputs an 8-bit, 3-channel RGB image, so interpret the buffer as CV_8UC3
    cv::Mat depthMat(depthSize, CV_8UC3, (void*) depthColor.get_data());
    //cv::Mat depthMat(depthSize, CV_16U, (void*)f.get_data());
    
    
    std::string frameNoString = "Frame Counter";
    std::string timestampString = "Time Of Arrival";
    rs2_metadata_type frameNo = 0;
    rs2_metadata_type timestamp = 0;
    std::string outputFilename;
    for (size_t i = 0; i < RS2_FRAME_METADATA_COUNT; i++)
    {
        if (f.supports_frame_metadata((rs2_frame_metadata_value)i))
        {
            std::string property = rs2_frame_metadata_to_string((rs2_frame_metadata_value)i);
            rs2_metadata_type value = f.get_frame_metadata((rs2_frame_metadata_value)i);
            if (frameNoString.compare(property) == 0) {
                frameNo = value;
                std::stringstream ss;
                ss << value;
                ss >> outputFilename;
            }
            if (timestampString.compare(property) == 0) {
                timestamp = value;
            }
            std::cout << rs2_frame_metadata_to_string((rs2_frame_metadata_value)i) << "," << value << "\n";
        }
    }
    
    //output values and frame
    try {
        string path = std::string(baseFolder).append("/depth/depth_"+to_string(realsenseProcessedFrames)).append(outputFilename).append(".png");
        cout << "------------------------------------------------\n"<< path << endl;

        imwrite( path, depthMat);
        fout << frameNo << " " << timestamp << std::endl;
        realsenseProcessedFrames++;
    } catch(cv::Exception &ex) {
        cout << "Error when writing depth image to png: " << ex.what() << endl;
    }

}

@apoorva-sriv
Contributor

@marcovs
Author

marcovs commented Aug 8, 2018

@baptiste-mnh @apoorva2398 @ev-mp Thank you for the pointers. I worked on it a little and made a class that reads bag files directly through the rosbag API. Please check issue #2215 for the code. I hope it can be useful.
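
For anyone who wants the flavor of the rosbag-API approach without opening #2215, here is a minimal sketch using the Python rosbag package rather than marcovs's C++ class. The file name is a placeholder, and the depth topic name is an assumption about how librealsense names its image topics; check the actual names with rosbag info first.

import rosbag  # requires a ROS installation (python-rosbag)
import numpy as np

bag = rosbag.Bag("recording.bag")

# librealsense bags store each stream as sensor_msgs/Image messages;
# the topic below is an assumed example, not guaranteed for every bag
depth_topic = "/device_0/sensor_0/Depth_0/image/data"

for topic, msg, t in bag.read_messages(topics=[depth_topic]):
    # msg is a sensor_msgs/Image; assuming a 16-bit single-channel depth encoding
    depth = np.frombuffer(msg.data, dtype=np.uint16).reshape(msg.height, msg.width)
    # process or save the frame here; nothing is dropped because the bag is
    # read message by message instead of being played back in real time

bag.close()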

@dorodnic
Contributor

@baptiste-mnh @TheMikeyR @marcovs - I think the ROS-bag reading issues are behind us with the v2.16.1 release.

@RealSense-Customer-Engineering
Collaborator

[Realsense Customer Engineering Team Comment]
Hi @marcovs, @baptiste-mnh,

Any update based on the new librealsense?

@dorodnic dorodnic closed this as completed Nov 3, 2018
@YangJiao1996

Hi @dorodnic @RealSense-Customer-Engineering,
I believe the frame dropping and crashing issues still exist even with the latest librealsense-2.18.0.
