
Pointcloud Bending/Warping (Wrong intrinsics?) #4

Closed
Wollimayer opened this issue Jan 22, 2016 · 11 comments

Comments

@Wollimayer

We are working with the RealSense F200 camera and encountered a problem with point clouds generated by the code below. We tested the camera on a flat surface, but the resulting point cloud seems to bend towards the camera. I attached two examples, one showing a flat surface; the second image contains the RealSense box, also showing the bending effect.

There seems to be a problem with the way the SDK interprets the depth information.

Flat surface: [image: toportho2]

Rectangle box: [image: box]

...
const float depth_scale = dev.get_depth_scale();
const rs::intrinsics depth_intrin = dev.get_stream_intrinsics(rs::stream::depth);

auto depth = reinterpret_cast<const uint16_t *>(dev.get_frame_data(rs::stream::depth));
for (int y = 0; y < depth_intrin.height; ++y)
{
    for (int x = 0; x < depth_intrin.width; ++x)
    {
        if (uint16_t d = *depth++) // skip pixels with no depth reading
        {
            const rs::float3 point = depth_intrin.deproject(
                { static_cast<float>(x), static_cast<float>(y) }, d * depth_scale);
            pointCloud.push_back(point);
        }
    }
}
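For reference, the deprojection used above boils down to a pinhole back-projection plus undistortion. Below is a minimal sketch of the distortion-free part only, using a hypothetical `Intrinsics` struct that mirrors the `fx`/`fy`/`ppx`/`ppy` fields of `rs::intrinsics` (names assumed for illustration, not the library's API):

```cpp
#include <cmath>

// Hypothetical struct mirroring the intrinsics fields used by librealsense
// (focal lengths fx/fy and principal point ppx/ppy, all in pixels).
struct Intrinsics { float fx, fy, ppx, ppy; };
struct Float3 { float x, y, z; };

// Distortion-free pinhole back-projection: pixel (u, v) at metric depth z
// maps to a 3D point in the camera coordinate frame.
Float3 deproject_pinhole(const Intrinsics& in, float u, float v, float z)
{
    return { (u - in.ppx) / in.fx * z,
             (v - in.ppy) / in.fy * z,
             z };
}
```

Note that wrong `fx`/`fy`/`ppx`/`ppy` values only scale or shift the rays, which maps a plane to another plane; a bowl-shaped warp of a flat wall points at the nonlinear distortion terms (or the depth values themselves) instead.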
@Rennschnitzl

Maybe the rs::intrinsics you get from the device do not match the actual intrinsics of the camera.

The deproject function does some basic projective geometry plus undistortion with the Brown-Conrady model.

I had a similar problem when using the unofficial library from https://github.com/teknotus/depthview, where I had to correct the distances myself, since you have to do some calculations based on the camera intrinsics; being lazy while calibrating gave me the same effect. The base streams still contain lens distortion, so you can use them for calibration.

You could try the pointcloud sample (number 3) and see if the effect appears there too. My camera seems fine, but I put my own intrinsic values in instead of taking them from the camera.

tl;dr
my bet is on bad calibration.

[Screenshots from 2016-01-23: three views of the point cloud]

@teknotus

On the F200 the camera interfaces are claimed by the Video4Linux drivers and are usually put in a "video" permissions group, but calibration is handled by another interface which shows up as raw USB access and is given permissions a normal user wouldn't have. There are udev rules in /config that might fix this by making the USB endpoint read/write rather than the default of no access at all. I also noticed, when reading through the calibration code, that there is a TODO for supporting resolutions other than 640x480.
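The udev-rules fix mentioned above typically looks like the sketch below. This is a hedged example: `8086` is Intel's USB vendor ID, but you should verify the exact rule against the files shipped in the repository's config/ directory and the IDs reported by `lsusb` for your unit.

```
# Grant all users read/write access to Intel USB devices so the raw
# calibration endpoint is reachable without root (illustrative only;
# compare with the rules shipped in config/).
SUBSYSTEM=="usb", ATTRS{idVendor}=="8086", MODE:="0666", GROUP:="plugdev"
```

After installing the rule (e.g. under /etc/udev/rules.d/), reload with `sudo udevadm control --reload-rules && sudo udevadm trigger`, then replug the camera.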

@Wollimayer
Author

I did some more tests with different cameras. Some stored calibrations seem nearly perfect; others needed manual calibration with a checkerboard.
The undistorted IR stream seems correct now, but there are still problems with the depth information.

The bending effect at the corners becomes visible when using the whole depth image (not just the part that is also visible to the color camera). This might be a limitation of the depth image.

I measured the distances computed by different cameras within the color rectangle. Some of them differ by 2 cm (constant over the area).

As this effect varies depending on the camera used, I'm wondering whether the orientation of the projector causes differences in the depth image. As far as I know, the camera and the projected image need to be calibrated (probably by Intel during production); depending on the forces applied to the circuit board during assembly, this might change the relation between camera and projector, which finally leads to slight changes in the measured distances.

The question is how to deal with these effects.
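One way to quantify the bending described above (a sketch I am assuming, not part of librealsense): scan a flat wall, fit a least-squares plane z = a·x + b·y + c to the cloud, and look at the residuals, which should be near zero for a well-calibrated camera:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <vector>

struct Float3 { float x, y, z; };

static double det3(const std::array<std::array<double, 3>, 3>& m)
{
    return m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
         - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
         + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]);
}

// Fits z = a*x + b*y + c to the cloud by least squares (normal equations
// solved with Cramer's rule) and returns the largest absolute residual --
// a crude "bending" metric that is ~0 for a perfectly planar scan.
double max_plane_residual(const std::vector<Float3>& cloud)
{
    double sxx = 0, sxy = 0, syy = 0, sx = 0, sy = 0, sxz = 0, syz = 0, sz = 0;
    for (const auto& p : cloud) {
        sxx += double(p.x) * p.x; sxy += double(p.x) * p.y; syy += double(p.y) * p.y;
        sx  += p.x;               sy  += p.y;
        sxz += double(p.x) * p.z; syz += double(p.y) * p.z; sz  += p.z;
    }
    const double n = double(cloud.size());
    const std::array<std::array<double, 3>, 3> A =
        {{{sxx, sxy, sx}, {sxy, syy, sy}, {sx, sy, n}}};
    const double d = det3(A);
    auto solve = [&](int col) {               // Cramer's rule: replace one
        auto m = A;                           // column with the RHS vector
        m[0][col] = sxz; m[1][col] = syz; m[2][col] = sz;
        return det3(m) / d;
    };
    const double a = solve(0), b = solve(1), c = solve(2);
    double worst = 0;
    for (const auto& p : cloud)
        worst = std::max(worst, std::abs(a * p.x + b * p.y + c - p.z));
    return worst;
}
```

Comparing this number across cameras (same wall, same distance, same warm-up time) would separate "flat but offset by 2 cm" units from genuinely bent ones.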

@Rennschnitzl

The measured distance depends on the sensor's temperature. There is a model for the difference based on the temperature measured by the sensor itself. Please make sure both cameras are running at roughly the same temperature, either by letting both cool down (around an hour for mine) or by letting them run for a while before measuring (mine was stable after around 45 minutes, as a rule of thumb).

The relation between the RGB and IR sensors (the extrinsics you can actually calibrate) is not relevant for the distance measurement, only for mapping color onto depth.

I'm not sure if the orientation of the IR sensor with respect to the laser projector makes a difference. If that is the case, you'd have to measure the offset and add it manually, I guess.

@ddiakopoulos ddiakopoulos changed the title Bending Effekt F200 Pointcloud Bending/Warping (Wrong intrinsics?) Jan 25, 2016
@ddiakopoulos
Contributor

@Rennschnitzl Correct in your statement about temperature and performance, although there is a host-side correction for this which updates the ASIC directly.

@teknotus You reminded us that we need to clean up a few TODOs in the code -- namely, some of them aren't entirely accurate anymore (non-640x480 modes should work on F200 now).

The underlying issue at hand is that one of the cameras documented in this thread probably left the factory without being properly calibrated (or was physically dropped, bent, or warped). There's little corrective action we can take in librealsense -- the per-camera calibration stored in device memory should yield correct intrinsics in all cases with no manual intervention needed by developers (such as checkerboard recalibration). The best we can offer is that you can try emailing click.support@intel.com for a replacement F200. Hope that helps!

@teknotus

teknotus commented Jan 27, 2016 via email

@ddiakopoulos
Contributor

@teknotus Correct, there are some minor adjustments made on the host side to correct for thermal drift, but these updated intrinsics are not given back to users of the library. We double- and triple-checked with our internal hardware team: the recomputed coefficients are used to stabilize the ASIC algorithm and have minimal effect on the projection (e.g. shifting the principal point by a thousandth of a pixel), which doesn't explain the massive discrepancies documented in the images above. This is only a problem on the F200; on the SR300, all compensation is done in hardware.

@Wollimayer
Author

@ddiakopoulos I tested another camera: same effect, but not as strong. I'm wondering: is the intrinsic calibration performed by Intel or by Creative? During the assembly process by Creative, the camera might be exposed to (minimal) pressure.

A bent circuit board leads to a shift (alpha) of the projected patterns. The camera calculates the depth information based on the location of each pattern in the image. As the pattern position shifts, the camera assumes a wrong distance (the intersection between the camera ray and the expected ray for a given pattern).

[Image: img_0022]

You may correct me if I'm wrong, but I believe this leads to the bending effect.

In order to use this technology for scientific projects, it would be useful to have a method to correct this effect. (I'm unsure whether the SR300 solves this problem entirely, as it seems to use the same technology.)

Otherwise, is there any chance to get the raw RealSense chips (calibrated by Intel)?
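The triangulation error described above can be made concrete with a back-of-the-envelope sketch. The values below (focal length f = 600 px, camera-projector baseline b = 5 cm) are purely illustrative assumptions, not the F200's actual specifications:

```cpp
#include <cmath>

// Depth from triangulation: Z = f * b / d, with focal length f (pixels),
// baseline b (meters) between IR camera and projector, and disparity d
// (pixels). A small pattern shift delta in the image biases the depth.
double triangulated_depth(double f, double b, double d)
{
    return f * b / d;
}

double depth_error_from_shift(double f, double b, double d, double delta)
{
    return triangulated_depth(f, b, d + delta) - triangulated_depth(f, b, d);
}
```

With these assumed numbers, a target at 1 m corresponds to a disparity of 30 px, and a pattern shift of only 0.3 px already changes the recovered depth by about 1 cm, so a slightly bent board can plausibly account for centimeter-level errors.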

@leonidk
Contributor

leonidk commented Jan 27, 2016

@Wollimayer

First, I think a basic check is to try the RealSense SDK and see if you're still having issues. If so, there's an actual hardware problem and you should contact Intel support for warranty/support information. If the RSSDK fixes it, then we have an actual librealsense bug.

Second, I think we should check if this bending is within the expected tolerances for shipped F200 units. @ddiakopoulos

Third, I think this may just be unfortunate, expected behavior. I don't believe we currently offer a method of re-calibrating the device in either the RSSDK or librealsense. The cameras are factory-calibrated, but there are challenges to maintaining calibration, and certain cameras may fall out of calibration for some of the following reasons:

  1. Mechanical impulses causing intrinsics or extrinsics changes, such as bending of the stiffener or shifting of the lens. The cameras are designed with a stiffener, but it's possible that mechanical impulses (e.g. dropping) might knock the device out of calibration.
  2. Un-modeled temperature changes. We currently have a real-time loop running a temperature correction model that adjusts intrinsics. If you subject the device to strong or non-uniform temperature sources, this might cause this loop to fail.
  3. Reflection & multipath. If you have a highly reflective or specular environment, self-reflections in the scene might cause ambiguities.
  4. Un-modeled pressure changes. I won't talk about details, but the F200 can be sensitive to strong ambient pressure changes. There are sensors and logic to compensate for this but you never know.

@teknotus

> 4. Un-modeled pressure changes. I won't talk about details, but our device can be sensitive to strong ambient pressure changes. There are sensors and logic to compensate for this but you never know.

Does this mean it has a barometer? I don't think I know enough to use that
information for correcting an image, but the raw data from a barometer
would be useful for unrelated applications.

@ddiakopoulos
Contributor

The ambient pressure change compensation is hard-coded into the chip -- there is no barometer (a colleague has informed me that there is one but it's hardwired into the chip with no software controls) and no compensation should be required on the part of the camera user.
