Depth modules return wrong depth data #8258
Hi @oxidane-lin The main factor in the inaccuracy of the depth results in this case is likely to be the 6 meter distance of the wall from the camera. With the 400 Series cameras, error increases as the distance of an observed object / surface from the camera increases. This phenomenon is known as RMS error. The error starts to become noticeable beyond 3 meters with the D435 cameras. The D455 camera model has 2x the accuracy over distance of the D435 models, meaning that the D455 has the same accuracy at 6 meters that the D435 models have at 3 meters.

You also note that the depth image of the grid-fenced wall worsens when the cloth is taken away. This leads me to think that, aside from distance accuracy issues, there is additional disruption due to a phenomenon where the camera may be confused by repetitive horizontal or vertical patterns (a row of vertical fence posts, a row of similar tree-tops, or horizontal rows of window blinds). The green-yellow boxes on the depth images that do not match details on the RGB image would be consistent with phantom detail generated from repetitive patterns. Your depth images may be better when the cloth is present on the wall because the cloth partially breaks up the repetitive pattern. The discussion in the link below looks at this subject and offers links to resources for attempting to reduce the effect.

I would also recommend checking that the Threshold Filter option in the Post-Processing section of the Viewer's stereo module options is not enabled, to ensure that the depth image is rendering the full distance of detail it is able to observe instead of being limited to the 4 meter distance that this filter sets by default when enabled. If the camera is able to render a wall 6 meters away from it, though, then it is probably already disabled in your Viewer.
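To put rough numbers on how depth error grows with distance, Intel's camera tuning guide models the theoretical stereo depth error as z² × subpixel / (focal × baseline). The focal length, baseline, and subpixel values below are illustrative assumptions for a D435-class camera, not official specifications:

```python
# Theoretical stereo depth RMS error: err(z) ~= z^2 * subpixel / (focal_px * baseline_m)
# All constants below are rough illustrative assumptions, not official D435 specs.
FOCAL_PX = 425.0      # assumed depth focal length in pixels at 848x480
BASELINE_M = 0.050    # assumed D435 stereo baseline in meters
SUBPIXEL = 0.08       # assumed subpixel matching accuracy

def depth_rms_error(z_m: float) -> float:
    """Modelled depth error (meters) at distance z_m (meters)."""
    return (z_m ** 2) * SUBPIXEL / (FOCAL_PX * BASELINE_M)

for z in (1.0, 3.0, 6.0):
    print(f"{z} m -> ~{depth_rms_error(z) * 100:.1f} cm expected RMS error")
```

Under this model the error at 6 m is several times larger than at 3 m, which is why a 6 m wall is near the edge of the D435's comfortable range.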
In regard to black dots on the floor of the depth image, this may be a phenomenon called laser speckle that results from the laser-based dot pattern projector built into the camera. You could try filling in the dots by enabling the Hole Filling Filter in the Post-Processing filter list, or by using an external LED-based pattern projector instead of the D435i's built-in projector.
@MartyG-RealSense Thank you for your quick reply. I have read your suggested links and whitepapers and tried to adjust some parameters such as SecondPeakThreshold or TextureCountThresh, and to use some filters. They don't seem to help enough. I realize that it is a common phenomenon, and the repetitive patterns may be the main factor. I'm just very curious why it is related to a specific distance. Shouldn't the patterns be the same whether closer or further away?
I am admittedly not an expert on the science of repetitive patterns in stereo imaging. Intel's excellent camera tuning guide advises, though, about combating repetitive patterns that as well as using Second Peak Threshold, "another mitigation strategy that may help is to tilt the camera a few degrees (e.g., 20-30 deg) from the horizontal". Another image-enhancing action that you could try is to reduce the black empty areas on the ceiling at far distance by maximizing the Laser Power slider under the Controls section of the Stereo Module options. Increasing the value of Laser Power reduces sparseness on the depth image (fewer gaps). I tested with my camera in average light level conditions to simulate tall-ceiling warehouse conditions, viewing a wall 6 meters away. In my test under these conditions, maximizing Laser Power from the default of 150 to the maximum of 360 significantly filled in missing ceiling detail at the 5-6 meter range. A key difference between my test location and yours, though, is that you seem to have fluorescent ceiling strip lights. Fluorescent lights may introduce noise into images because they flicker at frequencies that are hard to see with the human eye. This negative effect may be reduced by using a camera FPS that is close to the operating frequency of the lights. For some lights this may be 30 FPS and for others it may be 60 FPS. Some fluorescent lights in world regions such as Europe may also operate at 50 Hz, making a 50 FPS camera speed a closer match (though 50 FPS is not a default supported mode on RealSense cameras).
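One way to see why repetitive-pattern artifacts depend on distance: the correct disparity is d = focal × baseline / z, and if the stereo matcher locks onto the neighbouring repeat of the pattern, the disparity is off by the pattern's pixel period, producing a false depth of focal × baseline / (d + period). The focal length, baseline, and pattern periods below are assumed values for illustration only:

```python
# Sketch of how a repetitive pattern can produce a distance-dependent false depth.
# All numbers are illustrative assumptions, not measured camera values.
FOCAL_PX = 425.0     # assumed depth focal length in pixels
BASELINE_M = 0.050   # assumed stereo baseline in meters

def disparity(z_m: float) -> float:
    """Correct disparity (pixels) for a surface at z_m meters."""
    return FOCAL_PX * BASELINE_M / z_m

def false_depth(z_m: float, pattern_period_px: float) -> float:
    """Depth reported if the matcher locks onto the next repeat of the pattern."""
    return FOCAL_PX * BASELINE_M / (disparity(z_m) + pattern_period_px)

true_z = 6.0
for period in (2, 5, 8):   # assumed fence-post spacing in image pixels
    print(f"period {period}px: true {true_z} m -> false {false_depth(true_z, period):.2f} m")
```

At close range the true disparity is large, so a few-pixel mismatch barely changes the depth; at 6 m the true disparity is only a few pixels, so locking onto the wrong fence post can plausibly report 2-3 m instead of 6 m, which matches the blobs in the images.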
@MartyG-RealSense Sorry for the late reply over the weekend. Indeed, tilting the camera by about 10 degrees makes the results much better. I had noticed this before I opened this issue, but our camera cannot avoid facing straight towards the wall during movement. I must find other ways to filter out wrong depth data than finding a suitable angle or position.
If you are not aligning depth to color then I would think that problems with the RGB image from a slow rolling shutter would not affect the accuracy of the depth image, as depth is generated from the infrared frames. If the effectiveness of the laser power in your case is limited by the maximum power available from the camera's built-in projector being insufficient for a space the size of your indoor room, an external pattern projector could offer a higher power output and so a larger range. On the images in your opening message, it can be seen how the strength of the dot pattern on the floor seems to taper off once past the halfway distance from camera to far wall, and not reach as far as the back wall. External projectors also have the advantage that they can be moved around and shaken without affecting the camera image. Range can also be extended by positioning multiple projectors (e.g. placing a second one at the halfway point of the room). The section of Intel's white-paper document about projectors at the link below discusses this subject. https://dev.intelrealsense.com/docs/projectors#section-4-increasing-range
@MartyG-RealSense Unfortunately I am using depth to color alignment. It seems several factors are affecting the results. I'll try to do some backend filtering later. Thank you for your help, and I'll reopen this issue if I make some progress here.
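One simple form of backend filtering for blobs like these could be sketched as follows: invalidate any pixel whose depth disagrees strongly with the median of its neighbourhood. The 1.5 m jump threshold and 3x3 window are illustrative assumptions, not recommended values, and a real pipeline would vectorize this:

```python
# Minimal sketch of backend outlier rejection on a depth map (values in meters).
# Pixels that jump far from their local median are zeroed, like a depth hole.
from statistics import median

def reject_outliers(depth, max_jump_m=1.5):
    h, w = len(depth), len(depth[0])
    out = [row[:] for row in depth]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            neigh = [depth[yy][xx]
                     for yy in range(y - 1, y + 2)
                     for xx in range(x - 1, x + 2)
                     if (yy, xx) != (y, x) and depth[yy][xx] > 0]
            if neigh and abs(depth[y][x] - median(neigh)) > max_jump_m:
                out[y][x] = 0.0   # mark as invalid
    return out

# A 6 m wall with one false 2.5 m "blob" pixel in the middle:
wall = [[6.0] * 5 for _ in range(5)]
wall[2][2] = 2.5
cleaned = reject_outliers(wall)
print(cleaned[2][2])   # the blob pixel is invalidated
```

This trades a few lost pixels for the removal of isolated false readings; larger false blobs would need a bigger window or a connected-component size check.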
Hi @MartyG-RealSense I hope you remember this issue. I reviewed our conversation and need to double-check that our discussion was focused on the yellow and green areas on the wall. They should return a depth of 6-7 m rather than 2-3 m. These data are clearly wrong; they are not in the ordinary measurement-error category. The depth camera is returning wrong data at certain distances. Shouldn't this be a bug or hardware defect? Shouldn't there be updates to eliminate the wrong data?
Are you referring to the false-data floating blobs on the depth image, please? As mentioned earlier in this discussion, they are characteristic of detection by the camera of repetitive patterns. Aside from the links provided that offer advice about combating it (repeated below), I do not have further advice that I can offer about reducing repetitive patterns. #6713 (comment) I would not classify the repetitive pattern issue as a bug, but rather a consequence of the stereo depth algorithm.
@MartyG-RealSense I am referring to the 2 blue rectangle areas in the depth image below. I think we're talking about the same problem, right? Just double-checking to avoid misunderstanding.
Yes, I am referring to those blobs highlighted by the blue rectangles too. |
Thank you @MartyG-RealSense . You've helped a lot. I'll try filtering those wrong data out. |
You are very welcome @oxidane-lin - good luck! |
Hello everyone, |
I am using Python for the code and have also applied post-processing filters.
Hi @Aanalpatel99 I would recommend using a maximum camera tilt angle of 30 degrees if possible. Whilst the camera can still operate at a larger angle, the risk of problems with the image may increase as the angle increases further beyond 30 degrees. |
Thank you so much @MartyG-RealSense. I cannot change the camera angle because the FOV it covers at this angle is important to me. If you have any other solutions, please let me know.
You may get less inaccuracy in depth values if you change the camera configuration preset to High Accuracy to screen out obviously inaccurate depth values from the image. #2577 (comment) provides an example of Python scripting for doing so. If too much depth detail is stripped out by the High Accuracy setting, try changing the line
If you are using more than one post-processing filter, Intel recommends that the filter types are applied in a particular order when listed in a script. What filters are you applying, and in what order are they listed, please? If you are not already using 1280x720 depth resolution, then setting the depth stream to that when the camera is at 30 degrees may help to compensate for the altered FOV by slightly increasing how much of the scene the camera can see.
Hi @MartyG-RealSense, I am using three filters: decimation -> spatial -> temporal.
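The decimation -> spatial -> temporal order can be sketched on a single depth row. These are simplified pure-Python stand-ins for librealsense's filters, for illustration only (the real filters operate on full frames with more sophisticated edge-preserving logic):

```python
# Sketch of the recommended post-processing order on a 1-D depth row (meters):
# decimation (downsample) -> spatial (smoothing) -> temporal (blend with history).
# Simplified stand-ins for the librealsense filters, not their actual algorithms.

def decimate(row, factor=2):
    """Reduce resolution by keeping every `factor`-th sample."""
    return row[::factor]

def spatial(row, alpha=0.5):
    """One left-to-right exponential smoothing pass across the row."""
    out = row[:]
    for i in range(1, len(out)):
        out[i] = alpha * out[i] + (1 - alpha) * out[i - 1]
    return out

def temporal(prev, cur, alpha=0.4):
    """Blend the current frame with the previous filtered frame."""
    return [alpha * c + (1 - alpha) * p for p, c in zip(prev, cur)]

frame1 = [6.0, 6.1, 5.9, 6.0, 6.2, 6.0]
frame2 = [6.1, 6.0, 6.0, 6.1, 5.9, 6.0]

prev = spatial(decimate(frame1))
cur = spatial(decimate(frame2))
stable = temporal(prev, cur)
print(stable)
```

Running decimation first means the later filters process fewer samples, which is part of why Intel suggests this ordering.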
Another way to stabilize fluctuating depth is to reduce the value of the 'alpha' on the temporal filter. This has the effect of slowing the rate at which the depth image updates, and so may not be suitable for an application where the camera is being moved around quickly as the slowdown results in a visible transition between one depth state and another. #10078 (comment) has a Python script that demonstrates configuring the value of alpha on the temporal filter. '0.1' alpha (instead of its default of 0.4) and the 'delta' left unchanged on its default of '20' should be a good test value. |
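The effect of lowering alpha can be seen with the temporal filter's basic update rule, out = alpha * current + (1 - alpha) * out. This is a simplified sketch of that rule only (the real filter also uses the 'delta' threshold and persistence logic), applied to a pixel that flickers between a 6 m wall and a 2.5 m false blob:

```python
# Effect of the temporal filter's alpha on a fluctuating depth pixel:
# smaller alpha weights history more, so the output changes more slowly.
def ema(values, alpha):
    out = values[0]
    for v in values[1:]:
        out = alpha * v + (1 - alpha) * out
    return out

# Pixel flickering between the true wall depth and a false blob reading:
readings = [6.0, 2.5, 6.0, 2.5, 6.0]
print(round(ema(readings, 0.4), 2))   # default alpha: follows the noise more
print(round(ema(readings, 0.1), 2))   # alpha 0.1: stays closer to the wall depth
```

With alpha 0.1 the output stays much closer to the 6 m wall value; the cost, as noted above, is a visible lag when the scene genuinely changes.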
Thank you so much @MartyG-RealSense for the help, the temporal filter's attributes value change helped a lot. |
That's great news, @Aanalpatel99 - thanks for the update! |
Issue Description
Hello, engineers from IntelRealSense:
I am using a D435i and I ran into a problem where the depth modules return wrong depth data. Here is a detailed description with RGB and depth images.
This problem occurs frequently when the camera is facing a wall with some grid fences. Basically the distance to the wall should be continuous, but as shown in the pics, the depth data are much worse than expected. Lots of wrong depth data appear on the wall. I did some tests to check the reason. First, different camera or SDK version: I changed the camera and SDK, and it doesn't seem to help. Then I changed the resolution of the depth image to 640x480 or 848x480; comparing pic 1 to 2 (or 3 to 4), the depth shows different results under the two resolutions. This also relates to the distance from the camera to the wall (6 m vs 6.5 m). In summary, at a distance of 6 m and resolution 640x480, or a distance of 6.5 m and resolution 848x480, the problem shows up. Pics 5 and 6 show the results when a cloth is removed from the wall: more wrong data appeared.
I'm guessing from the tests above that maybe the wall doesn't offer enough texture? BUT it's not a white wall. Or maybe this is a bug? What can I do to avoid this problem? I need the depth data for around 10m, but I don't want wrong data.
[Image 1: depth640-6m-covered]
[Image 2: depth848-6m-covered]
[Image 3: depth848-6.5m-covered]
[Image 4: depth640-6.5m-covered]
[Image 5: depth640-6m-uncovered]
[Image 6: depth848-6.5m-uncovered]