Depth map quality #10675
If some black tape was put around the edge of the bucket at the bottom, that may create a clearer separation, as depth cameras have difficulty reading dark grey or black shades because those colors absorb light. Such areas can therefore appear on the depth image as plain black regions without depth information. When the mouse stands on that tape at the edge it should therefore stand out better on the depth image, especially if the tape extends part-way up the side of the base of the bucket as well as around its bottom-most edge.
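The effect described above can be illustrated with a minimal NumPy sketch (the depth values below are invented toy numbers, not real camera output): pixels where the sensor returns 0 carry no depth information, which is how light-absorbing black tape typically shows up, so they can be isolated as a simple mask.

```python
import numpy as np

# Toy depth frame in millimetres; 0 means "no depth data", which is how
# dark, light-absorbing surfaces such as black tape typically appear.
depth = np.array([[500, 510,   0],
                  [495,   0,   0],
                  [490, 505, 498]], dtype=np.uint16)

no_data = depth == 0        # pixels with no depth reading (e.g. the tape)
valid   = depth[~no_data]   # readings that can be trusted
print(no_data.sum())        # -> 3 invalid pixels
```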
Thank you for your reply! That's a good way to do it! But I don't have any mice to film right now; I can only process the videos that I've shot before. Obviously, the problem of the cylinder walls cannot be solved by background subtraction. These days I have tried training on annotated images using a UNET neural network and Labelme. Now I have an idea: could the mouse be segmented better through a point cloud? How can I extract a point cloud from a BAG file in Python?
You could try different colorization and color scheme options with your bag in the Viewer's Depth Visualization controls (which work with bag files) and then program the changes into your Python application if they make the mouse stand out against the wall more clearly. References for altering the color settings in Python can be found at #7767 (comment) and #7089 (comment). You could generate a point cloud from a bag by taking a live-camera pointcloud script, such as the SDK's opencv_viewer_example program, and adapting it to use the bag as its data source instead of the camera: place an enable_device_from_file instruction on the line before the pipe.start instruction, as demonstrated in the SDK's read_bag_example.py Python example program.
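The steps above can be sketched in Python as follows. This is a minimal outline, not a complete application; the file name `recording.bag` is a placeholder for your own recording, and the `pyrealsense2` package must be installed.

```python
import numpy as np
import pyrealsense2 as rs

pipe = rs.pipeline()
cfg = rs.config()
# Use the bag recording as the data source instead of a live camera,
# placed on the line before pipe.start as described above.
rs.config.enable_device_from_file(cfg, "recording.bag")
pipe.start(cfg)

try:
    pc = rs.pointcloud()
    frames = pipe.wait_for_frames()
    depth = frames.get_depth_frame()
    points = pc.calculate(depth)
    # View the vertices as an (N, 3) float array of XYZ coordinates in metres.
    verts = np.asanyarray(points.get_vertices()).view(np.float32).reshape(-1, 3)
finally:
    pipe.stop()
```

From there, `verts` can be filtered or clustered to try to separate the mouse from the bucket wall.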
Hi @L-xn Do you still require assistance with improving depth map quality or can this particular part of your ongoing mouse behaviour case be closed, please? Thanks! |
I really appreciate your help! |
Hi @L-xn Is the mouse still difficult to see separately on the depth image when it is at the side of the bucket? |
Hi @L-xn Can you confirm please if this case can now be closed or if you need more help? Thanks! |
Case closed due to no further comments received. |
|---------------------------------|-------------------------------------------|
| Camera Model                    | D455                                      |
| Operating System & Version      | Win 10                                    |
| Language                        | Python                                    |
Sorry to bother you again! I'm now studying how mice behave at night. The first thing to do is track the mice in a depth map, but I found that when a mouse stands up against the wall, part of its body takes on the same color as the wall. This makes it difficult for me to track the full outline of the mouse. The first image below is easy to track, but in the second image it is harder to get an accurate outline of the mouse.
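The difficulty can be shown with a toy NumPy example (all values invented for illustration): background subtraction on depth works while the mouse is closer to the camera than the floor behind it, but where its body presses against the wall its depth values match the wall's, the difference drops to zero, and that part of the outline disappears.

```python
import numpy as np

# Toy depth frames in millimetres: a background frame of the empty arena
# and a frame where a "mouse" occupies the centre, closer to the camera.
background = np.full((4, 4), 600, dtype=np.int32)   # empty floor at 600 mm
background[:, 0] = 400                              # left column: the wall

frame = background.copy()
frame[1:3, 1:3] = 560        # mouse body, ~40 mm above the floor
frame[1:3, 0] = 400          # part of the mouse pressed flat against the wall

mouse_mask = (background - frame) > 20   # pixels clearly closer than background
print(mouse_mask.sum())                  # -> 4: only the floor pixels are found;
                                         # the wall-touching part is invisible
```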