Home
jtremblay edited this page Mar 16, 2020 · 2 revisions
Welcome to the Deep_Object_Pose wiki!
Use NDDS to create your synthetic training dataset.
- Create a 3D model of your object (you can use Blender, for example).
- Export your model in FBX format (recommended) and import it into UE4.
- Create a domain randomization scene containing your exported object.
- Generate around 20k images (this should be enough).
- Use the train.py script; it natively supports NDDS-exported data. Train for about 30 epochs.
python train.py --data path/to/FAT --object soup --outf soup --gpuids 0 1 2 3 4 5 6 7
- Deploy the trained weights to the DOPE ROS node, adding the weights path and object dimensions to the config.
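As a sketch of what the exported training data looks like, NDDS writes one JSON annotation file per rendered frame containing the projected cuboid keypoints that train.py consumes. The exact field names below (`objects`, `projected_cuboid`, `projected_cuboid_centroid`) are assumptions based on typical NDDS exports; check the JSON files your own scene produces:

```python
import json

# Illustrative NDDS-style annotation for a single frame.
# Field names are assumptions; inspect your own export to confirm.
sample = """
{
  "objects": [
    {
      "class": "soup",
      "projected_cuboid": [[10, 10], [50, 10], [50, 60], [10, 60],
                           [12, 12], [52, 12], [52, 62], [12, 62]],
      "projected_cuboid_centroid": [31, 36]
    }
  ]
}
"""

def load_keypoints(raw_json, target_class):
    """Return the 8 projected cuboid corners for target_class, or None."""
    frame = json.loads(raw_json)
    for obj in frame.get("objects", []):
        if obj.get("class") == target_class:
            return obj["projected_cuboid"]
    return None

corners = load_keypoints(sample, "soup")
```

A quick sanity check like this on a handful of frames (all 8 corners present, centroid roughly inside the corner spread) can catch a broken export before a long training run.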
- DOPE+NDDS for Cautery Tracking -- Simple Training Data
- Running DOPE with Zed in Extreme Lighting
- DOPE in extremely bright light
- How to train for transparent objects? (open issue)
- How to train for symmetrical objects?
- Refer to Sec. 3.3 of the paper "BB8: A Scalable, Accurate, Robust to Partial Occlusion Method for Predicting the 3D Poses of Challenging Objects without Using Depth."
- https://github.com/NVlabs/Deep_Object_Pose/issues/37.
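The core idea in BB8 Sec. 3.3 is that for an object with n-fold rotational symmetry, poses differing by 2π/n about the symmetry axis are visually identical, so training labels should be folded into one canonical sector before computing the loss. A minimal sketch of that folding (this is an illustration of the idea, not DOPE code):

```python
import math

def canonicalize_yaw(yaw_rad, symmetry_order=2):
    """Fold a rotation about the symmetry axis into [0, 2*pi/n).

    For an n-fold rotationally symmetric object, poses that differ by
    2*pi/n look the same, so the label is mapped into one sector.
    Illustrative sketch of the BB8 Sec. 3.3 idea, not DOPE's code.
    """
    sector = 2 * math.pi / symmetry_order
    return yaw_rad % sector

# A 180-degree-symmetric object (n=2): a 190-degree yaw folds to 10 degrees.
folded = canonicalize_yaw(math.radians(190), symmetry_order=2)
```

Without this folding, two identical-looking training images can carry conflicting keypoint labels, which is exactly the ambiguity discussed in the linked issue.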
- How to randomly simulate real scenes and random poses in UE4?
- How to avoid object overlaps with multiple objects in NDDS?
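One common way to avoid overlaps, independent of any particular NDDS component, is rejection sampling: propose a random placement, keep it only if its bounding box does not intersect anything already placed. A minimal 2D sketch of that logic (the footprint size and scene extent here are made-up illustration values):

```python
import random

def aabbs_overlap(a, b):
    """Axis-aligned bounding boxes given as (xmin, ymin, xmax, ymax)."""
    return a[0] < b[2] and b[0] < a[2] and a[1] < b[3] and b[1] < a[3]

def place_objects(n, size, extent=10.0, max_tries=1000, rng=None):
    """Rejection-sample up to n non-overlapping square footprints."""
    rng = rng or random.Random(0)  # fixed seed for a reproducible sketch
    placed = []
    for _ in range(max_tries):
        if len(placed) == n:
            break
        x = rng.uniform(0, extent - size)
        y = rng.uniform(0, extent - size)
        box = (x, y, x + size, y + size)
        # Keep the proposal only if it clears every previous placement.
        if not any(aabbs_overlap(box, other) for other in placed):
            placed.append(box)
    return placed

boxes = place_objects(5, size=1.0)
```

The same pattern applies in 3D inside a UE4 Blueprint: sample a spawn transform, test against the boxes of already-spawned actors, and resample on collision.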
- How to move the camera in UE4?
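Whatever mechanism UE4 uses to drive the camera actor, the underlying randomization is usually: sample a position on a sphere around the object and point the camera at it. A Python sketch of that sampling logic (the target and radius values are arbitrary placeholders):

```python
import math
import random

def sample_camera_pose(target, radius, rng=None):
    """Sample a camera position on a sphere of given radius around
    `target`, returning (position, unit forward vector toward target).
    Sketch of the math only; wiring it to a UE4 camera is up to you.
    """
    rng = rng or random.Random(42)  # fixed seed for reproducibility
    theta = rng.uniform(0, 2 * math.pi)   # azimuth
    phi = math.acos(rng.uniform(-1, 1))   # inclination, uniform over the sphere
    pos = (target[0] + radius * math.sin(phi) * math.cos(theta),
           target[1] + radius * math.sin(phi) * math.sin(theta),
           target[2] + radius * math.cos(phi))
    vec = tuple(t - p for t, p in zip(target, pos))
    norm = math.sqrt(sum(c * c for c in vec))
    forward = tuple(c / norm for c in vec)
    return pos, forward

pos, forward = sample_camera_pose(target=(0.0, 0.0, 0.0), radius=2.0)
```

Sampling the inclination via `acos(uniform(-1, 1))` avoids clustering viewpoints at the poles, which matters if you want training images to cover all viewing angles evenly.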