Robotic systems for an unstructured world: Live Dense Multimodal 3D Mapping, a system for real-time 3D reconstruction that fuses multiple depth and camera sensors simultaneously.
Robopilot is a minimalist, modular autonomous computer vision library for Python. It is developed with a focus on fast experimentation with distributed deep neural networks. It builds on existing open-source work, associated machine vision, communications, and motor-control libraries, and the CUDA and TensorFlow deep-learning frameworks.
- Experiment with point clouds, mapping, computer vision, and neural networks.
- Log sensor data (images, user inputs, sensor readings).
- Leverage distributed vision data.
- Capturing an object’s 3D structure from multiple viewpoints simultaneously.
- Capturing a “panoramic” 3D structure of a scene (extending the field of view of one sensor by using many).
- Streaming the reconstructed point cloud to a remote location.
- Increasing the density of a point cloud captured by a single sensor by having multiple sensors capture the same scene.
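Combining several sensors into one denser "panoramic" cloud amounts to transforming each sensor's points into a shared world frame and stacking them. The sketch below illustrates the idea with plain NumPy; the function name `merge_point_clouds` and the 4x4 sensor-to-world extrinsics are assumptions for illustration, not part of the Robopilot API.

```python
import numpy as np

def merge_point_clouds(clouds, extrinsics):
    """Merge per-sensor point clouds into one denser cloud.

    clouds     -- list of (N_i, 3) arrays, each in its own sensor's frame
    extrinsics -- list of 4x4 sensor-to-world transforms (assumed calibrated)
    """
    merged = []
    for points, T in zip(clouds, extrinsics):
        # Lift to homogeneous coordinates and transform into the world frame.
        homo = np.hstack([points, np.ones((points.shape[0], 1))])
        merged.append((homo @ T.T)[:, :3])
    # Stacking the transformed clouds yields one denser, wider-view cloud.
    return np.vstack(merged)

# Two sensors viewing the same scene from different poses.
cloud_a = np.array([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
cloud_b = np.array([[0.0, 0.0, 1.0]])
T_a = np.eye(4)
T_b = np.eye(4)
T_b[0, 3] = 0.5  # hypothetical: sensor B mounted 0.5 m along x
dense = merge_point_clouds([cloud_a, cloud_b], [T_a, T_b])
print(dense.shape)  # (3, 3) -- more points than either sensor alone
```

In a real rig the extrinsics would come from calibration, and overlapping points would typically be de-duplicated or filtered (e.g. by voxel downsampling) before streaming.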
- Nvidia TX1 (x2)
- RedCat Crawler 1/5 (x1)
- Xbox Kinect for PC (x2)
- Intel RTF Drone (x1)
After building a Robopilot, power on your device and go to http://localhost:8887 to pilot it.
The Robopilot device is controlled by running a sequence of events:
```python
# Define a vehicle to take and record pictures 10 times per second.
import time

from robopilot import Vehicle
from robopilot.parts.cv import CvCam
from robopilot.parts.tub_v2 import TubWriter

IMAGE_W = 160
IMAGE_H = 120
IMAGE_DEPTH = 3

V = Vehicle()

# Add a camera part.
cam = CvCam(image_w=IMAGE_W, image_h=IMAGE_H, image_d=IMAGE_DEPTH)
V.add(cam, outputs=['image'], threaded=True)

# Warm up the camera: wait until it returns its first frame.
while cam.run() is None:
    time.sleep(1)

# Add a tub part to record the images.
tub = TubWriter(path='./dat', inputs=['image'], types=['image_array'])
V.add(tub, inputs=['image'], outputs=['num_records'])

# Start the drive loop at 10 Hz.
V.start(rate_hz=10)
```
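The pattern above (parts added with named `inputs`/`outputs`, then a fixed-rate drive loop) can be sketched in a few lines of plain Python. `MiniVehicle`, `FakeCam`, and `FakeWriter` below are illustrative stand-ins, not Robopilot internals: each loop iteration reads a part's inputs from a shared memory dict, calls its `run()`, and writes the result back under its output name.

```python
import time

class MiniVehicle:
    """Illustrative event loop: each part's run() consumes named channels
    from a shared memory dict and writes its output back into it."""
    def __init__(self):
        self.parts = []
        self.mem = {}

    def add(self, part, inputs=(), outputs=()):
        self.parts.append((part, list(inputs), list(outputs)))

    def start(self, rate_hz=10, max_loops=None):
        loops = 0
        while max_loops is None or loops < max_loops:
            for part, inputs, outputs in self.parts:
                args = [self.mem.get(k) for k in inputs]
                result = part.run(*args)
                if len(outputs) == 1:
                    self.mem[outputs[0]] = result
            loops += 1
            time.sleep(1.0 / rate_hz)

class FakeCam:
    """Stand-in for CvCam: always returns a frame."""
    def run(self):
        return "frame"

class FakeWriter:
    """Stand-in for TubWriter: counts the records it has written."""
    def __init__(self):
        self.records = 0
    def run(self, image):
        if image is not None:
            self.records += 1
        return self.records

v = MiniVehicle()
cam = FakeCam()
writer = FakeWriter()
v.add(cam, outputs=['image'])
v.add(writer, inputs=['image'], outputs=['num_records'])
v.start(rate_hz=100, max_loops=5)
print(writer.records)  # 5 frames recorded after 5 loops
```

This is why parts compose freely: a part only needs a `run()` method, and the named channels decouple producers (the camera) from consumers (the tub writer).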