Pointcaster is an engine for capturing, transforming, analysing and broadcasting 3D scenes captured using depth sensors. It enables room-scale mixed reality installations, performances and applications. Currently it supports Orbbec Femto Mega and Azure Kinect sensors.
Depth data captured by multiple sensors can be synthesized into a single point-cloud in real-time, and streamed into client applications through the Pointreceiver library over local networks or the internet. Scene analysis such as clustering and presence detection can be configured and streamed over multiple protocols including MIDI, RTP-MIDI, OSC and MQTT to enable integration into game engines, digital audio workstations and any other application you can think of.
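As a taste of what that integration looks like, analysis output published over MQTT can be inspected with any standard client. Here is a minimal sketch using `mosquitto_sub`, where the broker address and topic hierarchy are assumptions rather than pointcaster's actual defaults:

```sh
# watch everything a pointcaster instance publishes to an MQTT broker
# (hypothetical topics; check your pointcaster session config for the real names)
mosquitto_sub -h 192.168.1.10 -t 'pointcaster/#' -v
```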
Building pointcaster requires CMake and vcpkg to build and manage dependencies. A working CUDA toolkit installation with `nvcc` on the path is also required (a quick toolchain check is sketched below the list).

- Linux builds use GCC 12.3.0
- Windows builds use MSVC v143
- CUDA version 12.2
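These are all standard version queries, nothing pointcaster-specific:

```sh
# confirm each required tool is on the path and at a compatible version
cmake --version   # any recent CMake
vcpkg version     # vcpkg must be installed and bootstrapped
nvcc --version    # should report CUDA 12.2
gcc --version     # Linux builds expect GCC 12.3.0
```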
Configure the build, pointing CMake at your vcpkg toolchain file and this repository's overlay ports:

```sh
cmake -B build \
  -DCMAKE_TOOLCHAIN_FILE=/opt/vcpkg/scripts/buildsystems/vcpkg.cmake \
  -DCMAKE_BUILD_TYPE=Debug \
  -DVCPKG_OVERLAY_PORTS=$(pwd)/ports
```
Some distributions (Arch, at least) now ship GCC 13 or newer as the default compiler, so on those systems the `CMAKE_CXX_COMPILER` CMake option needs to point at GCC 12 explicitly:

```sh
cmake -B build -DCMAKE_CXX_COMPILER=/usr/bin/gcc-12 ...
```
Then build:

```sh
cd build
cmake --build .
```
- To run multiple Azure Kinect sensors on Linux, you must boot with the following kernel parameter, which increases the memory the kernel allocates for USB transfers. Set it to a value appropriate for your system and camera count; 256MB is enough (and probably overkill):

  ```
  usbcore.usbfs_memory_mb=256
  ```
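As a sketch of how to set this on a GRUB-based system (file locations and the config-regeneration command vary by distribution):

```sh
# check the current value; the kernel default is 16
cat /sys/module/usbcore/parameters/usbfs_memory_mb

# append the parameter to the kernel command line in /etc/default/grub:
#   GRUB_CMDLINE_LINUX_DEFAULT="... usbcore.usbfs_memory_mb=256"
# then regenerate the GRUB config and reboot
sudo grub-mkconfig -o /boot/grub/grub.cfg
sudo reboot
```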