
TinySplat

A simple implementation of Gaussian splatting in tinygrad. It works. More features are planned; see the TODO section below.

I started this to:

  1. Learn Gaussian Splatting
  2. Learn tinygrad
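
To give a feel for the core idea, here is a minimal, unoptimized 2D splatting sketch in tinygrad. It is not this repository's implementation (the function and variable names are made up for illustration): every pixel simply accumulates a colour contribution from every isotropic Gaussian.

from tinygrad import Tensor

def render_gaussians(means, scales, colors, H=64, W=64):
    # pixel coordinate grids, shaped to broadcast against the N gaussians
    ys = Tensor.arange(H).float().reshape(H, 1, 1)   # (H, 1, 1)
    xs = Tensor.arange(W).float().reshape(1, W, 1)   # (1, W, 1)
    mx = means[:, 0].reshape(1, 1, -1)               # (1, 1, N)
    my = means[:, 1].reshape(1, 1, -1)
    d2 = (xs - mx) ** 2 + (ys - my) ** 2             # (H, W, N) squared distance to each mean
    w = (-d2 / (2 * scales.reshape(1, 1, -1) ** 2)).exp()  # isotropic gaussian falloff
    return (w.reshape(H * W, -1) @ colors).reshape(H, W, 3)  # additive colour blend

# toy scene with 3 random splats
means, scales, colors = Tensor.rand(3, 2) * 64, Tensor.rand(3) * 5 + 2, Tensor.rand(3, 3)
img = render_gaussians(means, scales, colors)
print(img.shape)  # (64, 64, 3)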

Installation

To install all the libraries needed to run the examples, use:

conda env create -f environment.yaml

Important

There is a known bug with macOS and conda (see this issue for details); always set the following environment variables before running any notebook or command:

%env METAL_XCODE=1
%env DISABLE_COMPILER_CACHE=1
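
The %env lines above are notebook magics. When running plain Python scripts, the same variables can be set with the standard os.environ mechanism (generic Python, not a project-specific API), as long as that happens before tinygrad is imported:

import os
os.environ["METAL_XCODE"] = "1"
os.environ["DISABLE_COMPILER_CACHE"] = "1"  # works around the macOS compiler-cache bug above

from tinygrad import Tensor  # import only after the variables are set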

Benchmark

MacBook Pro 2023 16GB RAM M2

| framework         | accelerator | time (min) | SSIM loss | iterations | gaussian splats |
|-------------------|-------------|------------|-----------|------------|-----------------|
| Tinygrad (JIT)    | MPS         | 1.18       | 0.033394  | 1000       | 1000            |
| Tinygrad (no JIT) | MPS         | 1.58       | 0.033394  | 1000       | 1000            |
| Tinygrad (JIT)    | MPS         | 13.29      | 0.013182  | 2000       | 5000            |
| Tinygrad (no JIT) | MPS         | 10.17      | 0.018872  | 2000       | 5000            |
| PyTorch           | CPU         | -          | -         | 2000       | 5000            |
| PyTorch           | MPS         | 23.54      | 0.031974  | 400        | 5000            |
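
For reference, the "(JIT)" rows wrap the per-iteration work in tinygrad's TinyJit, which captures the kernels over the first couple of calls and replays them afterwards. A minimal sketch of the pattern (the step function here is a toy, not this repo's training step):

from tinygrad import Tensor, TinyJit

@TinyJit
def step(x: Tensor) -> Tensor:
    return (x * 2 + 1).realize()  # JIT functions should return realized tensors

for _ in range(5):
    out = step(Tensor.rand(4, 4).realize())  # inputs must be realized and keep the same shape across calls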

RTX-3070 - AMD Ryzen 7 3800X 8-Core Processor - 32GB RAM

| framework         | accelerator | time (min) | SSIM loss | iterations | gaussian splats |
|-------------------|-------------|------------|-----------|------------|-----------------|
| Tinygrad (no JIT) | GPU         | 1.17       | 0.034477  | 1000       | 1000            |
| Tinygrad (no JIT) | GPU         | 4.07       | 0.034477  | 2000       | 5000            |
| PyTorch           | GPU         | 10.17      | 0.021133  | 1000       | 5000            |
| PyTorch           | GPU         | OOM        | -         | 2000       | 5000            |
| Gsplat            | GPU         | 0.11       | 0.006781  | 2000       | 5000            |

Note

I suspect the Gsplat run is not configured correctly (see the TODO list below), so its numbers may not be a fair comparison.

Usage

This repository is divided into subfolders, each containing its own notebooks:

  • for tinygrad, see tinysplat_2D
  • for the torch-based version, see torchsplat (credits to OutofAi)

Results

Tinygrad, no densification

Tinygrad with standard densification

Tinygrad with Gaussian-blur densification

Torch with densification

Gsplat

TODO:

  • Densify the Gaussians in tinygrad
  • Compare the speed with Gsplat
  • Check that Gsplat is configured correctly!
  • Make it more efficient in tinygrad -> profile with DEBUG=4 (see the sketch after this list)
  • Parallelize the splatting process
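
Regarding the DEBUG=4 item: tinygrad's DEBUG environment variable raises log verbosity, and at the time of writing level 4 prints the generated kernel source, which helps spot inefficient kernels. A minimal way to try it (the variable must be set before tinygrad is imported):

import os
os.environ["DEBUG"] = "4"  # must be set before tinygrad is imported

from tinygrad import Tensor
(Tensor.rand(8, 8) @ Tensor.rand(8, 8)).realize()  # prints the generated kernel code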

Credits