- Showcase
- Features
- The Nabla Core Profile
- Physical Device Selection and Filtering
- SPIR-V and Vulkan as First-Class Citizens
- Integration of Renderdoc
- Nabla Event Handler: Seamless GPU-CPU Synchronization
- GPU Object Lifecycle Tracking
- HLSL2021 Standard Template Library
- Full Embrace of Buffer Device Address and Descriptor Indexing
- Minimally Invasive Design
- Designed for Interoperation
- Cancellable Future-based Async I/O
- Data Transfer Utilities
- Virtual File System
- Asset System
- Asset Converter (CPU to GPU)
- Unit-Tested BxDFs for Physically Based Rendering
- Property Pools (GPU Entity Component System)
- SPIR-V Introspection and Layout Creation
- Nabla Extensions
- Coming Soon
- Need Our Expertise?
- Join Our Team
Showcase gallery:
- Ray Tracing
- Emulated shaderFloat64
- MSDF Hatches
- Porting GDI to Nabla
- SDF function manipulator
- Fluid 3D
- Nabla Shader Compiler & Godbolt docker integration
- ImGUI render backend & extensions
Nabla exposes a curated set of Vulkan extensions and features compatible across the GPUs we aim to support on Windows and Linux (with macOS, iOS, and Android coming soon).
Vulkan evolves fast—just when you think you've figured out sync, you realize there's sync2. Keeping up with new extensions, best practices, and hardware quirks is exhausting. Instead of digging through gpuinfo.org or Vulkan specs, Nabla gives you a well-thought-out set of extensions—so you can focus on what you want to achieve, not get stuck in an eternal loop of:
- mastering a feature
- finding out about a new feature
- assessing whether it obsoletes or merely builds on the one you've just mastered
- checking whether the feature is ubiquitous across the devices you target
- rewriting what you've just polished
Nabla allows you to select the best GPU for your compute or graphics workload.
void filterDevices(core::set<video::IPhysicalDevice*>& physicalDevices)
{
    nbl::video::SPhysicalDeviceFilter deviceFilter = {};
    deviceFilter.minApiVersion = { 1,3,0 };
    deviceFilter.minConformanceVersion = {1,3,0,0};
    deviceFilter.requiredFeatures.rayQuery = true;
    // erases every device that doesn't meet the requirements from the set
    deviceFilter(physicalDevices);
}
Nabla treats SPIR-V and Vulkan as the reference standard: everything else is built around them, and every other backend adapts to them.
Built-in support for capturing frames and debugging with Renderdoc. This is how you debug headless or async GPU workloads that never produce a swapchain frame for Renderdoc to capture on its own.
const IQueue::SSubmitInfo submitInfo = {
.waitSemaphores = {},
.commandBuffers = {&cmdbufInfo,1},
.signalSemaphores = {&signalInfo,1}
};
m_api->startCapture(); // Start Renderdoc Capture
queue->submit({&submitInfo,1});
m_api->endCapture(); // End Renderdoc Capture
Nabla Event Handler's extensive usage of Timeline Semaphores enables CPU Callbacks on GPU conditions.
You can enqueue callbacks that trigger upon submission completion (workload finish), enabling, among other things, async readback of a submission's side effects or deferred deallocation once the workload is finished.
// This doesn't actually free the memory from the pool, the memory is queued up to be freed only after the `scratchSemaphore` reaches a value a future submit will signal
memory_pool->deallocate(&offset,&size,nextSubmit.getFutureScratchSemaphore());
Nabla uses reference counting to track the lifecycle of GPU objects. Descriptor sets and command buffers are responsible for maintaining reference counts on the resources (e.g., buffers, textures) they use. The queue itself also tracks command buffers, ensuring that objects remain alive as long as they are pending execution. This system guarantees the correct order of deletion and makes it difficult for GPU objects to go out of scope and be destroyed before the GPU has finished using them.
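As a conceptual sketch of this ownership model (the helpers `createDeviceLocalBuffer` and `recordCopyInto` below are placeholders rather than actual Nabla calls), dropping your own reference does not destroy an object the GPU still needs:
{
    // `createDeviceLocalBuffer` and `recordCopyInto` are hypothetical helpers; the point is the ownership chain.
    core::smart_refctd_ptr<video::IGPUBuffer> buffer = createDeviceLocalBuffer(device.get());
    recordCopyInto(cmdbuf.get(), buffer.get()); // the command buffer takes its own reference to `buffer`
} // our last reference goes out of scope here...
// ...yet the buffer stays alive: the command buffer references it, and the queue
// references the command buffer until the submitted work has finished executing.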
- 🔄 Reusable: Unified single-source C++/HLSL libraries eliminate code duplication with reimplementations of the STL's `type_traits`, `limits`, `functional`, `tgmath`, etc.
- 🐞 Shader Logic, CPU-Tested: A subset of HLSL compiles as both C++ and SPIR-V, enabling CPU-side debugging of GPU logic and ensuring correctness in complex tasks like FFT, Prefix Sum, etc. (a minimal sketch follows this list; see also our examples: 1. BxDF Unit Test, 2. Math Funcs Unit Test)
- 🔮 Future-Proof: C++20 concepts in HLSL enable safe and documented polymorphism.
- 🧠 Insane: Boost Preprocessor and Template Metaprogramming in HLSL!
- 🛠️ Real-World Problem Solvers: The library offers GPU-optimized solutions for tasks like Prefix Sum, Binary Search, FFT, Global Sort, and even emulated `shaderFloat64` when native GPU support is unavailable!
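To illustrate the single-source idea, here is a minimal sketch. It is not taken verbatim from the library: the include path is indicative, and the function is just an example of code that compiles under both the C++ compiler (for CPU unit tests) and the HLSL2021 compiler (for the actual shader).
#include <nbl/builtin/hlsl/cpp_compat.hlsl> // indicative include; pulls in the C++/HLSL compatibility layer

// The same template is unit-tested on the CPU and then executed verbatim on the GPU.
template<typename T>
T smoothstep01(T x)
{
    return x * x * (T(3) - T(2) * x);
}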
🎤 Talks from us:
- Vulkanised 2024: Beyond SPIR-V: Single Source C++ and Shader Programming
- Vulkanised 2023: HLSL202x like it's C++, building an `std::`-like Library
By utilizing Buffer Device Addresses (BDAs), Nabla enables more direct access to memory through 64-bit GPU virtual addresses. Synergized with Descriptor Indexing, this approach enhances flexibility by enabling more dynamic, scalable resource binding without relying on traditional descriptor sets.
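As a rough illustration of the pattern (the struct below is hypothetical, not a Nabla declaration), a shader can be handed a raw buffer address and a descriptor-array index through push constants instead of a per-draw descriptor set:
// Hypothetical push-constant block in the BDA + descriptor indexing style: the buffer is
// reached through its 64-bit GPU virtual address, and the texture through an index into
// one large descriptor array that is bound once and reused for everything.
struct PushConstants
{
    uint64_t geometryBufferAddress; // raw device address of the geometry buffer
    uint32_t materialTextureIndex;  // index into the bindless array of sampled images
    uint32_t pad;
};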
No Singletons, No Main Thread—Nabla allows multiple instances of every object (including Vulkan devices) without assuming a main thread or thread-local contexts. Thread-agnostic by design, it avoids global state and explicitly passes contexts for easy multithreading.
Nabla's minimally invasive and flexible design, with API handle acquisition and multi-window support, makes it ideal for custom rendering setups and low-level GPU programming without unnecessary constraints such as assuming a main thread or a single window.
Even Win32 windowing is wrapped for use across multiple threads, breaking free of traditional single-thread limitations.
This simplifies porting of legacy OpenGL and DirectX applications.
Nabla is built with interoperation in mind, supporting memory export and import between different compute and graphics APIs.
File I/O is fully asynchronous, using nbl::system::future_t, a cancellable MPSC circular buffer-based future implementation.
Requests start in a PENDING state and can be invalidated before execution if needed. This enables efficient async file reads and GPU memory writes, ensuring non-blocking execution:
ISystem::future_t<size_t> bytesActuallyWritten;
// read 2 GiB from the file straight into mapped GPU memory without blocking the calling thread
file->read(bytesActuallyWritten, gpuMemory->getMappedPointer(), offsetInFile, 2ull*1024*1024*1024);
while (!bytesActuallyWritten.ready()) { /* Do other work */ }
Nabla's utilities streamline the process of pushing/pulling arbitrarily-sized buffers and images to/from the GPU through fixed-size staging memory, ensuring seamless data transfers. The system automatically submits whenever the staging memory overflows, and promotes unsupported formats during upload to handle color format conversions. By leveraging device-specific properties, it respects alignment limits and ensures deterministic behavior. The user only provides the initial submission info through SIntendedSubmitInfo, and the utility manages subsequent submissions automatically.
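In rough outline the usage looks like this (a sketch: `updateBufferRangeViaStagingBuffer` is the staging upload utility, but treat the exact field and parameter layout shown here as illustrative rather than exact):
// Sketch of a staged upload: the utility copies `data` through its fixed-size staging memory
// and, whenever the data doesn't fit in one go, submits the work recorded so far and keeps
// going, reusing the scratch command buffer and semaphore referenced by `intendedSubmit`.
video::SIntendedSubmitInfo intendedSubmit = {.queue = transferQueue /* plus scratch command buffer and semaphore */};
utilities->updateBufferRangeViaStagingBuffer(intendedSubmit, {.offset = 0ull, .size = dataSize, .buffer = gpuBuffer}, data);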
- Learn more:
- 🎤 Our Talk at Vulkanised: Vulkanised 2023: Keeping your staging buffer fixed size!
- 📚 Our Blog post: Uploading Textures to GPU - The Good Way
Nabla provides a unified Virtual File System (`system::ISystem`) that supports mounting archives and folders under different virtual paths. This enables access to both external and embedded assets while preserving original relative paths.
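Usage might look roughly like the following (a sketch: `mount` and the asynchronous `createFile` call follow the ISystem style, but treat the exact signatures and flag names as assumptions):
// Sketch: mount an archive under a virtual alias, then open a file inside it through the
// same asynchronous interface used for loose files on disk. Signatures are illustrative.
system->mount(std::move(archive), "media"); // archive contents become visible under "media/..."
system::ISystem::future_t<core::smart_refctd_ptr<system::IFile>> futureFile;
system->createFile(futureFile, "media/textures/albedo.png", system::IFile::ECF_READ);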
For embedding, we provide an alternative to C++23's `#embed`, which allows embedding files directly into compiled binaries. Instead of relying on compiler support, we use Python + CMake to generate what we call built-in resource archives—packing files (e.g., images, shaders, `.obj`, `.mtl`, `.dds`) into DLLs as memory-mapped `system::IFile` objects, ensuring that dependent assets (e.g., models and their textures) retain their correct relative paths even when embedded.
The embedding process:
- At build time, Python reads an input path table (generated by CMake).
- It serializes files into constexpr arrays with metadata (key + timestamps).
- The output C++ source + header define a built-in resource library, linked into Nabla or examples.
This approach keeps assets self-contained, making file access efficient while maintaining asset dependencies.
The asset system in Nabla maintains a 1:1 mapping between CPU and GPU representations, where every CPU asset has a direct GPU counterpart. The system also allows for coordination between loaders—for instance, the OBJ loader can trigger the MTL loader, and the MTL loader in turn invokes image loaders, ensuring smooth asset dependency management.
The Asset Converter transforms CPU objects (`asset::IAsset`) into GPU objects (`video::IBackendObject`) while eliminating duplicates with Merkle Trees. Instead of relying on pointer comparisons, it hashes asset contents to detect and reuse identical GPU objects.
A statically polymorphic library for defining Bidirectional Scattering Distribution Functions (BxDFs) in HLSL and C++. Each BxDF is rigorously unit-tested in C++ as well as HLSL. This is part of Nabla’s HLSL-C++ compatible library.
Snippet of our BxDF Unit Test:
TestJacobian<bxdf::reflection::SLambertianBxDF<sample_t, iso_interaction, aniso_interaction, spectral_t>>::run(initparams, cb);
TestJacobian<bxdf::reflection::SOrenNayarBxDF<sample_t, iso_interaction, aniso_interaction, spectral_t>>::run(initparams, cb);
TestJacobian<bxdf::reflection::SBeckmannBxDF<sample_t, iso_cache, aniso_cache, spectral_t>, false>::run(initparams, cb);
TestJacobian<bxdf::reflection::SBeckmannBxDF<sample_t, iso_cache, aniso_cache, spectral_t>, true>::run(initparams, cb);
TestJacobian<bxdf::reflection::SGGXBxDF<sample_t, iso_cache, aniso_cache, spectral_t>, false>::run(initparams, cb);
TestJacobian<bxdf::reflection::SGGXBxDF<sample_t, iso_cache, aniso_cache, spectral_t>,true>::run(initparams, cb);
TestJacobian<bxdf::transmission::SLambertianBxDF<sample_t, iso_interaction, aniso_interaction, spectral_t>>::run(initparams, cb);
TestJacobian<bxdf::transmission::SSmoothDielectricBxDF<sample_t, iso_cache, aniso_cache, spectral_t>>::run(initparams, cb);
TestJacobian<bxdf::transmission::SSmoothDielectricBxDF<sample_t, iso_cache, aniso_cache, spectral_t, true>>::run(initparams, cb);
TestJacobian<bxdf::transmission::SBeckmannDielectricBxDF<sample_t, iso_cache, aniso_cache, spectral_t>, false>::run(initparams, cb);
TestJacobian<bxdf::transmission::SBeckmannDielectricBxDF<sample_t, iso_cache, aniso_cache, spectral_t>, true>::run(initparams, cb);
TestJacobian<bxdf::transmission::SGGXDielectricBxDF<sample_t, iso_cache, aniso_cache, spectral_t>, false>::run(initparams, cb);
TestJacobian<bxdf::transmission::SGGXDielectricBxDF<sample_t, iso_cache, aniso_cache, spectral_t>,true>::run(initparams, cb);
Property Pools group related properties together in a Structure of Arrays (SoA) manner, allowing efficient, cache-friendly access to data on the GPU. The system enables transferring properties (Components) between the CPU and GPU, with the `PropertyPoolHandler` managing scattered updates using a special compute shader. Handles are assigned to each object and remain constant as data is added or removed.
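Conceptually (an illustrative declaration, not Nabla's actual one), the properties of a pool live in separate tightly packed arrays rather than in a single array of structs, and a handle is simply an index that stays valid across every array:
#include <cstdint>
#include <vector>

// Hypothetical SoA layout: each property occupies its own contiguous array, so a pass that
// only touches `radii` never drags `packedColors` through the cache.
struct ParticlePropertyPool
{
    std::vector<float>    radii;        // property 0
    std::vector<uint32_t> packedColors; // property 1
    std::vector<uint64_t> userData;     // property 2
};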
SPIR-V introspection in Nabla eliminates most of the boilerplate code required to set up descriptor and pipeline layouts, simplifying resource binding to shaders.
- ImGui integration – `MultiDrawIndirect`-based, drawing in as little as a single drawcall.
- Fast Fourier Transform Extension – for image processing and all kinds of frequency-domain fun.
- Workgroup Prefix Sum – Efficient parallel prefix sum computation.
- Blur – Optimized GPU-based image blurring.
- Counting Sort – High-performance, GPU-accelerated sorting algorithm.
- [WIP] Autoexposure – Adaptive brightness adjustment for HDR rendering.
- [WIP] Tonemapping
- [WIP] GPU MPMC Queue – Multi-producer, multi-consumer GPU queue.
- [WIP] OptiX interoperability for ray tracing.
- [WIP] Global Scan – High-speed parallel scanning across large datasets.
- Full CUDA interoperability support.
- Scene Loaders
- GPU-Driven Scene Graph
- Material Compiler 2.0 for efficient scheduling of BxDF graph evaluation
We specialize in:
- High-performance computing and performance optimization
- Path Tracing and Physically Based Rendering
- CAD Rendering
- Audio Programming and Digital Signal Processing
- Porting and Optimizing legacy Renderers
- Graphics and Compute APIs:
- Vulkan, D3D12, CUDA, OpenCL, WebGPU, D3D11, OpenGL
Whether you're optimizing your renderer or compute workloads, looking to port your legacy renderer, or integrating complex visual effects into your product, our team can help you. As a specialized team, we're constantly learning, evolving, and discussing matters with each other. Each member brings unique insights to the table, ensuring we approach every project from multiple angles to achieve the best possible solution.
Our primary language is C++20, but we also work with C#, Java, Python, and other related technologies.
If you're already here reading this, we want to hear from you and learn more about what you're building.
Contact us at newclients@devsh.eu.
The members of Devsh Graphics Programming Sp. z O.O. (Company Registration (KRS) #: 0000764661) are available (individually or collectively) for contracts on projects of various scopes and timescales.
[TODO]: also link to achievements, personal blogs, websites, linkedin and presentations of each member