NVGS: Neural Visibility for Occlusion Culling in
3D Gaussian Splatting
1Hasselt University (Digital Future Lab - Flanders Make) 2TU Braunschweig (Computer Graphics Lab)
TL;DR: Backface culling for 3DGS.
Abstract
3D Gaussian Splatting can exploit frustum culling and level-of-detail strategies to accelerate rendering of scenes containing a large number of primitives. However, the semi-transparent nature of Gaussians prevents the application of another highly effective technique: occlusion culling. We address this limitation by proposing a novel method to learn the viewpoint-dependent visibility function of all Gaussians in a trained model using a small MLP shared across instances of an asset in a scene. By querying it for Gaussians within the viewing frustum prior to rasterization, our method can discard occluded primitives during rendering. Leveraging Tensor Cores for efficient computation, we integrate these neural queries directly into a novel instanced software rasterizer. Our approach outperforms the current state of the art for composed scenes in terms of VRAM usage and image quality, utilizing a combination of our instanced rasterizer and occlusion culling MLP, and exhibits complementary properties to existing LoD techniques.
Video Comparison
Gaussians that are not used in the first frame of each scene are shown in red.
Neural Visibility Culling
3D Gaussian Splatting uses alpha blending to compute the final per-pixel color. To do so, however, every Gaussian inside the viewing frustum must be processed to determine its color and opacity, including a large fraction of Gaussians that ultimately contribute nothing to the rendered image. In this work we therefore bake Gaussian visibility into a lightweight MLP and show that querying it lets us avoid this unnecessary preprocessing. Visibility is determined by a Gaussian's contribution to a view; because transmittance saturates quickly along each ray, the number of contributing Gaussians is effectively bounded, so a significant number of Gaussians can be discarded. Discarding them also makes methods with higher per-Gaussian costs more effective, since their preprocessing is skipped altogether.
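The query-then-cull step described above can be sketched as a tiny MLP that maps a per-Gaussian feature and the view direction to a visibility score, with Gaussians below a threshold dropped before any preprocessing. The function names, feature layout, and threshold below are illustrative assumptions, not the paper's implementation (a minimal NumPy sketch):

```python
import numpy as np

def visibility_mlp(feats, dirs, w1, b1, w2, b2):
    # Tiny two-layer MLP: concatenate the per-Gaussian feature with the
    # view direction, apply a ReLU hidden layer, and output a sigmoid
    # visibility score in [0, 1] per Gaussian.
    x = np.concatenate([feats, dirs], axis=-1)
    h = np.maximum(x @ w1 + b1, 0.0)
    return (1.0 / (1.0 + np.exp(-(h @ w2 + b2)))).squeeze(-1)

def cull_invisible(means, feats, cam_pos, params, threshold=0.5):
    # Unit view direction from the camera to each Gaussian center.
    dirs = means - cam_pos
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    vis = visibility_mlp(feats, dirs, *params)
    return vis > threshold  # boolean keep-mask over all in-frustum Gaussians
```

Only Gaussians whose mask entry is True would then be handed to the rasterizer's preprocessing stage; in the actual system this query runs on Tensor Cores inside the rasterizer rather than in NumPy.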
Asset Library
Instanced Rasterizer
Besides the MLP, we introduce a novel instanced rasterizer tailored for 3DGS. A Gaussian is only instantiated after it has survived near-plane, frustum, and visibility culling. This drastically reduces VRAM usage compared to baseline 3DGS and to state-of-the-art methods for composed scenes. Our rasterizer can render a mix of neural-visibility-enabled assets and regular 3DGS assets, enabling flexible usage for creators.
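The instancing idea can be illustrated as follows: the shared asset is stored once, the camera is transformed into each instance's local frame, culling runs there, and only surviving Gaussians are materialized in world space. The function name, the 4×4-matrix convention, and the callback interface are my own assumptions for this sketch, not the paper's rasterizer:

```python
import numpy as np

def instantiate_visible(asset_means, instance_mats, cam_pos, visible_fn):
    # asset_means: (N, 3) Gaussian centers of the shared asset (stored once).
    # instance_mats: list of 4x4 instance-to-world transforms.
    # visible_fn: per-instance cull predicate (e.g. frustum + visibility MLP),
    #             evaluated in the asset's local frame.
    surviving = []
    for M in instance_mats:
        Minv = np.linalg.inv(M)
        # Bring the camera into the asset's local frame instead of
        # transforming every Gaussian into world space up front.
        local_cam = (Minv @ np.append(cam_pos, 1.0))[:3]
        keep = visible_fn(asset_means, local_cam)
        # Instantiate survivors only: transform just the kept Gaussians.
        local = asset_means[keep]
        world = local @ M[:3, :3].T + M[:3, 3]
        surviving.append(world)
    return np.concatenate(surviving, axis=0)
```

Because only post-cull Gaussians are ever duplicated per instance, memory scales with the visible set rather than with (instances × asset size), which is the source of the VRAM savings described above.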
Results
A combination of the instanced rasterizer and our visibility culling MLP allows us to drastically reduce VRAM usage while keeping quality and FPS high for large-scale scenes. Combining our method with LoD approaches suited to farther distances can, as with meshes, prove a powerful tool for building large-scale worlds with 3DGS.
Our approach allows for fast rendering speeds with little to no impact on visual quality.
It also uses significantly less VRAM than other approaches, thanks to the combination of our instanced rasterizer and occlusion MLP.
Citation
Acknowledgements
This research was partly funded by the FWO fellowship grants (1SHDZ24N, 1S80926N), the Special Research Fund (BOF) of Hasselt University (R-14360, R-14436), the NORM.AI SBO project, the DFG project “Real-Action VR” (ID 523421583), and the Flanders Make’s XRTWin SBO project (R-12528). This work was made possible with the support of MAXVR-INFRA, a scalable and flexible infrastructure that facilitates the transition to digital-physical work environments.
The website template was borrowed from Jon Barron, who adopted it from Michaël Gharbi.