Sources of Uncertainty
in 3D Scene Reconstruction

ECCV 2024 Workshops

1Aalto University, 2University of Oulu

TLDR: We categorize the various sources of uncertainty that can arise in 3D scene reconstruction with NeRFs and GS.

Abstract

The process of 3D scene reconstruction can be affected by numerous uncertainty sources in real-world scenes. While Neural Radiance Fields (NeRFs) and 3D Gaussian Splatting (GS) achieve high-fidelity rendering, they lack built-in mechanisms to directly address or quantify uncertainties arising from the presence of noise, occlusions, confounding outliers, and imprecise camera pose inputs. In this paper, we introduce a taxonomy that categorizes different sources of uncertainty inherent in these methods. Moreover, we extend NeRF- and GS-based methods with uncertainty estimation techniques, including learning uncertainty outputs and ensembles, and perform an empirical study to assess their ability to capture the sensitivity of the reconstruction. Our study highlights the need for addressing various uncertainty aspects when designing NeRF/GS-based methods for uncertainty-aware 3D reconstruction.
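As a rough illustration of what "learning uncertainty outputs" means in this context, the sketch below attaches a per-ray log-variance head to a radiance-field model and trains it with a Gaussian negative log-likelihood, so that pixels the model cannot fit are explained by a larger predicted variance. The model interface and its log-variance head are hypothetical placeholders for illustration, not the exact implementation evaluated in the paper.

# Minimal sketch (assumed interfaces, not the paper's exact implementation).
import torch

def gaussian_nll_loss(rgb_pred, log_var, rgb_gt, eps=1e-6):
    """Heteroscedastic Gaussian NLL: large residuals can be absorbed by a
    larger predicted variance instead of a larger squared error."""
    var = torch.exp(log_var) + eps
    return (0.5 * (rgb_pred - rgb_gt) ** 2 / var + 0.5 * torch.log(var)).mean()

# Hypothetical training step:
#   rgb_pred, log_var = model(ray_batch)   # model renders color and log-variance
#   loss = gaussian_nll_loss(rgb_pred, log_var, rgb_gt)
#   loss.backward(); optimizer.step()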

Sources of Uncertainty in NeRF and GS

We study four types of uncertainty associated with NeRF/GS methods:

  1. Irreducible uncertainty (aleatoric), stemming from random effects in observations (inherent noise).
  2. Reducible uncertainty (epistemic), due to insufficient information in parts of the scene, which can be reduced by capturing data from new poses (see the ensemble sketch after this list).
  3. Confounding outliers, caused by non-static scenes, with elements such as moving objects or vegetation, which lead to ambiguities like blur or hallucinations.
  4. Input uncertainty, which relates to sensitivity to camera poses and can be seen as the dual of reconstruction uncertainty, focusing on how changing inputs can reduce uncertainty or enhance quality.
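
For type (2), a common way to approximate reducible (epistemic) uncertainty is to train several models independently and measure their disagreement on the same rays. The sketch below assumes a list of models sharing a hypothetical render(rays) interface; it illustrates the idea rather than reproducing the exact ensemble variants studied in the paper.

# Minimal sketch, assuming independently trained models with a common
# render(rays) -> RGB interface (hypothetical).
import torch

def ensemble_uncertainty(models, rays):
    """Render the same rays with each ensemble member and return the mean
    prediction together with the per-ray predictive variance."""
    with torch.no_grad():
        preds = torch.stack([m.render(rays) for m in models])  # (K, N, 3)
    mean = preds.mean(dim=0)                 # average rendering
    var = preds.var(dim=0).mean(dim=-1)      # variance averaged over RGB channels
    return mean, var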

Sources of uncertainty.

Videos

We show rendered RGB and uncertainty side-by-side for each method separately. Here, we study uncertainty type (2), reducible uncertainty, and only train on images that have a positive x-translation (Section 5.2). The rendered videos show how the uncertainty changes when the method renders outside of the training views. The uncertainty values are capped at 0.3.
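
For reference, the side-by-side frames can be produced roughly as follows: clip the uncertainty map at the 0.3 cap, colorize it, and concatenate it next to the rendered RGB. The array shapes and colormap choice below are assumptions for illustration, not the exact rendering pipeline used for the videos.

# Minimal sketch of one side-by-side frame (assumed shapes and colormap).
import numpy as np
from matplotlib import colormaps

def side_by_side(rgb, uncertainty, cap=0.3):
    """rgb: (H, W, 3) floats in [0, 1]; uncertainty: (H, W) non-negative."""
    u = np.clip(uncertainty, 0.0, cap) / cap           # normalize to [0, 1]
    u_rgb = colormaps["inferno"](u)[..., :3]           # colorize, drop alpha
    return np.concatenate([rgb, u_rgb], axis=1)        # (H, 2W, 3)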

Active-Nerfacto

MC-Dropout-Nerfacto

Ensemble-Nerfacto

Active-Splatfacto

Laplace-Nerfacto

Ensemble-Splatfacto

BibTeX

@inproceedings{klasson2024sources,
  title={Sources of Uncertainty in 3D Scene Reconstruction},
  author={Marcus Klasson and Riccardo Mereu and Juho Kannala and Arno Solin},
  booktitle={European Conference on Computer Vision Workshops},
  year={2024},
  organization={Springer}
}