In dynamic 3D environments, accurately updating scene representations over time is crucial for applications in robotics, mixed reality, and embodied AI. As scenes evolve, efficient methods to incorporate changes are needed to maintain up-to-date, high-quality reconstructions without the computational overhead of re-optimizing the entire scene. This paper introduces CL-Splats, which incrementally updates Gaussian splatting-based 3D representations from sparse scene captures.
CL-Splats integrates a robust change-detection module that segments updated and static components within the scene, enabling focused, local optimization that avoids unnecessary re-computation. Moreover, CL-Splats supports storing and recovering previous scene states, facilitating temporal segmentation and new scene-analysis applications.
Our extensive experiments demonstrate that CL-Splats achieves efficient updates with improved reconstruction quality over the state-of-the-art. This establishes a robust foundation for future real-time adaptation in 3D scene reconstruction tasks. We will release our source code and the synthetic and real-world datasets we created to support further research in this area.
First, we encode both images of a pair with a shared backbone to obtain feature maps. We compare these features using cosine similarity to produce per-pixel change scores, project the scores into 3D, and perform voting there. Finally, we project the voted points back into 2D to obtain the masks used for optimization, which is restricted to the affected 3D region.
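The 2D change-scoring step above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the backbone features are stand-ins (random arrays here), and the threshold value is an assumption; per-pixel cosine similarity between the two feature maps yields low scores where the scene changed.

```python
import numpy as np

def change_mask_from_features(feat_a, feat_b, threshold=0.5):
    """Flag changed pixels via per-pixel cosine similarity.

    feat_a, feat_b: backbone feature maps of shape (C, H, W).
    Pixels whose feature vectors disagree (similarity below the
    threshold, an assumed value) are marked as changed.
    """
    # Normalize each pixel's feature vector along the channel axis.
    a = feat_a / (np.linalg.norm(feat_a, axis=0, keepdims=True) + 1e-8)
    b = feat_b / (np.linalg.norm(feat_b, axis=0, keepdims=True) + 1e-8)
    similarity = (a * b).sum(axis=0)  # (H, W), values in [-1, 1]
    return similarity < threshold     # boolean change mask

# Toy example: identical features except for one "changed" region.
rng = np.random.default_rng(0)
f1 = rng.standard_normal((16, 8, 8))
f2 = f1.copy()
f2[:, 2:4, 2:4] = rng.standard_normal((16, 2, 2))  # simulated scene change
mask = change_mask_from_features(f1, f2)
```

In the full pipeline, these 2D scores would then be lifted into 3D for voting before being projected back to 2D masks, which localizes the subsequent Gaussian optimization.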
Comparisons on synthetic and real-world scenes against a baseline and a state-of-the-art method.
@article{ackermann2024clsplats,
author = {Ackermann, Jan and Kulhanek, Jonas and Cai, Shengqu and Xu, Haofei and Pollefeys, Marc and Wetzstein, Gordon and Guibas, Leonidas and Peng, Songyou},
title = {CL-Splats: Continual Learning Gaussian Splatting with Local Optimization},
journal = {arXiv preprint},
year = {2025},
}