ProDyG enables online dynamic scene reconstruction using 3D Gaussian Splatting from unposed monocular videos.
Achieving truly practical dynamic 3D reconstruction requires online operation, global pose and map consistency, detailed appearance modeling, and the flexibility to handle both RGB and RGB-D inputs. However, existing SLAM methods typically just discard the dynamic parts or require RGB-D input, offline methods do not scale to long video sequences, and current transformer-based feedforward methods lack global consistency and appearance detail. We therefore achieve online dynamic scene reconstruction by disentangling the static and dynamic parts of the scene within a SLAM system. Poses are tracked robustly with a novel motion masking strategy, and the dynamic parts are reconstructed via a progressive adaptation of a Motion Scaffolds graph. Our method yields novel view renderings competitive with offline methods and tracking on par with state-of-the-art dynamic SLAM methods.
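To make the role of the motion mask in pose tracking concrete, here is a minimal sketch, not the paper's exact objective, of a motion-masked photometric tracking loss: the residual is evaluated only on pixels the mask marks as static, so moving objects cannot bias the pose estimate. The function name and the plain L1 residual are illustrative assumptions.

```python
import torch

def masked_tracking_loss(rendered, observed, static_mask):
    """L1 photometric tracking error restricted to static pixels (sketch).

    rendered:    (3, H, W) image rendered from the current pose estimate
    observed:    (3, H, W) observed frame
    static_mask: (H, W) bool tensor, True where the motion mask says static
    """
    residual = (rendered - observed).abs().sum(dim=0)  # per-pixel L1, (H, W)
    # Dynamic pixels are excluded, so they contribute no gradient to the pose.
    return residual[static_mask].mean()
```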
We achieve motion-agnostic online tracking by using residual flow to create keyframe-based coarse motion masks, from which we seed prompts for SAM2 to distill fine-grained per-frame masks. The static background is reconstructed by optimizing a static set of Gaussians against proxy depth maps. For dynamic reconstruction, we attach Gaussians to Motion Scaffolds, initialized by lifting 2D tracks to 3D, which encode a dense motion field. After a final geometric and photometric optimization, the Motion Scaffolds and dynamic Gaussians are extended temporally whenever a new batch of images arrives.
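As an illustration of how a Motion Scaffolds graph can encode a dense motion field, the sketch below advects a point by distance-weighted blending of the rigid motions of its K nearest scaffold nodes. This is a minimal stand-in assuming linear blend skinning with RBF weights; the function names, `K`, and `sigma` are hypothetical, and the actual interpolation scheme (e.g., dual-quaternion blending as in MoSca) may differ.

```python
import numpy as np

def blend_weights(x, node_pos, K=4, sigma=0.1):
    """RBF weights over the K nearest scaffold nodes for a query point x."""
    d = np.linalg.norm(node_pos - x, axis=-1)   # (N,) distances to all nodes
    idx = np.argsort(d)[:K]                     # indices of the K nearest
    w = np.exp(-d[idx] ** 2 / (2 * sigma ** 2))
    return idx, w / np.clip(w.sum(), 1e-8, None)

def warp_point(x, node_pos, node_R, node_t):
    """Warp a canonical point by blending its nearest nodes' rigid motions.

    x:        (3,)      point (e.g., a dynamic Gaussian mean) in canonical space
    node_pos: (N, 3)    scaffold node positions in canonical space
    node_R:   (N, 3, 3) per-node rotations at the target timestep
    node_t:   (N, 3)    per-node translations at the target timestep
    """
    idx, w = blend_weights(x, node_pos)
    # Each node transports x rigidly about itself; blend the K candidates.
    moved = (node_R[idx] @ (x - node_pos[idx])[..., None])[..., 0]
    moved = moved + node_pos[idx] + node_t[idx]          # (K, 3)
    return (w[:, None] * moved).sum(axis=0)              # blended position
```

In this picture, extending the scaffold temporally amounts to appending new per-node transforms for each incoming batch of frames.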
*Qualitative comparison of novel view renderings: DynOMo | SoM | MoSca | ProDyG (Ours) | Ground Truth.*
*Qualitative results: Ground Truth | Rerendering | NVS (novel view synthesis).*
@article{chen2025prodyg,
title={ProDyG: Progressive Dynamic Scene Reconstruction via Gaussian Splatting from Monocular Videos},
author={Chen, Shi and Sandstr{\"o}m, Erik and Lombardi, Sandro and Li, Siyuan and Oswald, Martin R},
journal={arXiv preprint arXiv:2509.17864},
year={2025}
}