InstaInpaint
Instant 3D-Scene Inpainting with Masked Large Reconstruction Model

Shanghai Jiao Tong University, UC Merced, Singapore University of Technology and Design
Description

InstaInpaint can generate inpainted 3D scenes in 0.4 seconds and simultaneously supports background inpainting, object insertion, and multi-region inpainting.

Abstract

Recent advances in 3D scene reconstruction enable real-time viewing in virtual and augmented reality. To support interactive operations for greater immersion, such as moving or editing objects, 3D scene inpainting methods have been proposed to repair or complete the altered geometry. However, current approaches rely on lengthy and computationally intensive optimization, making them impractical for real-time or online applications.

We propose InstaInpaint, a reference-based feed-forward framework that produces an inpainted 3D scene from a 2D inpainting proposal within 0.4 seconds. We develop a self-supervised masked-finetuning strategy that enables training our custom large reconstruction model (LRM) on a large-scale dataset. Through extensive experiments, we analyze and identify several key designs that improve generalization, textural consistency, and geometric correctness. InstaInpaint achieves a 1000× speed-up over prior methods while maintaining state-of-the-art performance across two standard benchmarks. Moreover, we show that InstaInpaint generalizes well to flexible downstream applications such as object insertion and multi-region inpainting.
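The abstract describes a single feed-forward pass: masked context views plus one 2D inpainting proposal go into the LRM, which directly predicts the inpainted 3D scene with no per-scene optimization. The sketch below illustrates this data flow only; `lrm_forward` and the array layout are hypothetical stand-ins, not the paper's actual model or API.

```python
import numpy as np

def inpaint_scene_feedforward(context_views, inpainted_ref, masks, lrm_forward):
    """Illustrative sketch of a reference-based feed-forward pass.

    context_views: (N, H, W, 3) posed input images of the scene
    inpainted_ref: (H, W, 3) a single 2D inpainting proposal
    masks:         (N, H, W) regions to be replaced in each view
    lrm_forward:   hypothetical masked-LRM callable (any function here)
    """
    # Zero out the masked regions: the network never sees the original
    # content inside the mask, mirroring the masked training setup.
    masked = context_views * (1.0 - masks[..., None])
    # Append the 2D proposal as an extra reference view; the LRM fuses
    # it with the masked context in one forward pass (no optimization).
    tokens = np.concatenate([masked, inpainted_ref[None]], axis=0)
    return lrm_forward(tokens)  # e.g. a 3D representation of the scene
```

Because inference is a single network evaluation rather than an iterative fit, the runtime is a fixed forward-pass cost, which is what makes the reported 0.4-second latency possible.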

Pipeline Architecture & Mask Generation
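The self-supervised masked-finetuning strategy requires synthesizing training masks so the model learns to complete hidden regions from the remaining context, without any manually inpainted 3D ground truth. The paper's actual mask-generation scheme is not reproduced here; as a minimal hypothetical example, random axis-aligned box masks could look like this:

```python
import numpy as np

def random_box_mask(h, w, rng, min_frac=0.1, max_frac=0.4):
    """Hypothetical training-mask generator: one random box per view.

    During self-supervised masked finetuning, such masks hide part of
    each input view and the model is trained to reconstruct the hidden
    content, so ordinary multi-view data serves as supervision.
    """
    mh = int(h * rng.uniform(min_frac, max_frac))   # mask height
    mw = int(w * rng.uniform(min_frac, max_frac))   # mask width
    top = rng.integers(0, h - mh + 1)               # random placement
    left = rng.integers(0, w - mw + 1)
    mask = np.zeros((h, w), dtype=np.float32)
    mask[top:top + mh, left:left + mw] = 1.0        # 1 = region to inpaint
    return mask
```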

Quantitative Comparison

Comparison with Baseline

We provide qualitative comparisons with previous state-of-the-art 3D inpainting methods: MALD-NeRF, GScream, and InFusion. Please click the buttons to select scenes and viewing modes.

Scene Image

Original Scene


Text-Guided Object Insertion

Scene Image

Original Scene


Edited Scene


Depth

BibTeX


    @misc{you2025instainpaint,
      title={InstaInpaint: Instant 3D-Scene Inpainting with Masked Large Reconstruction Model},
      author={Junqi You and Chieh Hubert Lin and Weijie Lyu and Zhengbo Zhang and Ming-Hsuan Yang},
      year={2025},
      eprint={2506.10980},
      archivePrefix={arXiv},
      url={https://arxiv.org/abs/2506.10980},
    }