REALM: An MLLM-Agent Framework for Open-World Reasoning Segmentation and Editing on 3D Gaussian Splatting

Changyue Shi*, Minghao Chen*, Yiping Mao, Chuxiao Yang,
Xinyuan Hu, Zhijie Wang, Jiajun Ding†, Zhou Yu
* Equal Contribution † Corresponding Author

🔧 Framework Overview

REALM Pipeline

TL;DR: REALM addresses a key challenge in 3D understanding: 3D reasoning segmentation. REALM also supports language-guided 3D editing tasks, including removal, replacement, and stylization.

👑 Abstract

Bridging the gap between complex human instructions and precise 3D object grounding remains a significant challenge in vision and robotics. Existing 3D segmentation methods often struggle to interpret ambiguous, reasoning-based instructions, while 2D vision-language models that excel at such reasoning lack intrinsic 3D spatial understanding. In this paper, we introduce REALM, an innovative MLLM-agent framework that enables open-world reasoning-based segmentation without requiring extensive 3D-specific post-training. We perform segmentation directly on 3D Gaussian Splatting representations, capitalizing on their ability to render photorealistic novel views that are highly suitable for MLLM comprehension. As directly feeding one or more rendered views to the MLLM can lead to high sensitivity to viewpoint selection, we propose a novel Global-to-Local Spatial Grounding strategy. Specifically, multiple global views are first fed into the MLLM agent in parallel for coarse-level localization, aggregating responses to robustly identify the target object. Then, several close-up novel views of the object are synthesized to perform fine-grained local segmentation, yielding accurate and consistent 3D masks. Extensive experiments show that REALM achieves remarkable performance in interpreting both explicit and implicit instructions across LERF, 3D-OVS, and our newly introduced REALM3D benchmarks. Furthermore, our agent framework seamlessly supports a range of 3D interaction tasks, including object removal, replacement, and style transfer, demonstrating its practical utility and versatility.
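The Global-to-Local Spatial Grounding strategy described above can be sketched as a two-stage loop: per-view MLLM queries whose answers are aggregated by voting, followed by close-up segmentation of the winning target. This is a minimal illustrative sketch only; the helper names (`query_mllm_view`, `render_close_ups`, `segment_local_view`) are hypothetical stand-ins, not REALM's actual API.

```python
# Minimal sketch of the Global-to-Local Spatial Grounding loop.
# All callables passed in are hypothetical stand-ins for REALM's
# MLLM agent, 3DGS renderer, and 2D segmenter.
from collections import Counter

def global_to_local_grounding(global_views, instruction, query_mllm_view,
                              render_close_ups, segment_local_view):
    # Stage 1 (global): query the MLLM agent on each global view and
    # collect one candidate object identification per view.
    candidates = [query_mllm_view(view, instruction) for view in global_views]
    # Aggregate per-view responses by majority vote to robustly pick
    # the target despite viewpoint sensitivity of any single view.
    target, _count = Counter(candidates).most_common(1)[0]
    # Stage 2 (local): synthesize close-up novel views of the target
    # and segment each one; these per-view masks are then lifted back
    # onto the Gaussians to form a consistent 3D mask.
    masks = [segment_local_view(view, target)
             for view in render_close_ups(target)]
    return target, masks
```

The vote over parallel global queries is what makes the coarse localization robust: a single misleading viewpoint cannot override the consensus of the other views.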

Video (Open on YouTube if it does not play here)

🏃 We propose REALM, an MLLM-Agent framework for 3D reasoning segmentation. (YouTube Link: Video)

Main Results on LERF Dataset

Main Results on 3D-OVS Dataset

Benchmark: Our REALM3D Dataset

REALM Pipeline

We propose the REALM3D dataset, which comprises 100+ scenes and 1k+ implicit prompt-mask pairs.

Comparison Results on REALM3D

REALM Pipeline

REALM outperforms previous methods on the REALM3D benchmark.

Language-Guided 3D Editing

REALM Pipeline

Once the object is grounded, we can perform a wide range of 3D editing tasks.
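Once the grounded 3D mask selects a subset of Gaussians, removal reduces to filtering thoseAussians out of the scene, and replacement to inserting new ones in their place. The sketch below is illustrative only; `remove_object` and the list-of-Gaussians representation are assumptions, not the repository's actual editing interface.

```python
# Hypothetical sketch: object removal on a Gaussian Splatting scene.
# `gaussians` is any sequence of per-Gaussian records; `target_ids`
# are the indices selected by the grounded 3D mask.
def remove_object(gaussians, target_ids):
    """Drop every Gaussian whose index falls inside the grounded mask."""
    drop = set(target_ids)
    return [g for i, g in enumerate(gaussians) if i not in drop]
```

Replacement and stylization follow the same pattern: the mask isolates the target Gaussians, and the edit is applied only to that subset while the rest of the scene is left untouched.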

BibTeX