TranSplat: Generalizable 3D Gaussian Splatting from Sparse Multi-View Images with Transformers

Chuanrui Zhang1* Yingshuang Zou1* Zhuoling Li2 Minmin Yi3 Haoqian Wang1 
1Tsinghua University  2The University of Hong Kong  3E-surfing Vision Technology Co., Ltd
* Equal Contribution 
Paper | Code (coming soon)

[Teaser figure: TranSplat architecture]

TL;DR

We present TranSplat, a transformer-based approach for generalizable 3D Gaussian Splatting from sparse multi-view images.

Abstract

Compared with previous 3D reconstruction methods such as NeRF, recent Generalizable 3D Gaussian Splatting (G-3DGS) methods demonstrate impressive efficiency even in the sparse-view setting. However, the promising reconstruction performance of existing G-3DGS methods relies heavily on accurate multi-view feature matching, which is quite challenging. In scenes with large non-overlapping areas between views or many visually similar regions, the matching performance of existing methods degrades and the reconstruction precision is limited. To address this problem, we develop a strategy that uses a predicted depth confidence map to guide accurate local feature matching. In addition, we propose to leverage the knowledge of existing monocular depth estimation models as a prior to boost depth estimation precision in non-overlapping areas between views. Combining the proposed strategies, we present a novel G-3DGS method named TranSplat, which achieves the best performance on both the RealEstate10K and ACID benchmarks while maintaining competitive speed and exhibiting strong cross-dataset generalization.
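To make the two ideas concrete, below is a minimal, hypothetical PyTorch sketch (not the released code) of how a predicted depth confidence map could gate between matching-based depth and a monocular depth prior. The tensor shapes, function name, and the simple linear blending rule are illustrative assumptions, not the paper's exact formulation.

```python
# Hypothetical sketch: confidence-guided fusion of matched depth and a monocular prior.
import torch
import torch.nn.functional as F

def confidence_guided_depth(match_scores, confidence, mono_depth, depth_candidates):
    """
    match_scores:     (B, D, H, W) multi-view matching scores per depth candidate
    confidence:       (B, 1, H, W) predicted depth confidence in [0, 1]
    mono_depth:       (B, 1, H, W) depth from a monocular prior, assumed scale-aligned
    depth_candidates: (D,) candidate depth values
    """
    # Softmax over depth candidates gives a per-pixel depth distribution.
    prob = F.softmax(match_scores, dim=1)                                   # (B, D, H, W)
    matched_depth = torch.einsum('bdhw,d->bhw', prob, depth_candidates).unsqueeze(1)

    # Where matching is confident, trust it; elsewhere fall back to the monocular prior
    # (e.g. in non-overlapping regions where cross-view matching is unreliable).
    fused_depth = confidence * matched_depth + (1.0 - confidence) * mono_depth
    return fused_depth

# Toy usage
B, D, H, W = 1, 32, 64, 64
scores = torch.randn(B, D, H, W)
conf = torch.rand(B, 1, H, W)
mono = torch.rand(B, 1, H, W) * 10.0
cands = torch.linspace(0.5, 10.0, D)
print(confidence_guided_depth(scores, conf, mono, cands).shape)  # torch.Size([1, 1, 64, 64])
```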

Architecture

[Figure: TranSplat architecture overview]
Overview of TranSplat. Our method takes multi-view images as input and first extracts image features and monocular depth priors. Next, a coarse-to-fine matching stage produces a geometry-consistent depth distribution for each view; specifically, we compute multi-view feature similarities with our proposed Depth-Aware Deformable Matching Transformer module. The Depth Refine U-Net then further refines the depth prediction. Finally, we predict pixel-wise 3D Gaussian parameters to render novel views.
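For orientation, the following is a high-level, hypothetical skeleton of this pipeline (features + monocular prior, coarse depth matching, depth refinement, pixel-wise Gaussian prediction). Every submodule is a single-convolution placeholder standing in for the real component named in the caption; channel counts and parameter layout are assumptions for illustration only.

```python
# Hypothetical pipeline skeleton; placeholder modules, not the actual TranSplat implementation.
import torch
import torch.nn as nn

class TranSplatSkeleton(nn.Module):
    def __init__(self, feat_dim=64, num_depth=32, gauss_params=3 + 3 + 4 + 3 + 1):
        super().__init__()
        self.backbone = nn.Conv2d(3, feat_dim, 3, padding=1)        # image feature extractor (placeholder)
        self.mono_prior = nn.Conv2d(3, 1, 3, padding=1)             # stands in for a frozen monocular depth model
        self.matcher = nn.Conv2d(feat_dim + 1, num_depth, 3, padding=1)   # stands in for the matching transformer
        self.depth_refine = nn.Conv2d(num_depth, 1, 3, padding=1)   # stands in for the Depth Refine U-Net
        self.gauss_head = nn.Conv2d(feat_dim + 1, gauss_params, 3, padding=1)  # pixel-wise Gaussian parameters

    def forward(self, views):                                       # views: (B, V, 3, H, W)
        B, V, _, H, W = views.shape
        imgs = views.flatten(0, 1)                                  # each view processed independently here
        feats = self.backbone(imgs)
        prior = self.mono_prior(imgs)
        depth_dist = self.matcher(torch.cat([feats, prior], dim=1))  # per-view depth distribution
        depth = self.depth_refine(depth_dist)                        # refined per-pixel depth
        gaussians = self.gauss_head(torch.cat([feats, depth], dim=1))  # e.g. means/scales/rotations/color/opacity
        return depth.view(B, V, 1, H, W), gaussians.view(B, V, -1, H, W)

# Toy usage
views = torch.rand(1, 2, 3, 64, 64)
depth, gaussians = TranSplatSkeleton()(views)
print(depth.shape, gaussians.shape)
```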

Comparisons with the State-of-the-art

We present qualitative comparisons with the following state-of-the-art models:

[Figure: qualitative comparisons with state-of-the-art methods]

Geometry Reconstruction

Our TranSplat generates impressive 3D Gaussian primitives, which we attribute to our high-quality depth estimation results.

[Figure: geometry reconstruction comparisons]

Cross-dataset Generalization

Our proposed TranSplat demonstrates significantly better generalization to out-of-distribution novel scenes.

[Figure: cross-dataset generalization comparisons]