NeuSDFusion: A Spatial-Aware Generative Model for 3D Shape Completion, Reconstruction, and Generation

1Australian National University, 2Tencent XR Vision Labs
3Shanghai Jiao Tong University, 4The University of Tokyo
ECCV 2024

*The contributions of Ruikai Cui, Han Yan, and Zhennan Wu were made during internships at Tencent XR Vision Labs

Corresponding author

Abstract

3D shape generation aims to produce innovative 3D content adhering to specific conditions and constraints. Existing methods often decompose 3D shapes into a sequence of localized components, treating each element in isolation without considering spatial consistency. As a result, these approaches exhibit limited versatility in 3D data representation and shape generation, hindering their ability to generate highly diverse 3D shapes that comply with the specified constraints. In this paper, we introduce a novel spatial-aware 3D shape generation framework that leverages 2D plane representations for enhanced 3D shape modeling. To ensure spatial coherence and reduce memory usage, we incorporate a hybrid shape representation technique that directly learns a continuous signed distance field representation of the 3D shape using orthogonal 2D planes. Additionally, we meticulously enforce spatial correspondences across distinct planes using a transformer-based autoencoder structure, promoting the preservation of spatial relationships in the generated 3D shapes. This yields an algorithm that consistently outperforms state-of-the-art 3D shape generation methods on various tasks, including unconditional shape generation, multi-modal shape completion, single-view reconstruction, and text-to-shape synthesis.
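To make the orthogonal-plane idea from the abstract concrete, the sketch below shows one common way to query a continuous signed distance field from three 2D feature planes (XY, XZ, YZ) with a small shared MLP decoder. This is a minimal illustrative example, not the authors' released implementation: the class name `TriplaneSDF`, the feature dimension, the plane resolution, and the decoder sizes are all assumptions chosen for readability.

```python
# Minimal sketch (not the authors' code): querying a continuous SDF from three
# orthogonal 2D feature planes (XY, XZ, YZ) with a small shared MLP decoder.
# All module names and sizes below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TriplaneSDF(nn.Module):
    def __init__(self, feat_dim=32, resolution=128, hidden=128):
        super().__init__()
        # Three learnable orthogonal feature planes: XY, XZ, YZ.
        self.planes = nn.Parameter(torch.randn(3, feat_dim, resolution, resolution) * 0.01)
        # Lightweight MLP mapping concatenated plane features to a signed distance.
        self.decoder = nn.Sequential(
            nn.Linear(3 * feat_dim, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, hidden), nn.ReLU(inplace=True),
            nn.Linear(hidden, 1),
        )

    def forward(self, xyz):
        # xyz: (N, 3) query points in [-1, 1]^3.
        coords = [xyz[:, [0, 1]], xyz[:, [0, 2]], xyz[:, [1, 2]]]  # project onto XY, XZ, YZ
        feats = []
        for plane, uv in zip(self.planes, coords):
            grid = uv.view(1, -1, 1, 2)                      # (1, N, 1, 2) sampling grid
            f = F.grid_sample(plane.unsqueeze(0), grid,      # bilinear lookup on the plane
                              mode='bilinear', align_corners=True)
            feats.append(f.squeeze(0).squeeze(-1).t())       # (N, feat_dim)
        return self.decoder(torch.cat(feats, dim=-1))        # (N, 1) signed distance values

# Example query on random points in the unit cube:
# sdf = TriplaneSDF(); values = sdf(torch.rand(1024, 3) * 2 - 1)
```

In the paper's framework the planes are produced by a transformer-based autoencoder that enforces spatial correspondence across the three views; the sketch only covers the plane-sampling and SDF-decoding step.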

BibTeX


        @inproceedings{cui2024neusdfusion,
          title={Neusdfusion: A spatial-aware generative model for 3d shape completion, reconstruction, and generation},
          author={Cui, Ruikai and Liu, Weizhe and Sun, Weixuan and Wang, Senbo and Shang, Taizhang and Li, Yang and Song, Xibin and Yan, Han and Wu, Zhennan and Chen, Shenzhou and others},
          booktitle={European Conference on Computer Vision},
          year={2024},
          organization={Springer}
        }