MetaSDF: Meta-Learning Signed Distance Functions

NeurIPS 2020


Vincent Sitzmann*, Eric R. Chan*, Richard Tucker,
Noah Snavely, Gordon Wetzstein


Neural implicit shape representations are an emerging paradigm that offers many potential benefits over conventional discrete representations, including memory efficiency at a high spatial resolution. Generalizing across shapes with such neural implicit representations amounts to learning priors over the respective function space and enables geometry reconstruction from partial or noisy observations. Existing generalization methods rely on conditioning a neural network on a low-dimensional latent code that is either regressed by an encoder or jointly optimized in the auto-decoder framework. Here, we formalize learning of a shape space as a meta-learning problem and leverage gradient-based meta-learning algorithms to solve this task. We demonstrate that this approach performs on par with auto-decoder based approaches while being an order of magnitude faster at test-time inference. We further demonstrate that the proposed gradient-based method outperforms encoder-decoder based methods that leverage pooling-based set encoders.
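The core idea above — meta-learning an initialization that specializes to a new shape in a few gradient steps — can be illustrated with a toy first-order MAML sketch. This is a hypothetical minimal example on 1D "shapes" (sdf(x) = |x − c| for a point c), not the paper's implementation; the network size, learning rates, and first-order approximation are all illustrative assumptions:

```python
import numpy as np

# Toy first-order MAML for SDF fitting. All hyperparameters are
# illustrative; MetaSDF itself uses deeper networks and 3D samples.

rng = np.random.default_rng(0)

def init_params(hidden=32):
    """Two-layer ReLU MLP mapping a coordinate x to a signed distance."""
    return {
        "w1": rng.normal(0.0, 1.0, (hidden, 1)),
        "b1": np.zeros(hidden),
        "w2": rng.normal(0.0, 1.0 / np.sqrt(hidden), (1, hidden)),
        "b2": np.zeros(1),
    }

def forward(p, x):
    z = x @ p["w1"].T + p["b1"]      # (N, H) pre-activations
    h = np.maximum(z, 0.0)           # ReLU
    return h @ p["w2"].T + p["b2"], (z, h)

def mse_grads(p, x, t):
    """MSE loss and its parameter gradients via manual backprop."""
    y, (z, h) = forward(p, x)
    dy = 2.0 * (y - t) / x.shape[0]
    dh = dy @ p["w2"]
    dz = dh * (z > 0)
    g = {"w1": dz.T @ x, "b1": dz.sum(0),
         "w2": dy.T @ h, "b2": dy.sum(0)}
    return float(np.mean((y - t) ** 2)), g

def inner_adapt(p, x, t, alpha=0.01, steps=5):
    """Specialize the shared initialization with a few SGD steps."""
    p = {k: v.copy() for k, v in p.items()}
    for _ in range(steps):
        _, g = mse_grads(p, x, t)
        for k in p:
            p[k] -= alpha * g[k]
    return p

def meta_train(p, iters=300, beta=0.01):
    """First-order MAML: the outer step uses the query-set gradient
    at the adapted parameters (second-order terms dropped)."""
    for _ in range(iters):
        c = rng.uniform(-0.5, 0.5)                             # sample a "shape"
        xs = rng.uniform(-1, 1, (32, 1)); ts = np.abs(xs - c)  # support set
        xq = rng.uniform(-1, 1, (32, 1)); tq = np.abs(xq - c)  # query set
        p_task = inner_adapt(p, xs, ts)
        _, g = mse_grads(p_task, xq, tq)
        for k in p:
            p[k] -= beta * g[k]
    return p

# Test-time inference: a handful of gradient steps specializes the
# meta-learned initialization to an unseen shape.
meta = meta_train(init_params())
xs = np.linspace(-1, 1, 64).reshape(-1, 1)
ts = np.abs(xs - 0.3)
loss_before, _ = mse_grads(meta, xs, ts)
adapted = inner_adapt(meta, xs, ts)
loss_after, _ = mse_grads(adapted, xs, ts)
```

Because specialization is just a few inner-loop gradient steps, test-time inference avoids the per-shape latent-code optimization of the auto-decoder framework — which is where the order-of-magnitude speedup comes from.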

Reconstructing SDFs from Dense Samples


We can recover an SDF by supervising with dense, ground-truth samples of the signed distance function, as proposed in DeepSDF.
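Dense-sample supervision of this kind is commonly implemented with DeepSDF's clamped L1 loss, which concentrates network capacity near the surface. A minimal NumPy sketch, using the analytic SDF of a 2D unit circle as a stand-in for dense 3D samples and a noisy copy as a stand-in for network output:

```python
import numpy as np

def clamped_l1(pred, gt, delta=0.1):
    """DeepSDF-style clamped L1: distances are truncated to [-delta, delta],
    so only the region near the surface is supervised exactly."""
    clamp = lambda s: np.clip(s, -delta, delta)
    return float(np.mean(np.abs(clamp(pred) - clamp(gt))))

# Dense ground-truth samples of the SDF of a unit circle in 2D.
rng = np.random.default_rng(0)
pts = rng.uniform(-1.5, 1.5, (1024, 2))
gt = np.linalg.norm(pts, axis=1) - 1.0        # exact signed distance
pred = gt + rng.normal(0.0, 0.01, gt.shape)   # stand-in for a network's output
loss = clamped_l1(pred, gt)
```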

[Figure: shape reconstructions from dense SDF samples — Ground Truth vs. DeepSDF vs. MetaSDF]

Reconstructing SDFs from a Surface Point Cloud


Alternatively, we can recover an SDF from only a point cloud sampled from its zero-level set (the mesh surface) — the same input a PointNet encoder receives.
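With only zero-level-set samples, one common family of losses (used, e.g., in eikonal-regularized approaches such as IGR and SIREN; the exact loss here is illustrative, not necessarily the paper's) asks the predicted SDF to vanish on the point cloud and to have unit gradient norm in space. A sketch with a toy analytic SDF standing in for the network:

```python
import numpy as np

def sdf_circle(x, radius=1.0):
    """Toy analytic SDF (signed distance to a circle), standing in
    for a trained network's output."""
    return np.linalg.norm(x, axis=-1) - radius

def grad_norm(f, x, eps=1e-4):
    """Finite-difference magnitude of the spatial gradient of f at x."""
    g = np.stack([(f(x + eps * e) - f(x - eps * e)) / (2 * eps)
                  for e in np.eye(x.shape[-1])], axis=-1)
    return np.linalg.norm(g, axis=-1)

rng = np.random.default_rng(0)
theta = rng.uniform(0.0, 2.0 * np.pi, 256)
surface = np.stack([np.cos(theta), np.sin(theta)], axis=-1)  # surface point cloud
free = rng.uniform(-1.5, 1.5, (256, 2))                      # off-surface samples

# The SDF should vanish on the surface and satisfy |grad f| = 1 elsewhere.
loss_surface = float(np.mean(np.abs(sdf_circle(surface))))
loss_eikonal = float(np.mean((grad_norm(sdf_circle, free) - 1.0) ** 2))
```

Both terms are (near) zero for the exact SDF, which is what makes them usable as a training signal when no off-surface ground-truth distances are available.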

[Figure: shape reconstructions from a surface point cloud — Ground Truth vs. PointNet vs. MetaSDF]

Related Projects


Check out our related projects on the topic of implicit neural representations!

We propose a new neural network architecture for implicit neural representations that can accurately fit complex signals, such as room-scale SDFs, video, and audio, and allows us to supervise implicit representations via their gradients to solve boundary value problems!

A continuous, 3D-structure-aware neural scene representation that encodes both geometry and appearance, is supervised only in 2D via a neural renderer, and generalizes for 3D reconstruction from a single posed 2D image.

We demonstrate that the features learned by neural implicit scene representations are useful for downstream tasks, such as semantic segmentation, and propose a model that can learn to perform continuous 3D semantic segmentation on a class of objects (such as chairs) given only a single, 2D (!) semantic label map!

Paper


Bibtex


@inproceedings{sitzmann2019metasdf,
  author    = {Sitzmann, Vincent and Chan, Eric R. and Tucker, Richard and Snavely, Noah and Wetzstein, Gordon},
  title     = {MetaSDF: Meta-Learning Signed Distance Functions},
  booktitle = {Proc. NeurIPS},
  year      = {2020}
}