
Shape As Points (SAP)

Paper | Project Page | Short Video (6 min) | Long Video (12 min)

This repository contains the implementation of the paper:

Shape As Points: A Differentiable Poisson Solver
Songyou Peng, Chiyu "Max" Jiang, Yiyi Liao, Michael Niemeyer, Marc Pollefeys and Andreas Geiger
NeurIPS 2021 (Oral)

If you find our code or paper useful, please consider citing

@inproceedings{Peng2021SAP,
 author    = {Peng, Songyou and Jiang, Chiyu "Max" and Liao, Yiyi and Niemeyer, Michael and Pollefeys, Marc and Geiger, Andreas},
 title     = {Shape As Points: A Differentiable Poisson Solver},
 booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
 year      = {2021}
}


Installation

First, make sure that you have all dependencies in place. The simplest way to do so is to use Anaconda.

You can create an anaconda environment called sap using:

conda env create -f environment.yml
conda activate sap

Next, you should install PyTorch3D (>= 0.5) yourself by following the official instructions.

git clone https://github.com/facebookresearch/pytorch3d.git
cd pytorch3d
module load compilers  # only needed on HPC clusters that use environment modules; skip otherwise
pip install -e .
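After installing, you can check that the version requirement (>= 0.5) is met. The helper below is purely illustrative (it is not part of this repo or of PyTorch3D) and simply compares dotted version strings:

```python
def _as_tuple(version: str) -> tuple:
    # Turn "0.7.4" into (0, 7, 4); ignore anything past the third component.
    return tuple(int(p) for p in version.split(".")[:3])

def meets_min_version(installed: str, minimum: str = "0.5") -> bool:
    """Illustrative check: is the installed version at least the minimum?"""
    return _as_tuple(installed) >= _as_tuple(minimum)

# After installation you could run, for example:
#   import pytorch3d
#   assert meets_min_version(pytorch3d.__version__)
print(meets_min_version("0.7.4"))  # True
print(meets_min_version("0.4.0"))  # False
```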

Demo - Quick Start

First, run the script to get the demo data:

bash scripts/download_demo_data.sh

Optimization-based 3D Surface Reconstruction

You can now quickly test our code on the data shown in the teaser. To this end, simply run:

python optim_hierarchy.py configs/optim_based/teaser.yaml

This script should create a folder out/demo_optim where the output meshes and the optimized oriented point clouds at different grid resolutions are stored.

To visualize the optimization process on the fly, you can set o3d_show: True in configs/optim_based/teaser.yaml.
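For reference, the change is a single key in the config; the excerpt below is illustrative and omits all other fields (see the shipped configs/optim_based/teaser.yaml for the full file):

```yaml
# configs/optim_based/teaser.yaml (excerpt)
# When True, an Open3D window visualizes the point cloud as it is optimized.
o3d_show: True
```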

Learning-based 3D Surface Reconstruction

You can also test SAP on another application: reconstructing from unoriented point clouds with either large noise or outliers using a learned network.

For the point clouds with large noise as shown above, you can run:

python generate.py configs/learning_based/demo_large_noise.yaml

The results can be found in out/demo_shapenet_large_noise/generation/vis.

As for the point clouds with outliers, you can run:

python generate.py configs/learning_based/demo_outlier.yaml

You can find the reconstruction in out/demo_shapenet_outlier/generation/vis.


Dataset

We use different datasets for our optimization-based and learning-based settings.

Dataset for Optimization-based Reconstruction

Here we consider the following dataset:

Please cite the corresponding papers if you use the data.

You can download the processed dataset (~200 MB) by running:

bash scripts/download_optim_data.sh

Dataset for Learning-based Reconstruction

We train and evaluate on ShapeNet. You can download the processed dataset (~220 GB) by running:

bash scripts/download_shapenet.sh

Afterwards, you should have the dataset in the data/shapenet_psr folder.
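As a quick sanity check that the download finished and landed where the code expects it, something like the following can be used (this helper is illustrative, not part of the repo):

```python
from pathlib import Path

def dataset_ready(root: str = "data/shapenet_psr") -> bool:
    """Return True if the dataset folder exists and contains at least one entry."""
    path = Path(root)
    return path.is_dir() and any(path.iterdir())
```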

Alternatively, you can also preprocess the dataset yourself. To this end, you can:

Usage for Optimization-based 3D Reconstruction

For our optimization-based setting, you can consider running with a coarse-to-fine strategy:

python optim_hierarchy.py configs/optim_based/CONFIG.yaml

We start from a grid resolution of 32^3, and increase to 64^3, 128^3 and finally 256^3.
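Conceptually, the hierarchy doubles the PSR grid resolution at each stage. A small sketch of that schedule (illustrative only, not the repo's actual code):

```python
def resolution_schedule(start: int = 32, stop: int = 256):
    """Yield grid resolutions, doubling at each coarse-to-fine stage."""
    res = start
    while res <= stop:
        yield res
        res *= 2

print(list(resolution_schedule()))  # [32, 64, 128, 256]
```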

Alternatively, you can also run on a single resolution with:

python optim.py configs/optim_based/CONFIG.yaml

You might need to modify the CONFIG.yaml accordingly.

Usage for Learning-based 3D Reconstruction

Mesh Generation

To generate meshes using a trained model, use

python generate.py configs/learning_based/CONFIG.yaml

where you replace CONFIG.yaml with the correct config file.

Use a pre-trained model

The easiest way is to use a pre-trained model. You can do this by using one of the config files with postfix _pretrained.

For example, for 3D reconstruction from point clouds with outliers using our model with 7x offsets, you can simply run:

python generate.py configs/learning_based/outlier/ours_7x_pretrained.yaml

The script will automatically download the pretrained model and run the generation. You can find the outputs in the out/.../generation_pretrained folders.

Note that these _pretrained config files are only for generation, not for training new models: if you use them for training, the model will be trained from scratch, but during inference our code will still use the pretrained model.

We provide the following pretrained models:



Evaluation

To evaluate a trained model, we provide the script eval_meshes.py. You can run it using:

python eval_meshes.py configs/learning_based/CONFIG.yaml

The script takes the meshes generated in the previous step and evaluates them using a standardized protocol. The output will be written to .pkl and .csv files in the corresponding generation folder that can be processed using pandas.
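For example, the .csv output can be post-processed with pandas along these lines (the column names below are made up for illustration; check the generated files for the actual metric names):

```python
import io
import pandas as pd

# Stand-in for a per-mesh metrics .csv written by eval_meshes.py;
# the real file lives in the corresponding generation folder.
csv_text = io.StringIO(
    "class,chamfer,normal_consistency\n"
    "chair,0.044,0.91\n"
    "table,0.052,0.89\n"
)
df = pd.read_csv(csv_text)
# Aggregate the per-mesh metrics, e.g. a mean over all evaluated meshes.
print(df[["chamfer", "normal_consistency"]].mean())
```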


Training

Finally, to train a new network from scratch, simply run:

python train.py configs/learning_based/CONFIG.yaml

For available training options, please take a look at configs/default.yaml.