
Shape Generation and Completion Through Point-Voxel Diffusion

Project | Paper

Implementation of Shape Generation and Completion Through Point-Voxel Diffusion

Linqi Zhou, Yilun Du, Jiajun Wu

Requirements:

Install Python environment

module load conda
module load artifactory
mamba env create --file environment.yml
# mamba env update --file environment.yml
conda activate PVD
module load gcc/11.2.0
module load mpfr/4.0.2

Install PyTorchEMD by

cd metrics/PyTorchEMD
python setup.py install
cp build/**/emd_cuda.cpython-*-x86_64-linux-gnu.so .

The code was tested on Ubuntu with a Titan RTX.

Data

For generation, we use the ShapeNet point clouds, which can be downloaded here, or found at /data/users/lfainsin/ShapeNetCore.v2.PC15k.zip.
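As a minimal sketch of how these data might be consumed, the snippet below loads one point cloud with NumPy and randomly subsamples it. It assumes each shape is stored as a `.npy` array of xyz coordinates (here taken to be 15,000 points, matching the "PC15k" name); the file name and subsample size are illustrative, not part of the repository's dataset code.

```python
import numpy as np

def load_point_cloud(path, n_points=2048):
    """Load a .npy point cloud and randomly subsample n_points from it."""
    points = np.load(path)  # assumed (N, 3) array of xyz coordinates
    idx = np.random.choice(len(points), n_points, replace=False)
    return points[idx]

# Demo with a synthetic cloud standing in for a real ShapeNet file:
np.save("demo_shape.npy", np.random.rand(15000, 3).astype(np.float32))
sample = load_point_cloud("demo_shape.npy")
print(sample.shape)  # (2048, 3)
```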

For completion, we use the ShapeNet renderings provided by GenRe. We provide the script convert_cam_params.py to process the provided data.

Training the model on shape completion requires camera parameters for each view, which are not directly available. To obtain them, simply run

$ python convert_cam_params.py --dataroot DATA_DIR --mitsuba_xml_root XML_DIR

which will create ..._cam_params.npz in each provided data folder for each view.
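The generated files are standard NumPy archives, so they can be inspected with `numpy.load`. The sketch below shows one way to read them; note that the array names used here ("extrinsic", "intrinsic") are illustrative assumptions for the demo, not necessarily the actual keys written by convert_cam_params.py.

```python
import numpy as np

def load_cam_params(path):
    """Read an .npz archive into a plain dict of arrays."""
    with np.load(path) as data:
        return {key: data[key] for key in data.files}

# Round-trip demo with dummy camera matrices (hypothetical key names):
np.savez("view_0001_cam_params.npz",
         extrinsic=np.eye(4), intrinsic=np.eye(3))
params = load_cam_params("view_0001_cam_params.npz")
print(sorted(params))  # ['extrinsic', 'intrinsic']
```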

Pretrained models

Pretrained models can be downloaded here, or found at /data/users/lfainsin/PVD/checkpoints.

Training:

$ python train_generation.py --category car|chair|airplane

Please refer to the Python file for the optimal training parameters.

Testing:

$ python test_generation.py --category car|chair|airplane --model MODEL_PATH

Results

Some generation and completion results are as follows.

Multimodal completion on a ShapeNet chair.

Multimodal completion on PartNet.

Multimodal completion on two Redwood 3DScan chairs.

Reference

@inproceedings{Zhou_2021_ICCV,
  author    = {Zhou, Linqi and Du, Yilun and Wu, Jiajun},
  title     = {3D Shape Generation and Completion Through Point-Voxel Diffusion},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  month     = {October},
  year      = {2021},
  pages     = {5826-5835}
}

Acknowledgement

For any questions related to the code and experiment settings, please contact Linqi Zhou and Yilun Du.