
Shape Generation and Completion Through Point-Voxel Diffusion

Project | Paper

Implementation of Shape Generation and Completion Through Point-Voxel Diffusion

Linqi Zhou, Yilun Du, Jiajun Wu

Requirements:

Make sure the following dependencies are installed.

python==3.6
pytorch==1.4.0
torchvision==0.5.0
cudatoolkit==10.1
matplotlib==2.2.5
tqdm==4.32.1
open3d==0.9.0
trimesh==3.7.12
scipy==1.5.1

Install PyTorchEMD by running:

cd metrics/PyTorchEMD
python setup.py install
cp build/**/emd_cuda.cpython-36m-x86_64-linux-gnu.so .
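To verify the build, a quick smoke test along the following lines should run without errors. This is only a sketch: it assumes the package exposes an earth_mover_distance function in an emd module, as in upstream PyTorchEMD; check metrics/PyTorchEMD and adjust the import to match your copy.

import torch
# Assumption: module `emd` with `earth_mover_distance`, as in upstream PyTorchEMD.
from emd import earth_mover_distance

p1 = torch.rand(4, 1024, 3).cuda()  # batch of 4 point clouds, 1024 points each
p2 = torch.rand(4, 1024, 3).cuda()
d = earth_mover_distance(p1, p2, transpose=False)  # one EMD value per example
print(d.shape)  # expected: torch.Size([4])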

The code was tested on Ubuntu with a Titan RTX GPU.

Data

For generation, we use the ShapeNet point-cloud dataset, which can be downloaded here.

For completion, we use the ShapeNet renderings provided by GenRe. We provide the script convert_cam_params.py to process the provided data.

For training the model on shape completion, we need the camera parameters for each view, which are not directly available. To obtain them, simply run

$ python convert_cam_params.py --dataroot DATA_DIR --mitsuba_xml_root XML_DIR

which will create ..._cam_params.npz in each provided data folder for each view.
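To double-check the output, you can load one of the generated files and inspect its contents. A minimal sketch follows; the path is a placeholder (the actual name follows the ..._cam_params.npz pattern above), and the stored keys depend on convert_cam_params.py, so list them rather than assuming names.

import numpy as np

# Placeholder path; substitute a real ..._cam_params.npz produced by the script.
params = np.load("DATA_DIR/VIEW_cam_params.npz")
print(params.files)                # names of the stored arrays
for key in params.files:
    print(key, params[key].shape)  # e.g. per-view camera matrices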

Pretrained models

Pretrained models can be downloaded here.

Training:

$ python train_generation.py --category car|chair|airplane

Please refer to the Python file for the recommended training parameters.

Testing:

$ python test_generation.py --category car|chair|airplane --model MODEL_PATH

Results

Some generative results are as follows.

Reference

@inproceedings{zhou2021shape,
  title={Shape Generation and Completion Through Point-Voxel Diffusion},
  author={Zhou, Linqi and Du, Yilun and Wu, Jiajun},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2021}
}

Acknowledgement

For any questions related to the code and experiment settings, please contact Linqi (Alex) Zhou (alexzhou907@gmail.com). For questions related to the model and algorithm in the paper, please contact Tian Han (hantian@ucla.edu). Thanks to Tian Han and Erik Nijkamp for their collaboration and guidance.