# LION: Latent Point Diffusion Models for 3D Shape Generation

**NeurIPS 2022**

Xiaohui Zeng, Arash Vahdat, Francis Williams, Zan Gojcic, Or Litany, Sanja Fidler, Karsten Kreis

Paper | Project Page
## Update

- When opening an issue, please tag @ZENGXH so that I can respond faster!
## Install

- Dependencies:
    - CUDA 11.6
- Set up the environment by installing from the conda file:
  ```bash
  conda env create --name lion_env --file=env.yaml
  conda activate lion_env

  # Install some other packages
  pip install git+https://github.com/openai/CLIP.git

  # build some packages first (optional)
  python build_pkg.py
  ```
  Tested with conda version 22.9.0.
- Using Docker
    - build the docker image with `bash ./docker/build_docker.sh`
    - launch the docker container with `bash ./docker/run.sh`
## Demo

Run `python demo.py`; it will load the released text2shape model from Hugging Face and generate a chair point cloud. (Note: the checkpoint is not released yet; the files loaded in `demo.py` are not available at this point.)
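The exact output format of `demo.py` is not documented here; as a rough sketch, assuming the generated chair is saved as an `(N, 3)` NumPy array (the filename below is hypothetical), you could inspect and visualize it like this:

```python
# Minimal sketch, assuming demo.py saves the generated point cloud as an (N, 3)
# NumPy array; the filename below is hypothetical and may differ in practice.
import numpy as np
import matplotlib.pyplot as plt

pts = np.load('chair_pointcloud.npy')          # hypothetical output file
print(pts.shape)                               # e.g. (2048, 3): N points, xyz

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
ax.scatter(pts[:, 0], pts[:, 1], pts[:, 2], s=1)
plt.show()
```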
## Released checkpoint and samples

- will be released soon
- put the downloaded files under `./lion_ckpt/`
## Training

### data

- ShapeNet can be downloaded here.
- Put the downloaded data as `./data/ShapeNetCore.v2.PC15k`, or edit the `pointflow` entry in `./datasets/data_path.py` to point to your ShapeNet dataset path (see the sketch below).
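The exact structure of `./datasets/data_path.py` may differ, but conceptually the `pointflow` entry is just a mapping from a dataset name to the ShapeNet point-cloud root; a minimal sketch of such an entry:

```python
# Sketch of the kind of entry expected in ./datasets/data_path.py; the actual
# variable names and structure in the repository may differ.
DATASET_PATHS = {
    # 'pointflow' should point at the ShapeNetCore.v2.PC15k root directory
    'pointflow': './data/ShapeNetCore.v2.PC15k',
}
```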
### train VAE

- run `bash ./script/train_vae.sh $NGPU` (the released checkpoint is trained with `NGPU=4` on A100)
- if you want to use Comet to log the experiment, add a `.comet_api` file under the current folder and write the API key as `{"api_key": "${COMET_API_KEY}"}` in the `.comet_api` file (see the example below)
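For example, the `.comet_api` file can be created with a one-off snippet like the following (a minimal sketch; `YOUR_COMET_API_KEY` is a placeholder for your own key):

```python
# Write the Comet API key into the .comet_api file read for experiment logging.
# Replace YOUR_COMET_API_KEY with your actual key.
import json

with open('.comet_api', 'w') as f:
    json.dump({"api_key": "YOUR_COMET_API_KEY"}, f)
```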
### train diffusion prior

- requires the VAE checkpoint
- run `bash ./script/train_prior.sh $NGPU` (the released checkpoint is trained with `NGPU=8` across 2 nodes of V100s)
### evaluate a trained prior

- download the test data from here, unzip it and put it as `./datasets/test_data/`
- download the released checkpoint from above and run
  ```bash
  checkpoint="./lion_ckpt/unconditional/airplane/checkpoints/model.pt"
  bash ./script/eval.sh $checkpoint  # will take 1-2 hours
  ```
## Evaluate the samples with the 1-NNA metrics

- download the test data from here, unzip it and put it as `./datasets/test_data/`
- run `python ./script/compute_score.py` (a simplified sketch of the 1-NNA metric is shown below)
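For reference, 1-NNA measures how well a 1-nearest-neighbor classifier can tell generated shapes from reference shapes (50% is ideal). The sketch below is not the repository's implementation in `./script/compute_score.py`; it is a simplified version using a plain Chamfer distance:

```python
# Simplified 1-NNA sketch (not ./script/compute_score.py): for each shape in the
# union of generated and reference sets, check whether its nearest neighbor
# (under Chamfer distance, excluding itself) comes from the same set.
import numpy as np

def chamfer(a, b):
    # a: (N, 3), b: (M, 3) point clouds
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise
    return d.min(axis=1).mean() + d.min(axis=0).mean()

def one_nna(gen, ref):
    # gen, ref: lists of (N, 3) arrays; returns the 1-NNA score (ideal: 0.5)
    clouds = gen + ref
    labels = [0] * len(gen) + [1] * len(ref)   # 0 = generated, 1 = reference
    same_set = 0
    for i, x in enumerate(clouds):
        dists = [chamfer(x, y) if j != i else np.inf for j, y in enumerate(clouds)]
        nn = int(np.argmin(dists))
        same_set += int(labels[nn] == labels[i])
    return same_set / len(clouds)
```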
## Citation

```bibtex
@inproceedings{zeng2022lion,
    title={LION: Latent Point Diffusion Models for 3D Shape Generation},
    author={Xiaohui Zeng and Arash Vahdat and Francis Williams and Zan Gojcic and Or Litany and Sanja Fidler and Karsten Kreis},
    booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
    year={2022}
}
```