
LION: Latent Point Diffusion Models for 3D Shape Generation

NeurIPS 2022

Install

  • Dependencies:

    • CUDA 11.6
  • Set up the environment: install from the conda file

        conda env create --name lion_env --file=env.yaml 
        conda activate lion_env 
    
        # Install some other packages 
        pip install git+https://github.com/openai/CLIP.git 
    
        # build some packages first (optional)
        python build_pkg.py
    

    Tested with conda version 22.9.0

Demo

Run python demo.py; it will load the released text2shape model from Hugging Face and generate a chair point cloud. (Note: the checkpoint is not released yet, so the files loaded by demo.py are not available at this point.)
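
Once a checkpoint is available and demo.py produces a sample, a quick way to inspect the result is to plot it. The snippet below is an illustrative sketch, not part of the repository: it assumes the sampled chair is available as an (N, 3) NumPy array saved to chair.npy, which is a hypothetical filename and output format.

    import numpy as np
    import matplotlib.pyplot as plt

    # Assumption: the sampled point cloud was saved as an (N, 3) array;
    # "chair.npy" is a hypothetical filename, not produced by demo.py as-is.
    points = np.load("chair.npy")

    fig = plt.figure()
    ax = fig.add_subplot(projection="3d")
    ax.scatter(points[:, 0], points[:, 1], points[:, 2], s=1)
    ax.set_title("LION text2shape sample")
    plt.show()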

Released checkpoint and samples

  • will be released soon
  • put the downloaded file under ./lion_ckpt/

Training

data

  • ShapeNet can be downloaded here.
  • Put the downloaded data at ./data/ShapeNetCore.v2.PC15k, or edit the pointflow entry in ./datasets/data_path.py to point to the ShapeNet dataset path (see the sketch below).
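
If you go the data_path.py route, the edit is a single path. The snippet below is only a sketch of what such an entry could look like; the dict name and layout are assumptions and the actual structure of ./datasets/data_path.py may differ. Only the pointflow key comes from the instructions above.

    # Hypothetical sketch of the pointflow entry in ./datasets/data_path.py;
    # the real file's structure may differ.
    dataset_path = {
        # Point this at wherever ShapeNetCore.v2.PC15k actually lives.
        "pointflow": "/path/to/ShapeNetCore.v2.PC15k",
    }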

train VAE

  • run bash ./script/train_vae.sh $NGPU (the released checkpoint was trained with NGPU=4 on A100 GPUs)
  • if you want to use Comet to log the experiment, add a .comet_api file under the current folder and write the API key into it as {"api_key": "${COMET_API_KEY}"} (see the sketch below)
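
One way to create that file, assuming the key is exported in the COMET_API_KEY environment variable (the JSON format is taken from the instructions above; reading the key from the environment is just one option):

    import json
    import os

    # Write the Comet API key into .comet_api in the current folder,
    # reading it from the COMET_API_KEY environment variable.
    with open(".comet_api", "w") as f:
        json.dump({"api_key": os.environ["COMET_API_KEY"]}, f)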

train diffusion prior

  • requires the VAE checkpoint
  • run bash ./script/train_prior.sh $NGPU (the released checkpoint was trained with NGPU=8 across 2 nodes on V100 GPUs)

evaluate a trained prior

  • download the test data from here, unzip it, and put it at ./datasets/test_data/
  • download the released checkpoint from above
    checkpoint="./lion_ckpt/unconditional/airplane/checkpoints/model.pt"
    bash ./script/eval.sh $checkpoint  # will take 1-2 hours

Evaluate the samples with the 1-NNA metrics

  • download the test data from here, unzip it, and put it at ./datasets/test_data/
  • run python ./script/compute_score.py
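
For reference, 1-NNA (1-nearest-neighbour accuracy) measures how well a leave-one-out 1-NN classifier can tell generated shapes from reference shapes; 50% means the two sets are indistinguishable, so values closer to 50% are better. The sketch below is an illustrative NumPy version based on Chamfer distance, not the implementation in ./script/compute_score.py.

    import numpy as np

    def chamfer_distance(a, b):
        # a: (N, 3), b: (M, 3) point clouds
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M)
        return d.min(axis=1).mean() + d.min(axis=0).mean()

    def one_nna(samples, references):
        # samples, references: lists of (N, 3) arrays (generated vs. real)
        clouds = samples + references
        labels = np.array([0] * len(samples) + [1] * len(references))
        n = len(clouds)
        dist = np.full((n, n), np.inf)  # inf on the diagonal = leave-one-out
        for i in range(n):
            for j in range(i + 1, n):
                d = chamfer_distance(clouds[i], clouds[j])
                dist[i, j] = dist[j, i] = d
        nearest = dist.argmin(axis=1)
        return (labels[nearest] == labels).mean()  # 1-NNA, ideal value 0.5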

Citation

@inproceedings{zeng2022lion,
    title={LION: Latent Point Diffusion Models for 3D Shape Generation},
    author={Xiaohui Zeng and Arash Vahdat and Francis Williams and Zan Gojcic and Or Litany and Sanja Fidler and Karsten Kreis},
    booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
    year={2022}
}