LION: Latent Point Diffusion Models for 3D Shape Generation

NeurIPS 2022

[Animation]

Update

  • Added the point cloud rendering code used for the paper figures; see utils/render_mitsuba_pc.py
  • When opening an issue, please tag @ZENGXH so that I can respond faster!

Install

  • Dependencies:

    • CUDA 11.6
  • Set up the environment: install from the conda file

        conda env create --name lion_env --file=env.yaml 
        conda activate lion_env 
    
        # Install some other packages 
        pip install git+https://github.com/openai/CLIP.git 
    
        # build some packages first (optional)
        python build_pkg.py
    

    Tested with conda version 22.9.0. (A quick environment sanity check is sketched after this list.)

  • Using Docker

    • Build the Docker image with bash ./docker/build_docker.sh
    • Launch the container with bash ./docker/run.sh
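
After installation, a quick sanity check can confirm that PyTorch sees the GPU. This is a minimal sketch (a hypothetical helper, not part of the repository), assuming env.yaml installs PyTorch with CUDA support:

    # sanity_check.py -- hypothetical helper, not part of the repo
    import torch

    print("torch version:", torch.__version__)
    print("CUDA available:", torch.cuda.is_available())
    if torch.cuda.is_available():
        print("device:", torch.cuda.get_device_name(0))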

Demo

Run python demo.py; it loads the released text2shape model from Hugging Face and generates a chair point cloud. (Note: the checkpoint is not released yet, so the files loaded in demo.py are not available at this point.)
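
To inspect a generated shape outside of demo.py, the sketch below writes a point cloud to an ASCII PLY file that most viewers can open. It assumes the sample comes back as an (N, 3) numpy array; the actual output format of demo.py may differ:

    import numpy as np

    def save_ply(points, path):
        # points: (N, 3) float array; writes a minimal ASCII PLY for quick inspection
        header = (
            "ply\nformat ascii 1.0\n"
            f"element vertex {len(points)}\n"
            "property float x\nproperty float y\nproperty float z\n"
            "end_header\n"
        )
        with open(path, "w") as f:
            f.write(header)
            for x, y, z in points:
                f.write(f"{x:.6f} {y:.6f} {z:.6f}\n")

    # dummy cloud for illustration; replace with the demo output
    save_ply(np.random.rand(2048, 3), "chair_sample.ply")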

Released checkpoint and samples

  • Will be released soon.
  • After downloading, verify the checksum with python ./script/check_sum.py ./lion_ckpt.zip (a manual alternative is sketched after this list).
  • Put the downloaded file under ./lion_ckpt/.
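
The repository ships ./script/check_sum.py for verification. If you prefer to compute a hash by hand, here is a minimal sketch; note that which hash the official script compares against is an assumption (MD5 is used here only for illustration):

    import hashlib

    def file_md5(path, chunk=1 << 20):
        # stream the file in 1 MB chunks to avoid loading it all into memory
        h = hashlib.md5()
        with open(path, "rb") as f:
            for block in iter(lambda: f.read(chunk), b""):
                h.update(block)
        return h.hexdigest()

    print(file_md5("./lion_ckpt.zip"))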

Training

data

  • ShapeNet can be downloaded here.
  • Put the downloaded data at ./data/ShapeNetCore.v2.PC15k, or edit the pointflow entry in ./datasets/data_path.py to point at your ShapeNet dataset path (a hypothetical sketch of that entry follows this list).
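
The ShapeNet location is read from the pointflow entry in ./datasets/data_path.py. Below is a purely hypothetical sketch of what that entry could look like; check the real file before editing, since its actual structure may differ:

    # hypothetical sketch of the pointflow entry in ./datasets/data_path.py
    DATASET_PATHS = {
        # should point at the folder containing the ShapeNet point clouds
        "pointflow": "./data/ShapeNetCore.v2.PC15k",
    }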

train VAE

  • Run bash ./script/train_vae.sh $NGPU (the released checkpoint was trained with NGPU=4 on A100 GPUs).
  • If you want to use Comet to log the experiment, add a .comet_api file under the current folder containing the API key as {"api_key": "${COMET_API_KEY}"}.

train diffusion prior

  • Requires the VAE checkpoint.
  • Run bash ./script/train_prior.sh $NGPU (the released checkpoint was trained with NGPU=8 across 2 nodes on V100 GPUs).

(Optional) monitor exp

  • (tested) comet-ml: add a .comet_api file under this LION folder; example contents (a small helper that writes this file is sketched after this list):
    {"api_key": "...", "project_name": "lion", "workspace": "..."}
  • (not tested) wandb: add a .wandb_api file and set the environment variable export USE_WB=1 before training; example contents:
    {"project": "...", "entity": "..."}
  • (not tested) tensorboard: set the environment variable export USE_TFB=1 before training.
  • See utils/utils.py for the details of the experiment logger; I usually use comet-ml for my experiments.
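
For convenience, a small sketch that writes the .comet_api file in the format shown above (the values are placeholders you need to replace with your own Comet credentials):

    import json

    # replace the placeholder values with your own Comet credentials
    comet_cfg = {"api_key": "...", "project_name": "lion", "workspace": "..."}
    with open(".comet_api", "w") as f:
        json.dump(comet_cfg, f)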

evaluate a trained prior

  • Download the test data (Table 1) from here, unzip it, and put it under ./datasets/test_data/.
  • Download the released checkpoint from above, then run:

        checkpoint="./lion_ckpt/unconditional/airplane/checkpoints/model.pt"
        bash ./script/eval.sh $checkpoint  # will take 1-2 hours

other test data

  • ShapeNet-Vol test data:
  • table 21 and table 20, point-flow test data

Evaluate the samples with the 1-NNA metrics

  • Download the test data from here, unzip it, and put it under ./datasets/test_data/.
  • Run python ./script/compute_score.py (Note: for the ShapeNet-Vol data and Tables 21 and 20, you need to set norm_box=True). A reference sketch of the 1-NNA metric follows this list.
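
For reference, below is a minimal numpy sketch of the 1-NNA metric (leave-one-out 1-nearest-neighbor accuracy between the generated and reference sets, here with a squared Chamfer distance). It illustrates the metric only; it is not the official ./script/compute_score.py implementation, which may normalize and batch the computation differently:

    import numpy as np

    def chamfer(a, b):
        # a: (N, 3), b: (M, 3); symmetric squared Chamfer distance
        d = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return d.min(axis=1).mean() + d.min(axis=0).mean()

    def one_nna(gen, ref):
        # gen, ref: lists of (N, 3) arrays; 0.5 means the sets are indistinguishable
        clouds = list(gen) + list(ref)
        labels = np.array([0] * len(gen) + [1] * len(ref))
        n = len(clouds)
        dist = np.full((n, n), np.inf)  # inf on the diagonal gives leave-one-out
        for i in range(n):
            for j in range(i + 1, n):
                dist[i, j] = dist[j, i] = chamfer(clouds[i], clouds[j])
        nearest = dist.argmin(axis=1)
        return float((labels[nearest] == labels).mean())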

Citation

@inproceedings{zeng2022lion,
    title={LION: Latent Point Diffusion Models for 3D Shape Generation},
    author={Xiaohui Zeng and Arash Vahdat and Francis Williams and Zan Gojcic and Or Litany and Sanja Fidler and Karsten Kreis},
    booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
    year={2022}
}