Adding pretrained weights S3DIS
This commit is contained in:
parent
3d683b6bd6
commit
d1bb1ca36e
@@ -1,5 +1,48 @@
## Test a pretrained network
## S3DIS Pretrained Models
### Models
We provide pretrained weights for the S3DIS dataset. The raw weights come with a parameter file describing the architecture and network hyperparameters, so the code can load the network automatically.
The instructions to run these models are in the S3DIS documentation, section [Test the trained model](./doc/scene_segmentation_guide.md#test-the-trained-model).
| Name (link) | KPConv Type | Description | Score |
|:-------------|:-------------:|:-----|:-----:|
| [Light_KPFCNN](https://drive.google.com/file/d/14sz0hdObzsf_exxInXdOIbnUTe0foOOz/view?usp=sharing) | rigid | A network with small `in_radius` for light GPU consumption (~8GB) | 65.4% |
| [Heavy_KPFCNN](https://drive.google.com/file/d/1ySQq3SRBgk2Vt5Bvj-0N7jDPi0QTPZiZ/view?usp=sharing) | rigid | A network with better performance but requiring a bigger GPU (>18GB). | 66.4% |
### Instructions
1. Unzip and place the folder in your `results` folder.
2. In the test script `test_any_model.py`, set the variable `chosen_log` to the path where you placed the folder.
3. Run the test script:
        python3 test_any_model.py
4. You will see the performance (on the subsampled input clouds) increase as the test goes on.
        Confusion on sub clouds
        65.08 | 92.11 98.40 81.83 0.00 18.71 55.41 68.65 90.93 79.79 74.83 65.31 63.41 56.62
5. After a few minutes, the script will reproject the results from the subsampled input clouds to the real data and give you the real score.
        Reproject Vote #9
        Done in 2.6 s

        Confusion on full clouds
        Done in 2.1 s

        --------------------------------------------------------------------------------------
        65.38 | 92.62 98.39 81.77 0.00 18.87 57.80 67.93 91.52 80.27 74.24 66.14 64.01 56.42
        --------------------------------------------------------------------------------------
6. The test script creates a folder `test/name-of-your-log`, where it saves the predictions, potentials, and probabilities per class. You can load them with CloudCompare for visualization.
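As a sanity check, the leading number on each score line above is simply the mean of the 13 per-class IoUs that follow it (S3DIS has 13 classes). A minimal sketch using the sub-cloud score line from step 4:

```python
# The first value printed on a score line is the mean IoU (mIoU),
# i.e. the average of the 13 per-class IoUs that follow it.
ious = [92.11, 98.40, 81.83, 0.00, 18.71, 55.41, 68.65,
        90.93, 79.79, 74.83, 65.31, 63.41, 56.62]
miou = sum(ious) / len(ious)
print(round(miou, 2))  # 65.08
```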
TODO
@@ -221,7 +221,7 @@ def compare_trainings(list_of_paths, list_of_labels=None):
print(path)
-        if ('val_IoUs.txt' in [f.decode('ascii') for f in listdir(path)]) or ('val_confs.txt' in [f.decode('ascii') for f in listdir(path)]):
+        if ('val_IoUs.txt' in [f for f in listdir(path)]) or ('val_confs.txt' in [f for f in listdir(path)]):
config = Config()
config.load(path)
else:
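The hunk above drops the `.decode('ascii')` calls: in Python 3, `os.listdir` returns `str` entries when given a `str` path (`bytes` entries only come back for a `bytes` path), so decoding is unnecessary here, assuming `path` is a `str`. A small self-contained check:

```python
import os
import tempfile

# With a str path, os.listdir returns str entries directly in Python 3,
# so no .decode('ascii') is needed before comparing with 'val_IoUs.txt'.
d = tempfile.mkdtemp()
open(os.path.join(d, 'val_IoUs.txt'), 'w').close()
entries = os.listdir(d)
print(all(isinstance(f, str) for f in entries))  # True
print('val_IoUs.txt' in entries)  # True
```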
@@ -95,10 +95,10 @@ if __name__ == '__main__':
# > 'last_XXX': Automatically retrieve the last trained model on dataset XXX
# > '(old_)results/Log_YYYY-MM-DD_HH-MM-SS': Directly provide the path of a trained model
-chosen_log = 'results/Log_2020-04-05_19-19-20'  # => ModelNet40
+chosen_log = 'results/Light_KPFCNN'
# Choose the index of the checkpoint to load OR None if you want to load the current checkpoint
-chkp_idx = None
+chkp_idx = -1
# Choose to test on validation or test split
on_val = True
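As the comment above says, `chkp_idx = None` loads the current checkpoint while an integer index picks one of the saved snapshots, so `-1` selects the most recent one. A hypothetical sketch of that convention (file names are assumptions, not necessarily the repository's actual layout):

```python
# Hypothetical illustration of the chkp_idx convention: None -> the
# running 'current' checkpoint, an integer -> that index among the
# sorted snapshot files (so -1 is the latest). File names are assumed.
def pick_checkpoint(snapshots, chkp_idx=None):
    if chkp_idx is None:
        return 'current_chkp.tar'
    return sorted(snapshots)[chkp_idx]

print(pick_checkpoint(['chkp_0100.tar', 'chkp_0300.tar', 'chkp_0200.tar'], -1))
# chkp_0300.tar
```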
@@ -360,10 +360,10 @@ class ModelTester:
proj_probs = []
for i, file_path in enumerate(test_loader.dataset.files):
-print(i, file_path, test_loader.dataset.test_proj[i].shape, self.test_probs[i].shape)
+# print(i, file_path, test_loader.dataset.test_proj[i].shape, self.test_probs[i].shape)
-print(test_loader.dataset.test_proj[i].dtype, np.max(test_loader.dataset.test_proj[i]))
-print(test_loader.dataset.test_proj[i][:5])
+# print(test_loader.dataset.test_proj[i].dtype, np.max(test_loader.dataset.test_proj[i]))
+# print(test_loader.dataset.test_proj[i][:5])
# Reproject probs on the evaluation points
probs = self.test_probs[i][test_loader.dataset.test_proj[i], :]
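The fancy-indexing line above is the whole reprojection step: for each point of the full cloud, `test_proj[i]` holds the index of its corresponding subsampled point, so indexing the subsampled probabilities with it copies each prediction onto the original points. A toy sketch with made-up numbers:

```python
import numpy as np

# Toy illustration of the reprojection: class probabilities computed on
# 3 subsampled points are spread onto 5 original points via precomputed
# projection indices (one subsampled index per full-cloud point).
sub_probs = np.array([[0.9, 0.1],
                      [0.2, 0.8],
                      [0.6, 0.4]])        # (n_sub, n_classes)
test_proj = np.array([0, 0, 1, 2, 1])     # sub index for each full point
full_probs = sub_probs[test_proj, :]      # (n_full, n_classes)
print(full_probs.shape)  # (5, 2)
```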