diff --git a/doc/new_dataset_guide.md b/doc/new_dataset_guide.md
deleted file mode 100644
index 8207e42..0000000
--- a/doc/new_dataset_guide.md
+++ /dev/null
@@ -1,76 +0,0 @@
-
-
-## Creating your own dataset
-
-### Overview of the pipeline
-
-The training script initializes a number of variables and classes before starting the training on a dataset. Here are
-the initialization steps, sketched in code after this list:
-
-* Create an instance of the `Config` class. This instance will hold all the parameters defining the network.
-
-* Create an instance of your dataset class. This instance will handle the data and the input pipeline. **This is the
-class you have to implement to train our network on your own data**.
-
-* Load the input point clouds in memory. Most datasets fit on a computer with 32 GB of RAM. If you don't have enough
-memory for your dataset, you will have to redesign the input pipeline.
-
-* Initialize the TensorFlow input pipeline, which is a `tf.data.Dataset` object that will create and feed the input
-batches to the network.
-
-* Create an instance of the network model class. This class contains the tensorflow operations defining the network.
-
-* Create an instance of our generic `ModelTrainer` class. This class handles the training of the model.
-
-Then the training can start.
-
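-A minimal sketch of this flow, in the spirit of the existing training scripts (`MyConfig` and `MyDataset` stand for
-the classes you implement; the model class name and the exact attribute names may differ slightly in the repository,
-so treat them as placeholders):
-
-    # Parameters and dataset
-    config = MyConfig()
-    dataset = MyDataset()
-
-    # Load the point clouds in memory and build the TensorFlow input pipeline
-    dataset.load_subsampled_clouds(config.first_subsampling_dl)
-    dataset.init_input_pipeline(config)
-
-    # Network model (segmentation case) and generic trainer
-    model = KernelPointFCNN(dataset.flat_inputs, config)
-    trainer = ModelTrainer(model)
-    trainer.train(model, dataset)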
-
-### The dataset class
-
-This class has several roles. First, it is where you define your dataset parameters (class names, data path, nature
-of the data...). Then this class will hold the point clouds loaded in memory. Finally, it also defines the
-TensorFlow input pipeline. For efficiency, our implementation uses a parallel input queue, feeding batches to the
-network.
-
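-As an indication, the generic pipeline initialization wires the dataset methods described below together roughly as
-follows (a simplified sketch, not the actual implementation):
-
-    import tensorflow as tf
-
-    # Inside the generic init_input_pipeline method (simplified)
-    gen_function, gen_types, gen_shapes = self.get_batch_gen('training', config)
-    map_func = self.get_tf_mapping(config)
-
-    self.train_data = tf.data.Dataset.from_generator(gen_function, gen_types, gen_shapes)
-    self.train_data = self.train_data.map(map_func, num_parallel_calls=self.num_threads)
-    self.train_data = self.train_data.prefetch(10)
-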
-Here we describe each essential method that needs to be implemented in your new dataset class. For more details,
-follow the implementation of the current datasets, which contain many explanatory comments.
-
-
-* The **\_\_init\_\_** method: Here you have to define the parameters of your dataset. Notice that your dataset class
-has to be a child of the common `Dataset` class, where generic methods are implemented. There are a few things that
-have to be defined here (a minimal sketch follows this list):
- - The labels: define a dictionary `self.label_to_names`, call the `self.init_labels()` method, and define which
- label should be ignored in `self.ignored_labels`.
- - The network model: the type of model that will be used on this dataset ("classification", "segmentation",
- "multi_segmentation" or "cloud_segmentation").
- - The number of CPU threads used in the parallel input queue.
- - Data paths and splits: you can manage your data as you wish; these variables are only used in methods that you
- will implement, so you do not have to follow the exact notations of the other dataset classes.
-
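-A minimal sketch of such an `__init__` (the dataset name, labels, paths and import path are made-up examples):
-
-    from datasets.common import Dataset
-    import numpy as np
-
-    class MyDataset(Dataset):
-
-        def __init__(self, input_threads=8):
-            Dataset.__init__(self, 'MyDataset')
-
-            # Labels: value -> name dictionary, generic initialization, ignored labels
-            self.label_to_names = {0: 'ground', 1: 'vegetation', 2: 'buildings'}
-            self.init_labels()
-            self.ignored_labels = np.array([])
-
-            # Type of network model used on this dataset
-            self.network_model = 'cloud_segmentation'
-
-            # Number of CPU threads used in the parallel input queue
-            self.num_threads = input_threads
-
-            # Data paths and splits (managed as you wish)
-            self.path = 'Data/MyDataset'
-            self.train_files = [self.path + '/cloud_01.ply', self.path + '/cloud_02.ply']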
-
-* The **load_subsampled_clouds** method: Here you load your data in memory. The variables you have to load depend on
-your dataset (classification or segmentation task, 3D scenes or 3D models). Just follow the implementation of the
-existing datasets; an illustrative sketch is given below.
-
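-For illustration only, a cloud segmentation dataset could look like this, using the repository's `read_ply` and
-`grid_subsampling` utilities (the ply field names are assumptions; see the real datasets for the actual code):
-
-    def load_subsampled_clouds(self, subsampling_parameter):
-        # Containers such as self.input_points are assumed initialized in __init__
-        for file_path in self.train_files:
-            # Load points, colors and labels from the ply file
-            data = read_ply(file_path)
-            points = np.vstack((data['x'], data['y'], data['z'])).T
-            colors = np.vstack((data['red'], data['green'], data['blue'])).T
-            labels = data['class']
-
-            # Grid subsampling to control the density, then keep everything in memory
-            sub_points, sub_colors, sub_labels = grid_subsampling(points,
-                                                                  features=colors,
-                                                                  labels=labels,
-                                                                  sampleDl=subsampling_parameter)
-            self.input_points['training'] += [sub_points]
-            self.input_colors['training'] += [sub_colors]
-            self.input_labels['training'] += [sub_labels]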
-
-* The **get_batch_gen** method: This method should return a Python generator. This will be the base generator for the
-`tf.data.Dataset` object. It is called in the generic `self.init_input_pipeline` or `self.init_test_input_pipeline`
-methods. Along with the generator, it also has to return the generated types and shapes. You can redesign the
-generators or use the ones we implemented. The generator returns np.array objects, which from this point of the
-pipeline will be converted to TensorFlow tensors. A toy example is sketched below.
-
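-A toy version of this method, whose generator yields one cloud at a time (the real generators stack several clouds
-up to a point budget to form batches):
-
-    def get_batch_gen(self, split, config):
-
-        def simple_gen():
-            # Yield the points and labels of each cloud in random order
-            for i in np.random.permutation(len(self.input_points[split])):
-                yield (self.input_points[split][i].astype(np.float32),
-                       self.input_labels[split][i].astype(np.int32))
-
-        gen_types = (tf.float32, tf.int32)
-        gen_shapes = ([None, 3], [None])
-        return simple_gen, gen_types, gen_shapes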
-
-* The **get_tf_mapping** method: This method returns a mapping function that takes the generated batches and creates
-all the variables for the network. Remember that from this point we are defining a TensorFlow graph of operations.
-There is not much to implement here as most of the work is done by two generic functions, `self.tf_augment_input` and
-`self.tf_xxxxxxxxx_inputs`, where xxxxxxxxx can be "classification" or "segmentation" depending on the task. The only
-important thing to do here is to define the features that will be fed to the network, as sketched below.
-
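-A sketch of such a mapping for a segmentation task (the constant-one feature is the usual minimal choice; the
-argument lists are indicative, not exact):
-
-    def get_tf_mapping(self, config):
-
-        def tf_map(stacked_points, point_labels, stacks_lengths, point_inds):
-            # Index of the batch element each point belongs to, then input augmentation
-            batch_inds = self.tf_get_batch_inds(stacks_lengths)
-            stacked_points, scales, rots = self.tf_augment_input(stacked_points, batch_inds, config)
-
-            # Define the features fed to the network (here a constant 1 for each point)
-            stacked_features = tf.ones((tf.shape(stacked_points)[0], 1), dtype=tf.float32)
-
-            # Let the generic function build all the network inputs
-            return self.tf_segmentation_inputs(config, stacked_points, stacked_features,
-                                               point_labels, stacks_lengths, batch_inds)
-
-        return tf_map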
-
-### The training script and configuration class
-
-In the training script you have to create a class that inherits from the `Config` class. This is where you will define
-all the network parameters by overriding its attributes.
-
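-For instance (the attribute names follow the `Config` class; the values are arbitrary examples, not recommendations):
-
-    class MyConfig(Config):
-
-        # Dataset and task
-        dataset = 'MyDataset'
-        network_model = 'cloud_segmentation'
-
-        # Architecture as an ordered list of block names
-        architecture = ['simple',
-                        'resnetb',
-                        'resnetb_strided',
-                        'resnetb',
-                        'nearest_upsample',
-                        'unary']
-
-        # KPConv parameters
-        num_kernel_points = 15
-        first_subsampling_dl = 0.04
-
-        # Training parameters
-        learning_rate = 1e-2
-        max_epoch = 500
-        batch_num = 10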
-
-
diff --git a/doc/pretrained_models_guide.md b/doc/pretrained_models_guide.md
index 3509028..86a47ce 100644
--- a/doc/pretrained_models_guide.md
+++ b/doc/pretrained_models_guide.md
@@ -2,18 +2,4 @@
## Test a pretrained network
-### Data
-
-We provide two examples of pretrained models:
-- A network with rigid KPConv trained on S3DIS: link (50 MB)
-- A network with deformable KPConv trained on NPM3D: link (54 MB)
-
-
-
-Unzip the log folder anywhere.
-
-### Test model
-
-In `test_any_model.py`, choose the path of the log you just unzipped with the `chosen_log` variable:
-
- chosen_log = 'path_to_pretrained_log'
+TODO
\ No newline at end of file
diff --git a/doc/visualization_guide.md b/doc/visualization_guide.md
index a19bf35..b722a34 100644
--- a/doc/visualization_guide.md
+++ b/doc/visualization_guide.md
@@ -1,39 +1,10 @@
-## Visualize learned features
-
-### Instructions
-
-In order to visualize features you need a dataset and a pretrained model. You can use one of our pretrained models
-provided in the [pretrained models guide](./pretrained_models_guide.md), and the corresponding dataset.
-
-To start this visualization run the script:
-
- python3 visualize_features.py
-
-### Details
-
-The visualization script has two main parts, implemented in two different methods of the visualizer class in
-`visualizer.py`.
-
-* In the first part, implemented in the method `top_relu_activations`, the script runs the model on test examples
-(forward pass). At the chosen ReLU layer, you have N output features that are going to be visualized. For each feature,
-the script keeps the top 5 examples that activated it the most, and saves them in a `visu` folder (see the sketch
-after the notes below).
-
-* In the second part, implemented in another method of the same class, the script just shows the saved examples for
-each feature with the level of activation as color. You can navigate through examples with keys 'g' and 'h'.
-
-N.B. This second part of the code can be started without doing the first part again if the top examples have already
-been computed. See details in the code. Alternatively, you can visualize the saved examples with point cloud software
-like CloudCompare.
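-
-The bookkeeping of the first part amounts to a running top-k selection per feature, in the spirit of this sketch
-(`N`, `test_examples` and `forward_pass` are hypothetical placeholders, not names from the repository):
-
-    import heapq
-
-    # For each of the N features, keep the 5 examples with the highest activation
-    top_examples = [[] for _ in range(N)]
-    for example_id, example in enumerate(test_examples):
-        activations = forward_pass(example)  # one activation score per feature
-        for f, act in enumerate(activations):
-            heapq.heappush(top_examples[f], (act, example_id))
-            if len(top_examples[f]) > 5:
-                heapq.heappop(top_examples[f])  # the min-heap keeps the 5 largest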
-
-
## Visualize kernel deformations
### Instructions
-In order to visualize features you need a dataset and a pretrained model that uses deformable KPConv. You can use our
-NPM3D pretrained model provided in the [pretrained models guide](./pretrained_models_guide.md).
+In order to visualize features you need a dataset and a pretrained model that uses deformable KPConv.
To start this visualization run the script:
@@ -42,42 +13,13 @@ To start this visualization run the script:
### Details
The visualization script runs the model on a batch of test examples (forward pass), and then shows these
-examples in an interactive window. Here is a list of all keyborad shortcuts:
+examples in an interactive window. Here is a list of all keyboard shortcuts:
- 'b' / 'n': smaller or larger point size.
- 'g' / 'h': previous or next example in current batch.
-- 'k': switch between the rigid kenrel (original kernel points positions) and the deformed kernel (position of the
+- 'k': switch between the rigid kernel (original kernel points positions) and the deformed kernel (position of the
kernel points after shifts are applied)
- 'z': Switch between the points displayed (input points, current layer points or both).
- '0': Saves the example and deformed kernel as ply files.
- mouse left click: select a point and show kernel at its location.
- exit window: compute next batch.
-
-
-## Visualize Effective Receptive Fields
-
-### Instructions
-
-In order to visualize effective receptive fields you need a dataset and a pretrained model. You can use one of our
-pretrained models provided in the [pretrained models guide](./pretrained_models_guide.md), and the corresponding
-dataset.
-
-To start this visualization run the script:
-
- python3 visualize_ERFs.py
-
-**Warning: This script currently only works on the following datasets: NPM3D, Semantic3D, S3DIS, Scannet**
-
-### Details
-
-The visualization script shows the effective receptive field of a network layer at one location. If you choose another
-location (with left click), it has to rerun the model on the whole input point cloud to get new gradient values. Here
-is a list of all keyboard shortcuts:
-
-- 'b' / 'n': smaller or larger point size.
-- 'g' / 'h': lower or higher ceiling limit. A functionality that removes points from the ceiling, very handy for
-indoor point clouds.
-- 'z': Switch between the points displayed (input points, current layer points or both).
-- 'x': Go to the next input point cloud.
-- '0': Saves the input point cloud with ERF values and the center point used as origin of the ERF.
-- mouse left click: select a point and show ERF at its location.
-- exit window: End script.
\ No newline at end of file