
Pytorch-UNet

Figure: input and output for a random image in the test dataset

Customized implementation of the U-Net in PyTorch for Kaggle's Carvana Image Masking Challenge, predicting a mask from a high-definition image. It was used here with only one output class, but it can be scaled easily to more.

This model was trained from scratch on 5000 images (no data augmentation) and scored a Dice coefficient of 0.988423 (511 out of 735) on over 100k test images. This score could be improved with more training, data augmentation, fine-tuning, CRF post-processing, and putting more weight on the edges of the masks.
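For reference, the Dice coefficient measures the overlap between a predicted mask and the ground truth. Here is a minimal NumPy sketch of the metric itself (an illustration only, not the loss implemented in this repo's dice_loss.py):

```python
import numpy as np

def dice_coeff(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice coefficient between two binary masks: 2*|A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps guards against division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

mask = np.array([[1, 1], [0, 0]])
print(dice_coeff(mask, mask))  # identical masks score (approximately) 1.0
```

A perfect prediction scores 1.0 and fully disjoint masks score about 0.0, which is why a score of 0.988423 means the predicted masks are nearly pixel-perfect.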

The model used for the last submission is stored in the MODEL.pth file, if you wish to play with it. The data is available on the Kaggle website.

Usage

Note: use Python 3

Prediction

You can easily test the output masks on your images via the CLI.

To see all options: python predict.py -h

To predict a single image and save it:

python predict.py -i image.jpg -o output.jpg

To predict multiple images and display them without saving:

python predict.py -i image1.jpg image2.jpg --viz --no-save

You can use the cpu-only version with --cpu.

You can specify which model file to use with --model MODEL.pth.

Training

python train.py -h should get you started. A proper CLI is yet to be added.

Warning

To process an image, it is split into two squares (a left one and a right one), and each square is passed through the net. The two square masks are then merged to produce the final image. As a consequence, the height of the image must be strictly greater than half the width. Make sure the width is even, too.
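The geometry above can be sketched as follows (an illustration of the idea only, not the repo's actual utils code): for an H×W image with W/2 < H ≤ W, each crop is an H×H square, so the two crops overlap in the middle and together cover the full width.

```python
import numpy as np

def split_squares(img: np.ndarray):
    """Split an (H, W, ...) image into left and right H x H square crops."""
    h, w = img.shape[:2]
    assert w % 2 == 0 and w // 2 < h <= w, "need even width and W/2 < H <= W"
    return img[:, :h], img[:, w - h:]

def merge_squares(left: np.ndarray, right: np.ndarray, w: int) -> np.ndarray:
    """Recombine two H x H masks into a full (H, W) mask, splitting at w // 2."""
    h = left.shape[0]
    out = np.empty((h, w) + left.shape[2:], dtype=left.dtype)
    out[:, :w // 2] = left[:, :w // 2]
    # the right crop starts at column w - h, so full column w//2 maps to h - w//2
    out[:, w // 2:] = right[:, h - w // 2:]
    return out
```

This makes the constraint concrete: if H ≤ W/2 the two squares would leave a gap in the middle, so the merged mask could not cover the whole image.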

Dependencies

This package depends on pydensecrf, available via pip install.

Notes on memory

The model has been trained from scratch on a GTX 970M (3GB). Predicting images of 1918×1280 takes 1.5GB of memory. Training takes approximately 3GB, so if you are a few MB shy of memory, consider turning off all graphical displays. This assumes you use bilinear up-sampling, and not transposed convolution, in the model.
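Part of the reason for that difference (a back-of-the-envelope sketch with assumed, hypothetical channel counts, not the exact layers of this repo): bilinear up-sampling has no learnable parameters, while a transposed convolution adds a weight tensor at every up-sampling step, increasing both parameter and optimizer-state memory.

```python
def conv_transpose_params(in_ch: int, out_ch: int, k: int = 2) -> int:
    """Parameter count of a k x k transposed convolution (weights + biases)."""
    return in_ch * out_ch * k * k + out_ch

# Hypothetical channel sizes for four decoder stages of a U-Net-like model:
stages = [(1024, 512), (512, 256), (256, 128), (128, 64)]
total = sum(conv_transpose_params(i, o) for i, o in stages)
print(f"extra parameters vs. bilinear up-sampling: {total:,}")
```

Bilinear up-sampling contributes zero parameters for the same stages, which is one reason the bilinear variant fits more comfortably in 3GB.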