# U-Net: Semantic segmentation with PyTorch
<a href="https://hub.docker.com/r/milesial/unet"><img src="https://img.shields.io/badge/docker%20image-available-blue?logo=Docker&style=for-the-badge" /></a>
<a href="https://pytorch.org/"><img src="https://img.shields.io/badge/PyTorch%20version-v1.9.0-red.svg?logo=PyTorch&style=for-the-badge" /></a>
<a href="https://choosealicense.com/licenses/gpl-3.0/"><img src="https://img.shields.io/github/license/milesial/PyTorch-UNet?color=green&style=for-the-badge" /></a>
<a href="#"><img src="https://img.shields.io/badge/python-v3.6+-blue.svg?logo=python&style=for-the-badge" /></a>
![input and output for a random image in the test dataset](https://i.imgur.com/GD8FcB7.png)
Customized implementation of the [U-Net](https://arxiv.org/abs/1505.04597) in PyTorch for Kaggle's [Carvana Image Masking Challenge](https://www.kaggle.com/c/carvana-image-masking-challenge) from high definition images.
- [Quick start using Docker](#quick-start-using-docker)
- [Description](#description)
- [Usage](#usage)
  - [Docker](#docker)
  - [Training](#training)
  - [Prediction](#prediction)
- [Weights & Biases](#weights--biases)
- [Pretrained model](#pretrained-model)
- [Data](#data)
## Quick start using Docker
1. [Install Docker 19.03 or later](https://docs.docker.com/get-docker/):
```bash
curl https://get.docker.com | sh && sudo systemctl --now enable docker
```
2. [Install the NVIDIA container toolkit](https://docs.nvidia.com/datacenter/cloud-native/container-toolkit/install-guide.html):
```bash
distribution=$(. /etc/os-release; echo $ID$VERSION_ID) \
  && curl -s -L https://nvidia.github.io/nvidia-docker/gpgkey | sudo apt-key add - \
  && curl -s -L https://nvidia.github.io/nvidia-docker/$distribution/nvidia-docker.list | sudo tee /etc/apt/sources.list.d/nvidia-docker.list
sudo apt-get update
sudo apt-get install -y nvidia-docker2
sudo systemctl restart docker
```
3. [Download and run the image](https://hub.docker.com/repository/docker/milesial/unet):
```bash
sudo docker run --rm --gpus all -it milesial/unet
```
4. Download the data and run training:
```bash
bash scripts/download_data.sh
python train.py --amp
```
## Description
This model was trained from scratch with 5000 images (no data augmentation) and scored a [Dice coefficient](https://en.wikipedia.org/wiki/S%C3%B8rensen%E2%80%93Dice_coefficient) of 0.988423 on over 100k test images. This score could be improved with more training, data augmentation, fine-tuning, CRF post-processing, and applying more weight to the edges of the masks.
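
For reference, the Dice coefficient reported above measures the overlap between a predicted mask and the ground truth. A minimal sketch (not the repository's own implementation, which works on PyTorch tensors):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, target: np.ndarray, eps: float = 1e-6) -> float:
    """Sørensen–Dice coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps avoids division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_coefficient(a, b), 3))  # → 0.667
```

A perfect prediction scores 1.0; two disjoint masks score 0.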
## Usage
**Note: Use Python 3.6 or newer**
### Docker
A Docker image containing the code and the dependencies is available on [DockerHub](https://hub.docker.com/repository/docker/milesial/unet).
You can **download the image and jump into the container** with [Docker >= 19.03](https://docs.docker.com/get-docker/):
```console
docker run -it --rm --gpus all milesial/unet
```
### Training
```console
> python train.py -h
usage: train.py [-h] [--epochs E] [--batch-size B] [--learning-rate LR]
                [--load LOAD] [--scale SCALE] [--validation VAL] [--amp]

Train the UNet on images and target masks

optional arguments:
  -h, --help            show this help message and exit
  --epochs E, -e E      Number of epochs
  --batch-size B, -b B  Batch size
  --learning-rate LR, -l LR
                        Learning rate
  --load LOAD, -f LOAD  Load model from a .pth file
  --scale SCALE, -s SCALE
                        Downscaling factor of the images
  --validation VAL, -v VAL
                        Percent of the data that is used as validation (0-100)
  --amp                 Use mixed precision
```
By default, the `scale` is 0.5, so if you wish to obtain better results (but use more memory), set it to 1.
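
The scale factor simply resizes each input before it reaches the network; a minimal sketch of that preprocessing step (an illustration, not the repository's exact code):

```python
from PIL import Image

def rescale(img: Image.Image, scale: float = 0.5) -> Image.Image:
    """Multiply both image dimensions by the scale factor before inference."""
    w, h = img.size
    return img.resize((int(w * scale), int(h * scale)))

img = Image.new('RGB', (1918, 1280))  # Carvana full-resolution size
print(rescale(img).size)  # (959, 640)
```

At scale 0.5 the network thus sees a quarter of the pixels, which is where the memory saving comes from.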
Automatic mixed precision is also available with the `--amp` flag. [Mixed precision](https://arxiv.org/abs/1710.03740) allows the model to use less memory and to be faster on recent GPUs by using FP16 arithmetic.
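
Conceptually, mixed-precision training wraps the forward pass in an autocast context and scales the loss to avoid FP16 gradient underflow. A minimal sketch of one such step, assuming a `net`, `optimizer`, and `criterion` already exist (not the repository's exact training loop):

```python
import torch

use_amp = torch.cuda.is_available()  # AMP is a no-op on CPU
scaler = torch.cuda.amp.GradScaler(enabled=use_amp)

def train_step(net, optimizer, criterion, images, masks):
    optimizer.zero_grad()
    with torch.cuda.amp.autocast(enabled=use_amp):  # FP16 forward pass where safe
        preds = net(images)
        loss = criterion(preds, masks)
    scaler.scale(loss).backward()  # scale the loss so small FP16 gradients survive
    scaler.step(optimizer)         # unscale gradients, then run optimizer.step()
    scaler.update()                # adjust the scale factor for the next step
    return loss.item()
```

On a GPU without Tensor Cores the speedup is modest, but the memory saving still applies.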
### Prediction
After training your model and saving it to `MODEL.pth`, you can easily test the output masks on your images via the CLI.
To predict a single image and save it:
`python predict.py -i image.jpg -o output.jpg`
To predict multiple images and show them without saving them:
`python predict.py -i image1.jpg image2.jpg --viz --no-save`
```console
> python predict.py -h
usage: predict.py [-h] [--model FILE] --input INPUT [INPUT ...]
                  [--output INPUT [INPUT ...]] [--viz] [--no-save]
                  [--mask-threshold MASK_THRESHOLD] [--scale SCALE]

Predict masks from input images

optional arguments:
  -h, --help            show this help message and exit
  --model FILE, -m FILE
                        Specify the file in which the model is stored
  --input INPUT [INPUT ...], -i INPUT [INPUT ...]
                        Filenames of input images
  --output INPUT [INPUT ...], -o INPUT [INPUT ...]
                        Filenames of output images
  --viz, -v             Visualize the images as they are processed
  --no-save, -n         Do not save the output masks
  --mask-threshold MASK_THRESHOLD, -t MASK_THRESHOLD
                        Minimum probability value to consider a mask pixel white
  --scale SCALE, -s SCALE
                        Scale factor for the input images
```
You can specify which model file to use with `--model MODEL.pth`.
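
The `--mask-threshold` option decides which output probabilities count as foreground. Conceptually (a sketch, not the repository's exact code):

```python
import numpy as np

def threshold_mask(probs: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Turn a per-pixel probability map into a binary mask:
    pixels above the threshold become white (1), the rest black (0)."""
    return (probs > threshold).astype(np.uint8)

probs = np.array([[0.2, 0.7],
                  [0.9, 0.4]])
print(threshold_mask(probs))  # [[0 1]
                              #  [1 0]]
```

Raising the threshold makes the predicted mask more conservative; lowering it includes more borderline pixels.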
## Weights & Biases
The training progress can be visualized in real-time using [Weights & Biases](https://wandb.ai/). Loss curves, validation curves, weight and gradient histograms, as well as predicted masks are logged to the platform.
When launching a training run, a link will be printed in the console. Click on it to go to your dashboard. If you have an existing W&B account, you can link it by setting the `WANDB_API_KEY` environment variable.
## Pretrained model
A [pretrained model](https://github.com/milesial/Pytorch-UNet/releases/tag/v1.0) is available for the Carvana dataset. It can also be loaded from `torch.hub`:
```python
net = torch.hub.load('milesial/Pytorch-UNet', 'unet_carvana')
```
The training was done with a 100% scale and bilinear upsampling.
## Data
The Carvana data is available on the [Kaggle website](https://www.kaggle.com/c/carvana-image-masking-challenge/data).
You can also download it using the helper script:
```bash
bash scripts/download_data.sh
```
The input images and target masks should be in the `data/imgs` and `data/masks` folders respectively. For Carvana, images are RGB and masks are black and white.
You can also use your own dataset as long as you make sure it is loaded properly in `utils/data_loading.py`.
---
Original paper by Olaf Ronneberger, Philipp Fischer, Thomas Brox:
[U-Net: Convolutional Networks for Biomedical Image Segmentation](https://arxiv.org/abs/1505.04597)
![network architecture](https://i.imgur.com/jeDVpqF.png)