The dataset configuration scripts and model architecture used in this repository are borrowed from here.
The lane segmentation model used in this repository is trained on the open-source TuSimple dataset, obtained from here. This repository contains the necessary code to work with the LaneNet instance segmentation model using the DeepLabV3+ encoder.
In addition to training the model on the TuSimple dataset, we incorporate custom images (with a custom label-generation implementation) of poorly lane-marked rural roads and the Speedway. The objective is to train LaneNet on a combination of the TuSimple and custom data and evaluate its performance on unseen samples from both categories.
- Download the TuSimple dataset and unzip it.
- Once the TuSimple dataset has been extracted, add the custom dataset we created (e.g. Speedway) and ensure that path/to/your/unzipped/file has the following structure:
```
|--dataset
|----clips
|----label_data_0313.json
|----label_data_0531.json
|----label_data_0601.json
|----label_custom.json
```
The custom data labels can be generated and appended to the main .json labels file using the generate_label_json() function in util.py. In that function, edit the file path to point to one of the training .json files shown above; this appends the custom training data to the TuSimple training data.
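TuSimple label files store one JSON object per line, with per-lane x-coordinates (lanes), shared y-coordinates (h_samples), and the source frame path (raw_file). A minimal sketch of the append step under that format (the file names and function below are illustrative; the repository's generate_label_json() in util.py is the authoritative implementation):

```python
import json

def append_custom_labels(custom_path, tusimple_path):
    """Append custom label records (one JSON object per line, TuSimple
    format) to an existing TuSimple label file."""
    with open(custom_path) as src, open(tusimple_path, "a") as dst:
        for line in src:
            line = line.strip()
            if not line:
                continue
            record = json.loads(line)  # validate JSON before appending
            # TuSimple records carry per-lane x-coords, shared y-samples,
            # and the relative path of the source frame
            assert {"lanes", "h_samples", "raw_file"} <= record.keys()
            dst.write(json.dumps(record) + "\n")
```

Appending (rather than writing a separate file) means the downstream split script sees the custom frames as ordinary TuSimple samples.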
- Next, run the following commands to generate the training, validation, and test sets:
```
python tusimple_transform.py --src_dir path/to/your/unzipped/file --val True --test True
```
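Conceptually, the script partitions the labelled samples into train, validation, and test lists. A rough sketch of that idea (the split ratios, seed, and function name here are illustrative assumptions, not tusimple_transform.py's exact behaviour):

```python
import random

def split_samples(samples, val_frac=0.1, test_frac=0.1, seed=0):
    """Partition labelled samples into train/val/test subsets."""
    samples = list(samples)
    random.Random(seed).shuffle(samples)  # deterministic shuffle
    n_val = int(len(samples) * val_frac)
    n_test = int(len(samples) * test_frac)
    return (samples[n_val + n_test:],        # train
            samples[:n_val],                 # val
            samples[n_val:n_val + n_test])   # test

train, val, test = split_samples(range(100))
```

The real script additionally writes the resulting lists to .txt files that model_train.py consumes.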
The environment for training and evaluation:
```
python=3.6
torch>=1.2
numpy=1.7
torchvision>=0.4.0
matplotlib
opencv-python
pandas
```
If you have Conda installed on your machine, you can create a conda environment using the yaml file.
```
conda env create -f environment.yml
```
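If you need to write the yaml file yourself, an environment covering the packages above might look like the following (this is a sketch, not necessarily the repository's exact environment.yml; pins and channel layout are assumptions):

```yaml
name: lanenet
dependencies:
  - python=3.6
  - numpy
  - matplotlib
  - pandas
  - pip
  - pip:
      - torch>=1.2
      - torchvision>=0.4.0
      - opencv-python
```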
The model weights trained with the dataset configuration discussed above can be accessed here. We trained the model, with a DeepLabV3+ architecture, on a Quadro RTX 6000 GPU for about 2 hours.
If you wish to re-run the training yourself, use our model_train.py script to train the model.
- In lines 16-17, edit the paths to the training and validation .txt files
- In line 20, adjust the number of epochs for training as desired
- Execute the script
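During training the script keeps the best-performing weights, which model_test.py loads later. The usual selection rule is "lowest validation loss so far"; a framework-agnostic sketch of that bookkeeping (names are illustrative, not model_train.py's actual API):

```python
def track_best(val_losses):
    """Return the epoch index with the lowest validation loss --
    the checkpoint you would keep as the best-performing weights."""
    best_epoch, best_loss = -1, float("inf")
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_epoch, best_loss = epoch, loss
            # in the real script this is where the checkpoint would be
            # written out, e.g. torch.save(model.state_dict(), path)
    return best_epoch
```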
Use the model_test.py script to test the model's output on an image
- Edit line 12 to point to the best-performing model weights saved during training
- Set the path to the test image on line 14
- Execute the script. The results will be stored in the /Test_Outputs directory
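A common way to render an instance-segmentation result like the ones written to /Test_Outputs is to colour each predicted lane and alpha-blend it over the frame. A numpy-only sketch of that blending step (array shapes, colours, and the function name are illustrative, not model_test.py's implementation):

```python
import numpy as np

def overlay_lanes(image, lane_mask, colors, alpha=0.6):
    """Blend per-lane instance masks over an RGB image.

    image: (H, W, 3) uint8 frame
    lane_mask: (H, W) int array, 0 = background, k > 0 = lane instance k
    colors: dict mapping instance id -> (r, g, b)
    """
    out = image.astype(np.float32)
    for lane_id, color in colors.items():
        region = lane_mask == lane_id
        # weighted blend of the frame and the lane colour
        out[region] = (1 - alpha) * out[region] + alpha * np.array(color, np.float32)
    return out.astype(np.uint8)
```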
Example results are shown below. More results can be seen in the /Test_Outputs directory.
The DeepLabV3+ encoder and decoder are adapted from https://github.com/YudeWang/deeplabv3plus-pytorch
TuSimple dataset acquired from https://www.kaggle.com/datasets/manideep1108/tusimple?resource=download