
🎉 code publicly release
x.ji committed Jan 25, 2021
1 parent 1b75f65 commit ab2d0ef
Showing 29 changed files with 1,155 additions and 1,259 deletions.
126 changes: 71 additions & 55 deletions README.md
@@ -1,18 +1,22 @@
# Lightweight Multi-Branch Network for Person Re-Identification

PyTorch implementation for the paper [Lightweight Multi-Branch Network for Person Re-Identification]
<!-- (https://arxiv.org/). -->

![](/utils/LightMBN.png)

This repo supports
- [x] easy dataset preparation, including Market-1501, DukeMTMC-ReID, CUHK03, MOT17...
- [x] state-of-the-art deep neural networks and various training options (tricks) for ReID
- [x] easy combination of different loss functions
- [x] end-to-end training and evaluation
- [x] minimal package requirements


List of functions
- Warm-up cosine annealing learning rate (see the sketch after this list)
- Random erasing augmentation
- Cutout augmentation
- Drop Block and Batch Erasing
- Label smoothing(Cross Entropy loss)
- Triplet loss
- Multi-Similarity loss
@@ -24,28 +28,35 @@ List of functions
- BNNeck
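
As a reference for the first item in the list above, here is a minimal sketch of a warm-up cosine annealing schedule; the class name, warm-up length, and epoch count are illustrative assumptions, not the repo's actual implementation.

```
import math
from torch.optim.lr_scheduler import _LRScheduler

class WarmupCosineAnnealingLR(_LRScheduler):
    # Illustrative sketch: linear warm-up, then cosine decay to zero.
    def __init__(self, optimizer, warmup_epochs=10, max_epochs=110, last_epoch=-1):
        self.warmup_epochs = warmup_epochs
        self.max_epochs = max_epochs
        super().__init__(optimizer, last_epoch)

    def get_lr(self):
        if self.last_epoch < self.warmup_epochs:
            # scale the base LR linearly from 1/warmup_epochs up to 1
            scale = (self.last_epoch + 1) / self.warmup_epochs
        else:
            # cosine annealing from the base LR down to 0
            progress = (self.last_epoch - self.warmup_epochs) / max(1, self.max_epochs - self.warmup_epochs)
            scale = 0.5 * (1.0 + math.cos(math.pi * progress))
        return [base_lr * scale for base_lr in self.base_lrs]
```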

Implemented networks:
- Lightweight Multi-Branch Network (LightMBN), which we proposed
- PCB [[link]](https://arxiv.org/pdf/1711.09349.pdf)
- MGN [[link]](https://arxiv.org/abs/1804.01438)
- Bag of tricks [[link]](http://openaccess.thecvf.com/content_CVPRW_2019/papers/TRMTMCT/Luo_Bag_of_Tricks_and_a_Strong_Baseline_for_Deep_Person_CVPRW_2019_paper.pdf)
- OSNet [[link]](https://arxiv.org/abs/1905.00953)
- Batch Drop Block(BDB) for Person ReID [[link]](https://arxiv.org/abs/1811.07130)


## Getting Started
The code architecture is concise and easy to follow: the file engine.py defines the train/test procedures, main.py controls the overall epochs, and the folders model, loss, and optimizer contain the respective parts of the neural network.
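
To make that split concrete, here is a minimal sketch of how engine.py and main.py could interact; the class and method names are illustrative assumptions, not the repo's exact API.

```
import torch

class Engine:
    # Illustrative sketch of the train/test split described above,
    # not the repo's exact API.
    def __init__(self, model, loss_fn, optimizer, train_loader, test_loader, device='cuda'):
        self.model, self.loss_fn, self.optimizer = model, loss_fn, optimizer
        self.train_loader, self.test_loader = train_loader, test_loader
        self.device = device

    def train_one_epoch(self):
        self.model.train()
        for images, pids in self.train_loader:
            images, pids = images.to(self.device), pids.to(self.device)
            loss = self.loss_fn(self.model(images), pids)
            self.optimizer.zero_grad()
            loss.backward()
            self.optimizer.step()

    @torch.no_grad()
    def extract_features(self, loader):
        self.model.eval()
        return torch.cat([self.model(images.to(self.device)) for images, _ in loader])

# main.py then controls the overall epochs, roughly:
#   for epoch in range(1, args.epochs + 1):
#       engine.train_one_epoch()
#       if epoch % args.test_every == 0:
#           evaluate(engine)  # hypothetical evaluation helper
```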

The user-friendly command-line module argparse lets us select different datasets, networks, loss functions, and tricks as we need; the detailed options/configurations are described at the bottom of this page.

If you don't have any dataset yet, run
```
git clone https://github.com/jixunbo/ReIDataset.git
```
to download Market-1501, DukeMTMC, CUHK03 and MOT17.

To implement our Lightweight Multi-Branch Network with Multi-Similarity loss, run

```
python [path to repo]/main.py --datadir [path to datasets] --data_train market1501 --data_test market1501 --model LMBN_n --batchid 6 --batchimage 8 --batchtest 32 --test_every 20 --epochs 110 --loss 0.5*CrossEntropy+0.5*MSLoss --margin 0.7 --nGPU 1 --lr 6e-4 --optimizer ADAM --random_erasing --feats 512 --save '' --if_labelsmooth --w_cosine_annealing
```

Alternatively, use a pre-defined config file:

```
python [path to repo]/main.py --config [path to repo]/lmbn_config.yaml --save ''
```

All logs, results and parameters will be saved in the folder 'experiment'.

@@ -57,40 +68,58 @@ Note that, the option '--datadir' is the dataset root, which contains folder Mar

'--epochs' is the number of epochs we'd like to train, while '--test_every 10' means evaluation will be executed every 10 epochs; the parameters of the network and optimizer are updated after every evaluation.

For the LightMBN model we provide two kinds of backbone: LMBN_r uses ResNet50, while LMBN_n uses OSNet. OSNet contains far fewer parameters but achieves slightly better performance than ResNet50.
### Results
| Model | Market1501 | DukeMTMC-reID | CUHK03-D | CUHK03-L |
| --- | --- | --- | --- | --- |
| LightMBN (OSNet) | 96.3 (91.5) | 92.1 (83.7) | 85.4 (82.6) | 87.2 (85.1) |
| LightMBN (ResNet) | 96.1 (90.4) | 90.5 (82.2) | 81.0 (79.2) | 85.2 (83.5) |
| BoT | 94.2 (85.4) | 86.7 (75.8) | | |
| PCB | 95.1 (86.3) | 87.6 (76.6) | | |
| MGN | 94.7 (87.5) | 88.7 (79.4) | | |

Note: results are reported as Rank-1 (mAP); they are produced by our repo without re-ranking, and models and configurations may differ from the original papers.

Additionally, the evaluation metric is computed in the same way as in the bag of tricks [repo](https://github.com/michuanhaohao/reid-strong-baseline/blob/master/utils/reid_metric.py).
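
For intuition, here is a simplified sketch of how Rank-1 and mAP can be computed from query/gallery features; it omits the camera-ID filtering of the full protocol and assumes every query has at least one true gallery match.

```
import numpy as np

def rank1_and_map(qf, gf, q_pids, g_pids):
    # qf: (num_query, dim), gf: (num_gallery, dim), L2-normalized features
    dist = 1.0 - qf @ gf.T  # cosine distance
    rank1_hits, average_precisions = [], []
    for i in range(len(q_pids)):
        order = np.argsort(dist[i])                     # gallery sorted by distance
        matches = (g_pids[order] == q_pids[i]).astype(np.float32)
        rank1_hits.append(matches[0])                   # is the nearest gallery image correct?
        hit_positions = np.where(matches == 1)[0]
        precisions = (np.arange(len(hit_positions)) + 1) / (hit_positions + 1)
        average_precisions.append(precisions.mean())    # AP for this query
    return float(np.mean(rank1_hits)), float(np.mean(average_precisions))
```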


### Pre-trained models
Pre-trained models and corresponding config files can be found [here](https://1drv.ms/u/s!Ap1wlV4d0agrao4DxXe8loc_k30?e=I9PJXP).

If you have a pre-trained model and its config file, run
```
python [path to repo]/main.py --test_only --config [path to repo]/lmbn_config.yaml --pre_train [path to pretrained model]
```
to see the performance of the model.


If you would like to re-implement Bag of Tricks, run
```
python [path to repo]/main.py --datadir [path to datasets] --data_train market1501 --data_test market1501 --model ResNet50 --batchid 16 --batchimage 4 --batchtest 32 --test_every 10 --epochs 120 --save '' --decay_type step_40_70 --loss 0.5*CrossEntropy+0.5*Triplet --margin 0.3 --nGPU 1 --lr 3.5e-4 --optimizer ADAM --random_erasing --warmup 'linear' --if_labelsmooth
```
or
```
python [path to repo]/main.py --config [path to repo]/bag_of_tricks_config.yaml --save ''
```

If you would like to re-implement PCB with powerful training tricks, run
```
python [path to repo]/main.py --datadir [path to datasets] --data_train Market1501 --data_test Market1501 --model PCB --batchid 8 --batchimage 8 --batchtest 32 --test_every 10 --epochs 120 --save '' --decay_type step_50_80_110 --loss 0.5*CrossEntropy+0.5*MSLoss --margin 0.7 --nGPU 1 --lr 5e-3 --optimizer SGD --random_erasing --warmup 'linear' --if_labelsmooth --bnneck --parts 3
```

Note that the option '--parts' is used to set the number of stripes into which the feature map is divided; the original paper sets it to 6.
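
For intuition, stripe division amounts to splitting the backbone's feature map horizontally and pooling each stripe; a minimal sketch (the function name and pooling choice are assumptions for illustration):

```
import torch
import torch.nn.functional as F

def stripe_features(feature_map, parts=3):
    # Split an (N, C, H, W) feature map into `parts` horizontal stripes
    # and average-pool each one to an (N, C) vector.
    stripes = torch.chunk(feature_map, parts, dim=2)  # split along the height axis
    return [F.adaptive_avg_pool2d(s, 1).flatten(1) for s in stripes]
```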

Similarly, for the MGN model, run
```
python [path to repo]/main.py --datadir [path to datasets] --data_train Market1501 --data_test Market1501 --model MGN --batchid 16 --batchimage 4 --batchtest 32 --test_every 10 --epochs 120 --save '' --decay_type step_50_80_110 --loss 0.5*CrossEntropy+0.5*Triplet --margin 1.2 --nGPU 1 --lr 2e-4 --optimizer ADAM --random_erasing --warmup 'linear' --if_labelsmooth
```

### Resume Training

If you want to resume an interrupted training process, assuming you have the checkpoint file 'model-latest.pth', run


```
python [path to repo]/main.py --config [path to repo]/lmbn_config.yaml --load [path to checkpoint]
```
Of course, you can also set options individually on the argparse command line without a config file.

## Easy Implementation
@@ -101,34 +130,21 @@ Open this [notebook](https://colab.research.google.com/drive/14aRebdOqJSfNlwXiI5
Please be sure that you are using Google's powerful GPU (Tesla P100 or T4).

The whole training process (120 epochs) takes ~9 hours.
If you are a hard-core player ^ ^ and you'd like to try different models or options, see Getting Started above.



## Option Description
'--nThread': type=int, default=4, number of threads for data loading.

'--cpu', action='store_true', if raised, use CPU only.

'--nGPU', type=int, default=1, number of GPUs.

'--config', type=str, default="", config path; if you have a config file, use it to set options so you don't need to input any option again.

'--datadir', type=str, is the dataset root, which contains folders Market-1501, DukeMTMC-ReID, etc.

'--data_train' and '--data_test', type=str, specify the names of the train/test datasets; we can train on one dataset but test on another. Supported options: market1501, dukemtmc, MOT17, cuhk03_spilited (767/700 protocol).

'--batchid 6' and '--batchimage 8': type=int, indicate that each batch contains 6 persons and each person has 8 different images, i.e. 48 images in total.
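
For clarity, here is a minimal sketch of the P×K sampling this option pair implies; the sampler below is hypothetical, and the repo's actual sampler may differ in details.

```
import random
from collections import defaultdict

def pk_batches(image_pids, p=6, k=8):
    # Yield batches of image indices containing p identities with k images each,
    # i.e. p * k = 48 images per batch for --batchid 6 --batchimage 8.
    by_pid = defaultdict(list)
    for index, pid in enumerate(image_pids):
        by_pid[pid].append(index)
    pids = [pid for pid, indices in by_pid.items() if len(indices) >= k]
    random.shuffle(pids)
    for start in range(0, len(pids) - p + 1, p):
        batch = []
        for pid in pids[start:start + p]:
            batch.extend(random.sample(by_pid[pid], k))
        yield batch
```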

@@ -146,7 +162,7 @@ Additionally, the evaluation metric method is the same as bag of tricks [repo](h

'--epochs', type=int, is the number of epochs we'd like to train, while '--test_every 10' means evaluation will be executed every 10 epochs; the parameters of the network and optimizer are updated after every evaluation.

'--model', default='LMBN_n', name of model, options: LMBN_n, LMBN_r, ResNet50, PCB, MGN, etc.

'--loss', type=str, default='0.5\*CrossEntropy+0.5\*Triplet', you can combine different loss functions with corresponding weights; use a single loss function or two or more, e.g. '1\*CrossEntropy', '0.5\*CrossEntropy+0.5\*MSLoss+0.0005\*CenterLoss', options: CrossEntropy, Triplet, MSLoss, CenterLoss, Focal, GroupLoss.
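
For clarity, a minimal sketch of how such a loss string can be parsed into weighted terms (a hypothetical parser; the repo's loss module may differ):

```
def parse_loss_string(spec):
    # '0.5*CrossEntropy+0.5*MSLoss' -> [(0.5, 'CrossEntropy'), (0.5, 'MSLoss')]
    terms = []
    for term in spec.split('+'):
        weight, name = term.split('*')
        terms.append((float(weight), name))
    return terms

# The total loss is then the weighted sum of the individual terms:
#   total = sum(w * loss_fns[name](outputs, targets) for w, name in terms)
```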

@@ -166,9 +182,9 @@ Additionally, the evaluation metric method is the same as bag of tricks [repo](h

'--width', type=int, default=128, width of the input image.

'--num_classes', type=int, default=751, number of classes of the train dataset; normally you don't need to set it, as it will be set automatically depending on the dataset.

'--lr', type=float, default=6e-4, initial learning rate.

'--gamma', type=float, default=0.1, learning rate decay factor for step decay.

@@ -190,7 +206,7 @@ Additionally, the evaluation metric method is the same as bag of tricks [repo](h

'--cutout', action='store_true', if raised, use cutout augmentation.

'--random_erasing', action='store_true', if raised, use random erasing augmentation.

'--probability', type=float, default=0.5, probability of random erasing.
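
For reference, a compact sketch of the random erasing transform; the area and aspect-ratio ranges below are common defaults and are assumptions, not necessarily the values this repo uses.

```
import math
import random
import torch

def random_erasing(img, probability=0.5, area_range=(0.02, 0.4), aspect_range=(0.3, 3.3)):
    # Erase a random rectangle of a (C, H, W) tensor with random values.
    if random.random() > probability:
        return img
    channels, height, width = img.shape
    for _ in range(100):  # retry until a rectangle fits inside the image
        target_area = random.uniform(*area_range) * height * width
        aspect = random.uniform(*aspect_range)
        erase_h = int(round(math.sqrt(target_area * aspect)))
        erase_w = int(round(math.sqrt(target_area / aspect)))
        if erase_h < height and erase_w < width:
            top = random.randint(0, height - erase_h)
            left = random.randint(0, width - erase_w)
            img[:, top:top + erase_h, left:left + erase_w] = torch.rand(channels, erase_h, erase_w)
            return img
    return img
```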

@@ -199,4 +215,4 @@ Additionally, the evaluation metric method is the same as bag of tricks [repo](h
'--num_anchors', type=int, default=1, number of iterations of computing group loss.

### Acknowledgments
The code was built on top of [deep-person-reid](https://github.com/KaiyangZhou/deep-person-reid) and [MGN-pytorch](https://github.com/seathiefwang/MGN-pytorch). We thank the authors for sharing their code publicly.
2 changes: 1 addition & 1 deletion data_v2/datamanager.py
@@ -150,7 +150,7 @@ def __init__(self, args):
batch_size_test = args.batchtest
workers = args.nThread
train_sampler = 'random'
- cuhk03_labeled = False
+ cuhk03_labeled = args.cuhk03_labeled
cuhk03_classic_split = False
market1501_500k = False
