This code is built on reid-strong-baseline; thanks for their great work. It extends ReID-baseline, which is open-sourced by our co-first author Xingyu Liao.
The architecture follows the PyTorch-Project-Template guide; you can check each folder's purpose there.
cd DIVOTrack/Cross_view_Tracking/StrongReID/
Install dependencies:
- pytorch>=0.4
- torchvision
- ignite=0.1.2 (note: v0.2.0 may result in an error)
- yacs
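The dependency list above can be sketched as a single pip command; the PyPI package name `pytorch-ignite` and the unpinned versions for torchvision and yacs are assumptions, so adjust to your environment:

```shell
# Setup sketch for the dependency list above (package names on PyPI are
# assumptions; a conda environment works equally well).
pip install "torch>=0.4" torchvision pytorch-ignite==0.1.2 yacs
```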
Prepare dataset:

DIVOTrack
├——————datasets
│       └——————DIVO
│               └——————ReID_format
└——————${ROOT}
- Prepare the pretrained model: put the model in ./models/. You can obtain it from Google Drive.
python3 tools/train.py --config_file='configs/softmax_triplet_with_center.yml' MODEL.DEVICE_ID "('your device id')" DATASETS.NAMES "('ReID_format')" OUTPUT_DIR "('your path to save checkpoints and logs')"
Download our final model and put it into ./models.
python3 tools/test.py --config_file='configs/softmax_triplet_with_center.yml' TEST.NECK_FEAT "('after')" TEST.FEAT_NORM "('yes')" MODEL.PRETRAIN_CHOICE "('self')" TEST.RE_RANKING "('yes')" TEST.WEIGHT "('your train model path')"
The test will generate rsb_divo.npy in DIVOTrack/Cross_view_Tracking/StrongReID. If you want to change its name, modify it in tools/test.py.
Format of rsb_divo.npy
{
circleRegion:{
Drone:[[fid,pid,lx,ly,w,h,1,0,0,0,feature],...],
View1:[...],
View2:[...]
},
innerShop:{
Drone:[[fid,pid,lx,ly,w,h,1,0,0,0,feature],...],
View1:[...],
View2:[...]
},
...
}
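The layout above can be read back with NumPy. The sketch below builds a tiny stand-in file and parses one row; the interpretation of the first ten values as fid, pid, lx, ly, w, h followed by four constants, and the 4-dim dummy feature vector, are assumptions for illustration (the real feature is much longer):

```python
import numpy as np

# Build a minimal stand-in for rsb_divo.npy: a dict {scene: {view: [rows]}}.
# Assumed row layout: fid, pid, lx, ly, w, h, 1, 0, 0, 0, then the feature.
feature = [0.1, 0.2, 0.3, 0.4]  # dummy appearance feature
row = [3, 7, 100.0, 50.0, 40.0, 80.0, 1, 0, 0, 0] + feature
data = {"circleRegion": {"Drone": [row]}}

# Dicts are pickled inside the .npy, so allow_pickle is required on both ends.
np.save("rsb_divo_demo.npy", data, allow_pickle=True)
loaded = np.load("rsb_divo_demo.npy", allow_pickle=True).item()

for scene, views in loaded.items():
    for view, rows in views.items():
        for r in rows:
            fid, pid = int(r[0]), int(r[1])
            lx, ly, w, h = r[2:6]
            feat = np.asarray(r[10:], dtype=np.float32)
            print(scene, view, fid, pid, (lx, ly, w, h), feat.shape)
```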
Please refer to Multi_view_Tracking for the next step.