Stacked Pooling: Improving Crowd Counting by Boosting Scale Invariance

PyTorch implementation of the paper "Stacked Pooling: Improving Crowd Counting by Boosting Scale Invariance".
- Python 2.7
- PyTorch 0.4.0
- Download the ShanghaiTech dataset from Baidu Disk: http://pan.baidu.com/s/1nuAYslz
- Create the directory ./data/original/shanghaitech/
- Save "part_A_final" under ./data/original/shanghaitech/
- Save "part_B_final" under ./data/original/shanghaitech/
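The directory layout above can be created from the repository root with a couple of commands (paths taken from the README; the `part_*_final` folders are then filled by extracting the downloaded dataset into them):

```shell
# Create the expected dataset layout under the repository root.
# Extract the downloaded ShanghaiTech archives into these folders afterwards.
mkdir -p ./data/original/shanghaitech/part_A_final
mkdir -p ./data/original/shanghaitech/part_B_final
```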
- Run create_gt_test_set_shtech.m in MATLAB to create ground truth files for the test data
- Run create_training_set_shtech.m in MATLAB to create the training and validation sets along with their ground truth files
To train Deep Net + vanilla pooling on ShanghaiTechA, edit the configuration in train.py:
pool = pools
To train Deep Net + stacked pooling on ShanghaiTechA, edit the configuration in train.py:
pool = pools
Run python train.py respectively to start training.
- Follow step 1 of Train to edit the corresponding configuration in test.py, using the best checkpoint on the validation set (output by the training process)
- Run python test.py respectively to compare them!
To try pooling methods (vanilla pooling, stacked pooling, and multi-kernel pooling) described in our paper:
To evaluate on datasets (ShanghaiTechA, ShanghaiTechB) or backbone models (Base Net, Wide-Net, Deep-Net) described in our paper:
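As a rough illustration of how the two pooling variants relate, here is a minimal NumPy sketch (not the repository's implementation): multi-kernel pooling averages max-pool outputs over several kernel sizes, while stacked pooling chains small max-pools and averages the intermediate outputs, so each stage covers a growing receptive field. The kernel sizes (3, 5, 7), stride 1, and "same"-style padding are illustrative assumptions, not the exact configuration used in the paper's code.

```python
import numpy as np

def max_pool(x, k):
    """Naive stride-1 max pooling over a 2-D array, padded with -inf
    so the output has the same shape as the input."""
    pad = k // 2
    xp = np.pad(x, pad, mode="constant", constant_values=-np.inf)
    out = np.empty_like(x, dtype=float)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + k, j:j + k].max()
    return out

def multi_kernel_pool(x, kernels=(3, 5, 7)):
    """Multi-kernel pooling sketch: average max-pool outputs over
    several kernel sizes (kernel choices here are assumptions)."""
    return np.mean([max_pool(x, k) for k in kernels], axis=0)

def stacked_pool(x, depth=3, k=3):
    """Stacked pooling sketch: chain small max-pools and average the
    stage outputs. Two stride-1 3x3 max-pools equal one 5x5, three
    equal one 7x7, so this matches multi_kernel_pool(x, (3, 5, 7))
    while reusing the cheaper small-kernel pooling."""
    outs, y = [], x
    for _ in range(depth):
        y = max_pool(y, k)
        outs.append(y)
    return np.mean(outs, axis=0)
```

The equivalence between chained 3x3 max-pools and single larger kernels is what makes the stacked form an efficient way to pool over multiple scales at once.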