Pruning Self-attentions into Convolutional Layers in Single Path

This is the official repository for our paper: Pruning Self-attentions into Convolutional Layers in Single Path by Haoyu He, Jing Liu, Zizheng Pan, Jianfei Cai, Jing Zhang, Dacheng Tao and Bohan Zhuang.


Introduction:

To reduce the massive computational cost of ViTs and introduce convolutional inductive bias, SPViT turns pre-trained ViT models into accurate and compact hybrid models by pruning self-attention layers into convolutional layers. Thanks to the proposed weight-sharing scheme between self-attention and convolutional layers, which casts the search problem as finding which subset of parameters to use, SPViT significantly reduces the search cost.
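To give intuition for the weight-sharing idea, below is a minimal, self-contained PyTorch sketch (all names, such as SinglePathAttn, the per-head gate, and the chosen shift offsets, are illustrative assumptions, not the repository's actual API): each head either computes regular self-attention or applies a fixed spatial shift to the shared value features, which corresponds to a single tap of a convolution, so the architecture search reduces to learning one gate per head.

import torch
import torch.nn as nn

class SinglePathAttn(nn.Module):
    # Illustrative sketch, not the repository's API: each head either runs
    # ordinary self-attention or shifts its values by a fixed offset on the
    # 2D token grid (one tap of a convolution). Both paths share the same
    # value/output projections, so the search is just one gate per head.
    def __init__(self, dim, num_heads=4):
        super().__init__()
        assert dim % num_heads == 0
        self.num_heads, self.head_dim = num_heads, dim // num_heads
        self.scale = self.head_dim ** -0.5
        self.qkv = nn.Linear(dim, dim * 3)   # shared by both paths
        self.proj = nn.Linear(dim, dim)      # shared by both paths
        # One gate logit per head; sigmoid(gate) ~ keep self-attention,
        # 1 - sigmoid(gate) ~ use the fixed conv-like pattern instead.
        self.gate = nn.Parameter(torch.zeros(num_heads))
        # Fixed relative offsets, one per head (a full 3x3 conv kernel
        # would use nine such taps).
        self.shifts = [(-1, 0), (1, 0), (0, -1), (0, 1)][:num_heads]

    def forward(self, x, hw):
        B, N, C = x.shape
        H, W = hw
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(B, N, self.num_heads, self.head_dim).transpose(1, 2)

        # Path 1: ordinary multi-head self-attention.
        attn_out = ((q @ k.transpose(-2, -1)) * self.scale).softmax(-1) @ v

        # Path 2: conv-like path, realized as fixed spatial shifts of the
        # (shared) value features on the H x W token grid.
        vg = v.transpose(1, 2).reshape(B, H, W, self.num_heads, self.head_dim)
        conv_out = torch.stack(
            [torch.roll(vg[..., h, :], shifts=s, dims=(1, 2))
             for h, s in enumerate(self.shifts)], dim=1)
        conv_out = conv_out.reshape(B, self.num_heads, N, self.head_dim)

        # Soft gate for illustration; the actual method binarizes the gate
        # so every head commits to exactly one path.
        g = torch.sigmoid(self.gate).view(1, -1, 1, 1)
        out = g * attn_out + (1 - g) * conv_out
        return self.proj(out.transpose(1, 2).reshape(B, N, C))

x = torch.randn(2, 14 * 14, 64)
print(SinglePathAttn(dim=64)(x, hw=(14, 14)).shape)  # torch.Size([2, 196, 64])

The sketch relaxes the gate to a soft sigmoid for readability; in the paper's single-path formulation the gate is made effectively binary during search, so each head commits to one path and a layer whose heads all pick the fixed patterns collapses into a plain convolution.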

Getting started:

In this repository, we provide code for pruning two representative ViT models: DeiT and Swin Transformers.


If you find our paper useful, please consider citing:

@article{he2021Pruning,
  title={Pruning Self-attentions into Convolutional Layers in Single Path},
  author={He, Haoyu and Liu, Jing and Pan, Zizheng and Cai, Jianfei and Zhang, Jing and Tao, Dacheng and Zhuang, Bohan},
  journal={arXiv preprint arXiv:2111.11802},
  year={2021}
}
