This repository contains code to reproduce results from the paper:

Boosting Adversarial Attacks with Enhanced Momentum (BMVC 2021)

Xiaosen Wang, Jiadong Lin, Han Hu, Jingdong Wang, Kun He


Requirements

  • Python >= 3.6.5
  • Tensorflow >= 1.12.0
  • Numpy >= 1.15.4
  • opencv >= 3.4.2
  • scipy >= 1.1.0
  • pandas >= 1.0.1
  • imageio >= 2.6.1

Quick Start

Preparing data and models

Download the data and the pretrained models, and place them in dev_data/ and models/, respectively.
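Assuming the default paths used by the scripts, the expected layout looks roughly like this (outputs/ is created when the attack runs):

```
.
├── dev_data/    # benign images and their labels
├── models/      # pretrained checkpoints (e.g. inception_v3)
└── outputs/     # generated adversarial examples
```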


All of the provided code generates adversarial examples on the inception_v3 model. To attack other models, replace the model in the graph and batch_grad functions and load those models in the main function.

Running the attack

Taking the enhanced-momentum attack as an example, you can run it as follows:

CUDA_VISIBLE_DEVICES=gpuid python epi_fgsm.py 
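The core enhanced-momentum update that the attack script implements can be sketched in NumPy. This is a simplified illustration only: grad_fn, the sampling radius eta, the sample count, and the exact normalization are assumptions here, whereas the real script computes gradients of the classification loss through the target network (e.g. inception_v3):

```python
import numpy as np

def emi_fgsm_step(x, grad_fn, g_prev, alpha=1.6 / 255, mu=1.0,
                  eta=7.0, num_samples=11):
    """One enhanced-momentum update step (illustrative sketch).

    x        : current adversarial image (values in [0, 1])
    grad_fn  : callable returning the loss gradient at a point (placeholder)
    g_prev   : accumulated gradient from the previous iteration
    """
    # Sample points along the direction of the previous accumulated gradient
    factors = np.linspace(-eta, eta, num_samples)
    g_bar = np.mean([grad_fn(x + c * alpha * g_prev) for c in factors], axis=0)
    # Momentum accumulation with L1-style normalization of the averaged gradient
    g = mu * g_prev + g_bar / (np.mean(np.abs(g_bar)) + 1e-12)
    # FGSM-style signed step, clipped back to the valid pixel range
    x_adv = np.clip(x + alpha * np.sign(g), 0.0, 1.0)
    return x_adv, g
```

In the real attack this step is iterated, and the perturbation is additionally projected into an epsilon-ball around the benign image.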

Evaluating the attack

The generated adversarial examples will be stored in the directory ./outputs. Then run simple_eval.py to evaluate the attack success rate on each model used in the paper:

CUDA_VISIBLE_DEVICES=gpuid python simple_eval.py
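The metric reported is the attack success rate: the fraction of adversarial examples each model misclassifies. Schematically (the label arrays are hypothetical inputs; simple_eval.py derives them by running the models on the saved images in ./outputs):

```python
import numpy as np

def attack_success_rate(pred_labels, true_labels):
    """Fraction of adversarial examples the model misclassifies.

    pred_labels: model predictions on the adversarial images
    true_labels: ground-truth labels of the corresponding benign images
    """
    pred_labels = np.asarray(pred_labels)
    true_labels = np.asarray(true_labels)
    return float(np.mean(pred_labels != true_labels))
```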


The code is based on SI-NI-FGSM.


Questions and suggestions can be sent to [email protected].

