# Breaking the Chain of Gradient Leakage in Vision Transformers
[arXiv] | [Codes]

Yahui Liu<sup>1</sup>, Bin Ren<sup>1</sup>, Yue Song<sup>1</sup>, Wei Bi<sup>2</sup>, Nicu Sebe<sup>1</sup> and Wei Wang<sup>1</sup>

<sup>1</sup>University of Trento, Italy, <sup>2</sup>Tencent AI Lab, China
*Figure: visual comparisons on image recovery with gradient attacks.*
## Datasets
| Dataset | Download Link |
| --- | --- |
| ImageNet | train, val |
- Download the datasets using the code in the `scripts` folder, then arrange them in the following layout:
```
dataset_name
|__train
|  |__category1
|  |  |__xxx.jpg
|  |  |__...
|  |__category2
|  |  |__xxx.jpg
|  |  |__...
|  |__...
|__val
   |__category1
   |  |__xxx.jpg
   |  |__...
   |__category2
   |  |__xxx.jpg
   |  |__...
   |__...
```
## Training

After preparing the datasets, training can be started on 8 NVIDIA V100 GPUs:
```shell
$ sh train.sh
```
## Evaluation
- Accuracy on Masked Jigsaw Puzzle:

```shell
$ python3 eval.py
```
- Consistency on Masked Jigsaw Puzzle:

```shell
$ python3 consistency.py
```
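The Masked Jigsaw Puzzle (MJP) evaluated above permutes a masked subset of patch positions in the ViT token sequence. A minimal sketch of such a shuffle (the function name `mjp_shuffle`, the `mask_ratio` parameter, and the `(B, N, D)` token layout are illustrative assumptions, not the repo's API):

```python
# Hedged sketch of a masked jigsaw-puzzle shuffle on ViT patch tokens:
# a random subset of patch positions (fraction mask_ratio) is permuted
# among themselves; the remaining positions are left in place.
import torch


def mjp_shuffle(patches: torch.Tensor, mask_ratio: float = 0.5, generator=None):
    """patches: (B, N, D) patch tokens; returns shuffled tokens and the permutation."""
    B, N, D = patches.shape
    n_shuffle = int(N * mask_ratio)
    perm = torch.arange(N)
    # pick which positions to scramble, then permute them among themselves
    idx = torch.randperm(N, generator=generator)[:n_shuffle]
    perm[idx] = idx[torch.randperm(n_shuffle, generator=generator)]
    return patches[:, perm, :], perm


tokens = torch.randn(2, 196, 768)  # e.g. ViT-B/16 on 224x224 inputs
shuffled, perm = mjp_shuffle(tokens, mask_ratio=0.25)
print(shuffled.shape)              # torch.Size([2, 196, 768])
```

Because only a masked subset of positions moves, the consistency evaluation can measure how stable predictions remain across different random permutations.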
- Evaluations on image reconstructions
See the code for MSE, PSNR/SSIM, FFT2D, and LPIPS.
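The MSE and PSNR metrics listed above follow their standard definitions; a minimal reference sketch (assuming images normalized to [0, 1], not the repo's implementation):

```python
# Standard-definition sketch of MSE and PSNR between a recovered image
# and its ground truth, both assumed to lie in [0, 1].
import torch


def mse(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    return ((x - y) ** 2).mean()


def psnr(x: torch.Tensor, y: torch.Tensor, max_val: float = 1.0) -> torch.Tensor:
    # PSNR = 10 * log10(MAX^2 / MSE), in decibels
    return 10.0 * torch.log10(max_val ** 2 / mse(x, y))


gt = torch.rand(3, 224, 224)
rec = (gt + 0.01 * torch.randn_like(gt)).clamp(0, 1)  # lightly corrupted recovery
print(float(psnr(rec, gt)))  # high PSNR, since noise std is only 0.01
```

Lower MSE (and correspondingly higher PSNR) indicates that a gradient attack recovered the image more faithfully, which is why the paper reports these alongside SSIM and LPIPS.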
## Gradient Attack
We refer to the public repo: JonasGeiping/breaching.
## Acknowledgement

This repo is built on several existing projects.
## Citation

If you use our code, please cite our paper:
```bibtex
@article{liu2022breaking,
  author  = {Liu, Yahui and Ren, Bin and Song, Yue and Bi, Wei and Sebe, Nicu and Wang, Wei},
  title   = {Breaking the Chain of Gradient Leakage in Vision Transformers},
  journal = {arXiv:2205.12551},
  year    = {2022}
}
```
If you have any questions, please do not hesitate to contact me (yahui.liu AT unitn.it).