TVT
Code of TVT: Transferable Vision Transformer for Unsupervised Domain Adaptation

Pretrained ViT:
- Download ViT-B_16.npz and put it at checkpoint/ViT-B_16.npz (a download sketch follows).
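If you prefer to script the download, here is a minimal sketch. The URL is an assumption (the public ImageNet-21k ViT-B_16 checkpoint released by Google Research), not something this repo documents.

# fetch_vit.py (hypothetical helper; the URL below is assumed)
import os
import urllib.request

URL = "https://storage.googleapis.com/vit_models/imagenet21k/ViT-B_16.npz"
DEST = os.path.join("checkpoint", "ViT-B_16.npz")

os.makedirs("checkpoint", exist_ok=True)
if not os.path.exists(DEST):
    urllib.request.urlretrieve(URL, DEST)  # large download (~400 MB)
print("checkpoint ready at", DEST)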
Datasets:
- Download data and replace the current data/ directory.
- Download images from Office-31, Office-Home, and VisDA-2017 and put them under data/. For example, images of Office-31 should be located at data/office/domain_adaptation_images/ (see the sanity-check sketch after this list).
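Before training, it can help to verify that every path in a list file points at an image on disk. Below is a minimal sketch, assuming the CDAN-style list format of one "<image_path> <class_id>" pair per line; the check_list helper and script name are hypothetical, not part of this repo.

# check_lists.py (hypothetical helper, not part of the repo)
import os
import sys

def check_list(list_path):
    # Assumed format: "<image_path> <class_id>" per non-empty line.
    missing = 0
    with open(list_path) as f:
        for line in f:
            line = line.strip()
            if not line:
                continue
            path, _label = line.rsplit(maxsplit=1)
            if not os.path.exists(path):  # paths assumed relative to the repo root
                missing += 1
    print(f"{list_path}: {missing} missing image(s)")

if __name__ == "__main__":
    check_list(sys.argv[1])  # e.g. python3 check_lists.py data/office/amazon_list.txt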
Training:
All commands can be found in script.txt. An example:
python3 main.py --train_batch_size 64 --dataset office --name wa \
--source_list data/office/webcam_list.txt --target_list data/office/amazon_list.txt \
--test_list data/office/amazon_list.txt --num_classes 31 --model_type ViT-B_16 \
--pretrained_dir checkpoint/ViT-B_16.npz --num_steps 5000 --img_size 256 \
--beta 0.1 --gamma 0.01 --use_im --theta 0.1
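To run all six Office-31 transfer tasks without retyping the command, a sketch like the one below prints one command per source/target pair. It assumes the data/office/{domain}_list.txt naming seen above and simply reuses the example's hyperparameters; the authoritative per-task commands live in script.txt.

# office31_cmds.py (hypothetical helper; hyperparameters copied from the example above)
from itertools import permutations

DOMAINS = {"a": "amazon", "d": "dslr", "w": "webcam"}

for (s, src), (t, tgt) in permutations(DOMAINS.items(), 2):
    print(
        f"python3 main.py --train_batch_size 64 --dataset office --name {s}{t} "
        f"--source_list data/office/{src}_list.txt "
        f"--target_list data/office/{tgt}_list.txt "
        f"--test_list data/office/{tgt}_list.txt "
        f"--num_classes 31 --model_type ViT-B_16 "
        f"--pretrained_dir checkpoint/ViT-B_16.npz --num_steps 5000 "
        f"--img_size 256 --beta 0.1 --gamma 0.01 --use_im --theta 0.1"
    )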
Citation:
@article{yang2021tvt,
  title={TVT: Transferable Vision Transformer for Unsupervised Domain Adaptation},
  author={Yang, Jinyu and Liu, Jingjing and Xu, Ning and Huang, Junzhou},
  journal={arXiv preprint arXiv:2108.05988},
  year={2021}
}
Acknowledgement:
Our code is largely borrowed from CDAN and ViT-pytorch.