VT-UNet

This repo contains the supported PyTorch code and configuration files to reproduce the 3D medical image segmentation results of VT-UNet.

VT-UNet Architecture

Environment

Prepare an environment with Python 3.8, then run pip install -r requirements.txt to install the dependencies.
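
After installing, a quick sanity check helps confirm that PyTorch can see a GPU before launching 3D training. This is a minimal sketch, not part of the repository; it only assumes that requirements.txt installs PyTorch:

    # check_env.py -- minimal environment sanity check (illustrative, not part of the repo)
    import sys

    import torch

    print("Python :", sys.version.split()[0])        # expected: 3.8.x
    print("PyTorch:", torch.__version__)
    print("CUDA   :", torch.cuda.is_available())     # volumetric training is impractical on CPU
    if torch.cuda.is_available():
        print("GPU    :", torch.cuda.get_device_name(0))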

Data Preparation

  • For our experiments we used four datasets:

  • File structure

     BRATS2021
      |--- Data
      |    |--- RSNA_ASNR_MICCAI_BraTS2021_TrainingData
      |         |--- BraTS2021_00000
      |              |--- BraTS2021_00000_flair...
      |
     VT-UNet
      |--- train.py
      |--- test.py
      |--- pretrained_ckpt
      |--- saved_model
      ...
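
Each case folder in the tree above holds the four MRI modalities plus the reference segmentation as NIfTI volumes. The sketch below shows one way to load a single case for inspection; the .nii.gz extensions, the _flair/_t1/_t1ce/_t2/_seg suffixes, and the use of nibabel are assumptions based on the standard BraTS 2021 release, not something this repo prescribes:

    # load_case.py -- illustrative only; file names follow the usual BraTS 2021
    # naming convention and are assumptions, not taken from this repo.
    from pathlib import Path
    from typing import Tuple

    import nibabel as nib     # any NIfTI reader (e.g. SimpleITK) works equally well
    import numpy as np

    def load_case(case_dir: Path) -> Tuple[np.ndarray, np.ndarray]:
        """Stack the four modalities of one case into a (4, H, W, D) array."""
        case_id = case_dir.name                          # e.g. BraTS2021_00000
        modalities = ["flair", "t1", "t1ce", "t2"]
        image = np.stack(
            [nib.load(case_dir / f"{case_id}_{m}.nii.gz").get_fdata() for m in modalities],
            axis=0,
        )
        seg = nib.load(case_dir / f"{case_id}_seg.nii.gz").get_fdata()
        return image, seg

    if __name__ == "__main__":
        root = Path("BRATS2021/Data/RSNA_ASNR_MICCAI_BraTS2021_TrainingData")
        image, seg = load_case(root / "BraTS2021_00000")
        print(image.shape, seg.shape)                    # e.g. (4, 240, 240, 155) (240, 240, 155)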
    

Pre-Trained Weights

Pre-Trained Base Model For BraTS 2021
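
Place the downloaded checkpoint under the pretrained_ckpt folder shown in the file structure above. The file name and internal layout used below are assumptions for illustration; the sketch only shows how to peek inside a .pth file before wiring it into train.py/test.py:

    # inspect_ckpt.py -- illustrative; the checkpoint file name is a placeholder.
    import torch

    ckpt_path = "pretrained_ckpt/vt_unet_base.pth"       # hypothetical name
    ckpt = torch.load(ckpt_path, map_location="cpu")

    # Checkpoints are commonly either a bare state_dict or a dict that wraps one
    # (e.g. under a 'state_dict' key) -- check the layout before load_state_dict().
    state_dict = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
    for name, value in list(state_dict.items())[:10]:
        shape = tuple(value.shape) if hasattr(value, "shape") else type(value).__name__
        print(f"{name:60s} {shape}")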

Train/Test

  • Train: Run the training script on the BraTS 2021 training dataset with the base-model configuration.

    python train.py --cfg configs/vt_unet_base.yaml --num_classes 3 --epochs 350

  • Test: Run the test script on the BraTS 2021 training dataset.

    python test.py --cfg configs/vt_unet_base.yaml --num_classes 3
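
Both commands pass --num_classes 3, which lines up with the three tumour classes/regions evaluated on BraTS (the exact channel-to-region mapping is an assumption here). Segmentation quality on BraTS is conventionally reported as a per-class Dice score; the snippet below is a standalone sketch of that metric, not the metric code used inside test.py:

    # dice_example.py -- standalone illustration of a per-class Dice score.
    import torch

    def dice_per_class(pred: torch.Tensor, target: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
        """pred/target: binary tensors of shape (C, H, W, D); returns one Dice value per class."""
        dims = (1, 2, 3)
        intersection = (pred * target).sum(dims)
        denom = pred.sum(dims) + target.sum(dims)
        return (2.0 * intersection + eps) / (denom + eps)

    # Toy example with 3 classes on a small random volume:
    pred = torch.randint(0, 2, (3, 32, 32, 32)).float()
    target = torch.randint(0, 2, (3, 32, 32, 32)).float()
    print(dice_per_class(pred, target))                  # tensor with 3 Dice scores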

Acknowledgements

This repository makes liberal use of code from open_brats2020, Swin Transformer, Video Swin Transformer, and Swin-Unet.

References

Citing VT-UNet

    @misc{peiris2021volumetric,
      title={A Volumetric Transformer for Accurate 3D Tumor Segmentation}, 
      author={Himashi Peiris and Munawar Hayat and Zhaolin Chen and Gary Egan and Mehrtash Harandi},
      year={2021},
      eprint={2111.13300},
      archivePrefix={arXiv},
      primaryClass={eess.IV}
    }
