fastMRI
Accelerating Magnetic Resonance Imaging (MRI) by acquiring fewer measurements has the potential to reduce medical costs, minimize stress to patients and make MR imaging possible in applications where it is currently prohibitively slow or expensive.
fastMRI is a collaborative research project from Facebook AI Research (FAIR) and NYU Langone Health to investigate the use of AI to make MRI scans faster. NYU Langone Health has released fully anonymized knee and brain MRI datasets that can be downloaded from the fastMRI dataset page. Publications associated with the fastMRI project can be found at the end of this README.
This repository contains convenient PyTorch data loaders, subsampling functions, evaluation metrics, and reference implementations of simple baseline methods. It also contains implementations for methods in some of the publications of the fastMRI project.
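As an illustration of the kind of evaluation metric used throughout the fastMRI work, here is a minimal NumPy sketch of normalized mean squared error (NMSE); this is an illustrative version, not necessarily the repository's exact implementation:

```python
import numpy as np

def nmse(gt, pred):
    """Normalized mean squared error: ||gt - pred||^2 / ||gt||^2."""
    gt = np.asarray(gt, dtype=np.float64)
    pred = np.asarray(pred, dtype=np.float64)
    return float(np.linalg.norm(gt - pred) ** 2 / np.linalg.norm(gt) ** 2)

# A perfect reconstruction scores 0; an all-zero reconstruction scores 1.
```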
Documentation
Documentation for the fastMRI dataset and baseline reconstruction performance can be found in our paper on arXiv. The paper is updated on an ongoing basis for dataset additions and new baselines.
For code documentation, most functions and classes have accompanying docstrings that you can access via the help function in IPython. For example:
from fastmri.data import SliceDataset
help(SliceDataset)
Dependencies and Installation
We have tested this code using:
- Ubuntu 18.04
- Python 3.8
- CUDA 10.1
- CUDNN 7.6.5
First install PyTorch according to the directions at the PyTorch Website for your operating system and CUDA setup. Then, run
pip install fastmri
pip will handle all package dependencies. After this you should be able to run most of the code in the repository.
Installing Directly from Source
If you want to install directly from the GitHub source, clone the repository, navigate to the fastmri root directory, and run
pip install -e .
Package Structure & Usage
The repository is centered around the fastmri module. The following breaks down the basic structure:
- fastmri: Contains a number of basic tools for complex number math, coil combinations, etc.
- fastmri.data: Contains data utility functions from the original data folder that can be used to create sampling masks and submission files.
- fastmri.models: Contains reconstruction models, such as the U-Net and VarNet.
- fastmri.pl_modules: PyTorch Lightning modules for data loading, training, and logging.
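To illustrate the kind of sampling mask that the utilities in fastmri.data produce, here is a toy NumPy sketch of a random Cartesian mask with a fully sampled low-frequency center. This is an illustrative analogue written for this README, not the library's own mask function:

```python
import numpy as np

def random_cartesian_mask(num_cols, center_fraction=0.08, acceleration=4, seed=0):
    """Toy 1D Cartesian mask: keep a fully sampled center band plus random
    columns so roughly 1/acceleration of all columns are retained."""
    rng = np.random.default_rng(seed)
    num_center = int(round(num_cols * center_fraction))
    # Choose the probability for the remaining columns so the overall
    # sampling rate works out to about 1/acceleration.
    prob = (num_cols / acceleration - num_center) / (num_cols - num_center)
    mask = rng.random(num_cols) < prob
    pad = (num_cols - num_center) // 2
    mask[pad:pad + num_center] = True  # fully sampled low frequencies
    return mask

mask = random_cartesian_mask(368)
```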
Examples and Reproducibility
The fastmri_examples and banding_removal folders include code for reproducibility; the baseline models there were used in the arXiv paper. A brief summary of implementations based on papers, with links to code, follows. For completeness we also mention work on active acquisition, which is hosted in another repository:
- Baseline Models
- Sampling, Reconstruction and Artifact Correction
- Active Acquisition (external repository)
Testing
Run pytest tests. By default, integration tests that use the fastMRI data are skipped. If you would like to run these tests, set SKIP_INTEGRATIONS to False in the conftest.
Training a model
The data README has a bare-bones example of how to load data and incorporate data transforms. This Jupyter notebook contains a simple tutorial explaining how to get started working with the data.
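The core data-loading pattern, indexing individual 2D slices across many volumes and applying a transform per sample, can be sketched in plain Python. DummySliceDataset below is a hypothetical stand-in for illustration, not the fastmri.data.SliceDataset implementation:

```python
class DummySliceDataset:
    """Toy stand-in for a slice-level dataset: each item is one 2D slice
    drawn from a collection of volumes, with an optional transform."""

    def __init__(self, volumes, transform=None):
        # volumes: list of (volume_id, num_slices) pairs
        self.transform = transform
        self.index = [(vid, s) for vid, n in volumes for s in range(n)]

    def __len__(self):
        return len(self.index)

    def __getitem__(self, i):
        vid, slice_idx = self.index[i]
        sample = {"volume": vid, "slice": slice_idx}
        return self.transform(sample) if self.transform else sample

ds = DummySliceDataset([("file1", 3), ("file2", 2)])
```

The real dataset builds a similar slice-level index over HDF5 files, so one epoch iterates over slices rather than whole volumes.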
Please look at this U-Net demo script for an example of how to train a model using the PyTorch Lightning framework.
Submitting to the Leaderboard
Run your model on the provided test data and create a zip file containing your predictions. fastmri has a save_reconstructions function that saves the data in the correct format.
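Once save_reconstructions has written one output file per volume into a directory, bundling them for upload can be done with the standard library. A sketch, assuming the directory and archive names are placeholders you would choose yourself:

```python
import pathlib
import zipfile

def zip_reconstructions(recon_dir, zip_path):
    """Add every file in recon_dir to a single zip archive for upload."""
    recon_dir = pathlib.Path(recon_dir)
    with zipfile.ZipFile(zip_path, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(recon_dir.iterdir()):
            if f.is_file():
                zf.write(f, arcname=f.name)
```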
Upload the zip file to any publicly accessible cloud storage (e.g., Amazon S3, Dropbox, etc.). Submit a link to the zip file on the challenge website. You will need to create an account before submitting.
License
fastMRI is MIT licensed, as found in the LICENSE file.
Cite
If you use the fastMRI data or code in your project, please cite the arXiv paper:
@article{zbontar2018fastMRI,
title={{fastMRI}: An Open Dataset and Benchmarks for Accelerated {MRI}},
author={Jure Zbontar and Florian Knoll and Anuroop Sriram and Tullie Murrell and Zhengnan Huang and Matthew J. Muckley and Aaron Defazio and Ruben Stern and Patricia Johnson and Mary Bruno and Marc Parente and Krzysztof J. Geras and Joe Katsnelson and Hersh Chandarana and Zizhao Zhang and Michal Drozdzal and Adriana Romero and Michael Rabbat and Pascal Vincent and Nafissa Yakubova and James Pinkerton and Duo Wang and Erich Owens and C. Lawrence Zitnick and Michael P. Recht and Daniel K. Sodickson and Yvonne W. Lui},
journal = {ArXiv e-prints},
archivePrefix = "arXiv",
eprint = {1811.08839},
year={2018}
}