This package contains deep learning models and related scripts to run RoseTTAFold.
This repository is the official implementation of RoseTTAFold: Accurate prediction of protein structures and interactions using a 3-track network.
- Clone the package
```
git clone https://github.com/RosettaCommons/RoseTTAFold.git
cd RoseTTAFold
```
- Create conda environments using the RoseTTAFold-linux.yml and folding-linux.yml files. The latter is required only for the pyrosetta version (run_pyrosetta_ver.sh).
```
# create conda environment for RoseTTAFold
#   If your NVIDIA driver is compatible with cuda11
conda env create -f RoseTTAFold-linux.yml
#   If not (but compatible with cuda10)
conda env create -f RoseTTAFold-linux-cu101.yml

# create conda environment for pyRosetta folding & running DeepAccNet
conda env create -f folding-linux.yml
```
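The choice between the two environment files can be scripted. A minimal sketch, assuming only the two .yml filenames listed above; the `choose_env_yml` helper and the `nvidia-smi` header parsing are illustrative, not part of the repository:

```shell
# Hypothetical helper: pick the conda environment file from the CUDA
# major version your NVIDIA driver supports.
choose_env_yml() {
  if [ "$1" -ge 11 ]; then
    echo "RoseTTAFold-linux.yml"        # driver compatible with cuda11
  else
    echo "RoseTTAFold-linux-cu101.yml"  # cuda10 fallback
  fi
}

# Example use (requires an NVIDIA driver): read the CUDA version from
# the nvidia-smi header line, then create the matching environment.
#   cuda_major=$(nvidia-smi | sed -n 's/.*CUDA Version: \([0-9]*\).*/\1/p')
#   conda env create -f "$(choose_env_yml "$cuda_major")"
```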
- Download network weights (under Rosetta-DL Software license — please see below)
While the code is licensed under the MIT License, the trained weights and data for RoseTTAFold are made available for non-commercial use only under the terms of the Rosetta-DL Software license. You can find details at https://files.ipd.uw.edu/pub/RoseTTAFold/Rosetta-DL_LICENSE.txt
[Update Nov/02/2021] The weights archive now includes the weights (RF2t.pt) for the RoseTTAFold 2-track model used for yeast PPI screening. If you want to use it, please re-download the weights. The original RoseTTAFold weights are unchanged.
```
wget https://files.ipd.uw.edu/pub/RoseTTAFold/weights.tar.gz
tar xfz weights.tar.gz
```
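A quick sanity check after unpacking can catch a truncated download. This sketch assumes the archive unpacks into a `weights/` directory containing the RF2t.pt file mentioned in the update note above; adjust the path if your archive unpacks elsewhere:

```shell
# Sketch: verify the unpacked weights directory looks sane.
# Assumes weights/ containing RF2t.pt (per the Nov/02/2021 note above).
check_weights() {
  dir=${1:-weights}
  if [ ! -f "$dir/RF2t.pt" ]; then
    echo "missing $dir/RF2t.pt - re-download weights.tar.gz" >&2
    return 1
  fi
  echo "found $(ls "$dir" | wc -l) weight file(s) in $dir"
}
```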
- Download and install third-party software.
- Download sequence and structure databases
```
# uniref30 [46G]
wget http://wwwuser.gwdg.de/~compbiol/uniclust/2020_06/UniRef30_2020_06_hhsuite.tar.gz
mkdir -p UniRef30_2020_06
tar xfz UniRef30_2020_06_hhsuite.tar.gz -C ./UniRef30_2020_06

# BFD [272G]
wget https://bfd.mmseqs.com/bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt.tar.gz
mkdir -p bfd
tar xfz bfd_metaclust_clu_complete_id30_c90_final_seq.sorted_opt.tar.gz -C ./bfd

# structure templates (including *_a3m.ffdata, *_a3m.ffindex) [over 100G]
wget https://files.ipd.uw.edu/pub/RoseTTAFold/pdb100_2021Mar03.tar.gz
tar xfz pdb100_2021Mar03.tar.gz
# for CASP14 benchmarks, we used this one: https://files.ipd.uw.edu/pub/RoseTTAFold/pdb100_2020Mar11.tar.gz
```
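Together these archives take several hundred gigabytes once unpacked, so a preflight disk check can save a failed overnight download. A sketch; the 500 GB threshold is an assumption derived from the sizes quoted above (46G + 272G + over 100G, plus room to unpack):

```shell
# Sketch: check free space on the target filesystem before downloading.
# ~500G is an assumed safety margin, not an official requirement.
have_enough_space() {
  required_gb=${2:-500}
  # df -Pk reports 1K blocks; column 4 is "Available"
  avail_gb=$(df -Pk "$1" | awk 'NR==2 {print int($4/1024/1024)}')
  [ "$avail_gb" -ge "$required_gb" ]
}

if ! have_enough_space . ; then
  echo "warning: less than ~500G free here; the databases may not fit" >&2
fi
```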
- Obtain a PyRosetta license and install the package in the newly created folding conda environment (link).
```
# For monomer structure prediction
cd example
../run_[pyrosetta, e2e]_ver.sh input.fa .

# For complex modeling
# please see the README file under example/complex_modeling for details.
python network/predict_complex.py -i paired.a3m -o complex -Ls 218 310

# For PPI screening using the faster 2-track version
# (example input and output are at example/complex_2track)
python network_2track/predict_msa.py -msa [paired MSA file in a3m format] -npz [output npz file name] -L1 [length of first chain]
# e.g. python network_2track/predict_msa.py -msa input.a3m -npz complex.npz -L1 218
```
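For screening many pairs with the 2-track model, the single predict_msa.py call above extends naturally to a loop. A sketch under assumed conventions: the `msas/<pair>.a3m` layout and the `<pair> <L1>` manifest file are hypothetical; only predict_msa.py and its flags come from the usage above.

```shell
# Sketch: screen a batch of paired MSAs with the 2-track model.
# Hypothetical layout: msas/<pair>.a3m per pair, plus a manifest listing
# "<pair> <L1>" per line. Set DRY_RUN=echo to print commands instead.
screen_pairs() {
  manifest=$1
  run=${DRY_RUN:-}
  while read -r pair L1; do
    $run python network_2track/predict_msa.py \
        -msa "msas/${pair}.a3m" -npz "out/${pair}.npz" -L1 "$L1"
  done < "$manifest"
}
```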
For the pyrosetta version, users will get five final models with the estimated CA RMS error in the B-factor column (model/model_[1-5].crderr.pdb).
For the end-to-end version, there will be a single PDB output with the estimated residue-wise CA-lDDT in the B-factor column (t000_.e2e.pdb).
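Since both versions store their per-residue confidence in the B-factor column, it can be pulled out with standard text tools. A sketch; the column positions follow the fixed-width PDB format (residue number in columns 23-26, B-factor in columns 61-66), and the filename is just an example:

```shell
# Sketch: print "<residue number> <B-factor>" for each CA atom record,
# i.e. the per-residue confidence written by the pipeline.
extract_ca_plddt() {
  awk '$1 == "ATOM" && substr($0, 13, 4) ~ /CA/ {
         print substr($0, 23, 4) + 0, substr($0, 61, 6) + 0
       }' "$1"
}

# Example: extract_ca_plddt t000_.e2e.pdb
```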
Segmentation fault while running hhblits/hhsearch
For ease of installation, we used a statically compiled version of hhsuite (installed through conda). We are not yet sure what exactly causes the segmentation fault in some cases, but we found that it can often be resolved by compiling hhsuite from source and using that build instead of the conda version. For hhsuite installation instructions, please see here.
Submitting jobs to computing nodes
The modeling pipeline provided here (run_pyrosetta_ver.sh/run_e2e_ver.sh) is intended as a guideline to show how RoseTTAFold works. For more efficient use of computing resources, you might want to modify the provided bash script to submit separate jobs with proper dependencies for each step (more CPUs/memory for hhblits/hhsearch, GPUs only for running the networks, etc.).
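As one example of such a split, the stages can be chained as dependent cluster jobs. SLURM is shown here purely as an illustration; the partitions, resource numbers, and the make_msa.sh/run_network.sh stage scripts are placeholders, not part of this repository — you would carve them out of run_e2e_ver.sh yourself.

```shell
# Sketch: run the CPU-heavy search steps and the GPU network step as
# separate SLURM jobs, the latter gated on the former succeeding.
# All script names, partitions, and resources are placeholders.
submit_pipeline() {
  # 1) MSA + template search on a CPU node (hhblits/hhsearch want RAM/cores)
  msa_job=$(sbatch --parsable -p cpu -c 8 --mem=64G make_msa.sh "$1")
  # 2) network inference on a GPU node, only after step 1 exits cleanly
  sbatch -p gpu --gres=gpu:1 --dependency="afterok:${msa_job}" \
      run_network.sh "$1"
}

# Example: submit_pipeline input.fa
```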
- Robetta server (RoseTTAFold option)
- RoseTTAFold models for CASP14 targets [input MSA and hhsearch files are included]
Credit to performer-pytorch and SE(3)-Transformer codes
The code in network/performer_pytorch.py is strongly based on this repo, a PyTorch implementation of the Performer architecture.
The code in network/equivariant_attention is from the original SE(3)-Transformer repo, which accompanies the paper 'SE(3)-Transformers: 3D Roto-Translation Equivariant Attention Networks' by Fuchs et al.
M. Baek, et al., Accurate prediction of protein structures and interactions using a three-track neural network, Science (2021). link
I.R. Humphreys, J. Pei, M. Baek, A. Krishnakumar, et al., Structures of core eukaryotic protein complexes, bioRxiv (2021). link