An Open-Source Framework for Parameter-Efficient Tuning.
OpenDelta is a toolkit for parameter-efficient tuning methods (which we dub delta tuning): users can flexibly assign (or add) a small amount of parameters to update while keeping most parameters frozen. With OpenDelta, users can easily implement prefix-tuning, adapters, LoRA, or any other type of delta tuning with their preferred PTMs.
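For instance, the following is a minimal sketch of delta tuning with LoRA: attach delta modules to a backbone, then freeze everything else. The checkpoint "roberta-base" and the module names "query"/"value" are illustrative assumptions and must be adapted to your backbone's architecture.

from transformers import AutoModelForSequenceClassification
from opendelta import LoraModel

# Load a backbone PTM ("roberta-base" is just an illustrative choice).
model = AutoModelForSequenceClassification.from_pretrained("roberta-base")

# Attach LoRA modules to the attention projections; "query" and "value"
# match RoBERTa's submodule names and differ for other architectures.
delta_model = LoraModel(backbone_model=model, modified_modules=["query", "value"])

# Freeze the backbone so that only the newly added delta parameters train.
delta_model.freeze_module(exclude=["deltas"], set_state_dict=True)
delta_model.log()  # print the structure and which parameters are trainable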
Our repo is tested on Python 3.8 and PyTorch 1.9.0. Lower versions may also be supported.
A demo of using OpenDelta to modify the PLM (e.g., BART).
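In the spirit of that demo, here is a hedged sketch that visualizes BART before and after attaching a delta module; the checkpoint "facebook/bart-base" and the module names "q_proj"/"v_proj" are assumptions based on HuggingFace BART's attention layer names.

from transformers import BartForConditionalGeneration
from opendelta import LoraModel, Visualization

bart = BartForConditionalGeneration.from_pretrained("facebook/bart-base")
Visualization(bart).structure_graph()  # inspect the backbone before modification

# "q_proj"/"v_proj" are assumptions based on HuggingFace BART's attention names.
delta_model = LoraModel(backbone_model=bart, modified_modules=["q_proj", "v_proj"])
Visualization(bart).structure_graph()  # the attached delta modules now show up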
create a virtualenv (optional)
conda create -n opendelta_env python=3.8
conda activate opendelta_env
Install OpenDelta using pip as follows:
pip install opendelta
To play with the latest features, you can also install OpenDelta from the source.
Build from Source
git clone https://github.com/thunlp/OpenDelta.git
cd OpenDelta
Option 1: If you won't modify the code, run
python setup.py install
Option 2: If you want to modify the code, run
python setup.py develop
from transformers import AutoModelForSeq2SeqLM
from opendelta import AutoDeltaModel

# Load the backbone model, then attach a finetuned delta checkpoint from DeltaHub.
t5 = AutoModelForSeq2SeqLM.from_pretrained("t5-base")
delta = AutoDeltaModel.from_finetuned("DeltaHub/lora_t5-base_mrpc", backbone_model=t5)
delta.log()
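As a quick follow-up to the demo, you can check the parameter efficiency with plain PyTorch. The snippet below is a sketch that continues the code above; the freeze call mirrors OpenDelta's documented freezing pattern and is not part of the original demo.

# Sanity check (plain PyTorch): after freezing, only the delta parameters
# should remain trainable, a small fraction of the full model.
delta.freeze_module(exclude=["deltas"], set_state_dict=True)
trainable = sum(p.numel() for p in t5.parameters() if p.requires_grad)
total = sum(p.numel() for p in t5.parameters())
print(f"trainable parameters: {trainable}/{total} ({100 * trainable / total:.2f}%)")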
Verified Supported Models
You can try OpenDelta on any backbone model based on PyTorch.
However, there is a small chance that the interface of a backbone model's submodules is not supported. We have therefore verified some commonly used models that OpenDelta is sure to support.
We will keep testing more emerging models.
Pull requests are welcome if you successfully apply OpenDelta to your own backbone model; a sketch of applying it to a custom PyTorch module follows below.
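As an illustration of what "any backbone" means in practice, here is a minimal sketch. TinyBackbone is a hypothetical model invented for this example, and support for arbitrary nn.Modules may depend on your OpenDelta version, so treat this as a sketch of the intended usage rather than a verified combination.

# A toy, hypothetical backbone; OpenDelta locates target submodules by name,
# so `modified_modules` must match names that exist in your own model.
import torch.nn as nn
from opendelta import LoraModel

class TinyBackbone(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(128, 128)  # the layer we attach LoRA to
        self.head = nn.Linear(128, 2)

    def forward(self, x):
        return self.head(self.encoder(x))

backbone = TinyBackbone()
delta_model = LoraModel(backbone_model=backbone, modified_modules=["encoder"])
delta_model.freeze_module(exclude=["deltas"])
delta_model.log()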
Performance-Checked Combinations
Google sheet here
Subject to change at any moment.