Regression Free Model Update
Code for the paper:
Regression Bugs Are In Your Model! Measuring, Reducing and Analyzing Regressions In NLP Model Updates [Paper]
Modified from the Hugging Face Transformers text-classification examples. [Original code]
This code has not been fully tested in this environment, and the documentation is still incomplete.
If you have any questions, please contact me by email: [email protected]
Text classification examples
Based on the script run_glue.py.
Fine-tuning the library models for sequence classification on the GLUE benchmark: General Language Understanding
Evaluation. This script can fine-tune any of the models on the hub
and can also be used on a dataset hosted on our hub or on your own data in a CSV or JSON file
(the script might need some tweaks in that case; refer to the comments inside the script for help). A sketch of the custom-data case is shown below.
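The following is a minimal sketch of fine-tuning on your own CSV files. It assumes the script accepts --train_file and --validation_file arguments (as in the upstream Hugging Face run_glue.py example) and that your CSV contains the text column(s) plus a label column; the file paths and output directory are placeholders.

# Sketch: fine-tuning on your own CSV data instead of a GLUE task.
# --train_file/--validation_file are assumed to work as in the upstream
# run_glue.py example; paths and column layout are placeholders.
python run_glue.py \
  --model_name_or_path bert-base-cased \
  --train_file data/train.csv \
  --validation_file data/dev.csv \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/custom/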
GLUE is made up of a total of 9 different tasks. Here is how to run the script on one of them:
export TASK_NAME=mrpc

python run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name $TASK_NAME \
  --do_train \
  --do_eval \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/$TASK_NAME/
where the task name can be one of cola, sst2, mrpc, stsb, qqp, mnli, qnli, rte, wnli.
We get the following results on the dev set of the benchmark with the previous commands (with the exception of MRPC and
WNLI, which are tiny and for which we used 5 epochs instead of 3). Trainings are seeded, so you should obtain the same
results with PyTorch 1.6.0 (and close results with other versions). Training times are given for information only (a
single Titan RTX was used):
|Task|Metric|Result|Training time|
|---|---|---|---|
|MNLI|Matched acc./Mismatched acc.|83.91/84.10|2:35:23|
Some of these results are significantly different from the ones reported on the test set of the GLUE benchmark on the
website. For QQP and WNLI, please refer to FAQ #12 on the website.
The following example fine-tunes BERT on the
imdb dataset hosted on our hub:
python run_glue.py \
  --model_name_or_path bert-base-cased \
  --dataset_name imdb \
  --do_train \
  --do_predict \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/imdb/
Mixed precision training
If you have a GPU with mixed precision capabilities (architecture Pascal or more recent), you can use mixed precision
training with PyTorch 1.6.0 or later, or by installing the Apex library for earlier
versions. Just add the flag
--fp16 to the command launching one of the scripts mentioned above!
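For example, adding it to the MRPC command from above gives:

python run_glue.py \
  --model_name_or_path bert-base-cased \
  --task_name $TASK_NAME \
  --do_train \
  --do_eval \
  --fp16 \
  --max_seq_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/$TASK_NAME/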
Using mixed precision training usually results in a 2x speedup for training, with the same final results:
|Task|Metric|Result|Training time|Result (FP16)|Training time (FP16)|
|---|---|---|---|---|---|
|MNLI|Matched acc./Mismatched acc.|83.91/84.10|2:35:23|84.04/84.06|1:17:06|
PyTorch version, no Trainer
Based on the script
run_glue_no_trainer.py, this version allows you to fine-tune any of the models on the hub on a
text classification task, either a GLUE task or your own data in a CSV or JSON file. The main difference is that this
script exposes the bare training loop, so you can quickly experiment and add any customization you would like.
It offers fewer options than the script with
Trainer (for instance, you can easily change the options for the optimizer
or the dataloaders directly in the script), but it can still run in a distributed setup or on TPUs, and it supports mixed precision by means of the
Accelerate library. You can use the script normally
after installing Accelerate:
pip install accelerate
export TASK_NAME=mrpc

python run_glue_no_trainer.py \
  --model_name_or_path bert-base-cased \
  --task_name $TASK_NAME \
  --max_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/$TASK_NAME/
You can then use your usual launchers to run it in a distributed environment, but the easiest way is to run

accelerate config

and reply to the questions asked. Then run

accelerate test

which will check that everything is ready for training. Finally, you can launch training with
export TASK_NAME=mrpc

accelerate launch run_glue_no_trainer.py \
  --model_name_or_path bert-base-cased \
  --task_name $TASK_NAME \
  --max_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/$TASK_NAME/
This command is the same and will work for:
- a CPU-only setup
- a setup with one GPU
- a distributed training with several GPUs (single or multi node)
- a training on TPUs
Note that this library is in alpha release, so your feedback is more than welcome if you encounter any problems using it.
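If you prefer not to go through accelerate config, a rough sketch of launching directly on multiple GPUs is shown below. The --multi_gpu and --num_processes flags belong to the Accelerate CLI (not to this repository) and may differ between versions, so check accelerate launch --help before relying on them.

# Sketch only: launch on 2 local GPUs without running `accelerate config`.
# Flag names are assumptions about the Accelerate CLI, not part of this repo.
export TASK_NAME=mrpc

accelerate launch --multi_gpu --num_processes 2 run_glue_no_trainer.py \
  --model_name_or_path bert-base-cased \
  --task_name $TASK_NAME \
  --max_length 128 \
  --per_device_train_batch_size 32 \
  --learning_rate 2e-5 \
  --num_train_epochs 3 \
  --output_dir /tmp/$TASK_NAME/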