UnifiedSKG: Unifying Structured Knowledge Grounding with Text-to-Text Language Models

Code for the paper UnifiedSKG: Unifying and Multi-Tasking Structured Knowledge Grounding with Text-to-Text Language Models, a unified framework and analysis for structured knowledge grounding.

Structured knowledge grounding (SKG) leverages structured knowledge to complete user requests, such as semantic parsing over databases and question answering over knowledge bases. Since the inputs and outputs of SKG tasks are heterogeneous, they were historically studied separately by different communities, which limits systematic and compatible research on SKG. In this paper, we overcome this limitation by proposing the UNIFIEDSKG framework, which unifies 21 SKG tasks into the text-to-text format, aiming to promote systematic SKG research instead of being exclusive to a single task, domain, or dataset. We show that large language models like T5, with simple modifications where necessary, achieve state-of-the-art performance on all 21 tasks. UNIFIEDSKG facilitates the investigation of multi-task, zero-shot, and few-shot learning. We demonstrate that multi-task prefix-tuning with UNIFIEDSKG improves performance on most tasks and show that T0, GPT-3, and Codex struggle in zero-shot and few-shot learning for SKG. UNIFIEDSKG also enables a series of controlled experiments on structured knowledge encoding variants across SKG tasks, and we find that T5's sensitivity to structured knowledge encoding variations varies across tasks.

UNIFIEDSKG is easily extensible to more tasks. We encourage researchers who want to promote their work to the community to open a pull request adding their datasets, metrics, and models!

Cloning this Repo

To include the third-party dependencies in this repository, make sure to clone recursively:

git clone --recurse-submodules git@github.com:HKUNLP/UnifiedSKG.git

Dependencies

To set up the environment, run the following in a shell (the last pip command installs PyTorch built for CUDA 11.1; adjust it to your CUDA version):

conda env create -f py3.7pytorch1.8.yaml
conda activate py3.7pytorch1.8new
pip install datasets==1.14.0
# Replace the following line according to your CUDA version.
pip install torch==1.8.0+cu111 torchvision==0.9.0+cu111 torchaudio==0.8.0 -f https://download.pytorch.org/whl/torch_stable.html

This creates the environment py3.7pytorch1.8new that we used.

Usage

Environment setup

Activate the environment by running

conda activate py3.7pytorch1.8new

WandB setup

Set up WandB for logging (a WandB account is required):

export WANDB_ENTITY=YOUR_WANDB_USERNAME
export WANDB_API_KEY=YOUR_WANDB_API_KEY
export WANDB_PROJECT=YOUR_PROJECT_NAME

Training

T5-base finetuning on WikiTQ (4 GPUs, 128 effective batch size)

python -m torch.distributed.launch --nproc_per_node 4 --master_port 1234 train.py --seed 2 --cfg Salesforce/T5_base_finetune_wikitq.cfg --run_name T5_base_finetune_wikitq --logging_strategy steps --logging_first_step true --logging_steps 4 --evaluation_strategy steps --eval_steps 500 --metric_for_best_model avr --greater_is_better true --save_strategy steps --save_steps 500 --save_total_limit 1 --load_best_model_at_end --gradient_accumulation_steps 8 --num_train_epochs 400 --adafactor true --learning_rate 5e-5 --do_train --do_eval --do_predict --predict_with_generate --output_dir output/T5_base_finetune_wikitq --overwrite_output_dir --per_device_train_batch_size 4 --per_device_eval_batch_size 16 --generation_num_beams 4 --generation_max_length 128 --input_max_length 1024 --ddp_find_unused_parameters true

If you want to resume training, remove the --overwrite_output_dir flag from the above command:

python -m torch.distributed.launch --nproc_per_node 4 --master_port 1234 train.py --seed 2 --cfg Salesforce/T5_base_finetune_wikitq.cfg --run_name T5_base_finetune_wikitq --logging_strategy steps --logging_first_step true --logging_steps 4 --evaluation_strategy steps --eval_steps 500 --metric_for_best_model avr --greater_is_better true --save_strategy steps --save_steps 500 --save_total_limit 1 --load_best_model_at_end --gradient_accumulation_steps 8 --num_train_epochs 400 --adafactor true --learning_rate 5e-5 --do_train --do_eval --do_predict --predict_with_generate --output_dir output/T5_base_finetune_wikitq --per_device_train_batch_size 4 --per_device_eval_batch_size 16 --generation_num_beams 4 --generation_max_length 128 --input_max_length 1024 --ddp_find_unused_parameters true

T5-base prefix-tuning on WikiTQ (4 GPUs, 128 effective batch size)

python -m torch.distributed.launch --nproc_per_node 4 --master_port 1234 train.py --seed 2 --cfg Salesforce/T5_base_prefix_wikitq.cfg --run_name T5_base_prefix_wikitq --logging_strategy steps --logging_first_step true --logging_steps 4 --evaluation_strategy steps --eval_steps 500 --metric_for_best_model avr --greater_is_better true --save_strategy steps --save_steps 500 --save_total_limit 1 --load_best_model_at_end --gradient_accumulation_steps 8 --num_train_epochs 400 --adafactor true --learning_rate 5e-5 --do_train --do_eval --do_predict --predict_with_generate --output_dir output/T5_base_prefix_wikitq --overwrite_output_dir --per_device_train_batch_size 4 --per_device_eval_batch_size 16 --generation_num_beams 4 --generation_max_length 128 --input_max_length 1024 --ddp_find_unused_parameters true

T5-3b finetuning on WikiTQ (8 GPUs, 128 effective batch size)

deepspeed train.py --deepspeed deepspeed/ds_config_zero2.json --seed 2 --cfg Salesforce/T5_3b_finetune_wikitq.cfg --run_name T5_3b_finetune_wikitq --logging_strategy steps --logging_first_step true --logging_steps 4 --evaluation_strategy steps --eval_steps 500 --metric_for_best_model avr --greater_is_better true --save_strategy steps --save_steps 500 --save_total_limit 1 --load_best_model_at_end --gradient_accumulation_steps 16 --num_train_epochs 50 --adafactor false --learning_rate 5e-5 --do_train --do_eval --do_predict --predict_with_generate --output_dir output/T5_3b_finetune_wikitq --overwrite_output_dir --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --generation_num_beams 4 --generation_max_length 128 --input_max_length 1024 --ddp_find_unused_parameters true

Loading Trained Weights

See the Open In Colab notebook for examples of loading trained weights.
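
For reference, here is a minimal sketch of loading a fully fine-tuned checkpoint with the standard transformers API. The checkpoint name and the input linearization below are placeholders for illustration, not the repository's exact checkpoints or format; prefix-tuning checkpoints are expected to additionally need the wrapper classes under ./models/unified. See the Colab notebook for the authoritative procedure.

# Minimal sketch (assumptions noted above): load a fully fine-tuned T5 checkpoint
# from the HuggingFace Hub and run generation on a WikiTQ-style input.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

checkpoint = "hkunlp/your-checkpoint-name"  # placeholder; replace with a real checkpoint name
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

# Illustrative seq_in: user request followed by a linearized table as structured knowledge.
seq_in = "What is the highest score? col : team | score row 1 : A | 10 row 2 : B | 20"
inputs = tokenizer(seq_in, return_tensors="pt", truncation=True, max_length=1024)
outputs = model.generate(**inputs, num_beams=4, max_length=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))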

Overview of the file structure of this unified framework

.
├── configure                      # configurations for different tasks/settings
│   ├── META_TUNING                # meta config for each task; controls how it is converted into seq2seq data
│   └── Salesforce                 # configs for our experiment settings (thanks also to ElementAI and Rui Zhang)
│
├── metrics                        # code for evaluating model predictions
│   └── ...                        # one evaluator per task
│
├── models                         # model code
│   ├── adapter                    # modified huggingface transformers implementing adapters in T5 and BART
│   ├── prompt                     # modified huggingface transformers implementing prefix-tuning in T5 and BART
│   └── unified
│       ├── base.py                # base model for pushing to the HuggingFace Hub (PushToHubFriendlyModel)
│       ├── finetune.py            # model for plain fine-tuning
│       ├── adaptertuning.py       # model for adapter-tuning
│       ├── prefixtuning.py        # model for prefix-tuning; the prefix construction follows the BART version in the original prefix-tuning paper
│       └── multitask_prefixtuning.py  # model for multi-task prefix-tuning; supports loading prefixes from several single-task models and combining them (separate, concatenated, ...), but this did not yield satisfying results. The multi-task setting analyzed in our paper is SPoT-like.
│
├── seq2seq_construction           # code that wraps the raw data into seq_in and seq_out and adds them back to the raw data
│   └── ...                        # see the README in ./seq2seq_construction
│
├── tasks                          # code for loading the raw data of each task
│   └── ...                        # see the README in ./tasks
│
├── third_party                    # third-party packages
│   └── ...                        # if you use a GitHub repo from others, put it in this directory and record the link in .gitmodules so it is fetched by a recursive clone
│
├── utils                          # utility code
│   ├── processor                  # adapted from TAPEX; processors for structured knowledge such as tables (truncation, linearization, etc.)
│   ├── configure.py               # parses the .cfg files in ./configure into a nested, human-friendly args object
│   ├── dataset.py                 # wraps the constructed seq2seq dataset and tokenizes seq_in and seq_out for the trainer
│   ├── tool.py                    # uses reflection to load the model, seq2seq constructor, and evaluator
│   ├── trainer.py                 # EvaluationFriendlyTrainer, a modified trainer for easier evaluation that keeps outputs in the original dataset order; if you modify a model's forward pass, you may also need to change this file
│   └── training_arguments.py      # wrapped seq2seq training arguments
│
├── .gitignore                     # ignores temporary and debug files
├── .gitmodules                    # declares the submodules placed in ./third_party by a recursive clone
├── py3.7pytorch1.8.yaml           # conda environment specification
├── README.md                      # this README
└── train.py                       # entry point; controls training, evaluation, testing, checkpointing, and logging
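
As the tree above notes, experiments are driven by .cfg files under ./configure, which utils/configure.py parses into a nested args object. Purely as an illustrative sketch (not the repository's actual parser, and with hypothetical field names), the idea looks roughly like this:

# Illustrative sketch only: turn an INI-style .cfg file into a nested,
# attribute-accessible args object, similar in spirit to utils/configure.py.
import configparser
from types import SimpleNamespace

def load_cfg(path):
    parser = configparser.ConfigParser()
    parser.read(path)
    # Each section becomes an attribute holding a namespace of its options.
    return SimpleNamespace(**{
        section: SimpleNamespace(**dict(parser.items(section)))
        for section in parser.sections()
    })

# Hypothetical usage, for illustration only:
# args = load_cfg("configure/Salesforce/T5_base_finetune_wikitq.cfg")
# print(args.model.name)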

How to unify a new task into the framework

(The READMEs in ./tasks, ./seq2seq_construction, ./metrics, and ./configure may also be useful.)

  • Step 1: Add the "Loader" for the raw data in ./tasks. (Search the HuggingFace Datasets hub first to see whether a usable loading script already exists; if not, even better, since you can contribute to both this project and the HuggingFace community.)

  • Step 2: Add the "Wrapper" in ./seq2seq_construction, which constructs "seq_in" (the user request input plus the structured knowledge input) and "seq_out" from the raw data and adds them back to it for seq2seq unification (see the sketch after this list).

  • Step 3: Add the "Evaluator" for the task in ./metrics. If any third-party repos are used, add them to .gitmodules.

  • Step 3.5 (optional): You can always add a new "Model" under ./models/ and point the config files at it.

  • Step 4: Add the "Config" file to drive your task, or all of our tasks, via fine-tuning, multi-task fine-tuning, pretraining, prefix-tuning, multi-task prefix-tuning, or other settings.
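
As referenced in Step 2, the sketch below illustrates what a wrapper could look like for a WikiTQ-style table QA task. The field names (question, table, answer_text) and the table linearization are assumptions for illustration only, not the repository's exact interface.

# Illustrative sketch of a "Wrapper" in the spirit of ./seq2seq_construction
# (hypothetical field names and linearization for a WikiTQ-like task).

def linearize_table(table):
    """Flatten a table dict into a string, e.g. 'col : a | b row 1 : 1 | 2'."""
    parts = ["col : " + " | ".join(table["header"])]
    for i, row in enumerate(table["rows"], start=1):
        parts.append(f"row {i} : " + " | ".join(str(cell) for cell in row))
    return " ".join(parts)

def wrap_example(raw_example):
    """Construct seq_in (user request + structured knowledge) and seq_out."""
    seq_in = raw_example["question"] + " " + linearize_table(raw_example["table"])
    seq_out = raw_example["answer_text"]
    return {**raw_example, "seq_in": seq_in, "seq_out": seq_out}

# Hypothetical raw example:
example = {
    "question": "Which team scored the most?",
    "table": {"header": ["team", "score"], "rows": [["A", 10], ["B", 20]]},
    "answer_text": "B",
}
print(wrap_example(example)["seq_in"])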

And that's all! =)

Contributors


Acknowledgements

We would like to thank Yifei Min and Libo Qin for early stage discussion, Qian Liu for TAPEX code and advice on Question Answering tasks, Ice Pasupat for reviewing this paper, wandb for free logging, and OpenAI for free Codex usage.
