Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization

Paper Abstract

Multimodal abstractive summarization (MAS) models that summarize videos (vision modality) and their corresponding transcripts (text modality) are able to extract the essential information from massive multimodal data on the Internet. Recently, large-scale generative pre-trained language models (GPLMs) have been shown to be effective in text generation tasks. However, existing MAS models cannot leverage GPLMs' powerful generation ability. To fill this research gap, we aim to study two research questions: 1) how to inject visual information into GPLMs without hurting their generation ability; and 2) where is the optimal place in GPLMs to inject the visual information? In this paper, we present a simple yet effective method to construct vision guided (VG) GPLMs for the MAS task using attention-based add-on layers to incorporate visual information while maintaining their original text generation ability. Results show that our best model significantly surpasses the prior state-of-the-art model by 5.7 ROUGE-1, 5.3 ROUGE-2, and 5.1 ROUGE-L scores on the How2 dataset, and our visual guidance method contributes 83.6% of the overall improvement. Furthermore, we conduct thorough ablation studies to analyze the effectiveness of various modality fusion methods and fusion locations.
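To make the core idea concrete, below is a minimal PyTorch sketch of an attention-based add-on layer in which text hidden states attend over projected video features and the result is gated back into the text stream. This is an illustration of the general technique rather than the paper's exact implementation; all names (VisionGuidedFusion, gate, etc.) are assumptions.

```python
# Minimal sketch of a vision-guided fusion layer (NOT the repository's exact code).
import torch
import torch.nn as nn


class VisionGuidedFusion(nn.Module):
    def __init__(self, d_text: int, d_vision: int, n_heads: int = 8):
        super().__init__()
        self.vision_proj = nn.Linear(d_vision, d_text)  # map video features into the text space
        self.cross_attn = nn.MultiheadAttention(d_text, n_heads, batch_first=True)
        self.gate = nn.Linear(2 * d_text, d_text)        # gate controlling how much visual info flows in
        self.layer_norm = nn.LayerNorm(d_text)

    def forward(self, text_states, vision_feats):
        # text_states:  (batch, text_len, d_text)   hidden states from the GPLM encoder
        # vision_feats: (batch, video_len, d_vision) per-segment video features
        v = self.vision_proj(vision_feats)
        attended, _ = self.cross_attn(query=text_states, key=v, value=v)
        g = torch.sigmoid(self.gate(torch.cat([text_states, attended], dim=-1)))
        return self.layer_norm(text_states + g * attended)
```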

If your work is inspired by our paper or code, please cite it. Thanks!

TODO

Evaluation

We release the summaries generated by different models in ./evaluation/results. All evaluation metrics can be computed by following ./evaluation/README.md.
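For a quick sanity check, the sketch below shows how a generated summary could be scored against a reference with the rouge-score package; the official numbers should be reproduced with the procedure in ./evaluation/README.md, which may use different tooling and settings.

```python
# Hedged example of ROUGE scoring with the rouge-score package (not the official pipeline).
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(
    "the reference summary of the video",   # gold summary
    "the model generated summary",          # system output, e.g. from ./evaluation/results
)
for name, result in scores.items():
    print(name, round(result.fmeasure, 4))
```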

Prepare dataset

You can get the dataset from the How2 dataset GitHub repository. We recommend choosing option 1: Download a pre-packaged version.

Run fine-tuning

  • Make a directory for saving the PyTorch Lightning logs: mkdir lightning_logs
  • An example of running the BART text-only model: ./scripts/Bart_text_only.sh
  • An example of running the BART multimodal model: ./scripts/Bart_multimodal.sh (a rough sketch of what such scripts drive is shown after this list)
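The following is a minimal, hypothetical sketch of the kind of PyTorch Lightning training loop that fine-tuning scripts like these typically drive; class and argument names (e.g. SummarizerModule) are illustrative assumptions and do not correspond to the repository's actual code or script options.

```python
# Hypothetical sketch of BART fine-tuning with PyTorch Lightning (not the repository's code).
import pytorch_lightning as pl
import torch
from transformers import BartForConditionalGeneration


class SummarizerModule(pl.LightningModule):
    def __init__(self, model_name: str = "facebook/bart-base", lr: float = 3e-5):
        super().__init__()
        self.model = BartForConditionalGeneration.from_pretrained(model_name)
        self.lr = lr

    def training_step(self, batch, batch_idx):
        # batch is assumed to hold tokenized transcripts and reference summaries
        outputs = self.model(
            input_ids=batch["input_ids"],
            attention_mask=batch["attention_mask"],
            labels=batch["labels"],
        )
        self.log("train_loss", outputs.loss)
        return outputs.loss

    def configure_optimizers(self):
        return torch.optim.AdamW(self.parameters(), lr=self.lr)


# trainer = pl.Trainer(max_epochs=5, default_root_dir="lightning_logs")
# trainer.fit(SummarizerModule(), train_dataloaders=your_how2_dataloader)
```

The multimodal variant would additionally insert vision-guided fusion layers (as sketched earlier) into the encoder so that video features can influence the text representations.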

Run inference

  • An example of running inference with the BART multimodal model: ./scripts/test_Bart_multimodal.sh (a text-only illustration of the underlying generation step is sketched below)
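At its core, producing a summary with the BART backbone is beam-search decoding; the multimodal script additionally feeds video features into the vision-guided layers. A hedged, text-only illustration using the Hugging Face generate API:

```python
# Text-only illustration of summary generation with BART (not the repository's inference script).
from transformers import BartForConditionalGeneration, BartTokenizer

tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
model = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

inputs = tokenizer("the video transcript to be summarized ...", return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, num_beams=5, max_length=128)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```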

GitHub

GitHub - HLTCHKUST/VG-GPLMs: The code repository for the EMNLP 2021 paper "Vision Guided Generative Pre-trained Language Models for Multimodal Abstractive Summarization".