Sequence-to-Sequence Learning with Latent Neural Grammars

Code for the paper:
Sequence-to-Sequence Learning with Latent Neural Grammars
Yoon Kim
arXiv Preprint


Dependencies

The code was tested in python 3.7 and pytorch 1.5. We also use a slightly modified version of the Torch-Struct library, which is included in the repo and can be installed via:

cd pytorch-struct
python setup.py install
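
Installing the bundled package from the repo root with pip should also work (a standard alternative for setup.py-based packages, not a command documented here):

pip install ./pytorch-struct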


Data

For convenience we include the datasets used in the paper in the data/ folder. Please cite the original papers when using the data (i.e. Lake and Baroni 2018 for SCAN/MT, and Lyu et al. 2021 for StylePTB).
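
The SCAN files store one example per line in Lake and Baroni's "IN: <command> OUT: <actions>" format. The loader below is an illustrative sketch of how to read a split, not code from this repo:

def load_scan(path):
    # Each line looks like "IN: jump twice OUT: I_JUMP I_JUMP".
    examples = []
    with open(path) as f:
        for line in f:
            src, tgt = line.strip().split(" OUT: ")
            examples.append((src[len("IN: "):].split(), tgt.split()))
    return examples

train = load_scan("data/SCAN/tasks_train_length.txt")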



SCAN

To train the model on (for example) the length split:

python train_scan.py --train_file data/SCAN/tasks_train_length.txt --save_path <saved-model-path>

For prediction and evaluation:

python predict_scan.py --data_file data/SCAN/tasks_test_length.txt --model_path <saved-model-path>
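
SCAN is scored by exact-match accuracy over the predicted action sequences, which the prediction script computes for you. The snippet below is only a minimal sketch of the metric itself, assuming hypothetical files with one gold and one predicted sequence per line:

def exact_match(pred_path, gold_path):
    with open(pred_path) as f:
        preds = [line.strip() for line in f]
    with open(gold_path) as f:
        golds = [line.strip() for line in f]
    # An example counts as correct only if the full sequence matches.
    return sum(p == g for p, g in zip(preds, golds)) / len(golds)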

Style Transfer

To train on (for example) the active-to-passive task:

python train_styleptb.py --train_file data/StylePTB/ATP/train.tsv --dev_file data/StylePTB/ATP/valid.tsv --save_path <saved-model-path>

To predict:

python predict_styleptb.py --data_file data/StylePTB/ATP/test.tsv --model_path <saved-model-path> --out_file styleptb-atp-pred.txt

We use the nlg-eval package to calculate the various metrics.
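
For example, with nlg-eval's file-level API (styleptb-atp-ref.txt is a hypothetical reference file holding the gold outputs, e.g. the target column of test.tsv, one sentence per line):

from nlgeval import compute_metrics

# Compare predictions against one (or more) reference files.
metrics = compute_metrics(hypothesis="styleptb-atp-pred.txt",
                          references=["styleptb-atp-ref.txt"])
print(metrics)  # BLEU, METEOR, ROUGE-L, CIDEr, ...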

Machine Translation

To train on MT:

python train_mt.py --train_file_src data/MT/train.en --train_file_tgt data/MT/train.fr --dev_file_src data/MT/dev.en --dev_file_tgt data/MT/dev.fr --save_path <saved-model-path>

To predict on the daxy test set:

python predict_mt.py --data_file data/MT/test-daxy.en --model_path <saved-model-path> --out_file mt-pred-daxy.txt

For the regular test set:

python predict_mt.py --data_file data/MT/test.en --model_path <saved-model-path> --out_file mt-pred.txt

We use the multi-bleu.perl script from Moses to calculate BLEU.
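
For example, given a local copy of multi-bleu.perl and a reference file (mt-ref.fr is hypothetical, one reference translation per line):

perl multi-bleu.perl mt-ref.fr < mt-pred.txt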

Training Stability

We observed training to be unstable, and the approach required several runs across different random seeds to perform well. For reference, we have posted logs of some example runs in the logs/ folder.
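
A simple way to launch such runs is a shell loop over seeds. This is only a sketch: it assumes the training scripts expose a --seed flag (check the argparse options of the script you are running) and writes each run to its own save path:

for seed in 1 2 3 4 5; do
  python train_scan.py --train_file data/SCAN/tasks_train_length.txt \
    --save_path scan-length-seed${seed}.pt --seed ${seed}
done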