Hierarchical-Attention-Network

We know that documents have a hierarchical structure: words combine to form sentences, and sentences combine to form documents. We can either try to learn that structure, or we can build it directly into the model and see whether it improves on existing models. This paper exploits that structure to build a classification model.

This is a (close) implementation of the model in PyTorch.

Key points

  • The network uses a bidirectional GRU to capture contextual information about each word.
  • There are two levels of attention: one at the word level and another at the sentence level (a minimal sketch of this two-level model follows this list).
  • It uses word2vec for the word embeddings.
  • Negative log-likelihood is used as the loss function.
  • The dataset was split 8:1:1 into training, validation, and test sets.
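
A minimal sketch of that two-level architecture (a hedged reading of the paper, not the repository's exact code; the module names are illustrative, the dimensions follow the paper's defaults, and masking of padded words/sentences is omitted for brevity):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPool(nn.Module):
    """Additive attention with a learned context vector: scores each
    timestep and returns the attention-weighted sum of the inputs."""
    def __init__(self, hidden_dim):
        super().__init__()
        self.proj = nn.Linear(hidden_dim, hidden_dim)
        self.context = nn.Linear(hidden_dim, 1, bias=False)  # dot product with context vector

    def forward(self, x):                          # x: (batch, seq_len, hidden_dim)
        u = torch.tanh(self.proj(x))               # (batch, seq_len, hidden_dim)
        alpha = F.softmax(self.context(u), dim=1)  # attention weights over timesteps
        return (alpha * x).sum(dim=1)              # (batch, hidden_dim)

class HAN(nn.Module):
    def __init__(self, vocab_size, embed_dim=200, hidden_dim=50, num_classes=29):
        super().__init__()
        # In practice the embedding is initialised from pretrained word2vec vectors.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.word_gru = nn.GRU(embed_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.word_attn = AttentionPool(2 * hidden_dim)
        self.sent_gru = nn.GRU(2 * hidden_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.sent_attn = AttentionPool(2 * hidden_dim)
        self.fc = nn.Linear(2 * hidden_dim, num_classes)

    def forward(self, docs):  # docs: (batch, num_sents, num_words) of token ids
        b, s, w = docs.shape
        words = self.embedding(docs.view(b * s, w))          # encode every sentence's words
        word_out, _ = self.word_gru(words)                   # (b*s, w, 2*hidden)
        sent_vecs = self.word_attn(word_out).view(b, s, -1)  # word attention -> sentence vectors
        sent_out, _ = self.sent_gru(sent_vecs)               # (b, s, 2*hidden)
        doc_vec = self.sent_attn(sent_out)                   # sentence attention -> document vector
        return F.log_softmax(self.fc(doc_vec), dim=1)        # log-probs, ready for NLLLoss
```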

Note: if you are using NLLLoss from PyTorch, make sure to apply log_softmax from torch.nn.functional (not softmax) to the model's output, since NLLLoss expects log-probabilities.
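
A quick illustration of the point (shapes here are illustrative); the assertion at the end shows that CrossEntropyLoss bundles the two steps:

```python
import torch
import torch.nn.functional as F

logits = torch.randn(4, 29)          # a batch of 4 documents, 29 classes
target = torch.randint(0, 29, (4,))  # ground-truth class indices

# Correct: NLLLoss expects log-probabilities, so apply log_softmax first.
loss = torch.nn.NLLLoss()(F.log_softmax(logits, dim=1), target)

# Equivalent shortcut: cross_entropy = log_softmax + NLLLoss on raw logits.
assert torch.allclose(loss, F.cross_entropy(logits, target))
```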

Notebook

The model in the notebook was trained on the Yelp dataset, taken from here.
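
The 8:1:1 split from the key points can be reproduced with torch.utils.data.random_split; the snippet below is a sketch using a stand-in dataset (the notebook's actual preprocessing may differ):

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Stand-in for the real Yelp data: 10,000 samples, 29 classes (shapes are illustrative).
dataset = TensorDataset(torch.randn(10_000, 100), torch.randint(0, 29, (10_000,)))

n = len(dataset)
n_train, n_val = int(0.8 * n), int(0.1 * n)
train_set, val_set, test_set = random_split(
    dataset,
    [n_train, n_val, n - n_train - n_val],        # 8:1:1 split
    generator=torch.Generator().manual_seed(42),  # fixed seed so the split is reproducible
)
```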

The best accuracy I got was around 64.6%. This dataset has only 10,000 samples and 29 classes. Here is the training loss curve.

[figure: training loss]

Here is the training accuracy over the same run.

[figure: training accuracy]

Here is the validation accuracy over the same run.

[figure: validation accuracy]
