SANTM: Efficient Self-attention-driven Network for Text Matching

Published in ACM Transactions on Internet Technology, 2021

Recommended citation: P. Tiwari, A.K. Jaiswal, S. Garg, I. You, "SANTM: Efficient Self-attention-driven Network for Text Matching," in ACM Transactions on Internet Technology, doi: 10.1145/3426971. https://dl.acm.org/doi/abs/10.1145/3426971

Self-attention mechanisms have recently been embraced for a broad range of text-matching applications. A self-attention model takes only one sentence as input with no extra information, and its representation is typically taken from the final hidden state or obtained by pooling. However, text-matching problems can be interpreted in either symmetrical or asymmetrical scopes. For instance, paraphrase detection is a symmetrical task, while textual entailment classification and question-answer matching are considered asymmetrical tasks. In this article, we leverage the attractive properties of the self-attention mechanism and propose an attention-based network that incorporates three key components for inter-sequence attention: global pointwise features, preceding attentive features, and contextual features, while updating the remaining components. We evaluate our model on two benchmark datasets covering the tasks of textual entailment and question-answer matching. The proposed efficient Self-attention-driven Network for Text Matching outperforms the state of the art on the Stanford Natural Language Inference and WikiQA datasets with far fewer parameters.
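To make the abstract concrete, below is a minimal PyTorch sketch of the general kind of architecture it describes: each sequence is encoded into contextual features, self-attention over those states yields attentive features, max-pooling yields global pointwise features, and the two resulting sentence vectors are combined with standard matching heuristics. This is an illustrative assumption only; the class name, layer choices (BiLSTM encoder, single-head additive attention), and dimensions are not taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class SelfAttentionMatcher(nn.Module):
    """Illustrative sketch (not the paper's exact SANTM architecture):
    scores a sentence pair by self-attending over each sequence and
    combining contextual, attentive, and global pointwise features."""

    def __init__(self, vocab_size, embed_dim=128, hidden_dim=128, num_classes=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        # Contextual features: a lightweight BiLSTM encoder (assumed choice).
        self.encoder = nn.LSTM(embed_dim, hidden_dim // 2,
                               batch_first=True, bidirectional=True)
        # Self-attention scoring over the contextual states.
        self.attn = nn.Linear(hidden_dim, 1)
        # Classifier over the concatenated pair representation.
        self.classifier = nn.Sequential(
            nn.Linear(4 * 2 * hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def encode(self, tokens):
        mask = (tokens != 0).unsqueeze(-1)                # (B, T, 1), pad id = 0
        ctx, _ = self.encoder(self.embed(tokens))         # contextual features
        scores = self.attn(ctx).masked_fill(~mask, -1e9)  # (B, T, 1)
        weights = F.softmax(scores, dim=1)
        attentive = (weights * ctx).sum(dim=1)            # attentive features
        # Global pointwise features via max-pooling over valid positions.
        pointwise = ctx.masked_fill(~mask, -1e9).max(dim=1).values
        return torch.cat([attentive, pointwise], dim=-1)  # (B, 2 * hidden_dim)

    def forward(self, premise, hypothesis):
        p, h = self.encode(premise), self.encode(hypothesis)
        # Standard matching heuristics: concatenation, difference, product.
        pair = torch.cat([p, h, torch.abs(p - h), p * h], dim=-1)
        return self.classifier(pair)


if __name__ == "__main__":
    model = SelfAttentionMatcher(vocab_size=10000)
    premise = torch.randint(1, 10000, (4, 12))     # batch of 4 token sequences
    hypothesis = torch.randint(1, 10000, (4, 9))
    print(model(premise, hypothesis).shape)        # torch.Size([4, 3])
```

The pair-combination step (concatenation, absolute difference, elementwise product) is a common heuristic for entailment-style classifiers; for a symmetrical task such as paraphrase detection, one would keep only the order-invariant terms.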

Download paper here
