CLASP
The Centre for Linguistic Theory and Studies in Probability

Training a Neural Model to Reason with Implicatives

Implicative constructions, such as manage to and waste a chance, possess an underlying semantic property that we call the signature of the construction. Implicatives are ubiquitous, and the compositionality of their signatures makes them an important object of study for Natural Language Understanding. To this end, we introduce the Stanford Corpus of Implicatives (SCI). Drawing inspiration from other Natural Language Inference (NLI) corpora, SCI contains a set of triples, each consisting of a premise, a hypothesis, and a label. The label indicates the semantic relation between the premise and the hypothesis: entailment, contradiction, or neither. The mission of SCI is two-fold: first, to provide systematic coverage of the large set of implicative constructions in English; and second, to allow for the exploration of a new family of meta-learner models that strive for modular and compositional learning by taking advantage of the signatures. In particular, we introduce a new meta-learning model, recursive routing networks (RRNs), which efficiently learn to specialize to the fine-grained inferential signatures in the SCI corpus. We review the model's ability to generalize from seen constructions to similar unseen constructions, with special attention to meta-level properties of the implicatives.
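To make the notion of a signature concrete, the following is a minimal sketch of how a signature determines the label of an SCI-style triple. The two example signatures (manage to as a two-way implicative ++/--, fail to as +-/-+) are standard in the implicatives literature; the function name, data layout, and polarity encoding here are illustrative assumptions, not the corpus's actual format.

```python
# A signature maps the polarity of the embedding context (+ affirmative,
# - negated) to the polarity the complement is entailed to have.
# "She managed to leave" entails "She left"; "She didn't manage to
# leave" entails "She didn't leave". "He failed to leave" entails
# "He didn't leave". (Hypothetical encoding for illustration.)
SIGNATURES = {
    "manage to": {"+": "+", "-": "-"},  # two-way implicative ++/--
    "fail to":   {"+": "-", "-": "+"},  # two-way implicative +-/-+
}

def infer_label(construction: str, premise_polarity: str,
                hypothesis_polarity: str) -> str:
    """Label a premise containing the construction (with the given
    polarity) against a hypothesis asserting the bare complement
    with hypothesis_polarity."""
    entailed = SIGNATURES[construction].get(premise_polarity)
    if entailed is None:
        return "neither"
    return "entailment" if entailed == hypothesis_polarity else "contradiction"

# "She managed to leave." vs "She left."
print(infer_label("manage to", "+", "+"))  # entailment
# "He failed to leave." vs "He left."
print(infer_label("fail to", "+", "+"))    # contradiction
```

The compositionality mentioned above comes from stacking such constructions: the signature of an embedding construction is applied to the polarity produced by the one it embeds, so labels for nested implicatives can be computed by composing these maps.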

Lauri Karttunen: This is joint work with Ignacio Cases, building on our NAACL 2019 presentation.