CLASP
The Centre for Linguistic Theory and Studies in Probability

Sentence Understanding with Neural Networks and Natural Language Inference

Artificial neural networks now represent the state of the art in most large-scale applied language understanding tasks. This talk presents a few methods and results, organized around the task of recognizing textual entailment, that measure the degree to which these models can, and do, learn something resembling compositional semantics. I discuss experiments on artificial data and on hand-built corpora of natural data totaling roughly one million examples (SNLI and MultiNLI), and report encouraging results.
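For readers unfamiliar with the task, the following is a minimal sketch of the three-way classification format underlying natural language inference corpora such as SNLI and MultiNLI: each example pairs a premise with a hypothesis and a gold label. The sentence pairs and labels below are illustrative inventions, not items from the corpora themselves.

# Illustrative NLI examples in the premise/hypothesis/label format
# used by SNLI and MultiNLI (examples invented for exposition).
examples = [
    {"premise": "A man is playing a guitar on stage.",
     "hypothesis": "A musician is performing.",
     "label": "entailment"},     # the premise guarantees the hypothesis
    {"premise": "A man is playing a guitar on stage.",
     "hypothesis": "A man is sleeping backstage.",
     "label": "contradiction"},  # the premise rules the hypothesis out
    {"premise": "A man is playing a guitar on stage.",
     "hypothesis": "The concert is sold out.",
     "label": "neutral"},        # the premise neither confirms nor refutes it
]

for ex in examples:
    print(f"{ex['label']:>13}: {ex['premise']!r} -> {ex['hypothesis']!r}")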

References

Bowman, Samuel R., Christopher Potts, and Christopher D. Manning. “Recursive neural networks can learn logical semantics.” arXiv preprint arXiv:1406.1827 (2014).