CLASP
The Centre for Linguistic Theory and Studies in Probability

Monotonicity in Natural Language Inference: An Update on Theory and Practice

This talk reports on results from the last two years related to monotonicity in NLI. The starting point of this line of work was Johan van Benthem's suggestion in the 1980s that one could combine the syntactic approach of categorial grammar (CG) with the semantic idea of monotonicity. Later, I set his ideas on a firmer footing and adapted them from (plain) CG to Mark Steedman's Combinatory CG (CCG). The move to CCG enables us to try the ideas on datasets of current interest, such as FraCaS and SICK, and to compare performance with tools coming from machine learning, such as BERT.

The talk will show that one can do a certain amount of automated NLI using parse trees and polarity algorithms. It is hard to say precisely what that 'certain amount' comes to, but we have some quantitative data on the matter. The talk details comparisons both with other systems that use logic in some form, and with systems that use deep learning alone.
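To give a flavor of what a polarity-based inference step looks like, here is a minimal toy sketch in Python. All names, the lexical hierarchy, and the determiner polarities are invented for illustration; this is not the system discussed in the talk, only an example of the general idea that a determiner marks its argument positions as upward or downward monotone, licensing substitution by more general or more specific terms.

```python
# Toy sketch of monotonicity-based inference (illustrative only; not the
# system described in the talk).

# Hypothetical lexical hierarchy: child -> parent means child entails parent.
HYPERNYMS = {"puppy": "dog", "dog": "animal"}

def entails(a, b):
    """True if term a entails term b in the toy hierarchy (a is at least as specific)."""
    while a != b:
        if a not in HYPERNYMS:
            return False
        a = HYPERNYMS[a]
    return True

# Polarity of (restrictor, body) positions for a few determiners:
# 'down' = downward monotone (substitution by a more specific term is valid),
# 'up'   = upward monotone (substitution by a more general term is valid).
POLARITY = {"every": ("down", "up"), "some": ("up", "up"), "no": ("down", "down")}

def infers(premise, hypothesis):
    """Check one monotonicity step between (det, restrictor, body) triples."""
    det, r1, b1 = premise
    det2, r2, b2 = hypothesis
    if det != det2:
        return False
    r_pol, b_pol = POLARITY[det]
    ok_r = entails(r2, r1) if r_pol == "down" else entails(r1, r2)
    ok_b = entails(b2, b1) if b_pol == "down" else entails(b1, b2)
    return ok_r and ok_b

# "Every dog is an animal" entails "Every puppy is an animal" (restrictor is
# downward monotone under "every"), but not vice versa.
print(infers(("every", "dog", "animal"), ("every", "puppy", "animal")))  # True
print(infers(("every", "puppy", "animal"), ("every", "dog", "animal")))  # False
```

Real systems of the kind the talk describes compute these polarity marks compositionally over CCG derivations rather than from a fixed table, but the licensing logic for substitutions is the same in spirit.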

This is joint work with a number of people, especially Hai Hu.