CLASP
The Centre for Linguistic Theory and Studies in Probability

Neuro-Symbolic Models in NLP

Abstract

Many of the neuro-symbolic models used in NLP incorporate tree grammars or semantic feature markers. These are either directly integrated into the structure of the model, or they are induced as a distributional bias through distillation learning. The experimental results that these hybrid systems have produced to date tend to be equivocal. They have not yielded strong evidence that the tree structure component of the model significantly improves its performance on a variety of NLP tasks, if at all. I will consider possible reasons why this may be the case.

I will also look at different hybrid models applied to tasks in other areas of AI. These may hold out greater promise for significantly improved performance with neuro-symbolic systems.