Two Early Efforts toward Using Deep Learning in Syntax and Semantics
- Event: Seminar
- Lecturer: Sam Bowman
- Date: 12 March 2018
- Duration: 2 hours
- Venue: Gothenburg
This talk will present two ongoing projects that aim to lay the groundwork for using results from artificial neural network research in NLP to inform research on core linguistic questions. The first project (based in part on Williams et al. 2017) concerns latent tree learning: efforts to discover the optimal tree structures for guiding semantic composition in applied language understanding tasks. The second concerns the evaluation of simple neural network models on the classic linguistic acceptability judgment task. This project (in progress, with Alex Warstadt) builds on Lau, Clark, and Lappin (2017), and introduces a new dataset of expert acceptability judgments along with a new suite of semi-supervised learning experiments with neural networks.
References
Williams, Adina, Andrew Drozdov, and Samuel R. Bowman. “Learning to parse from a semantic objective: It works. Is it syntax?” arXiv preprint arXiv:1709.01121 (2017).
Lau, Jey Han, Alexander Clark, and Shalom Lappin. “Grammaticality, acceptability, and probability: a probabilistic view of linguistic knowledge.” Cognitive Science 41.5 (2017): 1202-1241.