Interpreting and Grounding Pre-trained Representations for Natural Language Processing
- Event: Seminar
- Lecturer: Richard Johansson and Lovisa Hagström
- Date: 02 June 2021
- Duration: 2 hours
- Venue: Gothenburg
Building computers that understand human language is one of the central goals of artificial intelligence. A recent breakthrough toward this goal is the development of neural models that learn deep contextualized representations of language. However, while these models have substantially advanced the state of the art across a wide range of NLP tasks, our understanding of the learned representations and our repertoire of techniques for integrating them with other knowledge representations and modalities remain limited.
This talk will give an introduction to the project “Interpreting and Grounding Pre-trained Representations for Natural Language Processing”, in which we will develop new models that explore synergies between language representations and modalities of different types, as well as analysis methods for investigating properties of the learned representations. The project is funded by WASP and is a collaboration between Chalmers, Linköping University, and Recorded Future. We will give a high-level introduction to the overall goals of the project and then highlight some recent work in which we investigate the effects of multimodal training of language models.