How Should Models of Language Meaning Learn?
- Event: Seminar
- Lecturer: Casey Kennington from Boise State University
- Date: 28 October 2022
- Duration: 2 hours
- Venue: Gothenburg and Online
Abstract
Distributional, grounded, and formal computational theories of how language is acquired, represented, and used are, it turns out, quite useful in many ways. Many aspects of language can be learned simply by processing large amounts of text in a certain way, as evidenced by language models. Vision and language can come together to add world knowledge through grounded learning. Formal logics are useful for many things, including inference. Are we at the point where computational models really “understand” natural language, and, if not, are more data and bigger models all we need? In my talk, I make an appeal to what is known about how human children learn language, and argue that the progression of language learning matters for holistic language understanding.