Truth conditions at scale, and beyond
- Event: Seminar
- Lecturer: Guy Edward Toh Emerson from the University of Cambridge
- Date: 24 January 2024
- Duration: 2 hours
- Venue: Gothenburg and online
Abstract: Truth-conditional semantics has been successful in explaining how the meaning of a sentence can be decomposed into the meanings of its parts, and how this allows people to understand new sentences. In this talk, I will first show how a truth-conditional model can be learnt in practice on large-scale datasets of various kinds (textual, visual, ontological), and how this provides empirical benefits compared to non-truth-conditional (vector-based) computational semantic models.
I will then take a step back to discuss the computational challenges of such an approach. I will argue that it is (unfortunately) computationally intractable to reduce all kinds of language understanding to truth conditions, and so a truth-conditional account must be incomplete. To enable a more complete account, I will present a radically new approach to approximate Bayesian modelling, which rejects the computationally intractable ideal of using a single probabilistically coherent model. Instead, multiple component models are trained to mutually approximate each other. In particular, this provides a rigorous method for complementing a truth-conditional model with models for other kinds of language understanding. This approach gives us a new set of tools for explaining how patterns of language use arise when resource-bounded minds interact with a computationally demanding world.
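To make the idea of mutually approximating component models concrete, here is a minimal, hypothetical sketch; it is not the speaker's implementation, and all names, model architectures, and loss weights are illustrative assumptions. It shows two toy classifiers over different "views" of the same items: one is fitted to labelled data, and each is additionally nudged, via a KL term on shared inputs, towards the other's predictive distribution rather than both being forced into one globally coherent joint model.

```python
# Hypothetical sketch of mutual approximation between two component models.
# All details (architectures, data, loss weighting) are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Toy data: two "views" of the same 200 items; only view A has labels.
n = 200
view_a = torch.randn(n, 8)                 # e.g. textual features
view_b = torch.randn(n, 5)                 # e.g. visual features
labels = (view_a[:, 0] > 0).long()         # synthetic ground truth

model_a = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
model_b = nn.Sequential(nn.Linear(5, 16), nn.ReLU(), nn.Linear(16, 2))
opt = torch.optim.Adam(
    list(model_a.parameters()) + list(model_b.parameters()), lr=1e-2
)

for step in range(200):
    logits_a = model_a(view_a)
    logits_b = model_b(view_b)

    # Data-fit term: only model A sees the labels (its "own" supervision).
    loss_fit = F.cross_entropy(logits_a, labels)

    # Mutual-approximation terms: each model's predictive distribution is
    # pulled towards a detached copy of the other's on the shared items.
    kl_b_to_a = F.kl_div(F.log_softmax(logits_b, dim=-1),
                         F.softmax(logits_a.detach(), dim=-1),
                         reduction="batchmean")
    kl_a_to_b = F.kl_div(F.log_softmax(logits_a, dim=-1),
                         F.softmax(logits_b.detach(), dim=-1),
                         reduction="batchmean")

    loss = loss_fit + 0.5 * (kl_b_to_a + kl_a_to_b)
    opt.zero_grad()
    loss.backward()
    opt.step()

print("final loss:", float(loss))
```

In this toy setup, neither model is required to be part of a single coherent probabilistic model; each is only trained to stay close to the other's predictions where their domains overlap, which is one way to read the "mutual approximation" idea described in the abstract.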