CLASP
The Centre for Linguistic Theory and Studies in Probability

Bayesian nets in probabilistic TTR

Abstract:

There is a fair amount of evidence indicating that language acquisition in general crucially relies on probabilistic learning. It is not clear how a reasonable account of semantic learning could be constructed on the basis of the categorical type systems that either classical or revised semantic theories assume. We present probabilistic TTR (Cooper et al. 2014), which makes explicit the assumption, common to most probability theories used in AI, that probability is distributed over situation types rather than over sets of worlds. Improving on and going beyond Cooper et al. (2014), we formulate elementary Bayesian classifiers (which can be modelled as two-layer Bayesian networks) in probabilistic TTR and use these to illustrate how our type theory serves as an interface between perceptual judgement, semantic interpretation, and semantic learning. We also show how this account can be extended to cover general Bayesian nets.
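The "elementary Bayesian classifier" mentioned above can be pictured as a two-layer Bayesian network: a top-layer node ranging over situation types, and a bottom layer of observed perceptual features that are conditionally independent given the type. The following is a minimal illustrative sketch, not code from the talk; all type names, feature names, and probabilities are invented for the example.

```python
def classify(priors, cond, observation):
    """Posterior P(type | observation) over situation types.

    priors:      {type: P(type)}                       (top layer)
    cond:        {type: {feature: P(feature | type)}}  (bottom layer)
    observation: {feature: bool}  -- observed perceptual features
    """
    scores = {}
    for t, prior in priors.items():
        score = prior
        # Naive-Bayes assumption: features independent given the type.
        for feature, value in observation.items():
            p = cond[t][feature]
            score *= p if value else (1.0 - p)
        scores[t] = score
    total = sum(scores.values())
    return {t: s / total for t, s in scores.items()}

# Hypothetical example: judging whether a perceived situation is of
# type 'apple' or 'pear' from two observed features.
priors = {"apple": 0.6, "pear": 0.4}
cond = {
    "apple": {"red": 0.8, "round": 0.9},
    "pear":  {"red": 0.1, "round": 0.5},
}
posterior = classify(priors, cond, {"red": True, "round": True})
```

Here the posterior strongly favours the 'apple' type, since the observed features are far more probable under it; in the probabilistic TTR setting, such posteriors play the role that categorical type judgements play in classical semantic theories.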

Lecturer:

Staffan Larsson is a professor of computational linguistics at CLASP.