ROM HT18: Representations of meaning (Representationer av språklig betydelse), 7.5 HEC, part of the Doctoral Degree in Computational Linguistics.

The course gives a survey of theories and computational implementations for representing and reasoning with meaning in natural language, from cognitive, linguistic and computational perspectives. We will look at formal theories and computational implementations of model-theoretic semantics (lambda calculus), situated and grounded representations of meaning, semantic grammars (CCG, dependency grammar), distributional representations of lexical meaning and their compositional extensions, approaches to unsupervised machine learning of linguistic representations, and others. The emphasis of the course will be on (i) the nature of the representations, (ii) how they satisfy the notion of compositionality, (iii) how they are used in inference and reasoning, and (iv) which natural language processing applications they are useful for.
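Two of the representation families mentioned above can be contrasted in a few lines of code: in model-theoretic semantics, word meanings are functions and sentence meanings are built by function application (the lambda-calculus view), while in distributional semantics, word meanings are vectors and a phrase meaning is, in the simplest compositional extension, the sum of its word vectors. The toy model, lexicon, and two-dimensional vectors below are invented purely for illustration; this is a minimal sketch, not course material.

```python
import math

# --- Model-theoretic semantics (lambda-calculus style) ---
# Word meanings are functions over a toy model; the meaning of
# "every student runs" is composed by function application.
entities = {"alice", "bob"}
student = lambda x: x in {"alice", "bob"}   # [[student]]: a property
runs = lambda x: x in {"alice"}             # [[runs]]: only alice runs
every = lambda restr: lambda scope: all(scope(x) for x in entities if restr(x))

print(every(student)(runs))  # False: bob is a student who does not run

# --- Distributional semantics with additive composition ---
# Word meanings are vectors; a phrase vector is the sum of its word
# vectors, and meanings are compared by cosine similarity.
vec = {"black": [0.9, 0.1], "cat": [0.2, 0.8], "dog": [0.3, 0.7]}

def compose(u, v):
    """Additive composition: the simplest compositional extension."""
    return [a + b for a, b in zip(u, v)]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = lambda w: math.sqrt(sum(a * a for a in w))
    return dot / (norm(u) * norm(v))

black_cat = compose(vec["black"], vec["cat"])
print(cosine(black_cat, vec["dog"]))
```

Note how the two families differ in what composition means: function application yields a truth value that supports inference directly, while vector addition yields a point in space whose usefulness shows up in similarity-based tasks.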

Course prerequisites:

  • General admission requirements for a doctoral degree in Computational Linguistics or equivalent.

In order to follow the course, participants should have experience at master's level in at least one of the following fields:

  • Formal semantics and pragmatics
  • Natural language processing
  • Computational semantics
  • Machine learning
  • or equivalent skills and knowledge.

Course syllabus


Please read this document and talk to Simon.


  • Simon Dobnik (course organiser), office hours: by appointment

Course literature

For a list of suggested readings please see here. Individual readings will be suggested for each meeting.

Schedule and course materials

  • Interpretability of neural language models * 2021-02-11
    • ++++ J. Vig and Y. Belinkov. Analyzing the structure of attention in a Transformer language model. In Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP, pages 63–76, Florence, Italy, August 2019. (recommended by Nikolai; would like to read: Axel, Felix, Anna, Simon) 2021-02-12
    • Nikolai, Adam, Anna, Felix, Aram, and Simon
  • Integrating symbolic common-sense knowledge * 2021-01-29, Zoom
  • Learning embeddings across languages * 2020-12-11, Zoom
  • Learning word embeddings across languages
    • 2020-10-16, Zoom
    • M. Artetxe, G. Labaka, and E. Agirre. Learning bilingual word embeddings with (almost) no bilingual data. In Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), pages 451–462, Vancouver, Canada, July 2017. Association for Computational Linguistics. (comes with a video and slides) (recommended by Axel and Adam)
    • Axel, Adam, and Simon
  • Learning word embeddings across languages * 2020-06-26, Zoom
  • Probabilistic semantics and pragmatics * 2020-05-15, Zoom
  • Structure learning * 2020-04-03, Zoom
    • K. S. Tai, R. Socher, and C. D. Manning. Improved semantic representations from tree-structured long short-term memory networks. In Proceedings of the 53rd Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Beijing, China, 2015. DOI: 10.3115/v1/P15-1150.
    • Axel (presenter), Robin, Tewodros, Bill, Maryam, and Simon (check)

  • Meaning representations
  • Vector spaces and lexical meaning
    • 2019-10-08 15-17 Dicksonsgatan 4
    • I. Vulić and N. Mrkšić. Specialising word vectors for lexical entailment. In Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers), pages 1134–1145. Association for Computational Linguistics, 2018.
    • Bill (presenter), Vlad and Mehdi
  • Meaning in interaction
  • Language models and meaning representations
  • Multi-modal representations of concepts
    • 2019-11-16 10-12 Dicksonsgatan 4
    • B. Landau and R. Jackendoff. “What” and “where” in spatial language and spatial cognition. Behavioral and Brain Sciences, 16(2):217–238, 255–265, 1993. Background: B. Landau. Update on “what” and “where” in spatial language: A new division of labor for spatial terms. Cognitive Science, 41(2):321–350, 2016.
    • Mehdi (presenter), Vlad and Bill
  • Classification of semantic relations: prepositions
  • Interpretation of learned representations
    • 2019-02-22 10-12 Dicksonsgatan 4
    • J. Li, X. Chen, E. Hovy, and D. Jurafsky. Visualizing and understanding neural models in NLP. In Proceedings of NAACL-HLT, pages 681–691, 2016.
    • Felix (presenter), Mehdi, Robin, Bill and Simon
  • Embeddings beyond words
    • 2019-03-05 10-12 Dicksonsgatan 4
    • L. Pragst et al. On the vector representation of utterances in dialogue context. In Proceedings of the Eleventh International Conference on Language Resources and Evaluation (LREC 2018). European Language Resources Association, 2018.
    • Bill (presenter), Robin, Felix and Simon

Topics for individual readings

You can find some topics for individual reading here.

You can find an earlier version of this webpage here.